
Shalaleh Rismani

Postdoctorate - McGill University
Research Topics
AI Ethics
AI Safety
Creativity
Human-AI interaction
Human-Centered AI
Human-Computer Interaction (HCI)
Responsible AI
Risk Analysis
Robot Ethics
Safety Engineering

Publications

What does it mean to be an AI Ethicist: An ontology of existing roles
With the increasing adoption of Artificial Intelligence systems (AIS) in various applications and the growing efforts to regulate such systems, a new set of occupations has emerged in the industry. These roles take different titles and hold varying responsibilities, but the individuals in them are tasked with interpreting and operationalizing best practices for developing ethical and safe AI systems. We broadly refer to this new set of occupations as AI ethicists and recognize that they often hold a specific role at the intersection of technology development, business needs, and societal implications. In this work, we examine what it means to be an AI ethicist in the industry and propose an ontology of existing roles under this broad title, along with their required competencies. We create this ontology by examining job postings for such roles over the past two years and conducting expert interviews with fourteen individuals who currently hold such a role in the industry. The proposed ontology will inform executives and leaders looking to build responsible AI teams and provide educators with the information necessary to create new learning objectives and curricula.
How do AI systems fail socially?: an engineering risk analysis approach
Failure Mode and Effect Analysis (FMEA) has been used as an engineering risk assessment tool since 1949. FMEAs are effective in preemptively identifying and addressing how a device or process might fail in operation and are often used in the design of high-risk technology applications such as military systems, the automotive industry, and medical devices. In this work, we explore whether FMEAs can serve as a risk assessment tool for machine learning practitioners, especially when deploying systems for high-risk applications (e.g., algorithms for recidivism assessment). In particular, we discuss how FMEAs can be used to identify social and ethical failures of Artificial Intelligence Systems (AIS), recognizing that FMEAs have the potential to uncover a broader range of failures. We first propose a process for developing a Social FMEA (So-FMEA) by building on the existing FMEA framework and a recently published definition of Social Failure Modes by Millar. We then demonstrate a simple proof-of-concept So-FMEA for the COMPAS algorithm, a risk assessment tool used by judges to make recidivism-related decisions for convicted individuals. Through this preliminary investigation, we illustrate how a traditional engineering risk management tool can be adapted for analyzing social and ethical failures of AIS. Engineers and designers of AIS can use this new approach to improve their systems' design and perform due diligence with respect to potential ethical and social failures.
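To make the FMEA-style analysis concrete, here is a minimal sketch of risk scoring with the conventional Risk Priority Number (RPN = severity × occurrence × detection). The failure modes, rating scales, and scores below are hypothetical illustrations for an AI context and are not taken from the paper's So-FMEA of the COMPAS algorithm.

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (likely to go undetected)

    @property
    def rpn(self) -> int:
        # Conventional FMEA prioritization score.
        return self.severity * self.occurrence * self.detection

# Hypothetical entries: one technical failure and one social failure mode.
modes = [
    FailureMode("Sensor dropout produces missing input features", 6, 3, 2),
    FailureMode("Systematically higher risk scores for one demographic group", 9, 5, 7),
]

# Rank failure modes so the highest-priority risks are addressed first.
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN={mode.rpn:4d}  {mode.description}")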
Ethics of Corporeal, Co-present Robots as Agents of Influence: a Review
H. V. D. Van der Loos
Driver perceptions of advanced driver assistance systems and safety
Sophie Le Page
Jason Millar
Kelly Selina Bronson
Advanced driver assistance systems (ADAS) are often used in the automotive industry to highlight innovative improvements in vehicle safety. However, today it is unclear whether certain automation (e.g., adaptive cruise control, lane keeping, parking assist) increases the safety of our roads. In this paper, we investigate driver awareness, use, perceived safety, knowledge, training, and attitudes toward ADAS with different automation systems/features. Results of our online survey (n=1018) reveal that there is a significant difference in frequency of use and perceived safety across different ADAS features. Furthermore, we find that at least 70% of drivers activate an ADAS feature "most or all of the time" when driving, yet at least 40% of drivers report feeling that ADAS often compromises their safety when activated. We also find that most respondents learn how to use ADAS in their vehicles by trying it out on the road by themselves, rather than through any formal driver education and training. These results may mirror how certain ADAS features are often activated by default, resulting in high usage rates. They also suggest a lack of driver training and education for safely interacting with, and operating, ADAS, such as turning off systems/features. These findings contribute to a critical discussion about the overall safety implications of current ADAS, especially as they enable higher-level automation features to creep into personal vehicles without a lockstep response in training, regulation, and policy.
Drivers' Awareness, Knowledge, and Use of Autonomous Driving Assistance Systems (ADAS) and Vehicle Automation
Kelly Selina Bronson
Sophie Le Page
Katherine M. Robinson
Jason Millar
Advanced driver assistance systems (ADAS) technologies in vehicles (e.g., park assist, lane change assist, emergency braking), which take over parts of the driving task from human drivers, are advancing at a disruptive pace and hold the potential to deliver many benefits to society. However, public understanding of ADAS, and driver training and licensing for using them, are lagging behind the fast-paced technological development, which could raise safety issues or slow the deployment of ADAS, thus offsetting their potential benefits. There is, therefore, a need to investigate issues related to public perception of ADAS in order to develop appropriate policies and governance structures that support innovation and result in the smooth deployment and acceptance of appropriate ADAS for society. In this work, we perform a quantitative public survey to better understand how the public's awareness and knowledge of ADAS technologies in their vehicles correlate with their use of, and engagement with, those technologies. We find that up to 67% of participants never or rarely use optional ADAS in their vehicles (e.g., adaptive cruise control), and that women were less likely than men to use ADAS even though women reported more awareness of ADAS in their vehicles, better training, and more willingness to pay for ADAS. By performing this analysis, we hope to raise awareness of the public perception of the current state of the art in ADAS technologies. We also hope to flag concerns that answers to these questions might raise for the regulatory agencies and manufacturers responsible for bringing these technologies to market.