Discover Mila's latest impact report, which highlights the outstanding achievements of our community members over the past year.
GPAI report and policy guide: Towards real equality in AI
Join us at Mila on November 26 for the launch of the report and policy guide, which presents concrete recommendations for building inclusive AI ecosystems.
Publications
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Identifying potential social and ethical risks in emerging machine learning (ML) models and their applications remains challenging. In this work, we applied two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image models at three stages of the ML product development pipeline: data processing, integration of a T2I model with other models, and use. The results of our analysis demonstrate that these safety frameworks, neither of which is explicitly designed to examine social and ethical risks, can uncover failures and hazards that pose such risks. We discovered a broad range of failures and hazards (i.e., functional, social, and ethical) by analyzing interactions (i.e., between different ML models in the product, between the ML product and user, and between development teams) and processes (i.e., preparation of training data or workflows for using an ML service/product). Our findings underscore the value and importance of looking beyond the ML model itself when examining social and ethical risks, especially when we have minimal information about the ML model.
2023-08-29
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (published)
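The abstract above relies on the standard FMEA practice of ranking failure modes by a Risk Priority Number (severity x occurrence x detection). The sketch below illustrates only that generic mechanic in Python; the three pipeline stages come from the abstract, but the concrete failure modes and scores are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    stage: str        # pipeline stage where the failure can occur
    description: str  # what goes wrong, including social/ethical harms
    severity: int     # 1-10, impact if the failure occurs
    occurrence: int   # 1-10, how likely the failure is
    detection: int    # 1-10, 10 = hardest to detect before release

    @property
    def rpn(self) -> int:
        """Risk Priority Number used in FMEA to rank failure modes."""
        return self.severity * self.occurrence * self.detection

# Hypothetical entries for a text-to-image pipeline, for illustration only.
failure_modes = [
    FailureMode("data processing", "training captions encode demographic stereotypes", 8, 6, 7),
    FailureMode("model integration", "safety filter and T2I model disagree on policy terms", 7, 4, 5),
    FailureMode("use", "users compose prompts that evade content policies", 9, 5, 8),
]

# Highest-priority failure modes first.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.stage:18s} RPN={fm.rpn:4d}  {fm.description}")
```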
We enable reinforcement learning agents to learn successful behavior policies by utilizing relevant pre-existing teacher policies. The teacher policies are introduced as objectives, in addition to the task objective, in a multi-objective policy optimization setting. Using the Multi-Objective Maximum a Posteriori Policy Optimization algorithm (Abdolmaleki et al. 2020), we show that teacher policies can help speed up learning, particularly in the absence of shaping rewards. In two domains with continuous observation and action spaces, our agents successfully compose teacher policies in sequence and in parallel, and are also able to further extend the policies of the teachers in order to solve the task. Depending on the specified combination of task and teacher(s), the teacher(s) may naturally act to limit the final performance of an agent. The extent to which agents are required to adhere to teacher policies is determined by hyperparameters that control both the effect of teachers on learning speed and the eventual performance of the agent on the task. In the humanoid domain (Tassa et al. 2018), we also equip agents with the ability to control the selection of teachers. With this ability, agents are able to meaningfully compose the teacher policies to achieve a higher task reward on the walk task than in cases without access to the teacher policies. We show the resemblance of composed task policies to the corresponding teacher policies through videos.
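As a rough illustration of how teacher policies can enter policy optimization as additional objectives, the sketch below builds a surrogate loss that mixes an advantage-weighted task term with one KL term per teacher. This is a simplified weighted-sum stand-in, not the MO-MPO algorithm from the abstract (which treats each objective with its own constrained E-step), and the weights and toy distributions are assumptions.

```python
import torch
from torch.distributions import Normal, kl_divergence

def composed_policy_loss(actions, advantages, policy_dist, teacher_dists, teacher_weights):
    """Simplified multi-objective surrogate: a task term plus one KL term per teacher."""
    # Task objective: advantage-weighted log-likelihood of the taken actions.
    task_term = -(advantages.detach() * policy_dist.log_prob(actions).sum(-1)).mean()
    # Teacher objectives: stay close to each teacher's action distribution.
    teacher_term = sum(
        w * kl_divergence(teacher, policy_dist).sum(-1).mean()
        for w, teacher in zip(teacher_weights, teacher_dists)
    )
    return task_term + teacher_term

# Toy usage with diagonal Gaussian policies over a 2-D action space.
policy = Normal(torch.zeros(64, 2, requires_grad=True), torch.ones(64, 2))
teachers = [Normal(torch.full((64, 2), 0.5), torch.ones(64, 2))]
actions = policy.sample()
advantages = torch.randn(64)
loss = composed_policy_loss(actions, advantages, policy, teachers, teacher_weights=[0.1])
loss.backward()
```

Larger teacher weights pull the learned policy toward the teachers (faster early learning, possibly capped final reward); smaller weights let the task term dominate, mirroring the hyperparameter trade-off described in the abstract.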
With the growing need to regulate AI systems across a wide variety of application domains, a new set of occupations has emerged in the industry. The so-called responsible Artificial Intelligence (AI) practitioners or AI ethicists are generally tasked with interpreting and operationalizing best practices for the ethical and safe design of AI systems. Due to the nascent nature of these roles, however, it is unclear to future employers and aspiring AI ethicists what specific functions these roles serve and what skills are necessary to serve them. Without clarity on these, we cannot train future AI ethicists with meaningful learning objectives. In this work, we examine what responsible AI practitioners do in the industry and what skills they employ on the job. We propose an ontology of existing roles alongside the skills and competencies that serve each role. We created this ontology by examining job postings for such roles over a two-year period (2020-2022) and conducting expert interviews with fourteen individuals who currently hold such a role in the industry. Our ontology helps business leaders looking to build responsible AI teams and provides educators with a set of competencies that an AI ethics curriculum can prioritize.
2023-08-29
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (published)
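For readers unfamiliar with what "an ontology of roles alongside skills and competencies" might look like in practice, a minimal sketch is below. The role names and skills are hypothetical placeholders, not the ontology the authors derived from job postings and interviews.

```python
# Hypothetical role -> competency mapping, illustrating only the shape of such an ontology.
responsible_ai_ontology = {
    "AI ethics researcher": ["impact assessment", "fairness metrics", "policy literacy"],
    "Responsible AI program manager": ["risk governance", "stakeholder engagement"],
    "ML auditor": ["model evaluation", "documentation standards", "regulatory knowledge"],
}

for role, skills in responsible_ai_ontology.items():
    print(f"{role}: {', '.join(skills)}")
```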
This paper provides an overview of the latest trends in robotics research and development, with a particular focus on applications in manufacturing and industrial settings. We highlight recent advances in robot design, including cutting-edge collaborative robot mechanics and advanced safety features, as well as developments in perception and human-swarm interaction. By examining recent contributions from Kinova, a leading robotics company, we illustrate the differences between industry and academia in their approaches to developing innovative robotic systems and technologies that enhance productivity and safety in the workplace. Ultimately, this paper demonstrates the potential of robotics to transform manufacturing and industrial operations, and underscores the role of companies like Kinova in driving this transformation forward.
2023-08-28
IEEE International Symposium on Robot and Human Interactive Communication (published)
Motion In-Betweening via Deep Δ-Interpolator
We show that the task of synthesizing human motion conditioned on a set of key frames can be solved more accurately and effectively if a deep-learning-based interpolator operates in the delta mode, using the spherical linear interpolator as a baseline. We empirically demonstrate the strength of our approach on publicly available datasets, achieving state-of-the-art performance. We further generalize these results by showing that the
2023-08-28
IEEE Transactions on Visualization and Computer Graphics (published)
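One way to read "delta mode" in the abstract above is that the network predicts a residual on top of a spherical linear interpolation (SLERP) baseline between key-frame rotations. The sketch below shows that idea for a single quaternion; `delta_net` is a hypothetical stand-in for the trained interpolator, and the actual system operates on full poses rather than one joint.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1 at fraction t."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to linear interpolation
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def delta_mode_inbetween(q_key0, q_key1, t, delta_net):
    """Delta-mode prediction: SLERP baseline plus a learned correction (residual)."""
    baseline = slerp(q_key0, q_key1, t)
    q = baseline + delta_net(q_key0, q_key1, t)   # the network only predicts the residual
    return q / np.linalg.norm(q)

# With a zero delta the output reduces to the SLERP baseline.
zero_delta = lambda q0, q1, t: np.zeros(4)
q_start = np.array([1.0, 0.0, 0.0, 0.0])
q_end = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])  # 90° about z
print(delta_mode_inbetween(q_start, q_end, 0.5, zero_delta))
```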
This work introduces an efficient novel approach for epistemic uncertainty estimation for ensemble models for regression tasks using pairwise-distance estimators (PaiDEs). Utilizing the pairwise distances between model components, these estimators establish bounds on entropy. We leverage this capability to enhance the performance of Bayesian Active Learning by Disagreement (BALD). Notably, unlike sample-based Monte Carlo estimators, PaiDEs exhibit a remarkable capability to estimate epistemic uncertainty at speeds up to 100 times faster while covering a significantly larger number of inputs at once and demonstrating superior performance in higher dimensions. To validate our approach, we conducted a varied series of regression experiments on commonly used benchmarks: 1D sinusoidal data,
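As a rough sketch of a pairwise-distance entropy estimate for an ensemble, the code below assumes each ensemble member outputs a univariate Gaussian predictive distribution and applies the Kolchinsky-Tracey pairwise-distance form with the KL divergence as the distance; a BALD-style epistemic term is then the mixture-entropy estimate minus the mean per-member entropy. This is an illustration of the general idea, not the authors' implementation.

```python
import numpy as np

def gaussian_entropy(var):
    """Differential entropy of a univariate Gaussian with variance var."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

def kl_gauss(m1, v1, m2, v2):
    """KL( N(m1, v1) || N(m2, v2) ) for univariate Gaussians."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def pairwise_distance_entropy_estimate(means, variances):
    """Pairwise-distance estimate of the entropy of a uniform Gaussian mixture
    (the ensemble predictive distribution), using KL as the pairwise distance."""
    n = len(means)
    w = 1.0 / n
    est = np.mean([gaussian_entropy(v) for v in variances])
    for i in range(n):
        inner = sum(w * np.exp(-kl_gauss(means[i], variances[i], means[j], variances[j]))
                    for j in range(n))
        est -= w * np.log(inner)
    return est

# Toy ensemble of three Gaussian predictions at one input.
means = np.array([0.1, 0.0, 0.4])
variances = np.array([0.05, 0.04, 0.06])
total = pairwise_distance_entropy_estimate(means, variances)
aleatoric = np.mean([gaussian_entropy(v) for v in variances])
epistemic = total - aleatoric   # disagreement between members, used to rank acquisition candidates
print(total, aleatoric, epistemic)
```

Because the estimate only needs the n^2 closed-form pairwise distances rather than Monte Carlo samples from the mixture, it can be evaluated over many candidate inputs at once, which is the speed advantage the abstract refers to.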