Mila AI Policy Fellowship

Leveraging artificial intelligence (AI) expertise for policy impact.


Overview

AI is rapidly reshaping society. Yet a significant gap remains between research and policy, as policymakers struggle to process the vast information needed to make decisions that serve societal needs. 

Mila’s AI Policy Fellowship bridges this gap, connecting research and policy through a socio-technical approach. This six-month program brings together experts from AI and other fields, including the social sciences and humanities, to collaborate on real-world challenges. 

Through public events and policy briefs, Fellows translate research into practical, evidence-based guidance for local and global decision-makers. Each year, a new cohort works with Mila AI Advisors and the Mila AI Policy Secretariat to develop policy insights that address societal challenges and opportunities related to AI development, deployment, and governance, within the framework of annual thematic areas.

Thematic Areas

The societal impacts of AI are multi-dimensional, and the Mila AI Policy Fellowship reflects this reality. Effective policy insights must account for the interplay between technical design, infrastructure, social dynamics, and incentives.

The Mila AI Policy Fellowship invites contributions within annual thematic areas. Cross-cutting priorities include integrating safety perspectives for risk mitigation and harm prevention, adopting responsible and ethical approaches to AI deployment, and advancing AI for the benefit of all. The 2026-2027 call for applications invites proposals within the following areas: AI, information integrity and democratic governance; health and wellbeing; sovereignty and security; Indigenous AI; education; and climate and the natural world.

AI, Information Integrity and Democratic Governance

Information integrity—the ability to access, verify, and trust knowledge—is a cornerstone of democratic decision-making, economic activity, and scientific progress. The rapid proliferation of large generative models and agentic AI systems is fundamentally transforming our information ecosystems. This offers unprecedented opportunities for knowledge creation while introducing systemic risks of large-scale manipulation, automated misinformation, and the erosion of institutional trust. This thematic area focuses on developing actionable strategies to advance trustworthy knowledge ecosystems and safeguard the social contract in AI-mediated environments. Proposals will align with principles of democratic governance, such as ensuring the quality of civic interactions and opportunities for political participation alongside fair democratic elections conducted with integrity, establishing institutional accountability and upholding the rule of law, as well as promoting human rights and substantive equality. These policy challenges require a clear understanding of the capabilities and limitations embedded in the technical design of models, platforms, and digital infrastructures.

Example Topics:

  • Trust and public discourse in the age of synthetic media: Developing policy standards for watermarking, provenance, and auditing in multimodal models to sustain editorial standards and public-interest media.
  • Electoral integrity and agentic systems: Addressing the policy challenges posed by the use of generative AI in campaigns, including targeted persuasion, coordinated manipulation, and the rise of autonomous misinformation bots.
  • Algorithmic amplification and information flows: Investigating regulatory approaches to recommender systems and their role in shaping public discourse, and mitigating the harms of "engagement-at-all-costs" models.
  • Institutional accountability and contextual liability: Charting pathways for effective legal and accountability frameworks that account for AI opacity, autonomous behavior, and the limited predictability of continuously adapting systems.
  • Rights-centered AI policy: Revisiting legal frameworks to ensure the protection of freedom of expression, privacy, and the right to information (access to data) within AI-driven systems.
  • Economic information and financial integrity: Addressing the reliability of corporate and market information in the face of AI-driven financial fraud, market manipulation, and automated economic decision-making.
  • Democratic alternatives to centralized AI: Exploring "public-interest AI" models and constitutional AI frameworks that prioritize collective input and democratic alignment over centralized private ownership and control.
  • Health and science communication: Delivering policy responses to large-scale AI-generated misinformation in health and science communication, including the credibility, accessibility, and governance of reliable public knowledge.

AI, Health and Wellbeing

As AI systems become deeply integrated into our lives, at times acting as the primary channel for healthcare services, digital social spaces, and professional environments, they present unique opportunities for innovation alongside significant risks to mental health and psychosocial wellbeing. This thematic area explores the profound interplay between AI systems and the physical, mental, and social dimensions of human health, with a primary focus on protecting the rights and wellbeing of youth and workers. It aims to safeguard the cognitive and social development of younger generations and to keep the modern workplace adapted to human needs, so that technological adoption promotes holistic wellbeing, psychological safety, and meaningful human-centric growth.

Example Topics:

  • Mental health and human-AI interaction: Designing interventions to prevent harm and promote psychological safety in environments where AI-mediated communication or decision-making is prevalent.
  • Youth development and children’s rights: Addressing the long-term cognitive and developmental impacts of human-AI interactions on younger generations and establishing protections for children’s digital rights.
  • Biologically responsive and responsible AI policy: Exploring regulatory standards or obligations that address the intentional behavioral and cognitive impacts of human-AI interaction (engagement, dependency, cognitive “off-loading”).
  • Modernizing labour health and safety standards: Revising existing policies to address psychosocial harms that arise alongside AI integration in work environments, such as shifts in job control, erosion of social support, and impacts on job satisfaction.
  • Institutional participation and worker voice: Developing policy suggestions for enabling organized labour and worker representatives to be active participants in the design, adoption, and governance of workplace AI. This can apply equally to health and mental health care professionals employing AI for health care delivery.

AI Sovereignty and Security

Amidst global geopolitical shifts, AI sovereignty and security have emerged as central concerns in national and international fora. This thematic area navigates the complexities of competing narratives and definitions of AI sovereignty, and the real-world security implications inherent in the convergence of global innovation and critical infrastructure. As AI integration scales across public and private sectors, new risk vectors emerge: from AI supply-chain vulnerabilities to interoperability issues and large-scale cybercrime. This track focuses on shedding light on these challenges and finding solutions that help nations maintain integrity and stewardship over their data and resources while enhancing systemic resilience in an increasingly digital and interconnected world.

Example Topics:

  • Research security and supply chain integrity: Developing guidance for safe AI development that considers the origin of source code and the security of key supply chain inputs.
  • Cyber-resilience and critical infrastructure: Ensuring AI contributes to the security of essential services rather than introducing new vulnerabilities, for example through procurement.
  • Data commons and shared infrastructure: Exploring ways for nations to combine compute, talent, and data resources to challenge centralized ownership models.
  • Energy and compute sovereignty: Addressing unequal and volatile access to the energy and computing power that underpins AI advances.
  • Interoperability and mutual recognition: Identifying cross-jurisdictional assurance mechanisms that promote meaningful international collaboration in academia, R&D, and trade.
  • Data sovereignty and stewardship: Balancing national priorities across multiple jurisdictions and global innovation supply chains while ensuring the integrity of data ownership and stewardship.

Indigenous AI

Indigenous AI centers Indigenous sovereignty and self-determination in the development and governance of artificial intelligence, focusing on how communities steward and advance their own cultures, languages, and knowledge systems. As AI increasingly reshapes global governance, economic development, and digital infrastructure, there is an urgent need to ensure that Indigenous Peoples—in the Canadian context and beyond—are AI policy leaders and decision makers. This thematic area prioritizes the development of policy frameworks that move beyond simple harm mitigation toward the realization of Indigenous data sovereignty and stewardship.

Example Topics:

  • Indigenous data sovereignty and governance: Developing policy mechanisms for the ownership, control, and stewardship of data related to Indigenous peoples, lands, and resources within national and international AI strategies.
  • Collective privacy and consent in Machine Learning: Addressing limitations of individualistic privacy laws by designing frameworks for collective data privacy and community-based consent in the training of large-scale models.
  • AI for language and cultural revitalization: Proposing guidelines and technical standards for the use of AI in preserving Indigenous languages while safeguarding against the unauthorized extraction or commercialization of traditional knowledge.
  • Ethical AI adoption and impact mitigation: Developing community-led impact assessment tools to evaluate the socio-technical effects of AI deployment within Indigenous territories and governance structures.
  • Infrastructure and economic inclusion: Policy pathways for strengthening Indigenous leadership in the digital economy through investments in community-owned digital infrastructure, entrepreneurship, and specialized talent development.
  • Indigenous AI knowledge systems: Integrating Indigenous epistemologies into the design of AI ethics and safety protocols to foster a more diverse and representative global AI landscape.

AI, Education and Workforce Transformation

As AI enables hyper-personalized learning and necessitates large-scale reskilling, policy guidance must evolve to manage systemic transitions while preventing harm and ensuring inclusive access. This thematic area addresses the transformative role of AI in education and learning environments, as well as its profound implications for literacy, equity, and the future of the global workforce. The focus is on fostering beneficial change management and organizational transformation—from primary education to professional development—to support lifelong learning. This track prioritizes the protection of student and educator data rights while exploring how AI can be leveraged to enhance human-AI co-reasoning and epistemic skills, ensuring that no community is left behind in the transition to an AI-driven economy.

Example Topics:

  • Equity and harm prevention in personalized learning: Designing governance frameworks to ensure AI-driven educational tools do not exacerbate existing socio-economic inequalities or introduce biased pedagogical outcomes.
  • Privacy and data protection in education: Establishing rigorous standards for the collection, use, and long-term protection of student and educator data within AI-enabled learning platforms.
  • Workforce reskilling and career transitions: Developing policy insights for interventions to address displacement and ensure labor market readiness through inclusive, large-scale reskilling and upskilling pathways.
  • AI literacy: Proposing roadmaps for enhancing and fostering critical AI literacy skills within relevant tiers of education, higher education, and continued learning.
  • Worker voice in educational technology: Ensuring that educators, organized labor, and student representatives are active participants in the design and adoption of AI systems within schools and workplaces.
  • Human-AI co-reasoning and epistemic skills: Exploring policy supports for pedagogical shifts that emphasize critical thinking, media literacy, and cognitive resilience in an era of automated knowledge production.
  • Inclusive institutional participation: Developing strategies for systemic transformation that empower marginalized communities to lead in the adoption of safe and responsible AI within educational institutions.

AI, Climate and the Natural World

AI offers powerful tools for climate action, yet it bears its own environmental footprint. This thematic area focuses on policy solutions at the intersection of AI, climate, and the natural world. It encompasses policy initiatives aimed at the management of AI’s own environmental footprint, alongside the promotion of high-impact applications that leverage AI for environmental protection, climate adaptation, and disaster mitigation. Proposals may also offer recommendations for AI applications that support efficiency, innovation, and stewardship across natural resource sectors, including forestry, mining, agriculture, fisheries, water management, energy systems, and more.

Example Topics:

  • Environmental footprint and e-waste: Transparency standards for AI's energy, water, and material use, alongside management guidelines for sustainable hardware lifecycles (procurement, circular economy, disposal practices, etc.).
  • Fit-for-Purpose AI ("Small AI" vs. "Big AI"): Guidance on selecting appropriately scaled, resource-efficient models for specific industrial or environmental tasks.
  • Natural resource management: Identifying strategic support for AI applications for productivity and stewardship in critical sectors such as mining, forestry, agriculture, and fisheries.
  • Climate adaptation & disaster mitigation: Insights for AI deployment enabling early warning systems, emergency response planning, and infrastructure resilience.
  • Energy systems: AI deployment for more efficient electrical grids, load balancing, and broader energy savings.
  • Climate misinformation: Interventions to safeguard scientific consensus in AI-mediated information ecosystems against climate-related deception. 

Who Can Apply

The Mila AI Policy Fellowship welcomes junior and senior researchers and professionals from public policy, social sciences, humanities, or related fields who aim to apply AI expertise for policy impact. Designed for those active in academia, civil society, public service, or the private sector with relevant policy experience, the Fellowship allows participants to pursue a focused, time-bound project alongside existing commitments, with the aim of bridging research, practice, and policy impact.

Specifically, we are looking for researchers and practitioners with:

  • A graduate degree (MSc, MA, or equivalent) in a relevant field such as public policy, law, ethics, AI, sociology, economics or related disciplines.
  • At least 3 years of professional or academic experience.
  • Demonstrated policy-related experience or expertise, either through roles interfacing with policy-making processes or through subject matter expertise.
  • An original, relevant, and feasible proposal demonstrating strong subject knowledge, commitment, and the potential for real-world policy impact.
  • Experience with interdisciplinary or multi-stakeholder initiatives is an asset.


Information Session

Register for the virtual information session on April 8, from 10 AM to 11 AM.

Register here

Elyas Felfoul, Fellow of the 2025 cohort, presenting his work during the Mila AI Policy Conference.
Isadora Hellegren, Senior Project Manager, leading the Mila AI Policy Fellowship and AI policy research at Mila during the inaugural Mila AI Policy Conference 2026. The event gathered leading researchers, policymakers, government officials, and industry experts to address the most critical challenges and opportunities at the intersection of Artificial Intelligence and public policy today.
Fellow Helen Hayes presenting her work on policy pathways to designing safer AI chatbots to protect young people in conversational AI ecosystems.

Activities and Deliverables

Fellows are expected to: 
  • Commit 15 hours/week to the Fellowship from September to February (in-person or virtual)
  • Participate in monthly meetings with Mila AI Advisors and Mila AI Policy Secretariat (in-person or virtual)
  • Participate in monthly cohort core sessions (virtual)
  • Participate in in-person components of the Fellowship such as the Mila AI Policy Week
  • Complete all deliverables by the end of the Fellowship

 

Final deliverables:
  • Policy Brief – Produce a 6–8 page policy brief on the selected topic with clear and actionable policy insights or recommendations for a specific target audience; can be co-authored with a Mila AI Advisor or members of the Mila AI Policy Secretariat
  • Policy Roundtable – Organize an expert event (roundtable, workshop, etc.) for external stakeholders in Quebec, Canada, or internationally (in-person, virtual, or hybrid)
  • Mila Policy Talk – Present your work to the Mila research community (hybrid)
  • Dissemination Strategy – Submit a plan for engagement and dissemination of the policy brief and roundtable or other activities
  • Research Report Summary (internal) – 10-page summary covering process, methodology, findings, and key references

How to Apply

To apply to Mila’s AI Policy Fellowship Program, you must:

  • Create your application on the Awards platform, where you can save your application and return to it before submission
  • Ensure you meet listed eligibility requirements
  • Complete the application form
  • Upload all required documents

For the application to be considered complete, you must submit: 

  • A fully completed application form that meets the program requirements
  • A project proposal (approximately 2 pages, maximum 1,000 words)
  • One published writing sample (e.g., academic publication, policy brief, blog post, or other professional writing)
  • Contact information for one reference
  • A CV

The project proposal must:

  • Clearly identify key objectives with specific policy issue(s) or gap(s), target audience(s), and intended policy impact
  • Align with the selected thematic area (or other relevant area) and the Fellowship’s objectives, contributing to interdisciplinary collaborations and the translation of research into actionable policy insights
  • Demonstrate feasibility within the six-month fellowship period
  • Describe your policy-relevant experience or expertise
  • Represent original, independent work that is not heavily AI-generated

Applications for the 2026 cohort are open until April 16, midnight (anywhere on Earth).
Apply here

AI Policy Fellowship Publications

The Mila AI Policy Fellowship translates deep AI expertise into rigorous, public-interest policy work. Discover publications from past cohorts and meet the Fellows behind them. 

Browse the publications

FAQ

What kind of projects are you looking for?

We welcome well-developed, thoroughly considered, interdisciplinary, and policy-relevant proposals aligned with one or more of the thematic areas and the objectives of the Fellowship. The proposal must:

  • Clearly demonstrate what you seek to achieve during the Fellowship. This includes describing the project’s key objectives with specific policy issue(s) or gap(s), target audience, and intended policy impact;
  • Align with the selected thematic area, or clearly demonstrate the relevance of other proposed topics; 
  • Demonstrate how you seek to translate research into actionable policy insights;
  • Introduce original, independent work; heavily AI-generated proposals will make it difficult to assess your fit;
  • Demonstrate feasibility within the six-month fellowship period.

Maximum Length: 1,000 words (approximately 2 pages, excluding references, if applicable).

Can I submit a project proposal outside the listed themes?

Yes. Applicants may submit proposals outside the predefined areas, but should clearly explain the relevance to AI policy, feasibility, and potential for impact.

Can I collaborate with a Mila researcher or external partner?

Yes. The Fellowship will pair you with a Mila advisor and a member of the policy secretariat. However, you may also propose other collaborators in your application, including funders and institutional partners.

How are fellows selected?

Fellows will be selected based on several criteria, including a thorough evaluation of the quality and potential impact of submitted project proposals, alignment with the thematic areas (or the relevance of other proposed topics), suitable pairings with advisors and the Policy Secretariat, and an overall consideration of the diversity of the cohort. Shortlisted candidates may be invited for a brief interview before final decisions are made.

What language is the Fellowship conducted in?

The working languages of the fellowship are English and French.

Do I need to be based in Canada to apply?

No. The program is open to applicants worldwide. The Fellowship is delivered in a hybrid format, with in-person activities scheduled in Montreal, Quebec. Visa support can be provided for selected fellows who need it.

Can I get help with my visa?

Yes. If you are selected and require a visa to attend the in-person component, Mila can support your application process.

Is this a paid Fellowship?

Yes. Compensation is based on an hourly rate for a dedicated 15 hours per week. The rate is variable within a set range based on profile and experience.

Is there a cost to apply or participate?

No. There are no application fees, and participation is fully funded, including travel and event costs for in-person components.

Are travel and event costs covered?

Yes. All travel and event-related costs associated with the required in-person components of the fellowship are covered by the program.

Do I need to attend live sessions across time zones?

We strongly encourage live participation in key sessions. We aim to accommodate fellows in different time zones and core sessions will be scheduled with input from the cohort to maximize inclusivity.

Have questions about the program?