
2026 Junior Research Fellows
On the back of a rigorous application and interview process, we selected 9 Junior Research Fellows this year. 7 of them will work on projects with our core governance team while 2 of them will do technical-governance projects. You can read more about each of them below.

Christen Rao
Christen is an AI governance researcher working at the intersection of law, policy, and technology. Her research explores how emerging technologies in Africa can be governed to protect fundamental rights and strengthen democratic institutions. She organises and hosts the AI Governance Club, an initiative sponsored by BlueDot Impact, and serves as a Circle Starter with the School of Moral Ambition, connecting Africans pursuing high-impact careers. She holds an undergraduate law degree from the University of KwaZulu-Natal and is currently pursuing a Master’s in Human Rights and Democratisation in Africa at the University of Pretoria.

Christine Ng'ang'a
Christine is broadly interested in AI governance in Africa, particularly how geopolitical dynamics and state capacity shape regulatory development across the continent. She also examines how structural properties of AI systems, such as statefulness and agency, may create distinct legal accountability obligations, independent of specific use cases. Her work further explores whether risk-based governance frameworks, such as the EU AI Act, adequately capture these architectural differences. She holds an undergraduate law degree from Strathmore University.

Delar Makonnen
Delar’s research focuses on threat modelling and the evaluation of frontier AI systems, with a particular emphasis on incorporating Global South perspectives. She is interested in developing AI safety evaluation frameworks that are more globally representative and context-sensitive. Previously, she served as a Pre-Fellow at ILINA, where she also contributed to research projects as a Research Assistant. She holds an undergraduate law degree from Strathmore University.

Diana Kirui
Diana’s research examines US whistleblower protection frameworks for frontier AI, with a focus on how these protections can both enable individuals within AI companies to safely raise concerns and strengthen accountability in AI development. Previously, she served as a Pre-Fellow at ILINA. She holds an undergraduate law degree from the University of Nairobi and a postgraduate diploma in law from the Kenya School of Law.

Fitahiana Razafimahenina
Fitahiana is a researcher working at the intersection of applied mathematics, AI, and social impact. His recent work has focused on the mathematical foundations of AI safety, especially mechanistic interpretability. He is also interested in developing technology-driven solutions to challenges across Africa, particularly in health, education, and economic development. At ILINA, he works with Sienka Dounia, applying graph theory and spectral analysis to better understand the inner workings of large language models.

Laureen Mukami
Laureen is an AI safety and governance researcher. Her research examines how independent third-party auditing mechanisms can be designed to generate credible evidence of AI safety, and how that evidence can be used to shape good industry practices. This includes proposing institutional models for auditor access, independence, and enforcement. She holds an undergraduate law degree from Kabarak University, where she served as Editor-in-Chief of the Kabarak Law Review (2022-2023).

Leticia Kiptum
Leticia is a legal researcher and tech policy analyst working on digital public infrastructure, internet governance, and emerging technologies. Her research focuses on threat modeling in the integration of AI into digital public infrastructure, examining risks and proposing governance frameworks for rights-respecting public service delivery. She currently serves on the advisory group for the ITU/UNDP SDG Digital Track and has represented Kenya at the Africa Internet Governance Forum, contributing to regional digital policy discussions.

Tolulope Adebayo
Tolulope’s research focuses on how African legal systems, through tort liability frameworks, can respond to harms arising from frontier AI. More broadly, she is interested in AI governance in countries with low institutional capacity, and Global South participation in AI policymaking. She holds an undergraduate law degree from the University of Ibadan and recently completed her legal training at the Nigerian Law School.

Victor Ashioya
Victor is a technical track fellow at ILINA, where he designs runtime safety governors through activation steering. Specifically, his research examines how internal model representations relate to deceptive reasoning, and whether safety control vectors generalise to low-resource languages such as Swahili. Previously, Victor conducted adversarial testing on frontier models with OpenAI's Red Teaming Network and researched digital rights and repression with Amnesty International. He is a Google Developer Expert in AI, has completed the BlueDot Technical AI Safety Program, and leads partnerships at GDG Pwani. He holds a BSc in Telecommunications from Kabarak University.
