
2025 Junior Research Fellows
Following a rigorous application and interview process, we selected six Junior Research Fellows this year. Four of them will work on projects with our core governance team, while two will pursue technical-governance projects. You can read more about each of them below.

Babra Chepkorir
Under the supervision of Dr. Lalitha Sundaram (Centre for the Study of Existential Risk, University of Cambridge), Babra’s research examines AI-driven biorisk, focusing on the dual-use nature of emerging technologies like AI-powered biological design tools. She investigates the biosecurity implications of these tools, examining how their potential to democratize access to advanced scientific knowledge and processes could pose existential threats to humanity and create significant challenges for global biosecurity and pandemic preparedness. Her work aims to support the development of governance frameworks that ensure safer and more ethical advances at the intersection of AI and biology. She holds a BSc in Biochemistry from the University of Nairobi.

Carringtone Kinyanjui
Carringtone is a final-year PhD student at the University of Manchester, where he studies the intersection of the history of science diplomacy, scientific data, and quantitative approaches to bibliometric data under a European Research Council scholarship. He has been interested in AI safety discourse for years; in 2022, he was part of a team that placed fourth in the Future of Life Institute’s Worldbuilding Contest. He holds a BSc in Astronomy and Astrophysics and an MSc in Theoretical and Mathematical Physics from the University of Nairobi, and a master’s degree in History of Science, Technology and Medicine from the University of Manchester. At ILINA, Carringtone will develop and work on a technical AI safety project.

Daniel Wachira
Prior to starting this position, Daniel was a Pre-Fellow in the ILINA Program, focusing on mathematics and machine learning concepts. He also completed the AI Safety Fundamentals Fellowship, where he studied AI safety topics such as mechanistic interpretability. As a Junior Research Fellow, he will focus on frontier AI evaluations. He holds an undergraduate law degree from Kabarak University.

Elsy Jemutai
Elsy’s research interests include liability frameworks for frontier AI developers and the role of Global South countries in AI safety governance. She holds an undergraduate law degree from Kabarak University and is currently completing a Postgraduate Diploma in Law at the Kenya School of Law.

Melissa Ninsiima
Melissa’s current research interests center on explainable AI and liability regimes for harms caused by AI. She previously served as a Research Assistant on an AI Safety Camp project focused on the intersection of AI and the criminal justice system. She holds an undergraduate law degree from Uganda Christian University.

Panashe Zowa
Panashe’s current work focuses on advancing African technological sovereignty and developing innovative legal and policy responses to potential risks from future AI advances and from global catastrophic biological risks. His background includes extensive experience as a tech lawyer and policy analyst across various African jurisdictions. He holds an undergraduate law degree from the University of Zimbabwe, where he served as Editor-in-Chief of the University of Zimbabwe Student Law Review.
