
Sienka Dounia
Research Affiliate
Sienka is a Research Affiliate at ILINA and an AI Safety Content Associate at Successif. His AI safety research has so far focused on deception detection in large language models, model evaluations, technical AI governance, and interpretability. He was previously a Fellow at ILINA (technical alignment track), the AI Futures Fellowship (Mexico City), and Apart Lab, and was part of a team that placed second in Apart Research's deception hackathon. At Successif, he develops AI safety content and supports mid-career professionals transitioning into the field.
Contact: sienka.dounia@ilinaprogram.org