
Sienka Dounia
Technical AI Safety Researcher
Sienka is a Technical AI Safety Researcher at ILINA and a knowledge and systems lead at Successif. His research focuses on model evaluation, model interpretability, and the technical governance of advanced AI systems. He has worked on AI deception as an Apart Fellow and on developmental interpretability through the AI Futures Fellowship. At Successif, he builds systems and content to support mid-career professionals transitioning into roles that reduce risks from AI systems.
Contact: sienka.dounia@ilinaprogram.org