
Alexandra is the CEO of Catalyze Impact. She leads the organization's strategic direction, fundraising, program development, and ecosystem partnerships, while mentoring new founders in the field whenever possible.
Previously, Alexandra studied Governance, Economics and Development and founded a non-profit TEDx organization, which has been running successfully ever since. After conducting prioritization research on high-impact interventions in the AI Safety space, she founded Catalyze to build the crucial infrastructure needed for the field to grow.

Gábor is a business leader with over 15 years of experience in e-commerce and business management. He has founded several companies and has led teams of 5 to 8,200 in managerial, C-level, and board roles across Europe, Asia, and the Middle East.
Alongside his work at Catalyze, he is the product lead at SaferAI, a non-profit organization aiming to incentivize the development and deployment of safer AI systems through better risk management.

Mick is the Program and Operations Manager at Catalyze. He leverages his experience in software development and process optimization to automate workflows, build scalable systems, and develop Catalyze’s organizational infrastructure.
Before joining Catalyze, he managed operations for theater venues and productions with teams ranging from 5 to 70 people, and provided guidance and operational support to 370 student societies and 2,700 student leaders at the Edinburgh University Students’ Association.

Ryan is the co-director of ML Alignment & Theory Scholars (MATS), an AI safety-focused educational seminar and independent research program. He is also a board member and co-founder of the London Initiative for Safe AI (LISA), and a regrantor at Manifund. Previously, he completed a PhD in Physics at the University of Queensland (UQ) and conducted independent research in AI alignment for the Stanford Existential Risks Initiative.

Stan is the Director of Engineering at Timaeus, an AI safety research organization dedicated to making fundamental breakthroughs in technical AI alignment. Its research focuses on Developmental Interpretability and Singular Learning Theory. Previously, Stan was the CTO of an algorithmic trading fund, worked as an ML researcher, and studied Theoretical Physics & Philosophy.

Magdalena is a researcher at Fraunhofer SIT. She has a Master's degree in Machine Learning and is excited about supporting other alignment researchers. Previously, she was a scholar at MATS, a research fellow at Principles of Intelligence, an associate researcher at the Machine Intelligence Research Institute (MIRI), and was involved in founding the European Network for AI Safety (ENAIS).

Jan-Willem is an impact-focused serial entrepreneur. He leverages his management consulting and for-profit experience to create positive societal change through innovative training programs. Before co-founding The School for Moral Ambition, he co-founded Training For Good, whose work now continues through the Tarbell Center for AI Journalism and the AI policy training organization Talos Network.

Kay is a self-taught generalist with a background in business and technical AI safety. He is currently the Operations Lead at AI Safety Connect. Before this, he co-founded Catalyze and Cadenza Labs and participated in MATS. He is deeply involved in the AI safety ecosystem and has worked with PRISM Eval, ARENA, CAIS, and BlueDot Impact.