Meet the people leading on AI safety

Reflections from 2 days with the Advanced Research + Invention Agency (ARIA)

Hi there, welcome to Superfli.

I have never been - and never will be - a computer scientist, but I'm motivated to help those who are using their great minds and skills to improve our world.

I’ve just spent two days with the ARIA team - whose mission is to ‘power scientists to the edge of what is possible’ - and the brilliant TA1 creators at the University of York (UK).

As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters – but only if it’s deployed wisely. What if we could use advanced AI to drastically improve our ability to model and control everything from the electricity grid to our immune systems?

The bonus: a fellow moderator was one of my favourite impact makers, David Erasmus.

A key takeaway? The need for robust AI safety research has never been greater.

ARIA is the UK’s new, high-risk, high-reward research agency, designed to fund breakthrough innovations that might not otherwise get support. With the UK aiming to be a global leader in AI safety, ARIA’s TA1 programme is backing cutting-edge projects to make AI systems more robust, transparent, and aligned with human values.

This work isn’t just theoretical—it’s foundational to making AI a force for good, for climate, nature, and all of us. Excited to see what this cohort will create.

Thanks to Juliette Devillard from Climate Connection for the opportunity + to the impressive ARIA team Muji Ahmedi, Yasir Bakki, David ‘davidad’ Dalrymple & co.

Kudos Matt Clifford 👏

Ben Keene

♻️ Share if you found this valuable and connect with me if you’re interested in AI for Good, impact startups, and community building.