Professor Dr Sandra Wachter is a leading authority on the legal and ethical challenges of artificial intelligence, big data, and emerging technologies. As Professor of Technology and Regulation at the University of Oxford’s Oxford Internet Institute, she leads research on AI governance, algorithmic bias, and the societal impact of machine learning. Her work informs global policy discussions and shapes regulation and ethical standards for AI and platform governance.
Professor Wachter's academic contributions are widely recognised. Her research has been published in journals such as Nature and Science, and she is frequently cited in major media outlets, including The New York Times, The Guardian, and the BBC. Her insights have been featured in documentaries and reports examining the broader impact of AI on society.
Professor Wachter has held positions at Harvard Law School and the Alan Turing Institute and is affiliated with the Oxford Martin School and the Berkman Klein Center at Harvard. Her work has earned several prestigious awards, including the Alexander von Humboldt Foundation Research Award and the Privacy Law Scholar Award.
Professor Wachter’s work has also made significant real-world impact. Her research on opening the AI ‘black box’ to increase accountability, transparency, and explainability has been widely adopted. Her explainability tool – Counterfactual Explanations – has been implemented by major tech companies such as Google, Accenture, IBM, Microsoft, Arthur, and Vodafone.
Professor Wachter has also contributed to major developments in combating bias. She developed a bias test (‘Conditional Demographic Disparity’ or CDD), which Amazon and IBM have since implemented in their cloud services. In 2024, CDD was used to uncover systemic bias in education in the Netherlands. The Dutch Minister for Education, Culture and Science apologised for indirect discrimination and is now working to improve the algorithmic system in question as a result.
Professor Wachter's paper The Unfairness of Fair Machine Learning also revealed that enforcing many ‘group fairness’ measures in practice can make everyone worse off, rather than helping disadvantaged groups. The NHS and the Medicines and Healthcare products Regulatory Agency (MHRA) are now using these findings internally to revise practices for licensing medical devices, with the aim of ensuring equal and safe access to medical care.
Want to book Sandra Wachter for your next event?
Email sandra.wachter@getapeptalk.com, and one of our speaker agents will contact you within hours to confirm availability and fees. If you can, please include your budget upfront – it helps us fast-track your request. It’s also helpful to know the date, format (virtual or in-person), location, and a bit about your audience.
Please note: we can only assist with speaking requests. For anything else, we recommend reaching out to Sandra Wachter directly.
What topics does Sandra Wachter specialise in?
Sandra Wachter specialises in AI governance, algorithmic bias, explainable AI, and related topics.
What is Sandra Wachter's speaking style like?
Professor Dr Sandra Wachter delivers insightful presentations on AI governance and ethics, engaging audiences with her expertise and clarity. Her impactful work shapes policy and drives meaningful discussions in technology regulation.
Does Sandra Wachter offer virtual speaking engagements?
Yes, Sandra Wachter offers virtual speaker bookings for webinars, online conferences, and remote or distributed team engagements.