The increasingly widespread application of AI has brought growing awareness of the risks posed by AI systems, and a recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.

For AI developers to earn the trust of users, civil society, governments, and other stakeholders, they must move beyond principles to concrete mechanisms for demonstrating responsible behavior. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction.

About this report

This report suggests various steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The authors believe the implementation of such mechanisms can help make progress on one component of the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion.

Some authors of this report held a workshop in April 2019, aimed at expanding the toolbox of mechanisms for making and assessing verifiable claims. This report lays out and builds on the ideas proposed at that workshop.

About the authors

The authors come from a variety of institutions and stakeholder groups involved in AI development, and include academic researchers (in AI, policy, and other domains), industry scientists, policy experts, and others. For a full list of authors and their contact details, see the report.

Updates

November 18, 2020: The Partnership on AI publishes a blog post and paper introducing its AI Incident Database.

August 10, 2020: Publication of an industry research perspective on achieving trustworthy AI inference systems.

June 30, 2020: A coalition of AI stakeholders argues for increasing government compute support for academia (coverage in the New York Times).

April 29, 2020: The Ada Lovelace Institute launches a related report on algorithmic auditing.

April 17, 2020: The report receives press coverage, e.g. from VentureBeat and the Financial Times.

April 16, 2020: The report is launched publicly, with several blog posts from participating organizations (PAI, OpenAI, CSER).