The increasingly widespread application of AI research has brought growing awareness of the risks posed by AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.
For AI developers to earn trust from users, civil society, governments, and other stakeholders, they must move beyond principles to mechanisms for demonstrating responsible behavior. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction.
About this report
This report suggests various steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The authors believe the implementation of such mechanisms can help make progress on one component of the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion.
Some authors of this report held a workshop in April 2019 aimed at expanding the toolbox of mechanisms for making and assessing verifiable claims about AI systems. This report lays out and builds on the ideas proposed at that workshop.
About the authors
The authors come from a variety of institutions and stakeholder groups involved in the development of AI, including academic researchers (in AI, policy, and other domains), industry scientists, and policy experts. For a full list of authors and their contact details, see the report.