OECD AI Principles for Trustworthy Artificial Intelligence

By Jim Venuto | Published: 01/22/2024

Artificial Intelligence (AI) is a transformative technology that significantly affects many aspects of our lives. From enhancing work efficiency to revolutionizing entertainment, AI has become an everyday presence. As AI's role in society grows, it is essential to establish comprehensive governance frameworks and policy guidelines that ensure the technology benefits humanity. The Organisation for Economic Co-operation and Development (OECD) has emerged as a significant player in this realm, measuring and analyzing the socio-economic impacts of AI technologies and applications.

OECD.AI Policy Observatory

The OECD AI Policy Observatory (OECD.AI) is a hub for resources from across the OECD and its partners. It aims to foster dialogue and provide multidisciplinary policy analysis and data on AI's areas of impact. The Observatory's key offerings include country dashboards highlighting hundreds of AI policy initiatives in over sixty countries and territories.

For a closer look, you can explore the OECD.AI Policy Observatory at https://oecd.ai.

Towards a Human-Centric Approach to Trustworthy AI

The OECD developed the Principles on Artificial Intelligence to promote AI that is innovative, trustworthy, and respectful of human rights and democratic values. Adopted in May 2019, these principles provide concrete public policy and strategy recommendations that are general enough to apply to AI developments worldwide.

Further details are available in the OECD Council Recommendation on Artificial Intelligence (see References).

Understanding the OECD AI Principles

The OECD AI Principles are the first such principles endorsed by governments. They encompass five values-based principles for responsible stewardship of trustworthy AI and five recommendations to governments.

Principles for Responsible Stewardship of Trustworthy AI

  1. Inclusive Growth, Sustainable Development, and Well-being: AI should augment human capabilities, enhance creativity, advance the inclusion of underrepresented populations, reduce inequalities, and protect natural environments.
  2. Human-centered Values and Fairness: AI systems should respect the rule of law, human rights, democratic values, and diversity. They should also include safeguards appropriate to the context, such as a capacity for human determination.
  3. Transparency and Explainability: There should be transparency and responsible disclosure around AI systems so that people understand when they are interacting with them. Affected individuals should be able to understand and challenge the outcomes of an AI system.
  4. Robustness, Security, and Safety: AI systems should maintain robustness, security, and safety throughout their entire lifecycle. It is crucial to continuously evaluate and manage potential privacy, digital security, safety, and bias risks.
  5. Accountability: AI actors should be accountable for the proper functioning of AI systems in line with the above principles.

National Policies and International Cooperation for Trustworthy AI

  1. Investing in AI Research and Development: Governments should encourage public and private investment in research and development to spur innovation in trustworthy AI.
  2. Fostering a Digital Ecosystem for AI: Governments should facilitate the development of a digital ecosystem that supports safe, fair, legal, and ethical data sharing.
  3. Shaping an Enabling Policy Environment for AI: Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems.
  4. Building Human Capacity and Preparing for Labor Market Transformation: Governments should collaborate closely with stakeholders to prepare for the transformation of the world of work and of society, and should provide support for workers affected by AI deployment.
  5. International Cooperation for Trustworthy AI: Governments should cooperate across borders and sectors to advance responsible stewardship of trustworthy AI.

The OECD Framework for Classifying AI Systems

The OECD framework for classifying AI systems offers a multifaceted approach to evaluating AI technologies, covering five key dimensions: People & Planet, Economic Context, Data Input, AI Model, and Task & Output. Each dimension provides a distinct perspective for assessing AI's implications, risks, and benefits, supporting responsible, ethical, and effective development and use. The framework helps organizations understand the societal and technical impact of their AI systems; an illustrative sketch of how such a classification might be recorded follows the list below.

  1. People and Planet: Focuses on AI’s impact on human rights, the environment, and societal concerns, emphasizing privacy and ethical issues.
  2. Economic Context: Evaluates AI systems within their economic environment, considering aspects like industry sector, business model, operational criticality, deployment impact, scale, and technological maturity.
  3. Data Input: Examines data types, collection methods, structure, and the inclusion of expert input in AI models, emphasizing data quality and reliability.
  4. AI Model: Concentrates on the technical construction and application of AI models, including algorithm use, design, and development methodologies.
  5. Task and Output: Assesses AI systems based on the tasks they perform, their outputs, resultant actions, and the effectiveness of these tasks and outputs.
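
To make the five dimensions concrete, here is a minimal sketch of how an organization might record a classification for one of its systems. The dimension names follow the OECD framework, but the field contents and the credit-scoring example are illustrative assumptions made for demonstration, not an official OECD schema.

    from dataclasses import dataclass

    # Illustrative sketch only: the five dimension names come from the OECD
    # framework for classifying AI systems, but the fields and example values
    # below are assumptions for demonstration, not an official OECD schema.

    @dataclass
    class AISystemClassification:
        people_and_planet: dict   # impacted stakeholders, human-rights and environmental concerns
        economic_context: dict    # sector, business function, criticality, scale, maturity
        data_input: dict          # data types, collection method, structure, expert input
        ai_model: dict            # how the model is built and how it is used
        task_and_output: dict     # task performed, outputs, and resulting actions

    # Hypothetical example: a credit-scoring system described along the five dimensions.
    credit_scoring = AISystemClassification(
        people_and_planet={"stakeholders": "loan applicants", "key_risk": "discriminatory outcomes"},
        economic_context={"sector": "finance", "criticality": "high", "maturity": "deployed at scale"},
        data_input={"type": "structured financial records", "collection": "provided by applicants"},
        ai_model={"approach": "supervised learning", "build": "trained on historical repayment data"},
        task_and_output={"task": "risk scoring", "output": "credit score", "action": "approve or refer a loan"},
    )

    for dimension, details in vars(credit_scoring).items():
        print(f"{dimension}: {details}")

Whatever the exact schema, recording each dimension explicitly makes it easier to compare systems across a portfolio and to flag where a closer risk assessment is warranted.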

The Future of AI with OECD

The OECD aims to provide a platform for exchanging information on AI policy and activities, fostering multi-stakeholder and interdisciplinary dialogue to promote trust in and adoption of AI. In a rapidly evolving AI landscape, the OECD AI Principles serve as a beacon guiding the development and deployment of AI technologies worldwide.

References

OECD Framework for the Classification of AI Systems. https://www.oecd.org/publications/oecd-framework-for-the-classification-of-ai-systems-cb6d9eca-en.htm

Artificial Intelligence – OECD. https://www.oecd.org/digital/artificial-intelligence/

OECD Recommendation on AI (PDF). https://www.fsmb.org/siteassets/artificial-intelligence/pdfs/oecd-recommendation-on-ai-en.pdf

OECD AI Principles Overview. https://oecd.ai/en/ai-principles

OECD Legal Instruments. https://legalinstruments.oecd.org/public/doc/648/1df51f15-53fc-43ef-9f13-ee9f957076bc.htm

Recommendation of the Council on Artificial Intelligence (OECD) – Cambridge University. https://www.cambridge.org/core/journals/international-legal-materials/article/recommendation-of-the-council-on-artificial-intelligence-oecd/EC74B60333EEB276393DB53307519B19

OECD Recommendation of the Council on Artificial Intelligence – American Planning Association. https://www.planning.org/knowledgebase/resource/9213663

The State of Implementation of the OECD AI Principles Four Years On. https://www.oecd.org/publications/the-state-of-implementation-of-the-oecd-ai-principles-four-years-on-835641c9-en.htm

What are the OECD Principles on AI? (PDF). https://www.oecd-ilibrary.org/what-are-the-oecd-principles-on-ai_6ff2a1c4-en.pdf

Recommendation – OECD Legal Instruments. https://legalinstruments.oecd.org/en/instruments