Artificial Intelligence (“AI”) is transforming the world by completing tasks that historically only humans could complete, and often with greater speed and accuracy. However, most humans cannot understand the deep learning algorithms underlying many of these AI technologies. In response to these concerns, the increasing power of AI systems and their greater role in society, and increased regulatory oversight (including the EU AI Act,1 the U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,2 and China’s Generative AI Regulations discussed in our earlier Alerts here and here),3 many companies are now seeking to promote “Responsible AI.”
Responsible AI refers to an approach of developing and deploying AI, with the goal of creating AI systems that are safe, transparent, and ultimately beneficial.4 AI systems yield benefits if they can understand humans’ requests and are properly trained. This issue of creating AI systems that are aligned with human interests is known as the “alignment problem.”5 Misalignment can occur when AI systems are trained on biased data or irrelevant data.
We describe here some of the ways that companies are instituting Responsible AI, from developmental frameworks to governance measures, to foster Responsible AI and incorporate Responsible AI into their legal and business practice.
AI Companies’ Practices
Many companies that develop AI systems are taking actions, even in the absence of regulation, to promote responsible AI development. These actions range from integrating safety and ethics into AI development itself to implementing governance measures that promote Responsible AI.
Responsible Development
To implement Responsible AI, many companies are considering safety and fairness at every step of the design process as well as at deployment. Leading tech companies have formulated their own responsibility frameworks that shape their creation of AI models.
For example, Microsoft has six guiding principles for Responsible AI: (1) fairness, (2) reliability and safety, (3) privacy and security, (4) inclusiveness, (5) transparency, and (6) accountability.6 Microsoft provides considerations to achieve each of these principles. For example, to achieve fairness, it encourages companies to put systems in place to identify bias and train AI systems with representative data. With respect to reliability and safety, Microsoft encourages developing processes to audit AI systems, design for unintended circumstances, conduct rigorous testing, and develop feedback mechanisms from users to identify issues. To implement these considerations, Microsoft recommends that companies that develop and deploy AI to create internal AI governance systems. It also advocates for appropriate design principles and engineering guidelines, which should be translated into practical guidance for technical employees. Microsoft has published a “Responsible AI Standard,” which addresses how it will “operationalize” these six guidelines.7 For each guideline, Microsoft lists requirements for achieving such goal.
Microsoft also recently released its first “Responsible AI Transparency Report,” which shows how it builds responsible AI products, decides to release AI products, and supports customers in building responsible AI systems.8 Within this report, Microsoft details its iterative process of governing to promote risk management, mapping risks to identify potential issues, measuring risks to assess performance, and managing risks to mitigate problems.
Governance
Additionally, many AI companies have created teams focused on implementing Responsible AI. For example, OpenAI has several safety and policy teams focused on addressing AI risks, including Safety Systems, Preparedness, and Superalignment (i.e., foundations for the safety of super intelligent models) teams.9 These teams respectively focus on limiting misuse of current models and products, mapping emerging risks of frontier models, and engaging in research to ensure the alignment of super intelligent models. Other companies, from Anthropic10 to Microsoft,11 have created similar governance structures to promote responsibility.
Promoting Responsible AI within Your Organization
Both AI users and service providers can take steps to promote Responsible AI within their organizations. Companies that use others’ AI products can contractually promote safe and fair AI use as well as develop relevant internal policies. Meanwhile, AI service providers should look to regulation as well as practices employed by other AI companies.
Determine How AI Will Be Used
Before seeking to incorporate Responsible AI practices, businesses should first consider how they will use AI. For example, a business’s internal use of AI tools, such as using a chatbot to streamline research, may not trigger the same ethical considerations as external use cases that directly impact consumers. Moreover, different use cases trigger different safety and ethical considerations; it is more important that AI is acting “responsibly” when making sensitive decisions, such as approving loans, than when it is used to make relatively harmless decisions, such as recommending music.
Align AI with Organizational Values
After anticipating how it will use AI, a company looking to incorporate Responsible AI practices should consider its core values and how AI usage will interact them. Then, the company can create of list of principles that will inform its selection and use of AI tools, and translate these values into more formal guidance measures.12 While value statements are informative, companies should consider what concrete steps should be taken. This may involve defining relevant concepts such as “bias” and “fairness,” determining what values to prioritize over others, and deciding how these values will inform the business’s selection and collaboration with AI vendors and AI usage within the company. A company may also create a responsible AI framework that specifically addresses how it will use AI safely.13
Perform Due Diligence
To practice Responsible AI, companies must ensure that the AI systems that such companies use are themselves responsible. To do this, companies should investigate how the AI provider created its model. Did it train its models on reliable and unbiased data? Did the provider incorporate Responsible AI practices into AI development and training? Did it rigorously test the models for alignment?
While it may be easier to detect underlying bias and other issues in simpler models, companies should realize that there may be a tradeoff between a model’s explainability and interpretability,14 on the one hand, and its accuracy, on the other hand. While companies should generally seek models that they can understand, they should also consider how strict standards may be impractical and unnecessarily limit the list of models they consider.
Contractual Tools
After deciding to license an AI tool or otherwise engage with an AI vendor, a business can use contractual provisions that promote Responsible AI. First, companies should determine what regulations they must comply with. Governments are about to or have already passed regulations that in part seek to promote Responsible AI. These include the EU AI Act,15 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,16 and China’s Generative AI Regulations.17 Laws may not only place responsibilities on AI developers but also businesses that incorporate AI offerings into their products and services. Businesses should also determine if they are subject to non-AI specific regulations, such as privacy laws.
Companies implementing Responsible AI should also determine if there are any other standards that they want AI vendors to satisfy. A company can apply standards from their list of core values or other AI companies’ Responsible AI practices. Examples of such standards include requiring that the model be trained on non-biased data and ensuring that the AI will not develop harmful behaviors. Businesses may also refer to voluntary guidance released by governmental authorities. For example, in the United States, the White House released a “Blueprint for an AI Bill of Rights,”18 and the National Institute of Standards and Technology (“NIST”) released an AI Risk Management Framework (“AI RMF”).19
After determining applicable regulations and developing their own internal standards, companies should seek representations and warranties that AI models will satisfy these standards. However, for representations and warranties to have meaning, companies should be able to determine whether an AI model is satisfying a particular standard. Therefore, the stated performance obligations should reflect observable criteria, such as explainability and operability. Companies may also advocate for terms that require quality control, transparency, and explainability. When drafting contract terms, counsel may want to refer to any existing governmental guidance, such as the EU’s model contractual AI clauses.20 Companies should understand the tradeoffs of increasing specificity – specificity increases the ability to enforce, but may be burdensome for smaller companies.
Companies seeking to promote Responsible AI should also recognize that the AI regulatory space is constantly evolving and also that models may change over time. Therefore, they should seek covenants from AI providers requiring that their AI technologies continue to satisfy regulations and meet certain performance standards. In addition, such businesses should consider seeking indemnification in the event that an AI provider’s failure to create a Responsible AI system leads to third-party claims. In light of these potential liabilities, companies should also review their insurance policies to determine whether they offer protection.
Internal Practices and Standards and Regulatory Compliance
Companies seeking to promote Responsible AI should develop internal standards to promote Responsible AI use. Companies should create policies that directly address internal use of AI as well as implement training programs regarding potential issues that AI may cause. At a minimum, these policies should be designed to help the company comply with applicable AI regulations and implement the requirements of these regulations when developing models. Management should also consider actively monitoring the use of AI within the company to ensure that these models are aligned with the organization’s interests. For instance, a company should consider establishing processes for mitigating bias, such as conducting third-party audits.21
Testing
After development and training, companies implementing Responsible AI should rigorously test their models for not only safety and fairness but also accuracy. Efforts to make AI more responsible may cause models to display high levels of inaccuracy. Likewise, reinforcement learning techniques to avoid bias may inadvertently result in systems that display bias in the opposite direction. Companies may perform adversarial testing (“red teaming”) to increase their systems’ robustness.22
Business Organization
Companies looking to implement Responsible AI should also consider how to structure their internal organizations to promote responsible AI use. While it may not be practical for all businesses to create safety and alignment teams, they can still delegate to particular individuals within the company responsibility to focus on these issues and monitor how computer scientists are deploying the tenets of Responsible AI.
Conclusion
Companies must balance the business imperative to develop and implement AI systems with the legal and ethical obligation to do so in a way that is safe, transparent, and ultimately beneficial.23 The Responsible AI frameworks discussed above can be helpful tools to do so.
- https://artificialintelligenceact.eu/.
- https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.
- https://eastasiaforum.org/2023/09/27/the-future-of-ai-policy-in-china/.
- https://www.responsible.ai/ai-vs-responsible-ai-why-is-it-important/.
- https://techcrunch.com/2023/09/15/answering-ais-biggest-questions-requires-an-interdisciplinary-approach/.
- https://www.microsoft.com/cms/api/am/binary/RE4pKH5.
- https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5cmFl?culture=en-us&country=us.
- https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1l5BO.
- https://openai.com/safety/preparedness.
- https://www.anthropic.com/uk-government-internal-ai-safety-policy-response/prioritising-research-on-risks-posed-by-ai.
- https://www.microsoft.com/cms/api/am/binary/RE4pKH5.
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/leading-your-organization-to-responsible-ai.
- https://www.mckinsey.com/capabilities/quantumblack/how-we-help-clients/generative-ai/responsible-ai-principles.
- https://docs.aws.amazon.com/whitepapers/latest/model-explainability-aws-ai-ml/interpretability-versus-explainability.html.
- https://artificialintelligenceact.eu/.
- https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.
- https://eastasiaforum.org/2023/09/27/the-future-of-ai-policy-in-china/.
- https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
- https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
- https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/eu-model-contractual-ai-clauses-pilot-procurements-ai.
- https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
- https://www.anthropic.com/news/frontier-threats-red-teaming-for-ai-safety.
- https://www.responsible.ai/ai-vs-responsible-ai-why-is-it-important/.
Stay Up To Date with Ropes & Gray
Ropes & Gray attorneys provide timely analysis on legal developments, court decisions and changes in legislation and regulations.
Stay in the loop with all things Ropes & Gray, and find out more about our people, culture, initiatives and everything that’s happening.
We regularly notify our clients and contacts of significant legal developments, news, webinars and teleconferences that affect their industries.