Artificial Intelligence

Emerging Approaches for Governing AI

We are now in a period of rapid innovation in the development and application of artificial intelligence (AI).

Advances in generative AI—which can produce images and text from brief written prompts—captured the public’s attention in late 2022. Rapidly advancing AI tools known as large language models (LLMs) enable deep analysis and the generation of new text, software code, images, and even ideas.

These technologies are entering the marketplace and are already being used by researchers to advance scientific discovery. Emerging AI technologies have the potential to accelerate research and development in medicine, agriculture, transportation, education, energy, and more.

But AI also raises understandable concerns that need to be addressed on several fronts. How can we ensure that these technologies are designed and used responsibly—and that the benefits of AI reach everyone? What principles and guardrails should guide the development and use of AI?

Initial Steps for AI Policymaking

Government has long played a role in protecting consumers through laws and regulations. AI developers, the larger business community, organizations, institutions, government, and consumers would all benefit from public policies that foster innovation through responsible AI while providing meaningful protections.

The federal government has taken initial steps to develop AI policies. In October 2022, the White House’s Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights. In early 2023, the National Institute of Standards and Technology (NIST) released the voluntary AI Risk Management Framework and launched the Trustworthy and Responsible AI Resource Center in March. The White House took another significant step in October 2023, issuing a detailed AI Executive Order. (See also the White House’s Fact Sheet summary of the Executive Order.)

In November 2023, the Administration announced the creation of the U.S. AI Safety Institute (USAISI), which will, according to the Department of Commerce, “facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.” While NIST has taken initial steps to organize USAISI, additional funding is needed from Congress to sustain momentum and meet all operational goals.

The federal government has also taken initial steps to expand access to AI resources through the creation of a National Artificial Intelligence Research Resource (NAIRR). The most robust uses of AI require access to tremendous computational power and data resources, which may be unavailable due to budgetary and technological constraints. The aim of the NAIRR is to provide AI computing power and data access to qualifying organizations that would otherwise be unable to fully benefit from AI. The National Science Foundation (NSF) launched an interagency NAIRR pilot program in January 2024, but legislation and funding are needed to realize the full vision of the NAIRR.

In June 2023, Senate Majority Leader Chuck Schumer presented a framework for Congress to develop regulations for AI. The framework does not endorse any specific legislation, but it calls for prioritizing key goals, such as supporting security and innovation. Throughout the fall of 2023, the Senate also convened a series of AI Insight Forums to help educate lawmakers about AI.

In July 2023, seven leading AI companies including Microsoft made voluntary commitments developed by the Biden-Harris Administration to advance safe, secure, and trustworthy AI. Subsequently, several other companies agreed to the commitments as well. In addition, in February 2024, 20 leading AI and platform companies announced their participation in a new Tech Accord to Combat Deceptive Use of AI in 2024 Elections.

Microsoft’s Five-Point Blueprint for Governing AI

In May 2023, Microsoft released a white paper that includes a five-point blueprint for addressing several current and emerging AI issues through public policy, law, and regulation. The blueprint was presented with the recognition that it would benefit from a broader, multi-stakeholder discussion and require deeper development.

A five-point blueprint for governing AI 
1. Implement and build upon new government-led AI safety frameworks 
2. Require safety brakes for AI systems that control critical infrastructure
3. Develop a broader legal and regulatory framework based on the technology architecture for AI
4. Promote transparency and ensure academic and public access to AI
5. Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology

We encourage you to access the white paper for a detailed discussion about each of the five points in the blueprint. The white paper also includes a section entitled, “Responsible by Design: Microsoft’s Approach to Building AI Systems that Benefit Society.”

Microsoft also committed to supporting innovation and competition through a set of AI Access Principles. Under these principles, the company will make AI tools and development models widely available as well as provide flexibility for developers to use Microsoft Azure for AI innovations. The principles also include commitments to support AI skilling and sustainability.

Microsoft Principles and Recommendations

Microsoft has acknowledged that while new laws and regulations are needed, tech companies must also act responsibly. To this end, the company has adopted six AI principles to guide its approach to this powerful technology:

  • Fairness
  • Reliability and Safety
  • Privacy and Security
  • Inclusiveness
  • Transparency
  • Accountability

Microsoft has also endorsed the development of policy guardrails built around the following goals:

  • Ensuring that AI is built and used responsibly and ethically.
  • Ensuring that AI advances international competitiveness and national security.
  • Ensuring that AI serves society broadly, not narrowly.

Voices for Innovation will support public policies that help achieve these goals. We encourage our members to explore this issue in greater detail using the resources below. Technology professionals, including VFI members, can bring their valuable expertise to discussions about AI—with family, friends, colleagues, and policymakers.

We will continue to listen closely to this discussion and will keep our membership informed about proposed AI policies as they are developed in Washington and state capitals.