European Union Moves Forward with AI Act

Last Friday, European Union officials reached a broad agreement on rules to regulate artificial intelligence. Once finalized, the AI Act will establish a wide range of guardrails aimed at protecting consumers and mitigating potential harms caused by this rapidly emerging technology. Coverage of this development is available from the Associated Press (with additional reporting here), the New York Times, and the MIT Technology Review.

The U.S. has begun to develop AI policies as well, with the White House issuing an AI Executive Order in late October, and the Senate holding a series of AI Insight Forums throughout the fall. Congress has introduced several legislative proposals on AI, and we expect action on this issue in 2024. We encourage you to stay engaged with Voices for Innovation during this important period of tech policymaking.

We’ll be back next week to wrap up the year. Thank you for reading!

This Week in Washington 

  • The Hill: The head of the White House’s “cancer moonshot” initiative discussed how AI could be used to combat health misinformation and give patients and caregivers the trusted and accurate information needed to drive their care.
     
  • Lawfare and CNN: The U.S. Government Accountability Office released a report detailing how AI has been, and may be, used across the federal government. The report, which focused on non-military agencies, also noted which programs can be disclosed and which are considered “sensitive.”
     
  • Semafor: The U.S. Department of Energy opened the Office of Critical and Emerging Technology, which will coordinate the government’s support for and use of AI and other cutting-edge technologies to address challenges such as climate change, pandemics, and national security. 
     
  • Nextgov/FCW: The first CHIPS for America grant, part of a $52 billion program to revitalize semiconductor research, development, and manufacturing in the U.S., was made to BAE Systems Inc. The defense contractor will receive $35 million this week to modernize a microelectronics center in Nashua, New Hampshire, that produces chips for the F-35 fighter jet, among other defense projects.
     
  • Axios: The U.S. Senate confirmed Harry Coker, Jr. as national cyber director in a 59-40 vote, making him the second person to hold the position permanently.
     
  • The Verge: The Federal Trade Commission (FTC) is warning the public against scanning unfamiliar QR codes, stating that bad actors often place malicious codes in inconspicuous locations and use them to collect money, logins, and other sensitive information.
     
  • Fierce Telecom: The Federal Communications Commission (FCC) voted to reform its pole attachment rules and policies to support faster resolution of disputes and provide attachers with more information about the poles they plan to use for broadband expansion.

Article Summary

  • Wired: Members of OpenAI’s research team led by Ilya Sutskever, an OpenAI cofounder, chief scientist, and board member, spoke with Wired about a new research paper on experiments that let a weaker AI model guide the behavior of a much smarter one, part of the team’s effort to keep hypothetical super-intelligent AI in check.
     
  • POLITICO: Shamaine Daniels, a Democratic congressional candidate in Pennsylvania’s 10th district, is running the first campaign to use a new AI tool that places automated phone calls to voters and offers to answer questions about Daniels, her policy positions, and her opponent in a robotic female voice.
     
  • The Associated Press: Pope Francis called for an international treaty on AI, using the papacy’s annual World Day of Peace document to insist that the development and deployment of AI guarantee fundamental human rights, promote peace, and guard against disinformation, discrimination, and distortion.
     
  • Reuters: To protect minors online, the Royal Spanish Mint is developing technology that will allow Internet users to verify that they meet the minimum required age to access social media without giving away any personal data.
     
  • The New York Times: Microsoft announced that the company will remain neutral if any of its employees decide to unionize, and will collaborate with the AFL-CIO to resolve issues that arise from the adoption of AI in the workplace.

Featured Podcast

Microsoft

  • Pivotal with Hayete Gallot
    Since the beginning of her career in law, Natasha Crampton has been interested in the areas where law intersects with technology and their effects on society at large. And as technology has evolved, so has her focus. Today, Natasha is Microsoft’s Chief Responsible AI Officer, and no two days are the same. Internally, she works shoulder-to-shoulder with engineering teams to build new AI technologies in a way that grounds AI’s awesome capabilities within Microsoft’s Responsible AI Standard. Outside the company, Natasha contributes to a broad industry, community, and government discussion about the laws and standards needed to bring AI to life in a responsible way. (Building a responsible AI to partner with humankind – December 12, 2023)