Earlier this week, Microsoft published a detailed report, “Protecting the Public from Abusive AI-Generated Content,” that includes policy recommendations to combat deepfake fraud. In a blog post about the issue, Microsoft Vice Chair and President Brad Smith notes that “AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation—especially to target kids and seniors.” While the tech sector and non-profits have taken steps to address this challenge, new laws are also needed to fight abusive “synthetic” content. Coverage of this development can be found at Bloomberg, The Washington Post, The Verge, and Redmond Magazine.
You’ll find additional tech policy news and a featured podcast below. Thank you for reading.
This Week in Washington
- New York Times: In an overwhelming 91-3 vote, the U.S. Senate passed bipartisan legislation focused on improving safety and privacy requirements for children and teenagers on social media. The legislation would require social media platforms to protect minors’ mental health and keep them safe from abuse, including sexual exploitation; companies could be held liable for failing to filter out content or limit features that could lead to those harms. However, the legislation faces an uncertain fate in the U.S. House, which is on recess until September and where the bills will face continued pushback from tech companies and other groups concerned about free speech.
- Washington Post: Some members of the U.S. Senate are working with influencers to help push their policy initiatives and educate the public on their legislative efforts. The Post follows Hayley Paige, a wedding dress designer and influencer, who will be testifying at a Senate hearing on banning noncompete agreements. Paige’s online audience—more than 1.1 million followers—was part of the reason she was chosen to testify.
- StateScoop: The Federal Communications Commission (FCC) recently announced the Schools and Libraries Cybersecurity Pilot Program. The three-year pilot will provide up to $200 million in funding from the Universal Service Fund, and applications open August 29.
- Nextgov: Earlier this week, the U.S. Senate Homeland Security Committee advanced the Streamlining Federal Cybersecurity Regulations Act in a 10 to 1 vote. The act aims to streamline federal cybersecurity requirements by creating an interagency group within the White House’s Office of the National Cyber Director that would harmonize overlapping cyber regulatory regimes and test new regulatory frameworks.
Article Summaries
- CNBC: The European Union’s first law governing artificial intelligence came into effect this week after being approved earlier this year. Organizations with customer or business ties to the EU face potentially large fines if they do not comply with rules designed to protect individual rights. CNBC examines the EU’s AI Act and its impact on American tech companies.
- Associated Press: The Associated Press provides analysis of Vice President Kamala Harris and former President Donald Trump’s past statements, actions, and potential future policy outlooks on artificial intelligence.
- The Hill: Misinformation and disinformation have spread rapidly after weeks of fast-paced news that included President Biden withdrawing from the 2024 general election and the attempted assassination of former President Trump.
- Bleeping Computer: Health savings account administrator HealthEquity fell victim to a cybersecurity incident that compromised the information of more than four million people.
- Axios: According to an IBM report, companies’ recovery expenses following data breaches have increased in the last year. Companies are losing money to post-breach investigations, potential lawsuits, and revenue lost during operational downtime.
Featured Podcast
- Your Undivided Attention
AI has rapidly opened new frontiers in public health and medicine and accelerated biological research. The technology has enabled great strides in these fields, but it has also made it easier for those with malicious intent to manufacture new biological threats. In this episode, the hosts are joined by Kevin Esvelt, an MIT professor and director of the Sculpting Evolution Group, to discuss this dual-use nature of AI. (“Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt” – July 18, 2024)