Taking Steps to Stop Non-Consensual Intimate Imagery

Deepfakes, or photorealistic AI-generated imagery, can be used for fraud, political manipulation, and other abuses, including the creation of non-consensual intimate imagery (NCII). Addressing these challenges requires coordinated action by government, the tech sector, and civil society organizations.

Yesterday, Microsoft announced that it is partnering with StopNCII to pilot a program to detect, remove, and prevent NCII on the company’s search engine, Bing. Microsoft currently prevents more than 268,000 exploitative images from appearing on Bing using hashing, or digital fingerprinting, technology. You can learn more about this development and how to report NCII by visiting the Microsoft on the Issues blog.
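The hash-matching idea behind this kind of blocking can be sketched in a few lines. This is a simplified illustration only: it uses an exact cryptographic hash (SHA-256), whereas systems like StopNCII use perceptual hashes that also match slightly altered copies of an image, and all names and data here are hypothetical.

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest acting as the image's digital fingerprint.

    Illustrative only: real NCII-matching systems use perceptual
    hashing so that resized or re-encoded copies still match.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def should_block(image_bytes: bytes, blocklist: set[str]) -> bool:
    """Check an image's fingerprint against a set of known hashes."""
    return fingerprint(image_bytes) in blocklist


# Hypothetical flow: the image is hashed where it resides, and only
# the fingerprint (never the image itself) is compared to the list.
blocklist = {fingerprint(b"known-harmful-image-bytes")}
print(should_block(b"known-harmful-image-bytes", blocklist))  # True
print(should_block(b"some-other-image", blocklist))           # False
```

A key property of this design is that participating services exchange only fingerprints, never the underlying images.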

Thank you for reading. You’ll find our roundup of tech policy news and a featured podcast below.

This Week in Washington 

  • CyberScoop: A U.S. Senate bill meant to safeguard Americans’ healthcare data now has a companion in the House. The Healthcare Cybersecurity Act would require the Cybersecurity and Infrastructure Security Agency (CISA) to collaborate with the Department of Health and Human Services (HHS) on measures to strengthen cyber defenses and provide non-federal healthcare organizations with resources. 
     
  • Bleeping Computer: The FBI, CISA, the Multi-State Information Sharing and Analysis Center (MS-ISAC), and HHS have released a joint advisory regarding RansomHub ransomware. Since its establishment seven months ago, the group has breached more than 200 victims in the U.S. and has claimed responsibility for hacking organizations across a range of infrastructure sectors, from Rite Aid to Frontier Communications. 
     
  • Reuters: The U.S., the United Kingdom, and the European Union are preparing to sign the first legally binding international AI treaty, an AI Convention negotiated by 57 countries and adopted earlier this summer. 
     
  • Nextgov: The federal Office of the National Cyber Director announced a hiring sprint to fill almost half a million open cybersecurity jobs across the country. The recruitment effort is being called “Service to America” to reflect the importance of cybersecurity to the nation.

Article Summary

  • Wall Street Journal: As the 2024 presidential election nears, U.S. voters are being targeted by Chinese government-backed trolls who have assumed fake identities of politically engaged voters on social media. A recent report says this propaganda push is meant to undermine confidence in the election. 
     
  • Politico: Based on the last two elections, the most vulnerable targets for foreign hackers are voter registration databases. In fact, many states lack a uniform or rigorous system to verify what goes into Election Day software and whether it is secure.
     
  • BBC: Researchers at the University of Leeds have trained an AI system to review the health records of over two million people. With the help of AI, researchers found that in many cases patients had undiagnosed conditions or lacked the medication needed to decrease their risk.
     
  • Harvard Business Review: Healthcare companies are using AI to help identify at-risk patients and decrease the gap in healthcare inequalities. Some primary care networks are utilizing AI to help make sense of complex patient data. Others are using AI as a way to minimize communication barriers by having it act as a translator and liaison. 
     
  • Mississippi Today: The federal government approved Mississippi’s proposal for the Broadband Equity, Access, and Deployment program, which means the state can now request $1.2 billion in federal money. This broadband internet expansion can help reach an estimated 300,000 unserved homes and businesses in Mississippi. 
     
  • Washington Post: The state of New Mexico filed a lawsuit against Snap Inc., alleging that the company’s Snapchat app is a “breeding ground” for predators looking to collect sexually explicit images of children. In a month-long investigation, the New Mexico Department of Justice found evidence that the app suggests accounts owned by strangers to underage users, who are then contacted and urged to trade explicit images. 

Featured Podcast