Want to engage in policy advocacy in 2025?
Major tech policies could advance in 2025 in Congress and state capitals. Want to be part of the conversation? Sign up for Voices for Innovation today!
Secure Future Initiative Update
Last week, Microsoft published a progress report on its Secure Future Initiative (SFI). Launched late last year, SFI now has the equivalent of 34,000 full-time engineers dedicated to the program—making it the largest cybersecurity engineering effort in history.
Microsoft is also supporting the U.S. Cybersecurity and Infrastructure Security Agency's (CISA) Secure by Design pledge and integrating recommendations from the Cyber Safety Review Board (CSRB) to strengthen its cybersecurity approach and enhance resilience. A new Microsoft Security blog summarizes the SFI progress report, and The Verge offers additional coverage of the announcement.
AI Investments for the Global South
In conjunction with the UN General Assembly in New York, the U.S. State Department held an event focused on the role of AI in advancing sustainable development. Secretary of State Antony Blinken was joined by eight leading AI companies, including Microsoft, OpenAI, and Anthropic, to announce a $100 million investment to bring AI to countries in the Global South.
Blinken underscored that expanding AI access in developing nations would help tackle major global challenges such as food insecurity, climate change, and the spread of infectious diseases. CNET's coverage of the event can be found here, along with a transcript and video of the event here.
Senate Hearing Focuses on Election Security in the Era of AI
Last month, the U.S. Senate Select Committee on Intelligence held a hearing on election security with testimony from tech leaders, including Microsoft Vice Chair and President Brad Smith. His written testimony—including policy recommendations—appears in this blog, “Securing US Elections from Nation-State Adversaries.”
The hearing focused on attempts by several nations to interfere with U.S. elections through online deception. Smith urged Congress to enact a deepfake fraud statute; require AI system providers to use state-of-the-art provenance tooling to label synthetic content; and pass the bipartisan Protect Elections from Deceptive AI Act.
Taking Steps to Stop Non-Consensual Intimate Imagery
Deepfake, photorealistic AI-generated imagery can be used for fraud, political manipulation, and other abuses, including the creation of non-consensual intimate imagery (NCII). Addressing these challenges requires coordinated action by government, the tech sector, and civil society organizations.
Microsoft recently announced a partnership with StopNCII to pilot a program to detect, remove, and prevent NCII on its Bing search engine. Microsoft currently uses hashing, or digital fingerprinting, technology to prevent more than 268,000 exploitative images from appearing on Bing. You can learn more about this development, and how to report NCII, by visiting this Microsoft on the Issues blog.
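For readers curious about how hash-based matching works in principle, the sketch below is a minimal, hypothetical illustration in Python. It uses a standard cryptographic hash from the standard library and a made-up placeholder fingerprint; production systems such as StopNCII and PhotoDNA instead rely on perceptual hashes that still match after resizing or re-encoding, which this simplified example does not attempt.

```python
import hashlib
from pathlib import Path

# Hypothetical fingerprints (hex digests) of known harmful images.
# In real systems, only these fingerprints are shared, never the images.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_path: Path) -> str:
    """Compute a simple fingerprint of an image file's raw bytes.

    Note: a cryptographic hash only matches byte-identical files;
    perceptual hashing, used in practice, tolerates cropping,
    resizing, and re-encoding.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def should_block(image_path: Path) -> bool:
    """Return True if the image matches a known fingerprint."""
    return fingerprint(image_path) in KNOWN_IMAGE_HASHES
```

The key design point is that platforms compare fingerprints rather than exchanging the underlying imagery, which lets victims report content without re-sharing it.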


