Executive Briefing February 1, 2019


The Washington Post The Cybersecurity 202: This is the Senate Homeland Security Committee’s top cyber priority this year
Sen. Ron Johnson’s top cybersecurity goal as chair of the Senate’s Homeland Security Committee this Congress is to make it more attractive for cybersecurity workers to stay in government jobs rather than flee to the private sector. If the government can’t keep and recruit top workers, Johnson (R-Wis.) worries that the United States won’t be able to defend itself against sophisticated hackers from U.S. adversaries such as China or Russia — and might not be able to help critical industry sectors such as energy plants and airports secure their networks.

Yahoo Finance These people are trying to make Congress smarter about tech policy
Since 2016, a group called TechCongress has been trying to close that expertise gap by recruiting techies for one-year fellowships that place them in House and Senate offices, where they can help research and draft bills. The class of 2016 had only two fellows, but increased funding from the Ford Foundation, the Knight Foundation and the Democracy Fund helped the just-announced class of 2019 expand to eight.


Seattle Times To protect your personal data, should Washington state restrict facial recognition and adopt European-style privacy laws?
Washington lawmakers have introduced a series of bills that would bring European-style privacy and transparency regulations to how personal data is collected, analyzed or sold by companies and the government. Last year, legislators passed a net-neutrality law in the face of the federal government’s rollback of rules to protect customers from changes by broadband companies offering internet service. Washington became the first state to approve such a law. But when it comes to the collection and use of personal data, Washington — and the United States — remains as ungoverned as the Wild West.

National Law Review New Washington State Privacy Bill Incorporates Some GDPR Concepts
A new bill, titled the “Washington Privacy Act,” was introduced in the Washington State Senate on January 18, 2019. If enacted, Washington would follow California to become the second state to adopt a comprehensive privacy law. Similar to the California Consumer Privacy Act (CCPA), the Washington bill applies to entities that conduct business in the state or produce products or services intentionally targeted to Washington residents, and it includes similar, though not identical, size triggers.

Broadcasting + Cable Microsoft, Others Combine to Push Rural Broadband Solutions
Microsoft, which has been pushing the use of TV white spaces to close the rural broadband divide, has joined with C Spire, Airspan Networks, Nokia and Siklu to form a coalition of the willing to come up with a “disruptive blueprint to close the adoption gap.” The consortium will use Mississippi and Alabama as testbeds for driving affordable internet access and adoption. The effort is launching under something of a cone of silence: C Spire said it will begin with a workshop in New Orleans Jan. 29-31 that will be closed to the press, though Microsoft and the other members will be in attendance.

MIT Technology Review Americans want to regulate AI but don’t trust anyone to do it
In 2018, several high-profile controversies involving AI served as a wake-up call for technologists, policymakers, and the public. The technology may have brought us welcome advances in many fields, but it can also fail catastrophically when built shoddily or applied carelessly. It’s hardly a surprise, then, that Americans have mixed support for the continued development of AI and overwhelmingly agree that it should be regulated, according to a new study from the Center for the Governance of AI and Oxford University’s Future of Humanity Institute.

Engadget Microsoft and MIT can detect AI ‘blind spots’ in self-driving cars
Self-driving cars are still prone to making mistakes, in part because the AI training can only account for so many situations. Microsoft and MIT might just fill in those gaps in knowledge — they’ve developed a model that can catch these virtual “blind spots,” as MIT describes them. The approach has the AI compare a human’s actions in a given situation to what it would have done, and alters its behavior based on how closely it matches the response. If an autonomous car doesn’t know how to pull over when an ambulance is racing down the road, it could learn by watching a flesh-and-bone driver moving to the side of the road.
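The idea described above can be illustrated with a minimal sketch: compare what a human demonstrator did in each situation against what the policy would do, and flag situations where they frequently disagree. All function and variable names here are illustrative, not the actual Microsoft/MIT implementation.

```python
# Hedged sketch of the "blind spot" idea: flag situations where a policy's
# chosen action disagrees with human demonstrations more often than a threshold.
from collections import defaultdict

def find_blind_spots(demonstrations, policy, disagreement_threshold=0.5):
    """demonstrations: list of (situation, human_action) pairs.
    policy: function mapping a situation to the action the AI would take.
    Returns a dict of situations where the disagreement rate exceeds the
    threshold -- candidate "blind spots" for retraining."""
    stats = defaultdict(lambda: [0, 0])  # situation -> [disagreements, total]
    for situation, human_action in demonstrations:
        stats[situation][1] += 1
        if policy(situation) != human_action:
            stats[situation][0] += 1
    return {s: d / n for s, (d, n) in stats.items()
            if d / n > disagreement_threshold}

# Toy usage: the policy never pulls over, but human drivers always do
# when an ambulance approaches -- so that situation surfaces as a blind spot.
demos = [("ambulance_behind", "pull_over")] * 4 + [("clear_road", "drive")] * 4
policy = lambda situation: "drive"
print(find_blind_spots(demos, policy))  # {'ambulance_behind': 1.0}
```

The real model is more sophisticated (it reasons about uncertainty rather than raw disagreement counts), but the core signal is the same: places where human behavior diverges from the learned policy mark gaps in training coverage.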

CNBC IBM hopes 1 million faces will help fight bias in facial recognition
IBM thinks the data being used to train facial recognition systems isn’t diverse enough. The tech giant released a trove of data containing 1 million images of faces taken from a Flickr dataset with 100 million photos and videos. The images are annotated with tags related to features including craniofacial measurements, facial symmetry, age and gender. Researchers at the company hope that these specific details will help developers train their artificial intelligence-powered facial recognition systems to identify faces more fairly and accurately.

WIRED San Francisco could be first to ban facial recognition tech
San Francisco could become the first US city to ban its agencies from using facial recognition technology. Aaron Peskin, a member of the city’s Board of Supervisors, proposed the ban Tuesday as part of a suite of rules to enhance surveillance oversight. In addition to the ban on facial recognition technology, the ordinance would require city agencies to gain the board’s approval before buying new surveillance technology, putting the burden on city agencies to publicly explain why they want the tools as well as the potential harms.

The Verge Gender and racial bias found in Amazon’s facial recognition technology (again)
As facial recognition systems become more common, Amazon has emerged as a frontrunner in the field, courting customers around the US, including police departments and Immigration and Customs Enforcement (ICE). But experts say the company is not doing enough to allay fears about bias in its algorithms, particularly when it comes to performance on faces with darker skin. The latest cause for concern is a study published this week by the MIT Media Lab, which found that Rekognition performed worse when identifying an individual’s gender if they were female or darker-skinned. In tests led by MIT’s Joy Buolamwini, Rekognition made no mistakes when identifying the gender of lighter-skinned men, but it mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time.

TechCrunch Facebook pays teens to install VPN that spies on them
Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app, which Apple banned in June and which was removed in August. A TechCrunch investigation confirms that Facebook sidesteps the App Store and pays teenagers and adults to download the Research app and give it root access to network traffic, in what may be a violation of Apple policy, so the social network can decrypt and analyze their phone activity. Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.

Fast Company This U.N. map tracks data protection laws around the world
To mark Data Privacy Day, the United Nations Conference on Trade and Development (UNCTAD) has released a map showing how the world is–or is not–protecting the online privacy rights of the human beings that populate the planet. While we prefer to mark holidays with celebratory food, an interactive map and cyberlaw tracker is a pretty cool way to do it, too.

World Economic Forum We have to fight for a fairer tech industry for women
More needs to be done to address gender inequality. This is especially important at a time when our workplaces are on the cusp of dramatic change driven by technological advances in areas such as artificial intelligence (AI) and blockchain. While immensely transformational in many ways, the development of automation and machine learning still poses risks to gender equality. Through hidden biases and the growth of a field that today severely lacks female representation, we risk impeding progress on gender equality across tech and AI-dependent industries.

Bloomberg Google Shareholders and Workers Call on Board to Fix ‘Diversity Crisis’
Over the past year, employees at Alphabet Inc.’s Google have protested over worker rights, a military contract, and the handling of sexual misconduct. Now, along with shareholders, they’ve written a resolution to Alphabet’s board, calling for reform in areas including racial and gender diversity, and asking the board to consider tying these metrics to executive bonuses.

Brookings Institution

  • Report on artificial intelligence and education: The way we use education to prepare our next generation of leaders will directly determine whether the U.S. retains its leadership in critical fields of relevance in the emerging digital environment. Without a sufficiently educated population and workforce, the U.S. likely will slip behind other states for whom AI/ET is not only meant for improved social organization, but for strategic superiority, and ultimately digital and physical conquest. (Brookings Reports – Why we need to rethink education in the artificial intelligence age, Jan. 31, 2019)
  • Report on artificial intelligence and consumer finance: From AI-driven chatbots to sophisticated wealth robo advisors, AI applications have clear potential to expand opportunities for consumers living at the margin. However, experts have yet to discuss the relevance of AI for consumer financial protection in earnest, including the implications of AI solutions that could better protect consumers. (Brookings Reports – How artificial intelligence affects financial consumers, Jan. 31, 2019)

Information Technology & Innovation Foundation

  • Blog on facial analysis versus facial recognition: According to multiple news sources, including The New York Times, Amazon is peddling racially biased facial recognition software to the unsuspecting public. But these allegations are more fiction than fact, as they confuse and conflate two similarly named, but otherwise very different technologies—facial recognition and facial analysis. A closer look at the details of the story shows that the headlines are not supported by the evidence. (ITIF Blog – Note to Press: Facial Analysis Is Not Facial Recognition, Jan. 27, 2019)

Note: Voices for Innovation regularly shares a range of opinion articles and press releases from organizations in and publications covering tech policy. These pieces are meant to educate our audience, not to endorse specific platforms or bills.