- The White House released an “AI Bill of Rights,” a set of principles for companies involved in artificial intelligence.
- The White House sought advice and comments from companies, researchers and civil rights groups.
- Google’s data collection practices have been a major concern among researchers and civil rights groups.
While the White House was mulling its new AI Bill of Rights, several organizations wrote to the administration with comments singling out Google as a company with troubling practices in artificial intelligence and personal data collection, newly released documents show.
The Artificial Intelligence Bill of Rights, released Tuesday by the White House Office of Science and Technology Policy, is not an enforceable law. Rather, it is a call for companies and institutions to create ethical rules for using automated systems and a set of guiding principles for doing so.
The bill says people should know when automated systems are being used, have the right to speak to a human, have the right to opt in or out, have control over their data, and be protected from discrimination by those systems. To create the bill, OSTP collected comments from 130 researchers, companies, and civil rights organizations, all of which were published.
Because Google holds vast amounts of people’s personal data, that data creates troubling use cases and potential for abuse, several civil rights groups said. Companies like Google invest so much in targeted advertising that it leads to a “vicious cycle of collecting personal data,” said New America’s Open Technology Institute.
Google, meanwhile, said in its comments that it was aware of the risks and was responding accordingly through the company’s internal AI, privacy and security policies.
Google also said it has practical uses for biometric information, such as using facial recognition to unlock Pixel smartphones, connect to Nest Hub Max devices and confirm credit card information, or collecting voice data to “enhance communication through interpretation of facial or voice expressions” in Android and home-assistant products.
The company said it uses strong security to protect biometric data, tests its technology on different demographics and carefully examines use cases.
Feedback collected by OSTP showed a clear divide. Civil rights organizations such as the American Civil Liberties Union, the National Association for the Advancement of Colored People, and the Electronic Frontier Foundation have raised concerns about data collection by companies, the opaque use of that data, and the potential for subsequent harm.
On the other hand, companies — including Google, Microsoft, Palantir and facial recognition companies like Clearview AI and CLEAR — have assured the White House that they value the responsible use of artificial intelligence and already operate ethically. (Several major companies, including Apple and Amazon, did not submit responses.)
Here’s a look at the concerns many researchers and groups have about Google’s data collection and targeted advertising.
Google declined to comment.
Data collection and surveillance
Google’s data collection gives companies little incentive “to rein in these invasive practices and give consumers agency over their data,” including biometric information, the Open Technology Institute said.
Others criticized Google’s data collection practices for advertising — the company’s biggest source of revenue.
“These services and algorithms that deliver content to users, whether it’s advertising or recommendations, have been found to put users in content bubbles and serve content that can be misleading and addictive,” wrote Brian Krupp, an associate professor of computer science at Baldwin Wallace University.
Some expressed concern specifically about biometric and audio surveillance. Google said it is “very cautious about deploying biometric identification in products and services” due to several risk factors:
- “Lower accuracy across genders, skin tones, ages.”
- “Privacy and security risks” if information is compromised or exposed.
- When “combined with tracking,” the information “can be used to track individuals’ movements over time.”
- “It could be used to profile people’s preferences and personal thoughts in ways that violate their privacy.”
- “Fake matches can lead to people being wrongly accused of crimes.”
- “Mismatch may result in denial of service.”
Groups also raised concerns about how that technology powers home-assistant products. The National Fair Housing Alliance said home devices like Google Home and Amazon’s Alexa can easily record voices “even when not asked to,” which is “worrying” given law enforcement’s interest in using biometric information.
“In the US, surveillance is concentrated among racial and ethnic minorities, particularly Black and Latino men,” the NFHA said.
Stereotypes and discrimination
Many groups expressed concern that Google’s technology perpetuates harmful stereotypes and, at times, real-world discrimination and harm.
Merve Hickok, a lecturer at the University of Michigan, pointed to a 2015 study that found gender discrimination in Google’s ad delivery: a career-guidance service’s ads for high-paying jobs were shown far more often to men than to women.
The NFHA also said that if biometric technology from companies like Google were incorporated into properties, landlords could use it to “discriminate against protected classes.”
Google itself noted that using automated systems to analyze biometric data and “extract criminal intent” could lead to “unfair interference or create escalating dynamics that lead to harm.”
Do you have a tip? Contact this reporter at [email protected] or [email protected] or via the secure messaging app Signal at +1 (785) 813-1084. Reach out using a nonwork device.