Google says users may apply its AI in “high-risk” fields as long as human oversight is present

Google has updated its terms to clarify that its generative AI tools can be used to make “automated decisions” in “high-risk” areas, such as healthcare, as long as human oversight is involved.

The company’s revised Generative AI Prohibited Use Policy, released on Tuesday, allows customers to use its AI for decisions that may have a “material detrimental impact on individual rights.” With human supervision, Google’s AI can assist in making decisions related to employment, housing, insurance, social welfare, and other sensitive domains.

In the context of AI, automated decisions are those a system makes based on data, whether explicitly provided or inferred. Such decisions might include, for instance, approving a loan or screening a job candidate.

Previously, Google’s terms seemed to prohibit high-risk automated decision-making with its generative AI outright. However, Google clarified to TechCrunch that its tools have always been permitted for such uses, provided there was human oversight. A Google spokesperson said the human-supervision requirement has always been part of its policy for high-risk scenarios; the recent update simply reorganizes and specifies these terms to make them clearer for users.

By comparison, Google’s competitors OpenAI and Anthropic have stricter guidelines for high-risk automated decision-making. OpenAI bans its AI from being used for decisions involving credit, employment, housing, education, social scoring, and insurance. Anthropic permits its AI to be used in areas like law, healthcare, and insurance, but requires supervision by a “qualified professional” and mandates that customers disclose they are using AI for such purposes.

AI systems making critical decisions have faced scrutiny due to concerns about biased outcomes. Studies reveal that AI used in credit and mortgage approvals can reinforce historical discrimination. Human Rights Watch, for example, has called for a ban on social scoring systems, citing risks to privacy, access to social benefits, and the potential for profiling.

Regulatory oversight varies globally. The EU’s AI Act imposes stringent requirements on high-risk systems, including mandatory registration, quality checks, human oversight, and incident reporting. In the U.S., Colorado has enacted legislation requiring developers to disclose details about high-risk AI systems and their limitations. Similarly, New York City requires that automated employment decision tools undergo a bias audit within the year prior to use.
