
Microsoft bans US police from using enterprise AI for facial recognition.

Microsoft has reaffirmed its ban on U.S. police departments using generative AI for facial recognition through Azure OpenAI Service, its fully managed, enterprise-focused wrapper around OpenAI's technology.

The Azure OpenAI Service terms of service now explicitly bar integrations with OpenAI's current and possibly future image-analyzing models from being used "by or for" police departments in the United States.

A separate new bullet point covers "any law enforcement globally," expressly forbidding the use of "real-time facial recognition technology" on mobile cameras, such as dashcams and body cameras, to attempt to identify a person in "uncontrolled, in-the-wild" environments.

The policy changes come a week after Axon, a maker of technology and weapons for law enforcement and the military, unveiled a new product that uses OpenAI's GPT-4 text-generating model to summarize body camera audio. Critics were quick to point out the potential pitfalls, including the introduction of racial biases from the training data (particularly concerning given that people of color are far more likely to be stopped by police than their white counterparts) and the possibility of hallucinations, as even today's most advanced generative AI models tend to fabricate facts.

It is unclear whether Axon used GPT-4 via the Azure OpenAI Service and, if so, whether the revised policy was a response to Axon's product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We've contacted OpenAI, Microsoft, and Axon; if we hear back, we'll update this page.

With the new terms, Microsoft has some flexibility.

The complete ban on Azure OpenAI Service use applies only to U.S. police, not police forces abroad. And although the terms forbid U.S. police from using facial recognition, they do not cover facial recognition performed with stationary cameras in controlled environments, such as a back office.

That aligns with Microsoft’s current strategy for AI-related defense and law enforcement contracts with its close partner OpenAI.

Bloomberg reported in January that OpenAI, reversing its earlier prohibition on selling its AI to militaries, is collaborating with the Pentagon on a number of projects, including cybersecurity capabilities. Elsewhere, according to The Intercept, Microsoft has proposed using OpenAI's image-generating tool, DALL-E, to help the Department of Defense (DoD) build software for carrying out military operations.

In February, Microsoft brought the Azure OpenAI Service, with additional compliance and management tools aimed at law enforcement and other government organizations, to its Azure Government offering. Candice Ling, SVP of Microsoft's government-focused subsidiary, Microsoft Federal, pledged in a blog post that Azure OpenAI Service would be "submitted for additional authorization" to the DoD for workloads supporting DoD missions.

Author: Juliet P.

