Microsoft has reiterated its ban on U.S. police departments using generative AI for facial recognition through Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI’s technology.
The Azure OpenAI Service terms of service now explicitly prohibit integrations with OpenAI’s current, and possibly future, image-analysing models from being used “by or for” police departments in the United States.
A separate new bullet point covers “any law enforcement globally,” expressly forbidding the use of “real-time facial recognition technology” on mobile cameras, such as dashcams and body cameras, to attempt to identify a person in “uncontrolled, in-the-wild” environments.
The policy changes come a week after Axon, a maker of technology and weapons for law enforcement and the military, unveiled a new product that uses OpenAI’s GPT-4 generative text model to summarise audio from body cameras. Critics were quick to point out the potential pitfalls, including the introduction of racial biases from the training data (particularly concerning given that people of colour are far more likely than their white counterparts to be stopped by police) and hallucinations, since even today’s most advanced generative AI models tend to fabricate facts.
It is unclear whether Axon used GPT-4 via Azure OpenAI Service and, if so, whether the updated policy was a response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its own APIs. We’ve contacted OpenAI, Microsoft and Axon; we’ll update this post if we hear back.
The new terms do leave Microsoft some wiggle room, however.
The complete ban on Azure OpenAI Service use applies only to police in the United States, not international law enforcement. And while the terms forbid facial recognition by U.S. police, they do not cover facial recognition performed with stationary cameras in controlled environments, such as a back office.
That is consistent with the recent approach of Microsoft and its close partner OpenAI to AI-related law enforcement and defence contracts.
Bloomberg reported in January that OpenAI, reversing its earlier ban on selling its AI to militaries, is working with the Pentagon on a number of projects, including cybersecurity capabilities. Elsewhere, according to The Intercept, Microsoft has proposed using OpenAI’s image-generating tool, DALL-E, to help the U.S. Department of Defense (DoD) build software for executing military operations.
In February, Microsoft brought Azure OpenAI Service, with additional compliance and management features aimed at law enforcement and other government organisations, to its Azure Government offering. In a blog post, Candice Ling, SVP of Microsoft’s government-focused subsidiary Microsoft Federal, pledged that Azure OpenAI Service would be “submitted for additional authorisation” to the DoD for workloads supporting DoD missions.