
OpenAI modifies its policy to permit military use.

In a follow-up statement, OpenAI confirmed that the language was changed to accommodate military customers and projects that the company approves of.

Our policy does not allow our tools to be used for purposes such as developing weapons, surveilling communications, harming others, or destroying property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure the open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under the "military" category in our previous policies. So the goal of our policy update is to provide clarity and make these discussions possible.

The original story follows:


In an unannounced update to its usage policy, OpenAI has opened the door to military applications of its technologies. The policy no longer explicitly forbids the use of its products for "military and warfare" purposes, and OpenAI did not dispute that the language prohibiting such uses has been removed.

The Intercept first noticed the change on January 10, when it appeared to have gone live.

Unannounced changes to policy wording happen all the time in tech, as the language evolves along with the products it governs, and OpenAI is clearly no exception. In fact, the company's recent announcement that its user-customizable GPTs would be rolled out publicly, alongside a vaguely worded monetization policy, almost certainly demanded some changes.

But the change to the no-military policy can hardly be attributed to this one new product. Nor can the removal of "military and warfare" credibly be claimed to make the text merely "clearer" or "more readable," as OpenAI's statement about the update suggests. It is a substantive, consequential change of policy, not a restatement of the old one.

To be sure, the entire document has been rewritten; whether it is more readable is largely a matter of taste. I happen to find a bulleted list of clearly disallowed practices more readable than the broader guidelines that replaced them, but OpenAI's policy writers evidently disagree. If the new language gives them more latitude in judging a practice that was previously flatly prohibited, for better or worse, that is simply a pleasant side effect. As the company put it in its statement, "don't harm others" is "broad yet easily understood and applicable in a variety of contexts." More flexible, too.

To be clear, as OpenAI spokesperson Niko Felix pointed out, developing and using weapons remains categorically off-limits; that prohibition was originally listed separately from "military and warfare." After all, the military does more than make weapons, and not all weapons are made by the military.

And it is precisely where those categories do not overlap that OpenAI may be exploring new business opportunities. Not everything the Defense Department does is warfare-related: as academics, engineers, and politicians well know, the military establishment is deeply involved in basic research, small-business funding, and infrastructure support.

OpenAI's GPT platforms could be very useful to, say, army engineers combing through decades of documentation on a region's water infrastructure. Many companies genuinely struggle to define and navigate their relationships with military and government money. Google's "Project Maven" famously went a step too far, yet the multibillion-dollar JEDI cloud contract seemed to trouble far fewer people. An academic researcher funded by an Air Force Research Lab grant might be permitted to use GPT-4, while an AFRL researcher working on the same project is not. Where do you draw the line? Even a strict "no military" policy has to stop somewhere after a few removes.

Still, the complete removal of "military and warfare" from OpenAI's prohibited uses suggests that the company is, at the very least, open to serving military customers. I asked the company to confirm this, cautioning that the language of the new policy made it plain that anything short of a denial would be read as confirmation.
