Summary
- OpenAI has updated its usage policy, removing the clause prohibiting "military and warfare" applications.
- The change suggests a broader interpretation of acceptable uses, potentially opening the door to military collaborations.
- The implications are significant, particularly given the military's involvement in many non-combat activities, from basic research to infrastructure work.
A Surprising Change in Stance
In a move that might leave many scratching their heads (or reaching for conspiracy theories), OpenAI has quietly adjusted its usage policy in a way that opens the door to military applications of its technologies. Previously, the organization's policy clearly prohibited use of its products for "military and warfare" purposes. That specific language has now vanished quicker than a magician's rabbit, hinting at a new direction for the AI powerhouse.
The Curious Case of Disappearing Words
The Intercept was the first to spot the change, which went live without announcement on January 10. It's not unusual for tech companies to tweak policy wording as their products evolve, and OpenAI is no exception, especially given its recent launch of user-customizable GPTs and a monetization policy that remains as clear as mud.
Yet this isn't a mere semantic dance, a matter of swapping 'potato' for 'spud.' Removing the "military and warfare" clause is a substantive policy change, not a linguistic facelift. The current usage policy can be perused here; for the nostalgic, the old one is archived here.
A Turn Towards Flexibility?
The new policy reads less like a bullet-pointed list of no-nos and more like a set of broad strokes. OpenAI argues that wording like "Don't harm others" is broad but easily grasped, relevant in a wide variety of contexts, and, conveniently, more flexible.
Niko Felix, a spokesperson for OpenAI, emphasized that a blanket prohibition on developing and using weapons remains in place. But "military and warfare" is not the same category as weapons development: the military does far more than build weapons, and weapons are built by plenty of organizations outside the military. The distinction is subtle but crucial.
Exploring New Business Opportunities
Where this policy change gets particularly interesting (and potentially lucrative) is precisely where military applications and weapons development don't overlap. The U.S. Defense Department, known for its deep pockets, funds far more than weapons: basic research, small business programs, and infrastructure support, among other things. OpenAI's GPT platforms could, for instance, help Army engineers summarize decades of water infrastructure documentation - because who has time to read all that?
The conundrum of defining and navigating relationships with government and military money is not unique to OpenAI. Remember Google's "Project Maven"? That stepped over a line for many, yet few eyebrows were raised over the multibillion-dollar JEDI cloud contract. It's a fine line, and where to draw it is a question every AI company will eventually face.
The removal of "military and warfare" from OpenAI's prohibited uses list suggests the company is, at the very least, open to serving military customers. I reached out to OpenAI for confirmation, noting that anything short of a denial would be read as a confirmation. As of this writing, the company has responded only with the same statement it gave The Intercept and has not disputed its openness to military applications.