OpenAI has quietly changed its policy to remove the ban on using its tools and models for military purposes, including weapon development.
OpenAI’s Usage Policy no longer contains language prohibiting the use of its models for military purposes such as weapon development. Reports suggest the Pentagon is eyeing the tools’ capabilities to aid the research and development of weapon systems, among other military uses. In a way, OpenAI is paving the way for that to happen.
Over the past week, OpenAI’s usage policy underwent several changes, including formatting changes that the company says were made to improve readability.
More specifically, the policy previously banned “activity that has a high risk of physical harm, including weapons development and military and warfare.” That language is now gone; the policy merely says not to use the services to “harm yourself or others,” citing weapon development only as an example.
This can be construed as a complete removal of the ban on using OpenAI’s models, such as GPT-4, and its tools, such as ChatGPT, for purposes related to military and warfare. Using ChatGPT for such purposes no longer violates the usage policy.