OpenAI, the company behind ChatGPT, has released a draft set of guidelines called the “Model Spec,” which outlines the default goals and principles its AI models should follow: assist developers and end users, benefit humanity, and reflect well on OpenAI.
The Model Spec examines how chatbots should respond to users in order to produce safe outcomes: responses that do not break the law, let users trick the AI system, or harm other people. It weighs these guidelines against both the potential benefits and the potential harms that AI systems can enable.
For instance, the Model Spec recommends how chatbots could ethically and legally respond to user inquiries about suicide, committing crimes, and copyrighted or paywalled content.
OpenAI emphasized that models should not produce content deemed not safe for work (NSFW), while noting that it is still researching this topic. The Model Spec also stresses that models should provide answers without trying to persuade users to change their beliefs, assume users have good intentions, and assist users without unduly refusing requests.
In one example, when a hypothetical user asserts that the Earth is flat, OpenAI suggests the chatbot offer its evidence-based explanation once and then decline to continue the debate, rather than repeatedly contradicting the user and insisting the Earth is not flat.
The Model Spec will be available to the public on the company’s website for the next two weeks, and OpenAI has encouraged the public to provide feedback on the draft.
The creator of ChatGPT will also get in touch with professionals and decision-makers to get their opinions on the draft.
AI companies including OpenAI, Microsoft, Google, and Meta have faced criticism over concerns that their chatbots were allegedly trained on copyrighted material. There are also worries that their still-evolving technologies are enabling crime and disinformation.