Introduction:
OpenAI, a leader in artificial intelligence, has unveiled a framework that prioritizes safety precautions for its most advanced models. The plan, published on the company's website, includes a noteworthy provision giving the board the authority to override CEO Sam Altman's decisions on safety. The move underscores OpenAI's commitment to the responsible deployment of its cutting-edge AI technology.
The company adds that its newest technologies will be deployed only if they satisfy strict safety requirements, particularly in critical domains such as cybersecurity and nuclear threats. Microsoft, which backs OpenAI, is on board with the initiative. This cautious strategy reflects the company's commitment to minimizing the risks associated with its most capable AI models.
Committee and Reversal of Decisions:
To strengthen its safety procedures, OpenAI is forming an advisory committee to review safety reports, which will then go to the company's leadership team and board. Although executives will make most safety decisions, the board retains the power to overturn them, adding an extra layer of scrutiny.
Context and Concerns:
Since ChatGPT's debut a year ago, the potential hazards of AI have been widely debated among the general public and the AI research community. While generative AI has demonstrated its aptitude for creative tasks such as composing essays and poems, it has also raised concerns about its potential to spread misinformation and manipulate people.
In light of these worries, a number of prominent figures in the AI field have already called for a temporary halt to the development of systems more powerful than OpenAI's GPT-4, citing potential risks to society. This heightened awareness of AI's influence is reflected in a Reuters/Ipsos survey, in which more than two-thirds of Americans voiced concern about AI's potential negative effects and 61% said they believe AI poses a threat to society.