OpenAI has laid out a safety framework for its most advanced AI models. The plan, released on Monday, includes measures such as giving the company's board the authority to reverse safety decisions.
Microsoft-backed OpenAI says it will deploy its latest technology only if it is deemed safe in specific risk areas such as cybersecurity and nuclear threats. The company is also creating an advisory group to assess safety reports, which will be sent to its executives and board for review. While executives will make the decisions, the board can reverse them.
In the year since ChatGPT's launch, the risks of AI have been a prominent concern among AI researchers and the public alike. While generative AI technology has impressed users with its ability to write poetry and essays, it has also raised safety concerns over its potential to spread disinformation and manipulate people.
In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society.
A Reuters/Ipsos poll in May found that more than two-thirds of Americans are concerned about the possible negative effects of AI, with 61% believing it could threaten civilization.