OpenAI shares AI safety approach amid ChatGPT controversy

As concerns about ChatGPT and GPT-4 rise around the world, OpenAI has laid out its approach to AI safety, describing how it works to build and deploy safe AI systems. In a blog post, the company said that a practical approach to AI safety is to dedicate more time and resources to researching effective mitigation and alignment techniques and to testing them against real-world abuse.

It added that improving AI safety and capabilities should go hand in hand. OpenAI believes its best safety work to date has come from working with its most capable models, since they are better at following users' instructions and easier to steer or 'guide.' The company also said it will be increasingly cautious about creating and deploying more capable models, and will continue to strengthen its safety precautions as its AI systems advance.

The development comes against the backdrop of more than 11,000 people signing an open letter calling for a six-month pause on giant AI experiments, particularly the training of models more powerful than GPT-4. Several countries are also moving against ChatGPT: Italy recently banned it over privacy concerns, and regulators elsewhere, including in Spain, are weighing similar steps.

OpenAI said it waited more than six months to deploy GPT-4 so it could better understand the model's capabilities, benefits, and risks, and it believes that improving an AI system's safety can sometimes take even longer than that. It also said that policymakers and AI providers will need to ensure that AI development and deployment are governed effectively at a global scale, so that no one cuts corners to get ahead. "This is a daunting challenge requiring both technical and institutional innovation, but it is one that we are eager to contribute to," said OpenAI.

“We make our most capable models available through our own services and through an API so developers can build this technology directly into their apps,” the company explained. “This allows us to monitor for and take action on misuse, and continually build mitigations that respond to the real ways people misuse our systems — not just theories about what misuse might look like.”
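The post does not spell out integration details, but for context, this is roughly what building on the API looks like. A minimal sketch, assuming the 2023-era `openai` Python package and an API key in the `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative:

```python
# Minimal sketch of calling the OpenAI API from Python.
# Assumes the 2023-era `openai` package and an API key in the
# OPENAI_API_KEY environment variable; model name is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize OpenAI's approach to AI safety."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Because every request flows through this API, OpenAI can observe usage patterns centrally, which is what makes the misuse monitoring the post describes possible.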

OpenAI acknowledges that some of the training data used by its systems contains personal information that is publicly available on the web. Nevertheless, it stressed that its goal is for its systems to learn about the world, not about private individuals. To that end, its team removes personal information from training datasets wherever practicable, and it has fine-tuned its models to decline requests for the personal information of private individuals. OpenAI also says it will respond to requests from individuals to delete their personal information from its systems.

“These steps minimize the possibility that our models might generate responses that include the personal information of private individuals,” the company explained.
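OpenAI has not published the tooling behind this scrubbing, but conceptually it is a filtering pass over training text. A hypothetical sketch in Python; the function name and regex patterns are illustrative stand-ins, not OpenAI's actual pipeline, and real PII removal is far more involved:

```python
import re

# Illustrative patterns for two common kinds of personal information.
# These are simplistic and not exhaustive; a production pipeline would
# use far more robust detection (e.g., named-entity recognition).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub_pii("Reach Jane at jane.doe@example.com or +1 (555) 123-4567."))
# Prints: Reach Jane at [EMAIL] or [PHONE].
```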

“It’s good to see OpenAI laying down what basically amounts to its ethical principles,” said Holger Mueller of Constellation Research Inc. “But it remains to be seen if this kind of self-governance will suffice to prevent government regulation.”

OpenAI may be willing to accommodate regulation, though, and even to help lawmakers design it. Concluding its blog post, the company reiterated its call for policymakers and AI providers to govern the development and deployment of AI systems effectively at a global scale, saying that more dialogue will be needed and that it is keen to participate.
