OpenAI’s CEO says the company won’t train GPT-5 for some time

OpenAI is focusing on increasing the capabilities of its latest GPT-4 model, CEO Sam Altman confirmed last week

OpenAI, the company behind the artificial intelligence chatbot ChatGPT, has confirmed that it is not training GPT-5 and will not be for some time, following the release of the highly capable GPT-4. OpenAI CEO Sam Altman confirmed this during a talk at MIT, when he was asked about an open letter, signed by Elon Musk and other researchers, that calls on companies to halt the development of artificial intelligence systems more powerful than GPT-4.

ChatGPT’s ability to provide in-depth responses to a wide variety of questions in a short amount of time has fuelled the app’s meteoric rise in popularity. Used by more than one hundred million people every month, it has become the most popular application of its kind in history. Its rapid expansion has raised concerns about its impact on personal security and privacy, as well as on the labour market.

In his talk at MIT, hosted as part of an event at the MIT Institute for the Study of Human Intelligence, Altman addressed some of these concerns, arguing that the open letter overlooked crucial technical nuance about when and where development should be halted. “It’s missing a lot of the technical nuance that would tell us where the pause needs to be,” he said. Asked whether OpenAI was training GPT-5, he replied, “We aren’t, and we won’t for some time.”

He also set the record straight on the letter’s misleading assertion that OpenAI was already training GPT-5. Instead, the company’s primary focus is on addressing the safety concerns associated with GPT-4 and the rest of its AI systems.

Technology figures such as Elon Musk, Steve Wozniak, and Professor Stuart Russell have warned of the threats posed by artificial intelligence and proposed potential safeguards.
The European Data Protection Board (EDPB) recently announced that it is establishing a task force on ChatGPT in order to develop a common policy on privacy rules for artificial intelligence. Germany’s data protection commissioner has hinted that his country might follow Italy’s lead in banning ChatGPT.

As debate continues over the safety, privacy, and employment implications of artificial intelligence, OpenAI and other companies in the industry will have to address these concerns. It is essential that all stakeholders work together to ensure that artificial intelligence is developed responsibly and ethically, even if they disagree about the best way to accomplish that goal.

The GPT marketing hype and the fallacy of version numbers

Altman’s remarks are intriguing, but they shed little new light on OpenAI’s near- or long-term plans. Instead, they highlight a major obstacle in any discussion of AI safety: the difficulty of measuring and tracking progress. Altman may assert that OpenAI is not currently training GPT-5, but on its own the statement reveals very little.

This misconception is reinforced by the “fallacy of version numbers”: the mistaken belief that incrementally higher-numbered software releases represent definitive, linear improvements in capability. The consumer technology industry has perpetuated this idea for years, treating the version numbers assigned to new phones and operating systems as though they were legitimate version control. By this logic, the iPhone 35 must be an improvement over the iPhone 34: the higher the number, the more advanced and capable the phone.

None of this means that concerns about AI safety are unfounded, or that these systems aren’t advancing rapidly and couldn’t slip beyond our control. The point is that not all arguments are equal, and that attaching a numerical value to something, whether a new phone or an intelligence, does not mean we understand it.

Instead of talking about what these systems can’t do, we should demonstrate and predict their capabilities.

Altman’s assurance that OpenAI isn’t working on GPT-5 will do little to ease AI safety concerns. The company is still expanding GPT-4’s capabilities, as are others in the industry. Version numbers can mislead here too: OpenAI is likely still optimising GPT-4 and may release a GPT-4.5 first, just as it released GPT-3.5 before GPT-4.

Even if governments worldwide could ban new AI development, society would already have more than enough to grapple with in the systems that exist today. GPT-5 may be far off, but does that matter when we don’t even understand GPT-4?

Launch of OpenAI’s Bug Bounty Program

The program is an important step toward OpenAI’s mission of developing safe, reliable, and cutting-edge artificial intelligence, and toward building products and services that are trustworthy and secure.

OpenAI’s Bug Bounty Program recognises and rewards security researchers who help protect the integrity of the company’s products and operations. Participants report any bugs, security holes, or vulnerabilities they discover in OpenAI’s systems; by sharing their findings, they help make the company’s products safer for everyone.

OpenAI has partnered with Bugcrowd, a leading bug bounty platform, to make reporting bugs and receiving rewards as simple as possible for everyone involved. Detailed rules and instructions for taking part can be found on the OpenAI Bug Bounty Program page.

OpenAI is offering cash rewards for reported bugs, with amounts based on the severity and impact of each issue: payouts start at $200 for minor discoveries and rise to $20,000 for exceptional ones. The company will also make every effort to publicly thank those who report bugs.
