After a lot of news and a lot of experiments, OpenAI has launched GPT-4: the latest in its line of AI language models that power applications like ChatGPT and the new Bing.
Let’s say goodbye to the old ChatGPT and welcome GPT-4, an even more powerful model that is sure to send even bigger ripples across the world.
Everyone using ChatGPT knows its limitations. It has long been criticised for incorrect answers, for showing bias and, many times, for its hilarious answers. There are arguments that it is only as good as the information it has been trained on.
OpenAI says it has spent the past six months making the new software safer. It claims GPT-4 is more accurate, creative and collaborative than the previous iteration, GPT-3.5, and “40% more likely” to produce factual responses. OpenAI says it has already partnered with a number of companies to integrate GPT-4 into their products, including Duolingo, Stripe, and Khan Academy. The new model is available via ChatGPT Plus, OpenAI’s $20 monthly ChatGPT subscription, and is powering Microsoft’s Bing chatbot. It will also be accessible as an API for developers to build on.
What makes it different from ChatGPT?
One of GPT-4’s most sparkling new features is the ability to handle not only words but pictures too, in what is being called “multimodal” technology. A user can submit a picture alongside text, and GPT-4 will process and discuss both in its response. The ability to input video is also on the horizon. OpenAI says it is
‘more creative, and able to handle much more nuanced instructions’
GPT-4 is also trained in Indian languages. This could technically mean that users may soon be able to ask a GPT-4-powered bot, such as ChatGPT Plus, questions in local languages and get an answer. However, it is not yet clear whether the responses would be offered in the local languages too, or in English only.
OpenAI says it is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5. To give an example of how smart it is, OpenAI said that GPT-4 could clear the US bar exam in the 90th percentile (i.e. scoring among the top 10 percent of test-takers), and score in the 99th percentile in a US Biology Olympiad (i.e. among the top 1 percent). In comparison, ChatGPT with GPT-3.5 could only score in the 10th and 31st percentiles on the two exams, respectively.
What are the limitations of GPT-4?
Like its predecessor, GPT-4 is not much better at reasoning about current events, given that it was trained on data that existed before 2021. OpenAI said in a blog post that the latest iteration “still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts.” Even so, companies are already putting it to work: Morgan Stanley is using it to organise wealth-management data, payment company Stripe Inc. is testing whether it can help combat fraud, and language-learning app Duolingo is incorporating it to explain mistakes and to allow users to practise real-world conversation.
GPT-4, launched on March 14th, is the successor to GPT-3.5. According to OpenAI, it can process up to 25,000 words, which is about eight times more than its predecessor. Additionally, it can process images and handle more nuanced instructions than GPT-3.5.
How can you access GPT-4?
If you’re new to ChatGPT, the first thing to do is visit chat.openai.com. You can create a free account that grants you access to GPT-3.5, the version currently available to everyone.
To use GPT-4, however, you’ll need to subscribe to ChatGPT Plus, which costs $20 per month and provides premium access to the service. Currently, there’s a limit of 100 messages every four hours when using GPT-4. OpenAI has not yet provided a timeline for when the update might be available to everyone.
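For developers, GPT-4 is exposed through OpenAI’s chat completions API. As a rough sketch of what a request could look like — the endpoint URL and payload shape follow OpenAI’s published API, but the prompt, the `temperature` value, and the `build_chat_request` helper are illustrative only, and actually sending the request requires a real API key in the `Authorization` header:

```python
import json

# Hypothetical single-turn request builder for OpenAI's chat completions
# endpoint. Nothing here performs a network call; it only assembles the
# JSON payload a client would POST to the API.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,  # illustrative sampling setting
    }

payload = build_chat_request("Summarise the GPT-4 launch in one sentence.")
print(json.dumps(payload, indent=2))
```

The response would arrive as JSON, with the model’s reply in the `choices` array — the same interface that already serves GPT-3.5, which is why existing integrations can switch models by changing a single string.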
What are the new capabilities of GPT-4?
OpenAI says that GPT-4 excels at tasks requiring advanced reasoning, understanding of complex instructions, and creativity. People have already started using it for creative tasks in the few hours since it launched. GPT-4 can also process images, meaning users can upload a photo and get recommendations from GPT-4 based on what it shows.
Early users have had it describe images, generate recipes, code video games, create websites, and build a functional Chrome extension in just a few hours.
What is the next thing?
Microsoft Corp. has pledged $10 billion to OpenAI, but other tech companies have also thrown their hats into the AI ring. Google has introduced its own AI service, Bard, to testers, and numerous startups are endeavouring to keep up with the pace of AI development. In China, Baidu Inc. is set to launch its own bot, Ernie, while Meituan, Alibaba, and other smaller companies are also entering the fray.
OpenAI also detailed the safety work behind the new GPT-4 model. As part of the release, the company specifically mentioned that while there could still be issues around AI bias, safety, and accuracy of information, it has worked with over 50 AI safety experts to vet the responses that GPT-4 will produce. Whether this actually works, however, remains to be seen as more users gain access to GPT-4.