OpenAI is building its next-generation AI, GPT-5, and its CEO claims it could be superintelligent

GPT-5 release: No date for ChatGPT upgrade from Sam Altman

OpenAI’s Generative Pre-trained Transformer (GPT) is one of the most talked-about technologies in the industry, and thanks to public access through the OpenAI Playground, anyone can use the language model. While it might be too early to say with certainty, we fully expect GPT-5 to be a considerable leap from GPT-4. GPT-4 improved on GPT-3 by being both a language model and a vision model, and we expect GPT-5 to add sound recognition to those abilities.

While it’s good news that the model is also rolling out to free ChatGPT users, it’s not the big upgrade we’ve been waiting for. Meta, Facebook’s parent company, has also spent the last few years developing its own AI chatbot based on its family of Llama large language models. The company finally unveiled the chatbot, dubbed Meta AI, in April 2024, and said it is powered by its latest model, Llama 3. The assistant is available in more than a dozen countries and operates across Meta’s app suite, including Facebook, Instagram, WhatsApp, and Messenger.

What’s on the horizon for OpenAI

We should also expect these models, while still imperfect, to become substantially more reliable than previous versions. GPT-4 isn’t just better at longer tasks than GPT-3; it is also more factual. Business Insider’s Darius Rafieyan spoke to OpenAI customers and developers about their expectations for the latest iteration. “Every week, over 250 million people around the world use ChatGPT to enhance their work, creativity, and learning,” the company wrote in its announcement post. “The new funding will allow us to double down on our leadership in frontier AI research, increase compute capacity, and continue building tools that help people solve hard problems.”

According to some of the people who tested it, it’s apparently beating or matching GPT-4 (ChatGPT Plus) in benchmarks. Maybe OpenAI will go for a ChatGPT 2 upgrade, thereby skipping the GPT-5 model name entirely. With that in mind, could GPT2-Chatbot be the precursor of a future OpenAI evolution of ChatGPT?

Overall, we can’t conclude much, and this interview suggests that what OpenAI is working on is pretty important and kept tightly under wraps – and that Altman likes speaking in riddles. That’s somewhat amusing, but I think people would like to know how large an advance in AI we’re about to see. Rumors aside, OpenAI did confirm a few days ago that the text-to-video Sora service will launch publicly later this year. The same anonymous employee also said that OpenAI is going to give GPT-5 new capabilities.

This means the AI can autonomously browse the web, conduct research, plan, and execute actions based on its findings. This feature positions Project Strawberry as a powerful tool for performing complex, multi-step tasks that go beyond traditional AI capabilities.
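
To make that concrete, here is a minimal sketch of the plan-act-observe loop that agentic systems of this kind typically use. Everything below, the tool names, the FINISH convention, and the stub functions, is a hypothetical illustration; OpenAI has not published Project Strawberry’s actual design.

```python
# Hypothetical sketch of an agentic loop: the model proposes an action,
# a tool executes it, and the observation is fed back until the model
# decides it is done. Nothing here reflects OpenAI's real design.

def search_web(query: str) -> str:
    """Stub standing in for a real browsing/research tool."""
    return f"(search results for: {query})"

def call_model(scratchpad: str) -> str:
    """Stub standing in for an LLM call that returns the next action."""
    return "FINISH: summary of findings"

def run_agent(task: str, max_steps: int = 5) -> str:
    scratchpad = f"Task: {task}\n"
    for _ in range(max_steps):
        action = call_model(scratchpad)
        if action.startswith("FINISH:"):
            return action[len("FINISH:"):].strip()
        # Treat any other action as a search query and record the result.
        observation = search_web(action)
        scratchpad += f"Action: {action}\nObservation: {observation}\n"
    return "Stopped after reaching max_steps."

print(run_agent("Compare recent GPT-5 release rumors."))
```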

One of the most exciting improvements to the GPT family of AI models has been multimodality. For clarity, multimodality is the ability of an AI model to process not just text but also other types of input, such as images, audio, and video. Multimodality will be an important advancement benchmark for the GPT family of models going forward. I analysed my usage of LLMs, which spans Claude, GPT-4, Perplexity, You.com, Elicit, a bunch of summarisation tools, mobile apps, and access to the Gemini, ChatGPT and Claude APIs via various services. Excluding API access, yesterday I launched 23 instances of various AI tools, covering more than 80,000 words.
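
As a concrete illustration of multimodal input, here is how today’s OpenAI Python SDK accepts an image alongside text, shown with gpt-4o; whether GPT-5 will expose audio or video the same way is unknown, and the image URL below is a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A single user message mixing text and an image; audio/video input for
# GPT-5 is speculation, so only the image case is shown here.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```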

The most interesting bit of news from this podcast is the aforementioned video capability, on top of the GPT-5 release confirmation. The current version of ChatGPT already supports images and audio, but with video, the breadth of what generative AI can do will massively expand. The publication talked to two anonymous sources inside OpenAI who are familiar with the development; they said that some of OpenAI’s enterprise customers have already tested the GPT-5 model.

Tools like Open Interpreter and the Code Interpreter API have already shown impressive demos. In a recent interview with Lex Fridman, OpenAI CEO Sam Altman commented that GPT-4 “kind of sucks” when he was asked about the most impressive capabilities of GPT-4 and GPT-4 Turbo. He clarified that both are amazing, but noted that people once thought GPT-3 was amazing too, and now it seems “unimaginably horrible.” Altman expects the delta between GPT-5 and GPT-4 to be the same as between GPT-4 and GPT-3. As he put it, “Maybe [GPT] 5 will be the pivotal moment, I don’t know.”

According to some reports, GPT-5 was expected to complete its training by December 2023. OpenAI might release the ChatGPT upgrade as soon as it’s available, just as it did with the GPT-4 update. Rumors already claim that GPT-5 will be so impressive it will make people question whether ChatGPT has reached AGI, short for artificial general intelligence, the stated goal of companies like OpenAI.

  • The next generation of AI models is expected not only to surpass humans in knowledge but also to match humanity’s ability to reason and process complex ideas.
  • Early reports of the model first appeared on 4chan, then spread to social media platforms like X, with hype following not far behind.
  • With GPT-4 already adept at handling image inputs and outputs, improvements covering audio and video processing are the next milestone for OpenAI, and GPT-5 is a good place to start.
  • I’ve started to use agentic workflows in Wordware (where I am an investor), sometimes chaining several LLMs together to help with a research or analysis task.
  • These enterprise customers of OpenAI are part of the company’s bread and butter, bringing in significant revenue to cover growing costs of running ever larger models.

GPT-5 will also reportedly show a significant improvement in the accuracy of how it searches for and retrieves information, making it a more reliable source for learning. GPT-3’s introduction marked a quantum leap in AI capabilities, with 175 billion parameters. That enormous model brought unprecedented fluency and versatility, able to perform a wide range of tasks with minimal prompting. Proprietary datasets, which OpenAI could license for training, could cover specific areas that are relatively absent from the publicly available data scraped from the internet. Specialized knowledge areas, specific complex scenarios, under-resourced languages, and long conversations are all examples of things that could be targeted with appropriate proprietary data. Release-date estimates are based on public statements by OpenAI, interviews with Sam Altman, and the timelines of previous GPT model launches.
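
As an illustration of how proprietary data could be fed into a model, here is the JSONL chat format that OpenAI’s fine-tuning API accepts today. Whether GPT-5’s training pipeline uses anything like this is speculation, and the maritime-law example is invented.

```python
import json

# One hypothetical training example in OpenAI's fine-tuning JSONL chat
# format, targeting a specialized knowledge area absent from web data.
example = {
    "messages": [
        {"role": "system", "content": "You are an expert in maritime law."},
        {"role": "user", "content": "What does 'demurrage' mean?"},
        {"role": "assistant", "content": (
            "Demurrage is a charge payable when a chartered ship is held "
            "beyond the agreed loading or unloading time."
        )},
    ]
}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```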

What’s next for OpenAI’s o1 Series

For now, details on GPT-5’s launch timeline and capabilities remain unknown. OpenAI CEO Sam Altman told the Financial Times yesterday that GPT-5 is in the early stages of development, even as the latest public version, GPT-4, makes waves in the AI marketplace. The last official update OpenAI provided about GPT-5 came in April 2023, when the company said there were “no plans” for training in the immediate future. Training could go on for months, so OpenAI has not set a concrete release date for GPT-5, and current predictions could change.

Lastly, Apple had long been rumored to be working on an artificial intelligence system of its own, and proved the world right at WWDC 2024 in June, where the company revealed Apple Intelligence.

Sam Altman’s mixed feelings about GPT-4

Next up was Meta’s promising new approach to large language models (LLMs). Scientists at the company are exploring a solution to hallucinations that penalizes models for producing wrong answers. While there’s still plenty of testing to be done, it’s resulted in some benchmark improvements so far. If the team can successfully evolve how models decide which words to string together, the result could mean greater sophistication and contextual accuracy for generated text.

GPT-5 is rumored to be able to process up to 50,000 words at a time, twice as many as GPT-4, making it even better equipped to handle large documents. GPT-5 is also expected to have superior capabilities with different languages, making it possible for non-English speakers to communicate and interact with the system, along with an improved ability to interpret the context of dialogue and the nuances of language. Recently, there has been a flurry of publicity about the planned upgrades to OpenAI’s ChatGPT AI-powered chatbot and Meta’s Llama system, which powers the company’s chatbots across Facebook and Instagram.
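
One caveat: context windows are actually measured in tokens, not words, so the rumored 50,000-word figure is only approximate. A quick check with the tiktoken library shows how the two units relate; the sample sentence is arbitrary.

```python
import tiktoken  # pip install tiktoken

# Count words vs. tokens for a sample sentence; English prose averages
# roughly 1.3 tokens per word, so 50,000 words is ~65,000 tokens.
enc = tiktoken.encoding_for_model("gpt-4")
sample = "Context windows are measured in tokens, not words."
tokens = enc.encode(sample)
print(len(sample.split()), "words ->", len(tokens), "tokens")
```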

Up until that point, ChatGPT relied on the older GPT-3.5 language model. For context, GPT-3 debuted in 2020, and OpenAI had simply fine-tuned it for conversation in the time leading up to ChatGPT’s launch. Yes, OpenAI and its CEO have confirmed that GPT-5 is in active development. The steady march of AI innovation means that OpenAI hasn’t stopped with GPT-4. That’s especially true now that Google has announced its Gemini language model, the larger variants of which can match GPT-4. In response, OpenAI released a revised GPT-4o model that offers multimodal capabilities and an impressive voice conversation mode.

  • Altman could have been referring to GPT-4o, which was released a couple of months later.
  • Further, OpenAI is said to have alluded to other as-yet-unreleased capabilities of the model, including the ability to call AI agents being developed by OpenAI to perform tasks autonomously.
  • Yes, GPT-5 is coming at some point in the future although a firm release date hasn’t been disclosed yet.

According to the slide shared by Huet, we’ll see something codenamed GPT-Next by the end of this year, which I suspect is effectively Omni-2 — a more refined, better-trained, and larger version of GPT-4o. OpenAI started to make its mark with the release of GPT-3 and then ChatGPT. This model was a step change over anything we’d seen before, particularly in conversation, and there has been near-exponential progress since that point. We have Grok, a chatbot from xAI, and Groq, a new inference engine that is also a chatbot. “I don’t know if it’s going to feel as big,” said Jake Heller, the CEO and cofounder of Casetext, an AI-powered legal assistant that was recently acquired by Thomson Reuters.

The CEO is hopeful that the successes OpenAI has enjoyed with Microsoft will continue and bring in revenue for both companies in the future. Intriguingly, OpenAI’s future depends on other tech companies like Microsoft, Google, Intel, and AMD. It is well known that Microsoft backs OpenAI with both investment and the infrastructure needed for training.

Rumors swirl about mystery “gpt2-chatbot” that some think is GPT-5 in disguise (Ars Technica, April 30, 2024).

The launch timeline has not been officially confirmed, but a report by Business Insider points to a mid-year release, potentially in the summer. With advanced multimodality coming into the picture, an improved context window is almost inevitable. Maybe an increase by a factor of two or four would suffice, but we hope to see something like a factor of ten. Rather than just a larger context window, though, we’d like to see GPT-5 process the information in that window more efficiently.