OpenAI just announced the latest version of its primary large language model, GPT-4. The new model is more capable and, according to the company, has been trained on more data, which helps it achieve better results.

The startup said that Microsoft Azure was used to train the new model (the Redmond giant has invested billions in the technology). Unfortunately, no specifics were given on how GPT-4 was trained or which hardware was used; that information is kept firmly behind closed doors because of increasing competition.

Microsoft revealed this Tuesday (14) that Bing’s AI chatbot is already using GPT-4, and from the looks of it, this latest version will likely be adopted by consumer-facing chatbots in the coming weeks.

Human-level performance, fewer mistakes, and more


In the past six months, OpenAI’s GPT language models have powered many of the AI demos that mesmerized us, and what was already pretty good is now even better. The startup claims the new model produces fewer factually incorrect answers and is less likely to engage with forbidden topics.

But one of the most interesting claims has to do with so-called human-level performance: according to the company, GPT-4 can outperform the average human test-taker on standardized exams. On simulated tests, the new model scored in the 93rd percentile on an SAT reading exam, the 89th percentile on an SAT math exam, and the 90th percentile on a bar exam.

It’s not all rainbows and butterflies – but is it getting there?

Despite all of its power, GPT-4 still makes things up. In a recent blog post, the company acknowledged that the new model has limitations when it comes to social biases, hallucinations, and adversarial prompts, but it appears improvements are already in the works.

OpenAI also said in a blog post that the differences between GPT-4 and GPT-3.5 are subtle in casual conversation. The gap shows once a task reaches a sufficient threshold of complexity: the newer version is both more creative and more reliable, and it handles nuanced instructions better.

Regarding availability, GPT-4 will be released first to ChatGPT subscribers (it will also be available through an API that allows integration into third-party apps). When it comes to pricing, the startup will charge:

  • 3 cents for ~750-word prompts.
  • 6 cents for ~750-word answers.
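Based on the per-~750-word rates above, you can roughly estimate what a given exchange would cost. The sketch below is a hypothetical helper (the function name and linear-scaling assumption are ours, not OpenAI's; actual billing is token-based, and words are only an approximation):

```python
# Assumed rates from the article: 3 cents per ~750-word prompt,
# 6 cents per ~750-word answer. Scaling is assumed to be linear.
PROMPT_RATE = 0.03    # dollars per ~750-word prompt block
ANSWER_RATE = 0.06    # dollars per ~750-word answer block
WORDS_PER_BLOCK = 750

def estimate_cost(prompt_words: int, answer_words: int) -> float:
    """Rough API cost in dollars for one prompt/answer exchange."""
    prompt_cost = (prompt_words / WORDS_PER_BLOCK) * PROMPT_RATE
    answer_cost = (answer_words / WORDS_PER_BLOCK) * ANSWER_RATE
    return round(prompt_cost + answer_cost, 4)

# Example: a 1,500-word prompt that yields a 750-word answer
print(estimate_cost(1500, 750))  # 0.12
```

At these rates, answers cost twice as much per word as prompts, so long generated replies dominate the bill.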
