GPT-4 is the latest language model from OpenAI, and it generates text that reads much closer to natural human speech. According to OpenAI, version 4.0 advances in three key areas: creativity, visual input, and longer context. Processing has also greatly improved: the model can now read up to 25,000 words of text from the user, which is helpful for holding a long conversation or asking it to work with the contents of a web link. A new feature allows the bot to process images it is given; in an example on the GPT-4 website, the chatbot is shown an image of a few baking ingredients and asked what can be made with them. OpenAI also reports significant safety progress: GPT-4 is 40% more likely to produce factual responses than GPT-3.5 and 82% less likely to “respond to requests for disallowed content.”
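For developers, a request like the baking-ingredients example might look something like the sketch below. It assumes the OpenAI Python SDK and a GPT-4 model variant that accepts image input; the model name, prompt, and image URL are illustrative placeholders, not details taken from OpenAI’s demo.

```python
# Minimal sketch of sending an image to GPT-4 through the OpenAI Python SDK.
# The model name, prompt, and image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # a GPT-4 variant that accepts images
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What can I bake with these ingredients?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/ingredients.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```

The image is passed alongside the text prompt as a second content part, so the model can reason over both at once.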
GPT-4 has been out for only a few weeks, and users have already employed it to invent new languages and build animations from scratch. One user reported making a working version of Pong in just 60 seconds using a mix of HTML and JavaScript. There are currently three ways to use GPT-4. The first is a monthly subscription of $20. If you don’t like the idea of paying but still want to use it, you can access it for free through Bing Chat and Quora; these are more limited than the paid version, but you can still send 15 chats per session and do 150 sessions per day. According to DigitalTrends.com, GPT-4 has also been made available as an API “for developers to build applications and services.” Companies that have already integrated GPT-4 include Duolingo, Be My Eyes, Stripe, and Khan Academy.
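As a rough illustration of that API, a minimal text-only GPT-4 request with the OpenAI Python SDK might look like the following sketch; the prompt is arbitrary, and the API key is assumed to be in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch of a text-only GPT-4 API call with the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # API key read from OPENAI_API_KEY

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4's key improvements in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

Integrations like Duolingo’s or Khan Academy’s build on this same chat-completion interface, layering their own prompts and product logic on top.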
GPT-4 is great in many ways but has been noted to be slower: in a comparison of GPT-4 and GPT-3.5, the newer version gives much slower responses, as it was trained on a much larger set of data (a simple way to observe the gap yourself is sketched after this paragraph). Some people have reported that the new version of ChatGPT is “dumber,” but an OpenAI executive dismissed this, saying, “Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before.” A later study lent weight to the complaints by comparing GPT-4’s behavior between March and June; according to DigitalTrends.com, “the researchers were able to ascertain that GPT-4 went from 97.6% accuracy down to 2.4%.” One of the most anticipated features of GPT-4 is visual input, which allows it to analyze and interact with images. The feature has been delayed due to “mitigation of safety challenges,” according to OpenAI CEO Sam Altman.
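To check the speed difference informally, you can time identical requests against both models. This is a minimal sketch assuming the OpenAI Python SDK and an API key in the environment; the prompt is arbitrary, and single-request timings vary widely with server load, so treat the numbers as anecdotal rather than a benchmark.

```python
# Rough latency comparison between GPT-3.5 and GPT-4 (illustrative only).
import time
from openai import OpenAI

client = OpenAI()

def timed_completion(model: str, prompt: str) -> float:
    """Return the wall-clock seconds one chat completion takes."""
    start = time.perf_counter()
    client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return time.perf_counter() - start

prompt = "Explain photosynthesis in one paragraph."
for model in ("gpt-3.5-turbo", "gpt-4"):
    print(f"{model}: {timed_completion(model, prompt):.1f}s")
```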