OpenAI’s GPT-3 tool
What is GPT-3 and ChatGPT?
GPT-3 (Generative Pretrained Transformer 3) is a state-of-the-art language processing AI model developed by OpenAI. It is capable of generating human-like text and has a wide range of applications, including language translation, language modelling, and generating text for applications such as chatbots. It is one of the largest and most powerful language processing AI models to date, with 175 billion parameters.
Its most common use so far is creating ChatGPT — a highly capable chatbot. To give you a little taste of its most basic ability, we asked GPT-3’s chatbot to write its own description as you can see above. It’s a little bit boastful, but completely accurate and arguably very well written.
In less corporate terms, GPT-3 gives users the ability to feed a trained AI a wide range of text prompts. These can be questions, requests for a piece of writing on a topic of your choosing, or any number of other written requests.
What can it do?
With its 175 billion parameters, it’s hard to narrow down what GPT-3 does. The model is, as you would imagine, restricted to language. It can’t produce video, sound or images like its sibling Dall-E 2, but it does have an in-depth understanding of the spoken and written word.
This gives it a pretty wide range of abilities, everything from writing poems about sentient farts and cliché rom-coms in alternate universes, through to explaining quantum mechanics in simple terms or writing full-length research papers and articles.
While it can be fun to use OpenAI’s years of research to get an AI to write bad stand-up comedy scripts or answer questions about your favourite celebrities, its power lies in its speed and understanding of complicated matters.
Where we could spend hours researching, understanding and writing an article on quantum mechanics, ChatGPT can produce a well-written alternative in seconds.
It has its limitations, and the software can easily become confused if your prompt grows too complicated, or even if you just go down a road that becomes a little too niche.
Equally, it can’t deal with concepts that are too recent. It has only limited knowledge of world events from the past year, and it can occasionally produce false or confused information.
OpenAI is also very aware of the internet and its love of making AI produce dark, harmful or biased content. Like its Dall-E image generator before, ChatGPT will stop you from asking the more inappropriate questions or for help with dangerous requests.
How does it work?
On the face of it, GPT-3’s technology is simple. It takes your requests, questions or prompts and quickly answers them. As you would imagine, the technology to do this is a lot more complicated than it sounds.
The model was trained using text databases from the internet. This included a whopping 570GB of data obtained from books, webtexts, Wikipedia, articles and other pieces of writing on the internet. To be even more exact, 300 billion words were fed into the system.
As a language model, it works on probability, able to guess what the next word should be in a sentence. To get to a stage where it could do this, the model went through a supervised testing stage.
Here, it was fed inputs, for example “What colour is the wood of a tree?”. The team has a correct output in mind, but that doesn’t mean the model will get it right. If it gets it wrong, the team feeds the correct answer back into the system, teaching it correct answers and helping it build its knowledge.
It then goes through a second similar stage, offering multiple answers with a member of the team ranking them from best to worst, training the model on comparisons.
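The ranking stage above can be illustrated with a toy sketch. This is not OpenAI’s actual code; the function name and example answers are invented for illustration, but the core idea of turning one human ranking into pairwise “preferred vs. rejected” training examples is the same.

```python
from itertools import combinations

def ranking_to_pairs(ranked_answers):
    """Turn a best-to-worst ranking into (preferred, rejected) pairs.

    A human ranks several model answers to the same prompt; every
    answer earlier in the list is treated as preferred over every
    answer that comes later. A comparison-based training stage then
    learns from these pairs.
    """
    return [(better, worse) for better, worse in combinations(ranked_answers, 2)]

# A hypothetical ranking of three answers, best first.
ranked = ["clear, correct answer", "vague answer", "wrong answer"]
pairs = ranking_to_pairs(ranked)
# Three ranked answers yield three comparison pairs.
```

One ranking of n answers gives n·(n−1)/2 comparisons, which is why ranking is a cheap way for human reviewers to generate lots of training signal.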
What sets this technology apart is that it continues to learn while guessing what the next word should be, constantly improving its understanding of prompts and questions to become the ultimate know-it-all.
Think of it as a very beefed-up, much smarter version of the autocomplete software you often see in email or writing software. You start typing a sentence and your email system offers you a suggestion of what you are going to say.
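That autocomplete comparison can be made concrete with a toy next-word model. The sketch below is nothing like a real transformer; it simply counts which word follows which in some training text and suggests the most frequent follower, which is the probability-based “guess the next word” idea in miniature.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word is followed by each other word."""
    followers = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def suggest_next(followers, word):
    """Suggest the most frequently seen word after `word`."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny made-up training text for illustration.
model = train_bigrams("the cat sat on the mat the cat ran")
suggestion = suggest_next(model, "the")  # "cat" follows "the" more often than "mat"
```

GPT-3 does the same job with 175 billion parameters and context far beyond the previous word, but the output is still, at heart, a probability distribution over what comes next.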
Are there any other AI language generators?
While GPT-3 has made a name for itself with its language abilities, it isn’t the only artificial intelligence capable of doing this. Google’s LaMDA made headlines when a Google engineer was fired after claiming it was so realistic that he believed it to be sentient.
There are also plenty of other examples of this software out there created by everyone from Microsoft to Amazon and Stanford University.
Most of these models are not available to the public, but OpenAI has begun opening up access to GPT-3 during its test process, and Google’s LaMDA is available to selected groups in a limited capacity for testing.
Google breaks its Chatbot down into talking, listing and imagining, providing demos of its abilities in these areas. You can ask it to imagine a world where snakes rule the world, ask it to generate a list of steps to learn to ride a unicycle, or just have a chat about the thoughts of dogs.
Where ChatGPT thrives and fails
The GPT-3 software is obviously impressive, but that doesn’t mean it is flawless. Through the ChatGPT function, you can see some of its quirks.
Most obviously, the software has limited knowledge of the world after 2021. It isn’t aware of world leaders who came into power since 2021, and it won’t be able to answer questions about recent events.
This is obviously no surprise considering the impossible task of keeping up with world events as they happen, along with then training the model on this information.
Equally, the model can generate incorrect information, getting answers wrong or misunderstanding what you are trying to ask it.
If you try to get really niche, or add too many factors to a prompt, it can become overwhelmed or ignore parts of the prompt completely.
For example, if you ask it to write a story about two people, listing their jobs, names, ages and where they live, the model can confuse these factors, randomly assigning them to the two characters.
Equally, there are a lot of areas where ChatGPT is really successful. For an AI, it has a surprisingly good understanding of ethics and morality.
When offered a list of ethical theories or situations, ChatGPT is able to offer a thoughtful response on what to do, considering legality, people’s feelings and emotions and the safety of everyone involved.
It also has the ability to keep track of the existing conversation, able to remember rules you’ve set it, or information you’ve given it earlier in the conversation.
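That conversational memory is typically achieved by resending the whole conversation with every new turn, so earlier rules and facts stay in view. A minimal sketch, using a hypothetical `generate_reply` function as a stand-in for the actual model:

```python
def generate_reply(messages):
    """Placeholder for a real model call; here it just reports context size.

    In a real chat system this would send the full `messages` list to
    the model, which is how earlier rules and facts are 'remembered'.
    """
    return f"(model sees {len(messages)} messages of context)"

# The conversation is a growing list of role/content turns.
conversation = [{"role": "user", "content": "Always answer in one sentence."}]

for user_turn in ["What is dark matter?", "And dark energy?"]:
    conversation.append({"role": "user", "content": user_turn})
    reply = generate_reply(conversation)  # the model sees everything so far
    conversation.append({"role": "assistant", "content": reply})
```

Because each request includes all earlier turns, the rule set in the very first message is still visible to the model on the final turn; the trade-off is that longer conversations mean ever-larger requests.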
Two areas where the model has proved strongest are its understanding of code and its ability to condense complicated topics. ChatGPT can draft an entire website layout for you, or write an easy-to-understand explanation of dark matter in seconds.
Where ethics and artificial intelligence meet
Artificial intelligence and ethical concerns go together like fish and chips or Batman and Robin. When you put technology like this in the hands of the public, the teams that make them are fully aware of the many limitations and concerns.
Because the system is trained largely using words from the internet, it can pick up on the internet’s biases, stereotypes and general opinions. That means you’ll occasionally find jokes or stereotypes about certain groups or political figures depending on what you ask it.
For example, when asking the system to perform stand-up comedy, it can occasionally throw in jokes about ex-politicians or groups who are often featured in comedy bits.
Equally, the model’s love of internet forums and articles also gives it access to fake news and conspiracy theories. These can feed into its knowledge, sprinkling in facts or opinions that aren’t exactly full of truth.
In places, OpenAI has put in warnings for your prompts. Ask how to bully someone, and you’ll be told bullying is bad. Ask for a gory story, and the chat system will shut you down. The same goes for requests to teach you how to manipulate people or build dangerous weapons.