
Why is everyone talking about ChatGPT?

Thanks to science-fiction books and movies, the general public tends to picture AI as an entity that knows everything and can solve any problem better than we humans can. In this vision, there is only one type of AI: general AI. Although work in the field of AI began around seventy years ago, at this moment in time we are still far from the AI that the general public envisages.


A few months ago, OpenAI presented ChatGPT, and anyone can now access this technology via OpenAI's website. Isn't that amazing? Many people were eager to try it and are now asking ChatGPT all manner of questions, from simple definitions to complex prompts about philosophical problems. Sometimes ChatGPT produces the correct answer, and sometimes it admits that it doesn't know the answer. Of course, the best (and worst) responses have gone viral on the Internet. My favourite answers are related to logical puzzles, where there is still a lot of scope for improvement.

 

So how was it possible to create a model which can pass university-level tests? To answer this question, let's step back in time. After the release of the paper entitled “Attention Is All You Need”, there was huge interest in transformer models. The original transformer architecture was a Neural Machine Translation (NMT) model, but it turned out that, with slight modifications, it could also handle tasks such as text generation and text summarisation. The transformer therefore became the basis for Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT) family. Transformers are very common in Natural Language Processing, but they can also be used in computer vision. The main “magic” behind transformers is the self-attention mechanism, thanks to which a model can work out which words (or regions of an image) it needs to pay attention to when processing each position. This means that information about every word in a sentence, however distant, can contribute to understanding every other word; in the original encoder-decoder design, attention also lets the decoder draw directly on the encoder's hidden states.
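
To make the idea more concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. The shapes and the randomly initialised weight matrices are illustrative stand-ins for what a trained transformer learns; a real model adds multiple heads, masking and positional encodings on top of this.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every token scores every other token; scaling keeps the softmax stable.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mixture of value vectors

# Toy usage: 4 tokens, embedding width 8, attention width 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 4)
```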

 

GPT-3 is a third-generation model that is used for text completion. You should know that this model has 175 billion parameters and requires around 800 GB of storage. In other words, it is massive. If you're thinking about training it on your own computer or server then, sadly, that's impossible. Training is estimated to have cost 4.6 million dollars' worth of cloud-hosted NVIDIA V100 GPU time. This sounds completely crazy, and only a sizable company could pay for the resources required. The model was trained on sources such as Common Crawl, WebText2, books and Wikipedia. OpenAI shared this model in 2020, and anyone can use it through the API. GPT-3 made a huge impression on the public, but also scared them a bit, because some responses from the model do not meet appropriate social standards. On the other hand, GPT-3 is able to generate poetry or blocks of computer code. It can also tackle generating a recipe for apple pie and summarising very complex scientific texts.

Still, this model is not perfect, even though the amount of data on which it was trained is impressive. Farhad Manjoo, a New York Times columnist, described it as “a piece of software that is, at once, amazing, spooky, humbling and more than a little terrifying”. Noam Chomsky, the cognitive scientist, is also very sceptical about GPT-3, because it works just as well for impossible languages and, in his opinion, therefore tells us nothing about language or cognition. Even though the model looks amazing as a tool, a lot of criticism is aimed at it. One major criticism concerns its environmental impact, especially the amount of power needed for training and the required storage space. There is also the problem of defining the rights to the resources used for training, and of plagiarism in texts generated by AI. This copyright problem is already well known for models that generate images, and no legal system in any country has yet solved it. GPT-3 popularised “prompt engineering”, which is currently the only way that ordinary users can play with the model. It is very simple to use, whether or not you are familiar with programming, because you can just type in some text and get a response from the model.
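
For readers who do prefer code, the same interaction can be scripted. Below is a minimal sketch using the completion endpoint of OpenAI's openai Python package (the pre-v1 interface of the GPT-3 era is assumed); the model name, prompt and parameters are only illustrative, and you need to supply your own API key.

```python
import openai  # pip install openai; the pre-v1 interface is assumed here

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

# Ask a GPT-3-family completion model to continue a prompt.
response = openai.Completion.create(
    model="text-davinci-003",  # illustrative GPT-3-era model name
    prompt="Write a short recipe for apple pie.",
    max_tokens=200,
    temperature=0.7,  # higher values give more varied completions
)
print(response["choices"][0]["text"])
```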

 

Because GPT-3 was not designed to follow users' instructions, InstructGPT was created next. In addition, OpenAI wanted a model that was more truthful and less toxic, so InstructGPT was trained with reinforcement learning from human feedback (RLHF) to align the language model better. Remarkably, even an InstructGPT model with roughly 100 times fewer parameters than the 175-billion-parameter GPT-3 produced outputs that labelers preferred. The authors of the paper entitled “Training language models to follow instructions with human feedback” describe the following main findings:

  • Labelers significantly prefer InstructGPT outputs over outputs from GPT-3
  • InstructGPT models show improvements in truthfulness over GPT-3
  • InstructGPT shows small improvements in toxicity over GPT-3, but no improvement in bias
  • InstructGPT models show promising generalization of instructions outside the RLHF fine-tuning distribution
  • InstructGPT still makes simple mistakes.
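
At the heart of the RLHF recipe is a reward model trained on labelers' pairwise comparisons. As a hedged illustration of that objective, which maximises the log-sigmoid of the score gap between the preferred and the rejected response, here is a toy PyTorch sketch; the tiny network and the random data are made-up stand-ins for a full language model with a scalar reward head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Stand-in for a language model with a scalar reward head.

    A real RLHF reward model scores a (prompt, response) text pair;
    here we score a fixed-size feature vector just to show the objective.
    """
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one scalar reward per example

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake batch of labeler comparisons: `chosen` was preferred over `rejected`.
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

# Pairwise preference loss: maximise log sigmoid(r_chosen - r_rejected).
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```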

 

GPT-3 was succeeded by InstructGPT, which is dedicated to following instructions. ChatGPT is a sibling model to InstructGPT.

 

Now we have come to the present day. The ChatGPT model is trained to interact conversationally, so that it can “answer questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests”. Sounds amazing, doesn't it? It was trained in the same way as its sibling, InstructGPT, with the main difference lying in the data-collection setup. It is a fine-tuned version of GPT-3.5 (trained on data from before Q4 2021). In comparison to its sibling, it can provide long and detailed responses.

 

After ChatGPT was released, the world went crazy. People started to ask ChatGPT about all manner of things, such as recipes for burgers, explanations of physical phenomena and existential problems. It turns out that ChatGPT can generate even long-form texts, such as school essays. The output from the model is so impressive that people find it difficult to tell whether a text was generated by ChatGPT or by a human. This has led to some fundamental questions about AI, especially about who holds the copyright in generated texts and about the model's effect on the education system. I have even read stories that students use the model to cheat during tests and that, conversely, teachers have used it to generate questions for tests.

 


So now you may be wondering how to distinguish a human-written text from a text generated by AI. Fortunately, OpenAI has already created a classifier for that. Unfortunately, at this time, the classifier is far from perfect. According to OpenAI, it correctly identifies only 26% of AI-written texts (true positives) as “likely AI-written”. There is also a problem with correctly identifying human-written text, especially literature, which is sometimes incorrectly classified as AI-generated. In fact, we are only at the beginning of being able to identify AI-generated texts.
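
OpenAI has not disclosed how its classifier works. One common family of detection heuristics, shown here purely as an illustration and not as OpenAI's method, scores how predictable a text is to a language model, since unusually low perplexity can hint at machine authorship. This rough sketch uses the openly available gpt2 model from the Hugging Face transformers library:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small, openly available language model to score text with.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean cross-entropy.
        loss = lm(ids, labels=ids).loss
    return torch.exp(loss).item()

# Lower perplexity means more "predictable" text: one weak signal,
# not proof, of machine authorship.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```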

 

At this moment, humanity has access to an AI model that can generate texts, answer questions and even write pieces of literature. Of course, this model isn't perfect. On the other hand, it can generate texts that humans cannot recognise as AI-generated. This raises questions about the future of some jobs, such as those of novelists, journalists and social-media content creators. In my opinion, their jobs will be safe in the near future, and the tool will serve them well as a means of being more productive. Access to the model is a significant problem. Right now, the tool can be used by anyone, but there is no access to the actual model and its weights. If you are a Data Scientist, it is therefore not possible for you to fine-tune the model, and in any case you almost certainly do not have the computing resources to do so. Without the ability to fine-tune, there is also no way to correct for any bias carried by the text that ChatGPT was trained on. Another problem is the lack of any legal regulation of copyright, both in the generated text and in the data used for training.

 

What will the future look like?

This is a difficult question to answer. There is a strong chance that ChatGPT will be connected to the Bing search engine, and new versions of ChatGPT will be released over time. I personally hope that future versions of the model will get by with fewer parameters.

 

 
