ChatGPT Evolution

Chatbots have become a familiar part of everyday online life, greeting users on websites and in apps. In recent years, advances in Artificial Intelligence and Natural Language Processing have paved the way for significant developments in conversational AI. One of the most notable innovations in this field is ChatGPT, a highly capable conversational AI chatbot developed by OpenAI. ChatGPT is built on Large Language Models (LLMs), which can understand and generate natural language.

Definition of ChatGPT

The foundation of ChatGPT is the GPT architecture. The acronym GPT reflects the defining characteristics of the model.

GPT: Generative Pre-Trained Transformer

Generative: GPT is a generative model, meaning it can produce new content. Because it is trained on a large dataset, it can generate text (or other kinds of data) based on the patterns it learned during training, producing coherent and contextually relevant responses to a given input. In practice, the model generates text one token at a time, repeatedly sampling the next token from a predicted probability distribution, as the sketch below illustrates.
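Here is a minimal sketch of that token-by-token generation loop. The `next_token_probs` function is a hypothetical stand-in for a trained model (a real GPT would run the context through its transformer layers); the tiny vocabulary and random weights are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<eos>", "the", "cat", "sat", "on", "mat"]

def next_token_probs(context):
    # Hypothetical stand-in for a trained model: a real GPT would return a
    # probability distribution over its whole vocabulary given the context.
    logits = rng.normal(size=len(VOCAB))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)           # P(next token | context)
        next_id = rng.choice(len(VOCAB), p=probs)  # sample from the distribution
        if VOCAB[next_id] == "<eos>":              # stop at end-of-sequence
            break
        tokens.append(VOCAB[next_id])
    return " ".join(tokens)

print(generate(["the", "cat"]))
```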

Pre-Trained: GPT is a pre-trained model: it is trained on a huge amount of data before being applied to any specific task. During pre-training, the model learns to predict the next word from the preceding context, which lets it absorb grammar, semantics, and even tone and emotion from the training data. The pre-trained model can then be fine-tuned for specific domains, making it more useful and accurate for those particular applications. A sketch of the next-word objective follows.
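Below is a minimal sketch of the next-token prediction objective used during pre-training. The predicted distribution here is a made-up example, not the output of any real model; the point is the cross-entropy loss on the token that actually came next.

```python
import numpy as np

# Hypothetical predicted distribution over a 6-word vocabulary for the
# next token, given some context (a real model would compute this).
predicted_probs = np.array([0.05, 0.10, 0.05, 0.60, 0.15, 0.05])
true_next_token_id = 3  # index of the word that actually came next

# Pre-training minimizes the negative log-likelihood of the true next
# token, averaged over billions of positions in the corpus.
loss = -np.log(predicted_probs[true_next_token_id])
print(f"cross-entropy loss at this position: {loss:.3f}")  # ~0.511
```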

Transformer: The transformer is the neural network architecture used in GPT. It was designed to handle sequential data such as text by modelling the relationships between different elements of the sequence through an attention mechanism. The original transformer consists of an encoder and a decoder, which work together to process and generate text (GPT itself keeps only the decoder stack). Transformers perform strongly on machine translation, text summarization, and language generation, and have revolutionized natural language processing.
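At the heart of the transformer is scaled dot-product attention. Here is a self-contained sketch; the sequence length and embedding size are illustrative, not taken from any GPT model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                               # weighted sum of values

# Toy sequence of 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# In self-attention, queries, keys, and values are all projections of the
# same sequence; identity projections keep the sketch short.
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8): one updated representation per token
```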

Journey of the GPT series from GPT-1 to GPT-4:


The GPT family includes a series of versions: GPT-1, GPT-2, GPT-3, GPT-3.5 (the basis of ChatGPT), and GPT-4. Each successive version is more capable than the last.


GPT-1:

GPT-1 was launched by OpenAI in June 2018. The model was trained on the BooksCorpus dataset, a large unlabeled corpus of long stretches of contiguous text, which allowed it to learn long-range dependencies and acquire broad knowledge from diverse text.

Architecture: The model consists of 12 stacked transformer decoder layers. Each layer incorporates multi-head self-attention, which enables the model to understand the relationships between words in a sentence, followed by a feedforward neural network that processes the outputs of the self-attention sublayer. GPT-1 had only 117 million parameters. A sketch of one such decoder block follows.
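The following is a minimal PyTorch sketch of one GPT-style decoder block, under the assumption of GPT-1-like sizes (hidden size 768, 12 attention heads); details such as the layer-norm placement are simplified for illustration.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One GPT-style block: masked multi-head self-attention + feedforward."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(            # position-wise feedforward network
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Causal mask so each position attends only to earlier positions.
        seq_len = x.size(1)
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.ln1(x + attn_out)          # residual connection + norm
        return self.ln2(x + self.ff(x))     # residual connection + norm

x = torch.randn(1, 5, 768)                  # batch of 1, sequence of 5 tokens
print(DecoderBlock()(x).shape)              # torch.Size([1, 5, 768])
```

GPT-1 stacks 12 of these blocks; later versions mainly increase the depth, width, and training data rather than changing the basic block.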

Though GPT-1 could perform text generation, language translation, summarization, sentiment analysis, and similar tasks, it lacked the ability to maintain coherence across a conversation.

GPT-2:

GPT-2, the successor to GPT-1, was launched in February 2019. It scaled up GPT-1's design and was trained on WebText, a corpus of roughly 40GB of text drawn from about 8 million webpages, which enabled it to generate more accurate and consistent responses than GPT-1.


Architecture: The model consists of 48 stacked transformer decoder layers, following the same design as GPT-1: each layer combines multi-head self-attention, which lets the model relate the words in a sentence, with a feedforward neural network that processes the attention outputs. GPT-2 scaled this design up to 1.5B parameters. Because GPT-2's weights were openly released, it is easy to try, as shown below.
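As a quick illustration, GPT-2 can be run locally with the Hugging Face `transformers` library (assuming `transformers` and `torch` are installed; the prompt and sampling settings are arbitrary choices for this sketch):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode a prompt and let the model continue it token by token.
input_ids = tokenizer.encode("The evolution of language models", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=40,       # total length including the prompt
    do_sample=True,      # sample instead of greedy decoding
    top_k=50,            # restrict sampling to the 50 most likely tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```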

Though GPT-2 improved on GPT-1, it still had limitations, such as a lack of factual grounding and a tendency to generate biased text.

GPT-3:

GPT-3, the third-generation language model developed by OpenAI, was released in May 2020. It became a sensation after launch and revolutionized the AI world because of its much greater capability compared to the earlier GPT models. It was trained on roughly 45TB of text data; about 60 percent of the pre-training data came from Common Crawl, with the remainder drawn from sources including Wikipedia and various books. Techniques like zero-shot, one-shot, and few-shot learning (illustrated below) made it far more versatile and marked a huge improvement in the NLP field.
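To see the difference between these modes, here is a sketch of how zero-, one-, and few-shot prompts are constructed as plain text; the translation task and examples are made up for illustration.

```python
task = "Translate English to French."
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
query = "peppermint"

def build_prompt(task, examples, query, n_shots):
    """Zero-shot: n_shots=0; one-shot: n_shots=1; few-shot: n_shots>1."""
    lines = [task]
    for en, fr in examples[:n_shots]:   # in-context demonstrations
        lines.append(f"{en} => {fr}")
    lines.append(f"{query} =>")         # the model completes this line
    return "\n".join(lines)

print(build_prompt(task, examples, query, n_shots=0))  # zero-shot
print()
print(build_prompt(task, examples, query, n_shots=2))  # few-shot
```

The key point is that no weights are updated: the "learning" happens entirely in the prompt, which is what made GPT-3 so versatile.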

Architecture: GPT-3 uses the same transformer framework as GPT-2. It comes in several sizes; the largest model has 175B parameters and 96 decoder layers, and it was trained on a system with 285k CPU cores, 10k GPUs, and 400Gbps of network connectivity for each GPU server.


The larger dataset and parameter count made GPT-3 considerably more powerful, enabling it to generate more general-purpose, user-friendly responses.

Architecture of GPT-1 vs GPT-2 vs GPT-3:

Model  | Decoder layers | Parameters
GPT-1  | 12             | 117M
GPT-2  | 48             | 1.5B
GPT-3  | 96             | 175B

Though GPT-3 overcame many limitations that earlier versions could not, GPT-3.5 was introduced to make the model more user-friendly and to provide longer, more broadly useful responses.

GPT-3.5:

GPT-3.5 is the successor to GPT-3 and a fine-tuned version of it, launched in January 2022. It was trained on the same datasets as GPT-3, but unlike its predecessors, its fine-tuning added a new learning method: Reinforcement Learning from Human Feedback (RLHF). GPT-3.5 is also called InstructGPT. Notably, the smallest InstructGPT model has only 1.3B parameters, more than 100 times fewer than the 175B GPT-3, yet human labelers preferred its outputs.

In this method, the model learns from human feedback: human labelers rank or rate the model's outputs, a reward model is trained on those preferences, and the language model is then optimized to produce outputs the reward model scores highly. The aim of RLHF is to combine human judgment with the knowledge already captured by the pre-trained model, aligning its behavior with what users actually want. A sketch of the reward model's training signal follows.
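As an illustration, the reward model at the core of RLHF is typically trained on pairwise comparisons: given two responses to the same prompt, it should score the human-preferred one higher. Here is a minimal sketch of that preference loss; the reward scores are made-up numbers, not from any real model.

```python
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise loss: -log sigmoid(r_chosen - r_rejected).
    Low when the reward model scores the human-preferred response higher."""
    return -np.log(1.0 / (1.0 + np.exp(-(reward_chosen - reward_rejected))))

# Hypothetical reward-model scores for two responses to one prompt.
print(preference_loss(2.0, -1.0))  # ~0.049: model agrees with the human ranking
print(preference_loss(-1.0, 2.0))  # ~3.049: model disagrees, so high loss
```

Once trained, the reward model stands in for the human labelers, letting reinforcement learning adjust the language model at scale.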

Comparison between GPT-2, GPT-3 and GPT-3.5:

Model   | Launched      | Parameters   | Key change
GPT-2   | February 2019 | 1.5B         | Scaled-up GPT-1, trained on WebText
GPT-3   | May 2020      | 175B         | Far larger model; zero/one/few-shot learning
GPT-3.5 | January 2022  | 1.3B to 175B | GPT-3 fine-tuned with RLHF (InstructGPT)

GPT-4:

GPT-4 was launched in March 2023. Its parameter count has not been disclosed, but it is estimated to be in the range of trillions, spread across many layers, and its architecture is a refined version of its predecessors'. GPT-4 is often described as ten times more advanced than its predecessor, GPT-3.5. It is markedly better than GPT-3.5 at understanding and generating different dialects, answering questions by synthesizing information from various sources, creativity and coherence, and complex problem-solving in programming, and it can also analyze graphics and images.

ChatGPT:

ChatGPT itself was introduced in November 2022. It is a fine-tuned model of GPT-3.5, which is in turn a fine-tuned version of GPT-3, adapted specifically for dialogue.

Do Check Out:

To dive deeper into AI, visit TestAing.


-Grandhi Priya