GPT-3 (Generative Pre-trained Transformer 3) is a language generation model developed by OpenAI. Ever since its API entered beta in 2020, the deep learning model has been making headlines as a breakthrough in artificial intelligence (AI). Its applications range from text generation, translation and summarization to text classification and sentiment analysis. However, no technology is without its flaws, and GPT-3 is no exception. So, what are some of the disadvantages of GPT-3? Read on to find out.
It should be noted that some of the drawbacks discussed here are existing problems with the model, while others are only potential issues that could surface once it becomes more widely used. The upside is that developers have a chance to assess and fix the latter before that happens. The following disadvantages are based on currently available information about GPT-3.
The biggest disadvantage of GPT-3 is its cost. The API required to access GPT-3 is quite expensive, which puts it out of budget for many individuals and even small businesses. The most advanced language model, Davinci, costs $0.02 (approximately Rs. 1.5) per thousand tokens. Tokens can be understood as pieces of words; 1,000 tokens work out to roughly 750 words, according to OpenAI. Producing text in high volumes at this price can be unaffordable for many.
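To see how those numbers add up, here is a minimal back-of-the-envelope sketch in Python. It assumes the $0.02 per 1,000 tokens Davinci rate and OpenAI's rule of thumb of about 750 words per 1,000 tokens mentioned above; the function name and the sample volume are purely illustrative, not part of any official OpenAI calculator.

    # Rough cost estimate for GPT-3 Davinci usage.
    # Assumptions (from the article): $0.02 per 1,000 tokens,
    # and roughly 750 words per 1,000 tokens.

    PRICE_PER_1K_TOKENS_USD = 0.02   # Davinci rate cited above
    WORDS_PER_1K_TOKENS = 750        # OpenAI's rough rule of thumb

    def estimate_cost_usd(total_words: int) -> float:
        """Estimate the Davinci API cost for generating `total_words` of text."""
        tokens = total_words * 1000 / WORDS_PER_1K_TOKENS
        return tokens / 1000 * PRICE_PER_1K_TOKENS_USD

    if __name__ == "__main__":
        # Example: a content team producing 500 articles of 1,000 words each month.
        monthly_words = 500 * 1000
        print(f"Estimated monthly cost: ${estimate_cost_usd(monthly_words):.2f}")

Note that this only counts the generated (output) tokens; prompt tokens, discarded drafts and retries would push the real bill higher.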
A potential issue with GPT-3 is its bias. As with any machine learning model, GPT-3 is only as good as the data it was trained on. In effect, garbage in, garbage out. If the training data contains biases, the model may exhibit those biases in its output. While a team of experts can mitigate this, it may not be easy for an individual user or a small business to do so.