A look at GPT-3

Michael McClellan, Staff Writer

Say hello to Generative Pre-trained Transformer 3, better known simply as GPT-3, an “autoregressive language model that uses deep learning to produce human-like text.” OpenAI, the San Francisco-based research laboratory co-founded by Elon Musk, released the model on June 11, 2020. GPT-3 is the most complex and sophisticated text generator to date, outperforming its predecessors by a large margin. By analyzing Wikipedia articles, online books, social media posts, and other text from across the internet, GPT-3 has learned to answer questions and carry out written tasks. But how does GPT-3 work, and what can it really do? 

GPT-3 is an artificial intelligence system that takes a user’s text and predicts what text should come next. If a user asks GPT-3 a question, GPT-3 draws on the patterns it learned from internet text to formulate the best answer it can. The same mechanism lets it fill in a missing word in a phrase: given a prompt like “The cow jumped over the ____,” GPT-3 will recognize that the user is most likely looking for the word “moon.” Trained on an enormous range of text from the internet, GPT-3 has a strong base of language at its disposal. But GPT-3 is more than a machine that knows a lot of words; it builds up an understanding of how words fit together by repeatedly taking sentences apart and predicting how to rebuild them. During training, every prediction error gradually nudges the model toward more accurate responses. 
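The core idea is simple enough to sketch in a few lines of code. GPT-3 itself is not publicly downloadable, so the example below uses the much smaller, openly available GPT-2 model from the Hugging Face “transformers” library as a stand-in; it asks the model to predict the single most likely word to follow the article’s example phrase.

    # Next-word prediction, the task GPT-3 is trained on, illustrated with GPT-2.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The cow jumped over the"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits        # a score for every word in the model's vocabulary
    next_token_id = logits[0, -1].argmax()     # the single most likely next token
    print(tokenizer.decode(next_token_id))     # with luck, something like " moon"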

When invited to see what GPT-3 could do, McKay Wrigley, the founder and CEO of LearnFromAnyone, decided to experiment with the AI, wondering whether the system could imitate the speech and writing style of public figures. In his experiment, Wrigley gave GPT-3 the name of the psychologist Scott Barry Kaufman and a topic of discussion, in this case creativity. When asked “How can we become more creative?”, GPT-3 left Wrigley and Kaufman with a surprising answer, responding to the prompt much as Kaufman himself would; Kaufman admitted that “it definitely sounds like something [he] would say.” The result even surprised GPT-3’s designers, who had originally intended only for the system to fill in missing words in a sentence. 
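For readers curious what such an experiment looks like in practice, the snippet below is a rough sketch of how a prompt like Wrigley’s might be sent to GPT-3 through OpenAI’s API as it existed around the model’s launch. The prompt wording, engine name, and parameters are illustrative guesses, not Wrigley’s actual setup.

    # Asking GPT-3 to answer in the voice of a named public figure (illustrative only).
    import openai

    openai.api_key = "YOUR_API_KEY"  # access required an invitation from OpenAI

    prompt = (
        "The following is a conversation with psychologist Scott Barry Kaufman.\n\n"
        "Q: How can we become more creative?\n"
        "A:"
    )

    response = openai.Completion.create(
        engine="davinci",   # the largest GPT-3 engine offered at the time
        prompt=prompt,
        max_tokens=150,     # length of the generated reply
        temperature=0.8,    # higher values make the output more varied
        stop=["\nQ:"],      # stop before the model invents the next question
    )
    print(response.choices[0].text.strip())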

Although GPT-3 performs seemingly flawlessly, and better than expected, in this particular anecdote, sources note that it still has a long way to go, both in the quality of the language it provides answer seekers and in its accessibility to the general public. Because there is no way to filter the information GPT-3 absorbs from the internet, it has produced toxic and biased language, often showing a lack of comprehension and sensitivity in particular situations. For example, Jerome Pesenti, the head of Facebook’s AI lab, called GPT-3 “unsafe” for the workplace after it used discriminatory language when prompted with topics such as African Americans and the Holocaust. Beyond its inability to filter out racist and discriminatory language, GPT-3 also cannot filter out false information. Experts are concerned that AI systems such as GPT-3 will contribute to the spread of fake news, with damaging consequences for nations worldwide, as seen in the 2016 United States presidential election, when social media and artificial intelligence were exploited as tools against democracy. 

The technology itself is also very expensive, putting it beyond the reach of many small companies, let alone individuals. Additionally, OpenAI hasn’t released GPT-3’s code to the public, so its effect on everyday life has been relatively limited thus far. OpenAI has likewise kept the details of how GPT-3’s underlying system works largely under wraps, which helps prevent the model from being copied but further limits its impact.

While GPT-3 shows promise and offers a glimpse of what the future of artificial intelligence may hold, there are still several problems, some apparent now and others as yet unforeseen, that need to be considered and addressed.

Sources:

Kamarck, Elaine. “Malevolent Soft Power, AI, and the Threat to Democracy.” Brookings, Brookings, 25 Oct. 2019, www.brookings.edu/research/malevolent-soft-power-ai-and-the-threat-to-democracy/.  

Marcus, Gary. “GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About.” MIT Technology Review, MIT Technology Review, 9 Sept. 2020, www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/.  

Marr, Bernard. “What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence?” Forbes, Forbes Magazine, 5 Oct. 2020, www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/?sh=61cab44b481a.  

Metz, Cade. “Meet GPT-3. It Has Learned to Code (and Blog and Argue).” The New York Times, The New York Times, 24 Nov. 2020, www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html.  

Shukla, Vikas. “AI Text Generator GPT-3 Is Brilliant but Worrisome.” Insider Paper, 1 Aug. 2020, www.insiderpaper.com/ai-text-generator-gpt-3/.  

Photo Credit: Vikas Shukla