Stochastic Parrot

In machine learning, the term stochastic parrot is a metaphor to describe the theory that large language models, though able to generate plausible language, do not understand the meaning of the language they process.

The term was coined by Emily M. Bender in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🩜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.

Origin and definition

The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🩜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell"). They argued that very large language models (LLMs) present dangers such as environmental and financial costs, inscrutability that can hide unknown and dangerous biases, and the potential for deception, and that they cannot understand the concepts underlying what they learn. Gebru and Mitchell lost their jobs at Google after publishing their criticisms, along with other contributing events, and their dismissal sparked a protest by Google employees.

The word "stochastic" derives from the ancient Greek word "stokhastikos", meaning "based on guesswork" or "randomly determined". The word "parrot" refers to the idea that LLMs merely repeat words without understanding their meaning.

In their paper, Bender et al. argue that LLMs probabilistically link words and sentences together without considering meaning, and are therefore mere "stochastic parrots."

According to the machine learning researchers Lindholm, Wahlström, Lindsten, and Schön, the analogy highlights two key limitations:

  • LLMs are limited by the data they are trained on and are simply stochastically repeating the contents of their datasets.
  • Because they only generate outputs based on their training data, LLMs do not know whether they are saying something incorrect or inappropriate.

Lindholm et al. noted that, with poor quality datasets and other limitations, a learning machine might produce results that are "dangerously wrong".
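The limitation can be illustrated with a minimal sketch (not drawn from Bender et al. or Lindholm et al.): a toy bigram model that "parrots" its training text by sampling each next word purely from co-occurrence counts, with no representation of meaning.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram model that samples the next word
# purely from co-occurrence counts in its training text.
corpus = "the parrot repeats words the parrot repeats sounds".split()

# Record which words follow which in the training data.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        followers = bigrams.get(word)
        if not followers:                    # no continuation seen in training
            break
        word = random.choice(followers)      # stochastic: guided only by counts
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the parrot repeats words the parrot repeats"
```

The model reproduces plausible-looking sequences solely from statistics of its training data, which is the behavior the metaphor attributes to LLMs at a vastly larger scale.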

Subsequent usage

In July 2021, the Alan Turing Institute hosted a keynote and panel discussion on the paper. As of May 2023, the paper had been cited in 1,529 publications. The term has been used in publications in the fields of law, grammar, narrative, and humanities. The authors continue to maintain their concerns about the dangers of chatbots based on large language models, such as GPT-4.

Stochastic parrot is now a neologism used by AI skeptics to refer to machines' lack of understanding of the meaning of their outputs, and it is sometimes interpreted as a "slur against AI." Its use expanded further when Sam Altman, CEO of OpenAI, used the term ironically when he tweeted, "i am a stochastic parrot and so r u." The term was then designated the American Dialect Society's 2023 AI-related Word of the Year, beating out "ChatGPT" and "LLM."

The phrase is often used by researchers to characterize LLMs as pattern matchers that generate plausible human-like text by merely parroting, in a stochastic fashion, the vast amounts of text they were trained on. However, other researchers argue that LLMs are, in fact, able to understand language.

Debate

Some LLMs, such as ChatGPT, have become capable of interacting with users in convincingly human-like conversations. The development of these new systems has deepened the discussion of the extent to which LLMs are simply “parroting.”

In the mind of a human being, words and language correspond to things one has experienced. For LLMs, words correspond only to other words and to patterns of usage in their training data. Proponents of the stochastic parrot view thus conclude that LLMs are incapable of actually understanding language.

The tendency of LLMs to pass off fabricated information as fact is held up as support. In these failures, called hallucinations, LLMs occasionally synthesize information that matches some pattern but not reality. That LLMs cannot distinguish fact from fiction leads to the claim that they cannot connect words to a comprehension of the world, as language should. Further, LLMs often fail to decipher complex or ambiguous grammatical cases that rely on understanding the meaning of language. One example, borrowed from Saba et al., is the prompt:

The wet newspaper that fell down off the table is my favorite newspaper. But now that my favorite newspaper fired the editor I might not like reading it anymore. Can I replace ‘my favorite newspaper’ by ‘the wet newspaper that fell down off the table’ in the second sentence?

LLMs respond to this in the affirmative, not understanding that the meaning of "newspaper" is different in the two contexts: it is first an object and second an institution. Based on such failures, some AI professionals conclude that LLMs are no more than stochastic parrots.

However, there is support for the claim that LLMs are more than that. LLMs perform well on many tests of understanding, such as the Super General Language Understanding Evaluation (SuperGLUE). Tests such as these, and the smoothness of many LLM responses, lead as many as 51% of AI professionals to believe that LLMs can truly understand language given enough data, according to a 2022 survey.
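As an illustration of how such benchmarks are typically scored, the sketch below runs a trivial always-"yes" baseline over BoolQ, a yes/no reading-comprehension task within SuperGLUE, loaded via the Hugging Face datasets library; the answer_yes_no placeholder is an assumption standing in for whatever LLM is under evaluation.

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# BoolQ: a yes/no reading-comprehension task from the SuperGLUE benchmark.
boolq = load_dataset("super_glue", "boolq", split="validation")

def answer_yes_no(passage: str, question: str) -> int:
    """Placeholder for the LLM under evaluation.
    Here it always answers 'yes' (label 1) as a trivial baseline;
    a real evaluation would query the model instead."""
    return 1

# Accuracy is simply the fraction of questions answered with the gold label.
correct = sum(
    int(answer_yes_no(ex["passage"], ex["question"]) == ex["label"])
    for ex in boolq
)
print(f"BoolQ accuracy: {correct / len(boolq):.3f}")
```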

Another technique used to investigate whether LLMs go beyond parroting is termed "mechanistic interpretability". The idea is to reverse-engineer a large language model by discovering symbolic algorithms that approximate the inference it performs. One example is Othello-GPT, in which a small transformer was trained to predict legal Othello moves. The model was found to form a linear representation of the Othello board, and modifying this representation changes the predicted legal moves in the correct way.
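The probing step behind such findings can be sketched as follows: fit a linear classifier from the model's hidden activations to the state of a board square. The activation and label arrays below are random placeholders, whereas the published Othello-GPT work probes the transformer's actual internal activations against true board states.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: in the Othello-GPT experiments these would be hidden
# activations from the transformer and the true state of one board square
# (empty / black / white) at each game position.
rng = np.random.default_rng(0)
activations = rng.normal(size=(5000, 512))    # (positions, hidden_dim) -- placeholder
square_state = rng.integers(0, 3, size=5000)  # 0=empty, 1=black, 2=white -- placeholder

# A linear probe: if a simple linear map can recover the board state from the
# activations, the state is linearly represented inside the model.
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:4000], square_state[:4000])
print("probe accuracy:", probe.score(activations[4000:], square_state[4000:]))
```

With the random placeholder data the probe scores near chance; a high held-out accuracy on real activations is the evidence cited for a linear board representation.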

In another example, a small transformer was trained on Karel programs. As in the Othello-GPT example, the model forms a linear representation of the Karel program semantics, and modifying this representation changes the output in the correct way. The model also generates correct programs that are, on average, shorter than those in the training set.

However, when tests designed to measure language comprehension in people are applied to LLMs, they sometimes produce false positives caused by spurious correlations within the text data. Models have shown examples of shortcut learning, in which a system relies on unrelated correlations within the data instead of human-like understanding. One such experiment tested Google's BERT LLM using the argument reasoning comprehension task, asking it to choose which of two statements is more consistent with a given argument. Below is an example of one such prompt:

Argument: Felons should be allowed to vote. A person who stole a car at 17 should not be barred from being a full citizen for life.
Statement A: Grand theft auto is a felony.
Statement B: Grand theft auto is not a felony.

Researchers found that specific words such as "not" cue the model toward the correct answer, allowing near-perfect scores when they are present but leading to close-to-random selection when such cue words are removed. This problem, together with the known difficulties of defining intelligence, leads some to argue that all benchmarks that find understanding in LLMs are flawed, in that they all allow shortcuts that fake understanding.
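A minimal sketch of this kind of cue-word test, reusing the example prompt above: the classify function here is a toy heuristic that deliberately exploits the "not" cue, standing in for the fine-tuned BERT model rather than reproducing it. Comparing its accuracy with and without the cue words shows how a shortcut can mimic understanding.

```python
import random

# Toy "shortcut" classifier standing in for the model being probed: it picks
# whichever statement lacks the cue word "not", and guesses at random when the
# cue is absent from both. This mirrors the kind of spurious heuristic the
# BERT experiment revealed.
def classify(argument: str, statement_a: str, statement_b: str) -> int:
    a_has_not = " not " in f" {statement_a} "
    b_has_not = " not " in f" {statement_b} "
    if a_has_not != b_has_not:
        return 1 if a_has_not else 0   # choose the statement without "not"
    return random.randint(0, 1)        # no cue available: guess

def accuracy(items, strip_cues=False):
    correct = 0
    for argument, a, b, gold in items:
        if strip_cues:
            a, b = a.replace(" not", ""), b.replace(" not", "")
        correct += int(classify(argument, a, b) == gold)
    return correct / len(items)

items = [
    ("Felons should be allowed to vote. A person who stole a car at 17 "
     "should not be barred from being a full citizen for life.",
     "Grand theft auto is a felony.",
     "Grand theft auto is not a felony.",
     0),  # the correct choice is Statement A (index 0)
]

# High accuracy while the cue word is present, chance-level once it is removed.
print(accuracy(items), accuracy(items, strip_cues=True))
```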

Without a reliable benchmark, researchers have found it difficult to distinguish stochastic parrots from models capable of understanding. When experimenting on ChatGPT-3, one scientist argued that the model lay somewhere between true human-like understanding and being a stochastic parrot. He found that the model was coherent and informative when attempting to predict future events based on the information in the prompt, and it was frequently able to parse subtextual information from text prompts as well. However, the model frequently failed when tasked with logic and reasoning, especially when the prompts involved spatial awareness. The model's varying quality of responses indicates that LLMs may have a form of "understanding" in certain categories of tasks while acting as a stochastic parrot in others.

