GPT-4 — a shift from ‘what it can do’ to ‘what it augurs’

Context:

  • The advent of large language models raises questions about whether such models are being built without regard for society’s concerns

Introduction:

  • OpenAI has developed a new AI language model called GPT-4, which is an improvement over its predecessor GPT-3.5.
  • GPT-4 can accept both text and image input and produce more creative and conversational language.
  • The model can understand human emotions and describe images, benefiting the visually impaired.

Capabilities:

  • GPT-4 can handle up to 25,000 words of context, an improvement of more than 8x over GPT-3.5 (see the note after this list).
  • It scored in the 90th percentile in a simulated bar examination and performed well in various courses in environmental science, statistics, art history, biology, and economics.
  • However, it did not perform as well in advanced English language and literature.
  • Its language comprehension surpasses that of other high-performing language models in English and in 25 other languages, including Punjabi, Marathi, Bengali, Urdu, and Telugu.
  • GPT-4 could potentially displace workers in white-collar jobs, especially in programming and writing.
  • The model has sparked discussions about the emergence of artificial general intelligence, which can excel at several task types and combine concepts.
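
A quick sanity check on the ‘more than 8x’ figure, assuming GPT-3.5’s context window is roughly 3,000 words (about 4,096 tokens); this assumption is not stated in the article:

\[
\frac{25{,}000 \text{ words}}{\approx 3{,}000 \text{ words}} \approx 8.3\times
\]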

Ethical concerns:

  • GPT-4 is still prone to flaws, such as producing incorrect information and reflecting harmful biases and stereotypes.
  • OpenAI has not been transparent about the inner workings of GPT-4, which has raised concerns about critical scrutiny and safety.
  • The model has been trained on data scraped from the internet that may contain harmful biases and stereotypes, and OpenAI’s efforts to fix these biases may not be sufficient.
  • GPT-4 could be misused as a propaganda and disinformation engine, raising the question of where the responsibility for refusing to do the wrong thing should lie: in the machine’s rules or in the human’s mind.
  • OpenAI has made efforts to make GPT-4 safer to use, but whether these efforts will be sufficient remains to be seen.

Source: The Hindu