GPT-4

Summary: An overview of OpenAI's GPT-4 technical report, covering the model's capabilities and risks.
Topics: GPT-4, Large Language Model, Capabilities, Risks
Slides: link (pdf)

References
  • T. Henighan et al., "Scaling laws for autoregressive generative modeling", arXiv (2020)
  • S. Gehman et al., "RealToxicityPrompts: Evaluating neural toxic degeneration in language models", arXiv (2020)
  • D. Hendrycks et al., "Measuring massive multitask language understanding", ICLR (2021)
  • M. Chen et al., "Evaluating large language models trained on code", arXiv (2021)
  • S. Lin et al., "TruthfulQA: Measuring how models mimic human falsehoods", arXiv (2021)
  • J. Wei et al., "Inverse scaling can become U-shaped", arXiv (2022)
  • J. Hoffmann et al., "Training compute-optimal large language models", arXiv (2022)
  • A. Chowdhery et al., "PaLM: Scaling language modeling with pathways", arXiv (2022)
  • E. Perez et al., "Red teaming language models with language models", arXiv (2022)
  • A. Glaese et al., "Improving alignment of dialogue agents via targeted human judgements", arXiv (2022)
  • OpenAI, "GPT-4 Technical Report", arXiv (2023)