PromptSource - an IDE and repo for natural language prompts



Summary: a description of "PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts" by S. Bach et al., developed under the BigScience project and presented as a demo at ACL 2022.
Resources: the PromptSource paper can be found here and the library can be found here.
Topics: Prompting, annotation tools, NLP
Slides: link (pdf)
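PromptSource stores prompts as templates that map a dataset example to an (input, target) pair, with the two parts separated by a `|||` marker. As a rough illustration of that idea, here is a minimal sketch using plain Python string formatting; the template text, field names, and helper below are hypothetical (the actual library uses Jinja2 templates organized per dataset).

```python
# Simplified sketch of a PromptSource-style template (assumption: the real
# library renders Jinja2 templates; plain str.format is used here for brevity).
# A template produces "input ||| target" for one dataset example.

TEMPLATE = (
    'Suppose {premise} Can we infer that "{hypothesis}"? Yes, no, or maybe?'
    " ||| {label}"
)

def apply_template(example: dict, template: str = TEMPLATE) -> tuple[str, str]:
    """Fill the template with one example's fields, then split the rendered
    string into (input_text, target_text) on the '|||' separator."""
    rendered = template.format(**example)
    input_text, target_text = (part.strip() for part in rendered.split("|||"))
    return input_text, target_text

# Hypothetical NLI-style example.
example = {
    "premise": "A dog is running in the park.",
    "hypothesis": "An animal is outdoors.",
    "label": "Yes",
}
inp, tgt = apply_template(example)
```

Separating the prompt text from the dataset this way is what lets a single dataset yield many prompted variants, which the paper's multitask-training use case (e.g. Sanh et al., below) relies on.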

References
  • H. Cunningham, "GATE, a general architecture for text engineering", Computers and the Humanities (2002)
  • D. Salgado et al., "MyMiner: a web application for computer-assisted biocuration and text annotation", Bioinformatics (2012)
  • P. Stenetorp et al., "BRAT: a web-based tool for NLP-assisted text annotation", EACL (2012)
  • S. Bowman et al., "A large annotated corpus for learning natural language inference", EMNLP (2015)
  • P. Helfrich et al., "TreeAnnotator: versatile visual annotation of hierarchical text relations", LREC (2018)
  • J. Yang et al., "YEDDA: A Lightweight Collaborative Text Span Annotation Tool", ACL (2018)
  • B. Lin et al., "AlpacaTag: an active learning-based crowd annotation framework for sequence tagging", ACL (2019)
  • T. Brown et al., "Language models are few-shot learners", NeurIPS (2020)
  • Q. Lhoest et al., "Datasets: A community library for natural language processing", EMNLP Demo (2021)
  • T. Schick et al., "It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners", NAACL-HLT (2021)
  • T. Le Scao et al., "How many data points is a prompt worth?", NAACL-HLT (2021)
  • E. Perez et al., "True few-shot learning with language models", NeurIPS (2021)
  • V. Sanh et al., "Multitask Prompted Training Enables Zero-Shot Task Generalization", ICLR (2022)
  • S. Bach et al., "PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts", ACL Demo (2022)
  • V. Lin et al., "Few-shot learning with multilingual language models", arXiv (2021)
  • S. Min et al., "MetaICL: Learning to learn in context", arXiv (2021)
  • T. Schick et al., "Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference", EACL (2021)
  • J. Wei et al., "Finetuned Language Models are Zero-Shot Learners", ICLR (2022)