The BIGGEN BENCH: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models

  • Seungone Kim
  • Juyoung Suk
  • Ji Yong Cho
  • Shayne Longpre
  • Chaeeun Kim
  • Dongkeun Yoon
  • Guijin Son
  • Yejin Cho
  • Sheikh Shafayat
  • Jinheon Baek
  • Sue Hyun Park
  • Hyeonbin Hwang
  • Jinkyung Jo
  • Hyowon Cho
  • Haebin Shin
  • Seongyun Lee
  • Hanseok Oh
  • Noah Lee
  • Namgyu Ho
  • Se June Joo
  • Miyoung Ko
  • Yoonjoo Lee
  • Hyungjoo Chae
  • Jamin Shin
  • Joel Jang
  • Seonghyeon Ye
  • Bill Yuchen Lin
  • Sean Welleck
  • Graham Neubig
  • Moontae Lee
  • Kyungjae Lee
  • Minjoon Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Scopus citations

Abstract

As language models (LMs) become capable of handling a wide range of tasks, their evaluation is becoming as challenging as their development. Most generation benchmarks currently assess LMs using abstract evaluation criteria, like helpfulness and harmlessness, which often lack the flexibility and granularity of human assessment. Additionally, these benchmarks tend to focus disproportionately on specific capabilities such as instruction following, leading to coverage bias. To overcome these limitations, we introduce the BIGGEN BENCH, a principled generation benchmark designed to thoroughly evaluate nine distinct capabilities of LMs across 77 diverse tasks. A key feature of the BIGGEN BENCH is its use of instance-specific evaluation criteria, closely mirroring the nuanced discernment of human evaluation. We apply this benchmark to assess 103 frontier LMs using five evaluator LMs. Our code, data, and evaluation results are all publicly available.

Original language: English
Title of host publication: Long Papers
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Publisher: Association for Computational Linguistics (ACL)
Pages: 5877-5919
Number of pages: 43
ISBN (Electronic): 9798891761896
DOIs
State: Published - 2025
Event: 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2025 - Hybrid, Albuquerque, United States
Duration: 29 Apr 2025 - 4 May 2025

Publication series

Name: Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies: Long Papers, NAACL-HLT 2025
Volume: 1

Conference

Conference: 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2025
Country/Territory: United States
City: Hybrid, Albuquerque
Period: 29/04/25 - 4/05/25

