Does a Large Language Model Really Speak in Human-Like Language?

Research output: Contribution to journal › Article › peer-review

Abstract

Large language models (LLMs) have recently attracted considerable attention for their ability to generate highly natural, human-like text. This study compares the latent community structures of LLM-generated text and human-written text within a hypothesis-testing procedure. Specifically, we analyse three text sets: original human-written texts (T0), their LLM-paraphrased versions (T1) and a twice-paraphrased set (T2) derived from T1. Our analysis addresses two key questions: (1) Is the difference in latent community structures between T0 and T1 the same as that between T1 and T2? (2) Does T1 become more similar to T0 as the LLM parameter controlling text variability is adjusted? The first question rests on the assumption that if LLM-generated text truly resembles human language, then the gap between the pair (T0, T1) should be similar to that between the pair (T1, T2), as both pairs consist of an original text and its paraphrase. The second question examines whether the degree of similarity between LLM-generated and human text varies with the breadth of text generation. To address these questions, we propose a statistical hypothesis-testing framework that exploits the fact that each text has corresponding parts across T0, T1 and T2. This correspondence enables the relative position of one dataset to be mapped to another, allowing two datasets to be mapped to a third. Both mapped datasets can then be quantified with respect to the space characterized by the third dataset, facilitating a direct comparison between them. For T0, the original human text, we collected customer reviews from an accommodation booking site; for T1 and T2, we generated the paraphrases with GPT-3.5. Our results indicate that GPT-generated text remains distinct from human-authored text.
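The core comparison described in the abstract — asking whether the gap between the original texts and their paraphrases matches the gap between the paraphrases and their re-paraphrases — can be illustrated with a toy sketch. This is not the paper's actual method (which compares latent community structures); it is a minimal stand-in that uses random numeric vectors in place of text embeddings, labels the three sets T0, T1 and T2 as in the abstract, and applies a generic nonparametric permutation test to the per-document gap statistic. All names and the choice of test here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for embedded texts: rows are documents, columns are
# embedding dimensions. T1 "paraphrases" T0; T2 "paraphrases" T1.
T0 = rng.normal(0.0, 1.0, size=(50, 8))
T1 = T0 + rng.normal(0.0, 0.3, size=(50, 8))  # simulated paraphrase shift
T2 = T1 + rng.normal(0.0, 0.3, size=(50, 8))  # second paraphrase shift

# Per-document distances between corresponding texts in each pair.
gap_01 = np.linalg.norm(T0 - T1, axis=1)  # gaps for the pair (T0, T1)
gap_12 = np.linalg.norm(T1 - T2, axis=1)  # gaps for the pair (T1, T2)

# Observed statistic: difference of mean gaps. Under the null hypothesis
# that both pairs behave like "original vs. paraphrase", the two gap
# samples are exchangeable and this difference should be near zero.
observed = gap_01.mean() - gap_12.mean()

# Generic two-sample permutation test: pool the gaps, shuffle the pair
# labels, and recompute the statistic to build a null distribution.
pooled = np.concatenate([gap_01, gap_12])
n = len(gap_01)
n_perm = 2000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    stat = perm[:n].mean() - perm[n:].mean()
    if abs(stat) >= abs(observed):
        count += 1

# Add-one smoothing keeps the p-value strictly positive.
p_value = (count + 1) / (n_perm + 1)
print(f"observed gap difference: {observed:.4f}, p-value: {p_value:.3f}")
```

A large p-value here would fail to reject the null that the two pairs are equally far apart; the paper's finding is that, for real GPT-3.5 paraphrases and community-structure statistics, the human/LLM pair behaves differently.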

Original language: English
Article number: e70060
Journal: Stat
Volume: 14
Issue number: 2
DOIs
State: Published - Jun 2025

Keywords

  • community detection
  • hypothesis testing
  • large language model
  • natural language processing
  • nonparametric tests
