EnCur: Curriculum-based in-context learning with structural encoding for code time complexity prediction

  • Joonghyuk Hahn
  • Aditi
  • Seung Yeop Baik
  • Shinwoo Park
  • Sang Ki Ko
  • Yo Sub Han
Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

Large Language Models (LLMs) demonstrate promising performance on numerous complex inference tasks such as domain-specific text classification, code generation, code translation, and math word problems (MWPs). Among these tasks, we explore the capabilities of LLMs in predicting the time complexity of code snippets. This task requires the LLMs to deeply comprehend and analyze both the context and the structural properties, such as loops and recursion, of given code snippets—a critical factor for accurate time complexity prediction. Unlike conventional state-of-the-art models fine-tuned for downstream tasks, which require large amounts of task-specific data and computational resources, we use In-Context Learning (ICL) approaches of LLMs to leverage their pre-trained knowledge. By utilizing various ICL prompts, including instruction-based, Chain-of-Thought (CoT), and Chain-of-Verification (CoVe) prompts, we avoid the limitations of fine-tuning, such as overfitting to specific tasks and the need for extensive retraining. To this end, we propose EnCur, a curriculum-like ICL strategy that incorporates encoded structures of code snippets. Our strategy forms iterative prompts that accumulate context for the LLM, mirroring the progression that curriculum learning aims for. The empirical results and analyses underscore the potential of LLMs, specifically GPT models, to accumulate knowledge, which in turn leads to better performance in code time complexity prediction. We provide empirical results for GPT-3.5, GPT-4o-mini, and GPT-4o, where EnCur improves average accuracy and F1 scores by 3.0% and 5.2%, respectively, compared with baseline ICL techniques. Our implementation will be made publicly available at https://github.com/peer0/EnCur.
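The abstract's two core ingredients (structural encoding of loops/recursion, and curriculum-style prompt accumulation) can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual implementation: the function names, the AST-based encoding, and the prompt format are all assumptions for demonstration.

```python
import ast

def encode_structure(code: str) -> str:
    """Hypothetical structural encoding: count loops and detect recursion
    in a Python snippet via its AST."""
    tree = ast.parse(code)
    loops = sum(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree))
    recursive = [
        f.name for f in ast.walk(tree)
        if isinstance(f, ast.FunctionDef)
        and any(isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
                and c.func.id == f.name for c in ast.walk(f))
    ]
    return f"loops={loops}; recursive_fns={recursive}"

def build_curriculum_prompt(examples, query_code):
    """Curriculum-like ICL: accumulate solved examples (assumed pre-sorted
    from easy to hard) in the context, then append the unsolved query."""
    parts = []
    for code, label in examples:
        parts.append(f"Code:\n{code}\n"
                     f"Structure: {encode_structure(code)}\n"
                     f"Time complexity: {label}\n")
    parts.append(f"Code:\n{query_code}\n"
                 f"Structure: {encode_structure(query_code)}\n"
                 "Time complexity:")
    return "\n".join(parts)
```

The resulting string would be sent to an LLM as the prompt; the iterative accumulation of progressively harder solved examples is what gives the strategy its curriculum-like character.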

Original language: English
Article number: 129094
Journal: Expert Systems with Applications
Volume: 296
DOIs
State: Published - 15 Jan 2026

Keywords

  • Code complexity prediction
  • Curriculum learning
  • In-context learning
  • Large language models
  • Structural encoding
