Preprocessing for Keypoint-Based Sign Language Translation without Glosses

Youngmin Kim, Hyeongboo Baek

Research output: Contribution to journal › Article › peer-review

7 Scopus citations


While machine translation for spoken language has advanced significantly, research on sign language translation (SLT) for deaf individuals remains limited. Obtaining annotations, such as glosses, can be expensive and time-consuming. To address these challenges, we propose a new sign language video-processing method for SLT without gloss annotations. Our approach leverages the signer’s skeleton points to identify their movements and to build a robust model that is resilient to background noise. We also introduce a keypoint normalization process that preserves the signer’s movements while accounting for variations in body length. Furthermore, we propose a stochastic frame selection technique that prioritizes informative frames and minimizes the loss of video information. Built on an attention-based model, our approach demonstrates its effectiveness through quantitative experiments on various metrics using German and Korean sign language datasets without glosses.
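The two preprocessing ideas in the abstract can be illustrated with a minimal sketch. The function names, the choice of the neck as the centering joint, the shoulder width as the scale factor, and the segment-wise sampling scheme are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def normalize_keypoints(kp, neck_idx=0, shoulder_l=1, shoulder_r=2):
    """Illustrative keypoint normalization: center each frame's skeleton on
    the neck joint and scale by shoulder width, so signers with different
    body lengths map to a comparable coordinate range.

    kp: array of shape (T, J, 2) -- T frames, J joints, (x, y) coordinates.
    The joint indices are assumptions for this sketch.
    """
    center = kp[:, neck_idx:neck_idx + 1, :]          # (T, 1, 2)
    centered = kp - center                             # translate per frame
    # Per-frame shoulder width as a body-length proxy.
    scale = np.linalg.norm(kp[:, shoulder_l] - kp[:, shoulder_r],
                           axis=-1, keepdims=True)     # (T, 1)
    scale = np.maximum(scale, 1e-6)[..., None]         # avoid divide-by-zero
    return centered / scale

def stochastic_frame_selection(num_frames, target_len, rng=None):
    """Illustrative stochastic frame selection: split the video into
    target_len contiguous segments and sample one frame per segment, so
    coverage of the whole video is preserved while frame count is fixed."""
    rng = np.random.default_rng() if rng is None else rng
    edges = np.linspace(0, num_frames, target_len + 1).astype(int)
    return np.array([rng.integers(lo, max(lo + 1, hi))
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

After normalization every frame has its neck at the origin and unit shoulder width; the sampled frame indices are strictly increasing, so temporal order is preserved.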

Original language: English
Article number: 3231
Issue number: 6
State: Published - Mar 2023


Keywords

  • computer vision
  • deep learning
  • sign language translation
  • video processing


