Scalable HMM based inference engine in large vocabulary continuous speech recognition

Jike Chong, Kisun You, Youngmin Yi, Ekaterina Gonina, Christopher Hughes, Wonyong Sung, Kurt Keutzer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Parallel scalability allows an application to efficiently utilize an increasing number of processing elements. In this paper we explore a design space for application scalability for an inference engine in large vocabulary continuous speech recognition (LVCSR). Our implementation of the inference engine involves a parallel graph traversal through an irregular graph-based knowledge network with millions of states and arcs. The challenge is not only to define a software architecture that exposes sufficient fine-grained application concurrency, but also to efficiently synchronize between an increasing number of concurrent tasks and to effectively utilize the parallelism opportunities in today's highly parallel processors. We propose four application-level implementation alternatives we call "algorithm styles", and construct highly optimized implementations on two parallel platforms: an Intel Core i7 multicore processor and an NVIDIA GTX280 manycore processor. The highest performing algorithm style varies with the implementation platform. On a 44-minute speech data set, we demonstrate substantial speedups of 3.4x on Core i7 and 10.5x on GTX280 compared to a highly optimized sequential implementation on Core i7, without sacrificing accuracy. The parallel implementations contain less than 2.5% sequential overhead, promising scalability and significant potential for further speedup on future platforms.
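To illustrate the kind of parallel graph traversal the abstract describes, the sketch below shows one frame of Viterbi-style beam-search expansion as a CUDA kernel, in the spirit of the GTX280 manycore implementation. This is not the authors' code: the CSR-style arc layout, the fixed-point integer costs, and the use of atomicMin to resolve concurrent writes to a destination state are illustrative assumptions, not details taken from the paper.

// Minimal sketch (illustrative assumptions throughout): each thread expands one
// active state of the recognition network; write conflicts on destination
// states are resolved with atomicMin on fixed-point costs.
#include <cuda_runtime.h>
#include <climits>

struct Arcs {                 // hypothetical CSR layout of the knowledge network
    const int *offsets;       // offsets[s] .. offsets[s+1]-1 index arcs leaving state s
    const int *dest;          // destination state of each arc
    const int *weight;        // arc weight as a fixed-point log-probability cost
};

__global__ void expandActiveStates(Arcs arcs,
                                   const int *activeStates, int numActive,
                                   const int *srcCost,   // cost of each active state
                                   const int *obsCost,   // acoustic cost per destination state
                                   int *destCost,        // per-state best cost, initialized to INT_MAX
                                   int beamThreshold)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numActive) return;

    int s = activeStates[i];
    int base = srcCost[i];
    // Traverse all outgoing arcs of this active state.
    for (int a = arcs.offsets[s]; a < arcs.offsets[s + 1]; ++a) {
        int d = arcs.dest[a];
        int cost = base + arcs.weight[a] + obsCost[d];
        if (cost < beamThreshold)
            atomicMin(&destCost[d], cost);   // concurrent writers keep the minimum cost
    }
}

In this style, one kernel launch per frame exposes fine-grained concurrency across all active states, while the atomic update is the synchronization point whose cost the paper's "algorithm styles" trade off differently on the multicore and manycore platforms.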

Original language: English
Title of host publication: Proceedings - 2009 IEEE International Conference on Multimedia and Expo, ICME 2009
Pages: 1797-1800
Number of pages: 4
DOIs
State: Published - 2009
Event: 2009 IEEE International Conference on Multimedia and Expo, ICME 2009 - New York, NY, United States
Duration: 28 Jun 2009 – 3 Jul 2009

Publication series

Name: Proceedings - 2009 IEEE International Conference on Multimedia and Expo, ICME 2009

Conference

Conference: 2009 IEEE International Conference on Multimedia and Expo, ICME 2009
Country/Territory: United States
City: New York, NY
Period: 28/06/09 – 3/07/09
