
Hyuckchul Jung

Jung, Hyuckchul
1 AT&T Way
Bedminster, NJ


Email contact: hjung you-know-what research.att.com

I'm a member of the Speech and Language Research Group. I have extensive experience in

  • Dialog-based systems
  • Task learning and management
  • Natural language processing
  • Intelligent agents

At AT&T, my research focuses on two areas:

  • Personal Assistant Agents
  • Natural Language Processing

In particular, for personal assistant agents, I'm investigating techniques to rapidly extend virtual agents to new application domains and to manage the tasks users request of them. On the NL side, I'm working on temporal information analysis and coreference resolution.


My research prior to joining AT&T in 2012 is described on this website (with demo videos). A notable past research project is CALO (best known for its spin-off, Apple's Siri); our work in CALO received an outstanding paper award at AAAI-2007. CALO envisioned many virtual agent capabilities that are not fully realized yet (refer to CALO functions). At AT&T, building on deep ASR/NL/dialog techniques, I continue working to make the vision of CALO come true in both personal and enterprise applications.


Below are some of the results from my work at AT&T.

Publications

Svetlana Stoyanchev, Hyuckchul Jung, John Chen, Srinivas Bangalore, Tag & Parse Approach to Semantic Parsing of Robot Spatial Commands, Proceedings of the International Workshop on Semantic Evaluation, 2014

>> We participated in SemEval-2014 Task 6, “Supervised Semantic Parsing of Robotic Spatial Commands,” and our system ranked 2nd out of five teams. This work demonstrated our ability to rapidly develop an NL system for a highly specialized domain from a relatively small dataset. Our statistical, data-driven approach allows flexible development.
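
To give a feel for the tag-then-parse idea, here is a toy sketch in Python (the lexicon and frame format are hypothetical illustrations, not the statistical system from the paper): each token is first tagged with a semantic label, and the tags are then assembled into a semantic frame.

    # Toy tag-then-parse pipeline for robot spatial commands.
    # The lexicon and frame format are hypothetical illustrations.

    LEXICON = {
        "move": "ACTION", "take": "ACTION",
        "red": "COLOR", "blue": "COLOR", "green": "COLOR",
        "block": "TYPE", "pyramid": "TYPE",
        "onto": "RELATION", "above": "RELATION",
        "the": "DET",
    }

    def tag(command):
        """Step 1: assign a semantic tag to each token."""
        return [(tok, LEXICON.get(tok, "O")) for tok in command.lower().split()]

    def parse(tagged):
        """Step 2: assemble the tagged tokens into a semantic frame."""
        frame = {"action": None, "object": {}, "relation": None, "landmark": {}}
        slot = "object"  # fill the object first, the landmark after the relation
        for tok, t in tagged:
            if t == "ACTION":
                frame["action"] = tok
            elif t == "RELATION":
                frame["relation"] = tok
                slot = "landmark"
            elif t == "COLOR":
                frame[slot]["color"] = tok
            elif t == "TYPE":
                frame[slot]["type"] = tok
        return frame

    print(parse(tag("move the red block onto the blue block")))
    # {'action': 'move', 'object': {'color': 'red', 'type': 'block'},
    #  'relation': 'onto', 'landmark': {'color': 'blue', 'type': 'block'}}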


Michael Johnston, John Chen, Patrick Ehlen, Hyuckchul Jung, Jay Lieske, Aarthy Reddy, Ethan Selfridge, Svetlana Stoyanchev, Brant Vasilieff, Jay Wilpon, MVA: The Multimodal Virtual Assistant, Proceedings of the Annual SIGdial Meeting on Discourse and Dialogue, 2014

>> We developed a multimodal personal assistant agent that helps users find information about events, movies, and restaurants. Our techniques extend systematically to multiple domains, and gesture-based input allows very natural interaction (e.g., pointing at or circling areas on a map, coupled with speech commands).


Hyuckchul Jung and Amanda Stent, Temporal Annotation Using Big Windows and Rich Syntactic and Semantic Features, Proceedings of the International Workshop on Semantic Evaluation, 2013

>> We participated in the TempEval-2013 temporal annotation task, in which systems are required to extract events and time information. Our performance was best in event extraction and in time precision. Time is critical in many applications, and we plan to deploy our time extractor/normalizer in AT&T business applications.
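
As a rough illustration of what time extraction and normalization involve, here is a minimal rule-based sketch in Python (the patterns are hypothetical; the actual system uses rich syntactic and semantic features rather than a handful of regexes):

    import re
    from datetime import date, timedelta

    # Minimal rule-based time expression extraction and normalization.
    # The patterns below are hypothetical illustrations.

    PATTERNS = [
        (re.compile(r"\btomorrow\b", re.I),
         lambda ref, m: ref + timedelta(days=1)),
        (re.compile(r"\byesterday\b", re.I),
         lambda ref, m: ref - timedelta(days=1)),
        (re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b"),
         lambda ref, m: date(int(m.group(1)), int(m.group(2)), int(m.group(3)))),
    ]

    def extract_times(text, ref_date):
        """Find time expressions and normalize each to an ISO date."""
        results = []
        for pattern, normalize in PATTERNS:
            for m in pattern.finditer(text):
                results.append((m.group(0), normalize(ref_date, m).isoformat()))
        return results

    print(extract_times("Meet me tomorrow, not on 2013-03-15.", date(2013, 3, 1)))
    # [('tomorrow', '2013-03-02'), ('2013-03-15', '2013-03-15')]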


Patent

Pending United States Patent (filed on December 10, 2013): Web-Based Dialog Systems Through JavaScript, Hyuckchul Jung, Dan Melamed, Nobal Niraula, Amanda Stent

>> We developed a system that automatically detects and analyzes web forms and creates a dialog that asks users for the information those forms require (e.g., when visiting an unseen travel website that has a “number of guests” field or menu, the system automatically asks for that number). This technique has huge potential: it enables virtual agents to visit any website and automatically fill in, or ask users for, the information the site requires, which can be very useful on mobile devices and in assistive technology.
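
A minimal sketch of the core idea in Python, assuming naive field detection and question generation (the field names and question template are hypothetical; the patented system's form analysis is far more involved):

    from html.parser import HTMLParser

    # Detect form fields on a page and turn each one into a question.
    # Field names and the question template are hypothetical.

    class FormFieldFinder(HTMLParser):
        """Collect the names of <input> and <select> fields in a page."""
        def __init__(self):
            super().__init__()
            self.fields = []

        def handle_starttag(self, tag, attrs):
            if tag in ("input", "select"):
                name = dict(attrs).get("name")
                if name:
                    self.fields.append(name)

    def dialog_for_form(html):
        """Generate one user-facing question per detected form field."""
        finder = FormFieldFinder()
        finder.feed(html)
        return [f"What is the {name.replace('_', ' ')}?" for name in finder.fields]

    page = '<form><input name="check_in"/><select name="number_of_guests"></select></form>'
    print(dialog_for_form(page))
    # ['What is the check in?', 'What is the number of guests?']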
