Search Results for author: Ting Han

Found 14 papers, 2 papers with code

Diversity as a By-Product: Goal-oriented Language Generation Leads to Linguistic Variation

no code implementations SIGDIAL (ACL) 2021 Simeon Schüz, Ting Han, Sina Zarrieß

The ability to vary language use is necessary for speakers to achieve their conversational goals, for instance when referring to objects in visual environments.

Image Captioning · Text Generation

TCBERT: A Technical Report for Chinese Topic Classification BERT

no code implementations 21 Nov 2022 Ting Han, Kunhao Pan, Xinyu Chen, Dingjie Song, Yuchen Fan, Xinyu Gao, Ruyi Gan, Jiaxing Zhang

Bidirectional Encoder Representations from Transformers, or BERT (Devlin et al., 2019), has been one of the base models for various NLP tasks due to its remarkable performance.

Classification · Contrastive Learning +1

Coreference Augmentation for Multi-Domain Task-Oriented Dialogue State Tracking

no code implementations 16 Jun 2021 Ting Han, Chongxuan Huang, Wei Peng

Dialogue State Tracking (DST), which is the process of inferring user goals by estimating belief states given the dialogue history, plays a critical role in task-oriented dialogue systems.

Dialogue State Tracking · Task-Oriented Dialogue Systems

Enabling Robots to Draw and Tell: Towards Visually Grounded Multimodal Description Generation

no code implementations14 Jan 2021 Ting Han, Sina Zarrieß

Socially competent robots should be equipped with the ability to perceive the world that surrounds them and communicate about it in a human-like manner.

MultiWOZ 2.3: A multi-domain task-oriented dialogue dataset enhanced with annotation corrections and co-reference annotation

3 code implementations 12 Oct 2020 Ting Han, Ximing Liu, Ryuichi Takanobu, Yixin Lian, Chongxuan Huang, Dazhen Wan, Wei Peng, Minlie Huang

In this paper, we introduce MultiWOZ 2.3, in which we differentiate incorrect annotations in dialogue acts from those in dialogue states, identifying a lack of co-reference when publishing the updated dataset.

Dialogue State Tracking · Natural Language Understanding +1

Sketch Me if You Can: Towards Generating Detailed Descriptions of Object Shape by Grounding in Images and Drawings

no code implementations WS 2019 Ting Han, Sina Zarrieß

A lot of recent work in Language & Vision has looked at generating descriptions or referring expressions for objects in scenes of real-world images, though focusing mostly on relatively simple language like object names, color and location attributes (e.g., brown chair on the left).

Image Captioning

Draw and Tell: Multimodal Descriptions Outperform Verbal- or Sketch-Only Descriptions in an Image Retrieval Task

no code implementations IJCNLP 2017 Ting Han, David Schlangen

While language conveys meaning largely symbolically, actual communication acts typically contain iconic elements as well: People gesture while they speak, or may even draw sketches while explaining something.

Image Retrieval · Retrieval

Grounding Language by Continuous Observation of Instruction Following

no code implementations EACL 2017 Ting Han, David Schlangen

Grounded semantics is typically learnt from utterance-level meaning representations (e.g., successful database retrievals, denoted objects in images, moves in a game).

Instruction Following

Usability Investigation on the Localization of Text CAPTCHAs: Take Chinese Characters as a Case Study

no code implementations 4 Dec 2016 Junnan Yu, Xuna Ma, Ting Han

Moreover, these design practices were summarized into a general procedure that is expected to be applicable to the design of CAPTCHAs based on other languages.

Human-Computer Interaction
