no code implementations • COLING 2022 • Jianguo Mao, Jiyuan Zhang, Zengfeng Zeng, Weihua Peng, Wenbin Jiang, Xiangdong Wang, Hong Liu, Yajuan Lyu
It then performs dynamic reasoning based on the hierarchical representations of evidence to solve complex biomedical problems.
no code implementations • COLING 2022 • Yu Xia, Wenbin Jiang, Yajuan Lyu, Sujian Li
Existing works are based on end-to-end neural models, which do not explicitly model the intermediate states and lack interpretability for the parsing process.
no code implementations • NAACL 2022 • Jianguo Mao, Wenbin Jiang, Xiangdong Wang, Zhifan Feng, Yajuan Lyu, Hong Liu, Yong Zhu
Then, it performs multistep reasoning over the representations of the question and the video for better answer decisions, and dynamically integrates the reasoning results.
no code implementations • 19 Sep 2024 • Chen Liang, Zhifan Feng, Zihe Liu, Wenbin Jiang, Jinan Xu, Yufeng Chen, Yong Wang
Chain-of-thought prompting significantly boosts the reasoning ability of large language models but still faces three issues: hallucination, restricted interpretability, and uncontrollable generation.
1 code implementation • 12 Jun 2024 • Mingyu Zheng, Xinwei Feng, Qingyi Si, Qiaoqiao She, Zheng Lin, Wenbin Jiang, Weiping Wang
Although great progress has been made by previous table understanding methods including recent approaches based on large language models (LLMs), they rely heavily on the premise that given tables must be converted into a certain text sequence (such as Markdown or HTML) to serve as model input.
no code implementations • 9 Apr 2024 • Yi Gui, Zhen Li, Yao Wan, Yemin Shi, Hongyu Zhang, Yi Su, Shaoling Dong, Xing Zhou, Wenbin Jiang
Automatically generating UI code from webpage design visions can significantly alleviate the burden of developers, enabling beginner developers or designers to directly generate Web pages from design diagrams.
no code implementations • 9 Mar 2023 • Feng He, Qi Wang, Zhifan Feng, Wenbin Jiang, Yajuan Lv, Yong Zhu, Xiao Tan
While most video retrieval methods overlook that phenomenon, we propose an adaptive margin that varies with the distance between positive and negative pairs to solve the aforementioned issue.
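The idea of a margin that varies with the positive–negative distance can be sketched as a triplet-style loss. This is an illustrative assumption, not the paper's exact formulation: the function name, the linear margin rule, and the `base_margin`/`scale` parameters are all hypothetical.

```python
import numpy as np

def adaptive_margin_triplet_loss(anchor, positive, negative,
                                 base_margin=0.2, scale=0.1):
    """Triplet loss whose margin adapts to the positive-negative gap.

    Hypothetical sketch: the margin grows linearly with the gap between
    the anchor-positive and anchor-negative distances, instead of staying
    fixed as in a standard triplet loss.
    """
    d_pos = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor-negative distance
    # The margin adapts per triplet rather than being a global constant.
    margin = base_margin + scale * abs(d_pos - d_neg)
    return max(0.0, d_pos - d_neg + margin)
```

An easy triplet (positive close, negative far) yields zero loss, while an inverted triplet is penalized by both the distance gap and the enlarged margin.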
no code implementations • 31 Jul 2022 • Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, Zhifang Sui
The ability of pretrained Transformers to remember factual knowledge is essential but still limited for existing models.
no code implementations • 15 Apr 2022 • Damai Dai, Wenbin Jiang, Jiyuan Zhang, Weihua Peng, Yajuan Lyu, Zhifang Sui, Baobao Chang, Yong Zhu
In this paper, in order to alleviate the parameter competition problem, we propose a Mixture-of-Expert (MoE) based question answering method called MoEBQA that decouples the computation for different types of questions by sparse routing.
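Decoupling computation via sparse routing, as described above, can be sketched as a minimal Mixture-of-Experts forward pass. This is a generic MoE illustration under stated assumptions (top-k routing over linear experts); the names `moe_forward`, `expert_weights`, and `router_weights` are hypothetical and not MoEBQA's API.

```python
import numpy as np

def moe_forward(x, expert_weights, router_weights, top_k=1):
    """Sparse Mixture-of-Experts forward pass (illustrative sketch).

    x: (d,) input vector; expert_weights: list of (d, d) expert matrices;
    router_weights: (num_experts, d) router. Only the top-k experts chosen
    by the router are computed, so different inputs use different experts.
    """
    logits = router_weights @ x                  # routing score per expert
    top = np.argsort(logits)[-top_k:]            # indices of the top-k experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                         # renormalized gate weights
    # Sparsity: experts outside `top` are never evaluated.
    return sum(g * (expert_weights[i] @ x) for g, i in zip(gates, top))
```

With `top_k=1` each input activates exactly one expert, which is how sparse routing keeps computation decoupled across question types.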
1 code implementation • 23 Sep 2021 • Liangchen Zhou, Wenbin Jiang, Jingyan Xu, Fei Wen, Peilin Liu
Typically, a single T-F mask is first estimated by a DNN and then used to mask the spectrogram of noisy speech in order to suppress the noise.
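The masking step described above can be sketched in a few lines. The DNN that predicts the mask is out of scope here; the function names are illustrative, and `ideal_ratio_mask` shows one common oracle target (the IRM) such a DNN is often trained toward.

```python
import numpy as np

def apply_tf_mask(noisy_spectrogram, mask):
    """Apply a single time-frequency mask to a noisy magnitude spectrogram.

    Each T-F bin of the mask lies in [0, 1]; multiplying elementwise
    attenuates noise-dominated bins and keeps speech-dominated ones.
    """
    assert noisy_spectrogram.shape == mask.shape
    return noisy_spectrogram * np.clip(mask, 0.0, 1.0)

def ideal_ratio_mask(clean_mag, noise_mag, eps=1e-8):
    """Ideal ratio mask (IRM): a common oracle training target."""
    return clean_mag / (clean_mag + noise_mag + eps)
```

An all-ones mask passes the spectrogram through unchanged; an all-zeros mask silences it entirely.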
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Wenbin Jiang, Mengfei Guo, Yufeng Chen, Ying Li, Jinan Xu, Yajuan Lyu, Yong Zhu
This paper describes a novel multi-view classification model for knowledge graph completion, where classification is performed from multiple views based on both content and context information for candidate triple evaluation.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Zhifan Feng, Qi Wang, Wenbin Jiang, Yajuan Lyu, Yong Zhu
Named entity disambiguation is an important task that serves as a bridge between text and knowledge.
3 code implementations • 6 Nov 2019 • Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, Hua Wu
This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account such contextual nature, and learns dynamic, flexible, and fully contextualized entity and relation embeddings.
no code implementations • IJCNLP 2019 • Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, Jun Zhao
Our method dynamically updates the representation of the knowledge according to the structural information of the constructed sub-graph.
no code implementations • 10 Aug 2015 • Hui Yu, Xiaofeng Wu, Wenbin Jiang, Qun Liu, ShouXun Lin
The widely-used automatic evaluation metrics cannot adequately reflect the fluency of the translations.
no code implementations • 9 Aug 2015 • Hui Yu, Xiaofeng Wu, Wenbin Jiang, Qun Liu, ShouXun Lin
To avoid these problems, we propose a novel automatic evaluation metric based on a dependency parsing model, with no need for manually defined sub-structures.
no code implementations • 17 Mar 2015 • Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu
Different from previous work on neural network-based language modeling and generation (e.g., RNN or LSTM), we choose not to greedily summarize the history of words as a fixed length vector.
no code implementations • IJCNLP 2015 • Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, Qun Liu
The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT.