Search Results for author: Can Xu

Found 77 papers, 37 papers with code

Cross-composition Feature Disentanglement for Compositional Zero-shot Learning

no code implementations 19 Aug 2024 Yuxia Geng, Runkai Zhu, Jiaoyan Chen, Jintai Chen, Zhuo Chen, Xiang Chen, Can Xu, Yuxiang Wang, Xiaoliang Xu

Disentanglement of visual features of primitives (i.e., attributes and objects) has shown exceptional results in Compositional Zero-shot Learning (CZSL).

Attribute Compositional Zero-Shot Learning +2

AgentGen: Enhancing Planning Abilities for Large Language Model based Agent via Environment and Task Generation

no code implementations 1 Aug 2024 Mengkang Hu, Pu Zhao, Can Xu, Qingfeng Sun, Jian-Guang Lou, Qingwei Lin, Ping Luo, Saravan Rajmohan

Moreover, to increase the difficulty diversity of generated planning tasks, we propose a bidirectional evolution method, Bi-Evol, that evolves planning tasks from easier and harder directions to synthesize a task set with a smoother difficulty curve.

Diversity Language Modelling +1
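A minimal sketch of the bidirectional evolution (Bi-Evol) idea from the AgentGen entry above, assuming a hypothetical `llm(prompt)` completion function; the prompt wording is illustrative, not the paper's actual templates:

```python
EASIER = ("Simplify the following planning task so that it requires fewer "
          "steps, while keeping it well-posed:\n\n{task}")
HARDER = ("Extend the following planning task with extra constraints or "
          "sub-goals so that it requires more steps:\n\n{task}")

def bi_evol(seed_tasks, llm, rounds=2):
    """Evolve each task in both directions to smooth the difficulty curve."""
    pool = list(seed_tasks)
    for _ in range(rounds):
        evolved = []
        for task in pool:
            evolved.append(llm(EASIER.format(task=task)))  # easier direction
            evolved.append(llm(HARDER.format(task=task)))  # harder direction
        pool.extend(evolved)
    return pool
```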

Arena Learning: Build Data Flywheel for LLMs Post-training via Simulated Chatbot Arena

no code implementations 15 Jul 2024 Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Qingwei Lin, Jian-Guang Lou, Shifeng Chen, Yansong Tang, Weizhu Chen

In this paper, we introduce Arena Learning, an innovative offline strategy designed to simulate these arena battles using AI-driven annotations to evaluate battle outcomes, thus facilitating the continuous improvement of the target model through both supervised fine-tuning and reinforcement learning.

Chatbot

Improving Graph Out-of-distribution Generalization on Real-world Data

no code implementations 14 Jul 2024 Can Xu, Yao Cheng, Jianxiang Yu, Haosen Wang, Jingsong Lv, Xiang Li

In contrast to previous studies that impose rigid independence assumptions on environments and invariant sub-graphs, this paper presents the theorems of environment-label dependency and mutable rationale invariance, where the former characterizes the usefulness of environments in determining graph labels while the latter refers to the mutable importance of graph rationales.

Bayesian Inference Out-of-Distribution Generalization +1

Automatic Instruction Evolving for Large Language Models

1 code implementation 2 Jun 2024 Weihao Zeng, Can Xu, Yingxiu Zhao, Jian-Guang Lou, Weizhu Chen

Fine-tuning large pre-trained language models with Evol-Instruct has achieved encouraging results across a wide range of tasks.

GSM8K HumanEval

Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

no code implementations 22 Apr 2024 Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Qin Cai, Vishrav Chaudhary, Dong Chen, Dongdong Chen, Weizhu Chen, Yen-Chun Chen, Yi-Ling Chen, Hao Cheng, Parul Chopra, Xiyang Dai, Matthew Dixon, Ronen Eldan, Victor Fragoso, Jianfeng Gao, Mei Gao, Min Gao, Amit Garg, Allie Del Giorno, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Wenxiang Hu, Jamie Huynh, Dan Iter, Sam Ade Jacobs, Mojan Javaheripi, Xin Jin, Nikos Karampatziakis, Piero Kauffmann, Mahoud Khademi, Dongwoo Kim, Young Jin Kim, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Yunsheng Li, Chen Liang, Lars Liden, Xihui Lin, Zeqi Lin, Ce Liu, Liyuan Liu, Mengchen Liu, Weishung Liu, Xiaodong Liu, Chong Luo, Piyush Madan, Ali Mahmoudzadeh, David Majercak, Matt Mazzola, Caio César Teodoro Mendes, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Liliang Ren, Gustavo de Rosa, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Yelong Shen, Swadheen Shukla, Xia Song, Masahiro Tanaka, Andrea Tupini, Praneetha Vaddamanu, Chunyu Wang, Guanhua Wang, Lijuan Wang, Shuohang Wang, Xin Wang, Yu Wang, Rachel Ward, Wen Wen, Philipp Witte, Haiping Wu, Xiaoxia Wu, Michael Wyatt, Bin Xiao, Can Xu, Jiahang Xu, Weijian Xu, Jilong Xue, Sonali Yadav, Fan Yang, Jianwei Yang, Yifan Yang, Ziyi Yang, Donghan Yu, Lu Yuan, Chenruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, Xiren Zhou

We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone.

Ranked #5 on MMR total on MRR-Benchmark (using extra training data)

Language Modelling Math +2

A Survey on Knowledge Distillation of Large Language Models

1 code implementation 20 Feb 2024 Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, Tianyi Zhou

In the era of Large Language Models (LLMs), Knowledge Distillation (KD) emerges as a pivotal methodology for transferring advanced capabilities from leading proprietary LLMs, such as GPT-4, to their open-source counterparts like LLaMA and Mistral.

Data Augmentation Knowledge Distillation +2

Diffusion-based Graph Generative Methods

1 code implementation 28 Jan 2024 Hongyang Chen, Can Xu, Lingyu Zheng, Qiang Zhang, Xuemin Lin

As one of the most cutting-edge families of generative methods, diffusion models have shown great advances across a wide range of generation tasks.

Denoising Graph Generation +1

Leveraging Large Language Models for NLG Evaluation: Advances and Challenges

1 code implementation 13 Jan 2024 Zhen Li, Xiaohan Xu, Tao Shen, Can Xu, Jia-Chen Gu, Yuxuan Lai, Chongyang Tao, Shuai Ma

In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, introducing Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance.

NLG Evaluation Specificity +1

Geometric-Facilitated Denoising Diffusion Model for 3D Molecule Generation

1 code implementation 5 Jan 2024 Can Xu, Haosen Wang, Weigang Wang, Pengfei Zheng, Hongyang Chen

The second challenge involves accommodating molecule generation to diffusion and accurately predicting the existence of bonds.

3D Molecule Generation Denoising

WaveCoder: Widespread And Versatile Enhancement For Code Large Language Models By Instruction Tuning

1 code implementation 20 Dec 2023 Zhaojian Yu, Xin Zhang, Ning Shang, Yangyu Huang, Can Xu, Yishujie Zhao, Wenxiang Hu, Qiufeng Yin

Recent work demonstrates that, after instruction tuning, Code Large Language Models (Code LLMs) can obtain impressive capabilities to address a wide range of code-related tasks.

Code Generation

AirIMU: Learning Uncertainty Propagation for Inertial Odometry

1 code implementation 7 Oct 2023 Yuheng Qiu, Chen Wang, Can Xu, Yutian Chen, Xunfei Zhou, Youjie Xia, Sebastian Scherer

In contrast, data-driven IO methods struggle to accurately model the sensor motions, often leading to generalizability and interoperability issues.

Re-Reading Improves Reasoning in Large Language Models

2 code implementations 12 Sep 2023 Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-Guang Lou, Shuai Ma

To enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs), we introduce a simple, yet general and effective prompting method, Re2, i.e., Re-Reading the question as input.

Decoder
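Re2 amounts to little more than a one-line change to the prompt. A minimal sketch, assuming a generic `complete(prompt)` text-completion call; the paper's exact template may differ:

```python
def re2_prompt(question: str) -> str:
    """Present the question twice before eliciting the answer."""
    return (f"Q: {question}\n"
            f"Read the question again: {question}\n"
            f"A:")

# Usage with any completion backend (hypothetical `complete`):
# answer = complete(re2_prompt("A train covers 60 km in 45 minutes. Speed in km/h?"))
```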

WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct

1 code implementation 18 Aug 2023 Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jian-Guang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang

Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model.

Ranked #51 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning GSM8K +2

Investigating the Learning Behaviour of In-context Learning: A Comparison with Supervised Learning

1 code implementation 28 Jul 2023 Xindi Wang, Yufei Wang, Can Xu, Xiubo Geng, Bowen Zhang, Chongyang Tao, Frank Rudzicz, Robert E. Mercer, Daxin Jiang

Large language models (LLMs) have shown remarkable capacity for in-context learning (ICL), where a new task can be learned from just a few training examples without the model being explicitly pre-trained on it.

In-Context Learning

WizardCoder: Empowering Code Large Language Models with Evol-Instruct

3 code implementations 14 Jun 2023 Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang

Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+.

Code Generation HumanEval

Synergistic Interplay between Search and Large Language Models for Information Retrieval

2 code implementations 12 May 2023 Jiazhan Feng, Chongyang Tao, Xiubo Geng, Tao Shen, Can Xu, Guodong Long, Dongyan Zhao, Daxin Jiang

Information retrieval (IR) plays a crucial role in locating relevant resources from vast amounts of data, and its applications have evolved from traditional knowledge bases to modern retrieval models (RMs).

Information Retrieval Retrieval

Augmented Large Language Models with Parametric Knowledge Guiding

1 code implementation 8 May 2023 Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang

We demonstrate that our PKG framework can enhance the performance of "black-box" LLMs on a range of domain knowledge-intensive tasks that require factual (+7.9%), tabular (+11.9%), medical (+3.0%), and multimodal (+8.1%) knowledge.
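A minimal sketch of the parametric knowledge guiding flow described above: a small, locally fine-tuned knowledge module first generates background text, which then guides the black-box LLM. Both `knowledge_model` and `blackbox_llm` are hypothetical callables, and the prompts are illustrative:

```python
def answer_with_pkg(question: str, knowledge_model, blackbox_llm) -> str:
    # 1) Elicit domain background from the local parametric knowledge module.
    background = knowledge_model(f"Provide background knowledge for: {question}")
    # 2) Prepend that knowledge to the query sent to the black-box LLM.
    prompt = (f"Background:\n{background}\n\n"
              f"Using the background above, answer: {question}")
    return blackbox_llm(prompt)
```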

Self-Supervised Multi-Modal Sequential Recommendation

1 code implementation 26 Apr 2023 Kunzhe Song, Qingfeng Sun, Can Xu, Kai Zheng, Yaming Yang

To address this issue, we propose a dual-tower retrieval architecture for sequence recommendation.

Contrastive Learning Retrieval +1
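A minimal PyTorch sketch of a dual-tower (two-tower) retrieval setup for sequential recommendation: one tower encodes the interaction history, the other encodes candidate items, and relevance is their dot product. Layer choices and dimensions here are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class TwoTowerRecommender(nn.Module):
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.user_tower = nn.GRU(dim, dim, batch_first=True)  # history encoder
        self.item_tower = nn.Sequential(                      # candidate encoder
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, history, candidates):
        # history: (B, T) item ids; candidates: (B, K) item ids.
        _, h = self.user_tower(self.item_emb(history))         # h: (1, B, dim)
        user_vec = h.squeeze(0)                                # (B, dim)
        cand_vec = self.item_tower(self.item_emb(candidates))  # (B, K, dim)
        return torch.einsum("bd,bkd->bk", user_vec, cand_vec)  # scores (B, K)
```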

WizardLM: Empowering Large Language Models to Follow Complex Instructions

4 code implementations 24 Apr 2023 Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Daxin Jiang

In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using an LLM instead of humans.

Instruction Following
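The core Evol-Instruct loop can be sketched in a few lines, assuming a hypothetical `llm(prompt)` completion call; the evolving prompts below paraphrase the in-depth and in-breadth directions rather than reproduce the paper's templates:

```python
import random

IN_DEPTH = ("Rewrite the instruction below into a more complex version, e.g. by "
            "adding constraints or requiring multi-step reasoning, while keeping "
            "it answerable:\n\n{instruction}")
IN_BREADTH = ("Based on the instruction below, create a brand-new instruction "
              "on a rarer topic of similar difficulty:\n\n{instruction}")

def evol_instruct(seed_instructions, llm, rounds=4):
    pool = list(seed_instructions)
    for _ in range(rounds):
        # Materialize the new generation before extending the pool.
        evolved = [llm(random.choice([IN_DEPTH, IN_BREADTH]).format(instruction=i))
                   for i in pool]
        pool.extend(evolved)
    return pool
```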

LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Retrieval

1 code implementation 6 Feb 2023 Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang

The conventional dense retrieval paradigm relies on encoding images and texts into dense representations using dual-stream encoders; however, it faces challenges with low retrieval speed in large-scale retrieval scenarios.

Image-text Retrieval Text Retrieval

Iterative Proposal Refinement for Weakly-Supervised Video Grounding

no code implementations CVPR 2023 Meng Cao, Fangyun Wei, Can Xu, Xiubo Geng, Long Chen, Can Zhang, Yuexian Zou, Tao Shen, Daxin Jiang

Weakly-Supervised Video Grounding (WSVG) aims to localize events of interest in untrimmed videos with only video-level annotations.

Sentence Video Grounding

Fine-Grained Distillation for Long Document Retrieval

no code implementations 20 Dec 2022 Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can Xu, Daxin Jiang

Long document retrieval aims to fetch query-relevant documents from a large-scale collection, where knowledge distillation has become the de facto way to improve a retriever by mimicking a heterogeneous yet powerful cross-encoder.

Knowledge Distillation Retrieval

Adam: Dense Retrieval Distillation with Adaptive Dark Examples

no code implementations 20 Dec 2022 Chongyang Tao, Chang Liu, Tao Shen, Can Xu, Xiubo Geng, Binxing Jiao, Daxin Jiang

Unlike previous works that rely on only one positive and hard negatives as candidate passages, we create dark examples that all have moderate relevance to the query through mixing-up and masking in discrete space.

Knowledge Distillation Retrieval
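A minimal sketch of "dark example" construction in the spirit described above: candidates of moderate relevance are created by (a) mixing up a positive with a negative in embedding space and (b) masking part of the positive passage in discrete space. The mixing coefficient and mask rate are illustrative assumptions:

```python
import random
import torch

def mixup_dark(pos_emb: torch.Tensor, neg_emb: torch.Tensor,
               lam: float = 0.5) -> torch.Tensor:
    # Interpolated embedding sits between a relevant and an irrelevant passage.
    return lam * pos_emb + (1.0 - lam) * neg_emb

def mask_dark(pos_tokens: list, mask_token: str = "[MASK]",
              rate: float = 0.3) -> list:
    # Randomly mask tokens of the positive passage in discrete space.
    return [mask_token if random.random() < rate else t for t in pos_tokens]
```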

Latent User Intent Modeling for Sequential Recommenders

no code implementations 17 Nov 2022 Bo Chang, Alexandros Karatzoglou, Yuyan Wang, Can Xu, Ed H. Chi, Minmin Chen

We demonstrate the effectiveness of the latent user intent modeling via offline analyses as well as live experiments on a large-scale industrial recommendation platform.

Recommendation Systems

Reward Shaping for User Satisfaction in a REINFORCE Recommender

no code implementations 30 Sep 2022 Konstantina Christakopoulou, Can Xu, Sai Zhang, Sriraj Badam, Trevor Potter, Daniel Li, Hao Wan, Xinyang Yi, Ya Le, Chris Berg, Eric Bencomo Dixon, Ed H. Chi, Minmin Chen

How might we design Reinforcement Learning (RL)-based recommenders that encourage aligning user trajectories with the underlying user satisfaction?

Imputation Reinforcement Learning (RL)

LexMAE: Lexicon-Bottlenecked Pretraining for Large-Scale Retrieval

1 code implementation 31 Aug 2022 Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang

In large-scale retrieval, the lexicon-weighting paradigm, learning weighted sparse representations in vocabulary space, has shown promising results with high quality and low latency.

Decoder Language Modelling +2

LED: Lexicon-Enlightened Dense Retriever for Large-Scale Retrieval

2 code implementations 29 Aug 2022 Kai Zhang, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Binxing Jiao, Daxin Jiang

The alignment is achieved by weakened knowledge distillations to enlighten the retriever via two aspects: 1) a lexicon-augmented contrastive objective to challenge the dense encoder, and 2) a pair-wise rank-consistent regularization to make the dense model's behavior incline toward the other's.

Representation Learning Retrieval
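A minimal sketch of a pair-wise rank-consistency regularizer in the spirit of LED: wherever the lexicon-based model ranks passage i above passage j, push the dense retriever's scores toward the same ordering. The exact loss in the paper may differ; this uses a standard logistic pairwise loss:

```python
import torch
import torch.nn.functional as F

def rank_consistency_loss(dense_scores: torch.Tensor,
                          lex_scores: torch.Tensor) -> torch.Tensor:
    # dense_scores, lex_scores: (B, K) scores for K candidate passages per query.
    diff_dense = dense_scores.unsqueeze(2) - dense_scores.unsqueeze(1)  # (B, K, K)
    prefer = (lex_scores.unsqueeze(2) > lex_scores.unsqueeze(1)).float()
    # Logistic loss on exactly the pairs the lexicon model orders as i > j.
    pair_loss = F.softplus(-diff_dense) * prefer
    return pair_loss.sum() / prefer.sum().clamp(min=1.0)
```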

LFGCF: Light Folksonomy Graph Collaborative Filtering for Tag-Aware Recommendation

no code implementations 6 Aug 2022 Yin Zhang, Can Xu, XianJun Wu, Yan Zhang, LiGang Dong, Weigang Wang

Recently, many efforts have been devoted to improving tag-aware recommendation systems (TRS) with Graph Convolutional Networks (GCNs), which have become the new state of the art for general recommendation.

Collaborative Filtering Recommendation Systems +1

Towards Robust Ranker for Text Retrieval

no code implementations 16 Jun 2022 Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Guodong Long, Binxing Jiao, Daxin Jiang

A ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind -- learning from moderate negatives and/or serving as an auxiliary module for a retriever.

Passage Retrieval Text Retrieval

PCL: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings

1 code implementation 28 Jan 2022 Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Daxin Jiang

A straightforward solution is resorting to more diverse positives from a multi-augmenting strategy, while an open question remains: how to learn, without supervision, from diverse positives of uneven augmenting quality in the text field.

Contrastive Learning Open-Ended Question Answering +3

Recency Dropout for Recurrent Recommender Systems

no code implementations 26 Jan 2022 Bo Chang, Can Xu, Matthieu Lê, Jingchen Feng, Ya Le, Sriraj Badam, Ed Chi, Minmin Chen

Recurrent recommender systems have been successful in capturing the temporal dynamics in users' activity trajectories.

Data Augmentation Recommendation Systems
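A minimal sketch of recency dropout as a training-time augmentation: randomly truncate the most recent part of a user's activity sequence so the model does not over-rely on the last few interactions. The truncation distribution below is an assumption for illustration, not the paper's exact scheme:

```python
import random

def recency_dropout(history: list, max_drop: int = 5, p: float = 0.5) -> list:
    """Drop up to `max_drop` of the most recent events with probability `p`."""
    if len(history) > 1 and random.random() < p:
        k = random.randint(1, min(max_drop, len(history) - 1))
        return history[:-k]  # keep the older prefix, drop the recent suffix
    return history
```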

Multimodal Dialogue Response Generation

no code implementations ACL 2022 Qingfeng Sun, Yujing Wang, Can Xu, Kai Zheng, Yaming Yang, Huang Hu, Fei Xu, Jessica Zhang, Xiubo Geng, Daxin Jiang

In such a low-resource setting, we devise a novel conversational agent, Divter, in order to isolate parameters that depend on multimodal dialogues from the entire generation model.

Dialogue Generation Response Generation +1

RecInDial: A Unified Framework for Conversational Recommendation with Pretrained Language Models

no code implementations 14 Oct 2021 Lingzhi Wang, Huang Hu, Lei Sha, Can Xu, Kam-Fai Wong, Daxin Jiang

Furthermore, we propose to evaluate the CRS models in an end-to-end manner, which can reflect the overall performance of the entire system rather than the performance of individual modules, compared to the separate evaluations of the two modules used in previous work.

Conversational Recommendation Dialogue Generation +2

Learning to Ground Visual Objects for Visual Dialog

no code implementations Findings (EMNLP) 2021 Feilong Chen, Xiuyi Chen, Can Xu, Daxin Jiang

Specifically, a posterior distribution over visual objects is inferred from both context (history and questions) and answers, and it ensures the appropriate grounding of visual objects during the training process.

Visual Dialog

Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation

no code implementations NeurIPS 2021 YuFei Wang, Can Xu, Huang Hu, Chongyang Tao, Stephen Wan, Mark Dras, Mark Johnson, Daxin Jiang

Sequence-to-Sequence (S2S) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks.

Text Generation

MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding

1 code implementation ACL 2021 Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Xiubo Geng, Daxin Jiang

Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction.

Language Modelling Speaker Identification

Maria: A Visual Experience Powered Conversational Agent

1 code implementation ACL 2021 Zujie Liang, Huang Hu, Can Xu, Chongyang Tao, Xiubo Geng, Yining Chen, Fan Liang, Daxin Jiang

The retriever aims to retrieve an image correlated with the dialog from an image index, while the visual concept detector extracts rich visual knowledge from the image.

Learning Matching Representations for Individualized Organ Transplantation Allocation

1 code implementation 28 Jan 2021 Can Xu, Ahmed M. Alaa, Ioana Bica, Brent D. Ershoff, Maxime Cannesson, Mihaela van der Schaar

Organ transplantation is often the last resort for treating end-stage illness, but the probability of a successful transplantation depends greatly on compatibility between donors and recipients.

counterfactual Representation Learning

Are Pre-trained Language Models Knowledgeable to Ground Open Domain Dialogues?

no code implementations 19 Nov 2020 Yufan Zhao, Wei Wu, Can Xu

We study knowledge-grounded dialogue generation with pre-trained language models.

Dialogue Generation

StyleDGPT: Stylized Response Generation with Pre-trained Language Models

1 code implementation Findings of the Association for Computational Linguistics 2020 Ze Yang, Wei Wu, Can Xu, Xinnian Liang, Jiaqi Bai, Liran Wang, Wei Wang, Zhoujun Li

Generating responses following a desired style has great potential to extend the applications of open-domain dialogue systems, yet is hindered by the lack of parallel data for training.

Response Generation Sentence

Zero-Resource Knowledge-Grounded Dialogue Generation

1 code implementation NeurIPS 2020 Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao

While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a model often requires knowledge-grounded dialogues that are difficult to obtain.

Dialogue Generation

Open Domain Dialogue Generation with Latent Images

no code implementations 4 Apr 2020 Ze Yang, Wei Wu, Huang Hu, Can Xu, Wei Wang, Zhoujun Li

Thus, we propose learning a response generation model from both image-grounded dialogues and textual dialogues: assuming that the visual scene at the time of a conversation can be represented by an image, we recover the latent images of the textual dialogues through text-to-image generation techniques.

Dialogue Generation Response Generation +1

Low-Resource Knowledge-Grounded Dialogue Generation

no code implementations ICLR 2020 Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan

In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.

Decoder Dialogue Generation +1

THUEE system description for NIST 2019 SRE CTS Challenge

no code implementations 25 Dec 2019 Yi Liu, Tianyu Liang, Can Xu, Xianwei Zhang, Xianhong Chen, Wei-Qiang Zhang, Liang He, Dandan Song, Ruyun Li, Yangcheng Wu, Peng Ouyang, Shouyi Yin

This paper describes the systems submitted to the NIST 2019 speaker recognition evaluation CTS challenge by the Department of Electronic Engineering and the Institute of Microelectronics of Tsinghua University together with TsingMicro Co. Ltd. (THUEE).

Speaker Recognition

Low-Resource Response Generation with Template Prior

1 code implementation IJCNLP 2019 Ze Yang, Wei Wu, Jian Yang, Can Xu, Zhoujun Li

Since the paired data alone is no longer enough to train a neural generation model, we consider leveraging the large amounts of unpaired data that are much easier to obtain, and propose response generation with both paired and unpaired data.

Decoder Response Generation

A Document-grounded Matching Network for Response Selection in Retrieval-based Chatbots

no code implementations 11 Jun 2019 Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, Rui Yan

We present a document-grounded matching network (DGMN) for response selection that can power a knowledge-aware retrieval-based chatbot system.

Chatbot Retrieval

Multiobjective Optimization Training of PLDA for Speaker Verification

2 code implementations 25 Aug 2018 Liang He, Xianhong Chen, Can Xu, Jia Liu

Most current state-of-the-art text-independent speaker verification systems take probabilistic linear discriminant analysis (PLDA) as their backend classifiers.

Multiobjective Optimization Text-Independent Speaker Verification

Improving Matching Models with Hierarchical Contextualized Representations for Multi-turn Response Selection

no code implementations 22 Aug 2018 Chongyang Tao, Wei Wu, Can Xu, Yansong Feng, Dongyan Zhao, Rui Yan

In this paper, we study context-response matching with pre-trained contextualized representations for multi-turn response selection in retrieval-based chatbots.

Decoder Dialogue Generation +2

Towards Explainable and Controllable Open Domain Dialogue Generation with Dialogue Acts

no code implementations 19 Jul 2018 Can Xu, Wei Wu, Yu Wu

We study open domain dialogue generation with dialogue acts designed to explain how people engage in social chat.

Dialogue Generation reinforcement-learning +3

Towards Interpretable Chit-chat: Open Domain Dialogue Generation with Dialogue Acts

no code implementations ICLR 2018 Wei Wu, Can Xu, Yu Wu, Zhoujun Li

Conventional methods model open domain dialogue generation as a black box through end-to-end learning from large scale conversation data.

Dialogue Generation Response Generation

A Sequential Matching Framework for Multi-turn Response Selection in Retrieval-based Chatbots

no code implementations CL 2019 Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, Ming Zhou

The task requires matching a response candidate with a conversation context, whose challenges include how to recognize important parts of the context, and how to model the relationships among utterances in the context.

Retrieval

Large Margin Discriminant Dimensionality Reduction in Prediction Space

no code implementations NeurIPS 2016 Mohammad Saberian, Jose Costa Pereira, Can Xu, Jian Yang, Nuno Vasconcelos

We argue that the intermediate mapping, e.g. the boosting predictor, preserves the discriminant aspects of the data, and that by controlling the dimension of this mapping it is possible to achieve discriminant low-dimensional representations of the data.

Dimensionality Reduction General Classification +1

Visual Sentiment Prediction with Deep Convolutional Neural Networks

no code implementations 21 Nov 2014 Can Xu, Suleyman Cetintas, Kuang-Chih Lee, Li-Jia Li

Images have become one of the most popular types of media through which users convey their emotions within online social networks.

Object Recognition Sentiment Analysis +2
