no code implementations • ACL (ECNLP) 2021 • Ying Lin, Han Wang, Jiangning Chen, Tong Wang, Yue Liu, Heng Ji, Yang Liu, Premkumar Natarajan
We first build a cross-source heterogeneous knowledge graph from customer purchase history and a product knowledge graph to jointly learn customer and product embeddings.
no code implementations • ICML 2020 • Hassan Rafique, Tong Wang, Qihang Lin, Arshia Singhani
We propose a novel type of hybrid model for multi-class classification, which utilizes competing linear models to collaborate with an existing black-box model, promoting transparency in the decision-making process.
no code implementations • ECCV 2020 • Tong Wang, Yousong Zhu, Chaoyang Zhao, Wei Zeng, Yao-Wei Wang, Jinqiao Wang, Ming Tang
Most existing object detectors adopt a small training batch size (~16), which severely hinders the whole community from exploring large-scale datasets due to the extremely long training procedure.
no code implementations • 2 Apr 2025 • Lirui Qi, Hongliang He, Tong Wang, Siwei Feng, Guohong Fu
To tackle this challenge, we propose a novel data augmentation framework, the Instance Migration Diffusion Model (IM-Diffusion), designed to generate more varied pathological images by constructing diverse nuclear layouts and internuclear spatial relationships.
no code implementations • 14 Mar 2025 • Mingjia Shi, Ruihan Lin, Xuxi Chen, Yuhao Zhou, Zezhen Ding, Pingzhi Li, Tong Wang, Kai Wang, Zhangyang Wang, Jiheng Zhang, Tianlong Chen
Learning to Optimize (L2O) enhances optimization efficiency with integrated neural networks.
no code implementations • 17 Feb 2025 • Tong Wang
Although classical fixed-point theorems guarantee the existence of a Nash Equilibrium (NE) under mild concavity and compactness conditions, the convergence of practical iterative algorithms to that equilibrium remains a challenging endeavor.
no code implementations • 27 Dec 2024 • Nicholas Wolczynski, Maytal Saar-Tsechansky, Tong Wang
Despite advances in AI's performance and interpretability, AI advisors can undermine experts' decisions and increase the time and effort experts must invest to make decisions.
no code implementations • 16 Dec 2024 • Bowen Deng, Tong Wang, Lele Fu, Sheng Huang, Chuan Chen, Tao Zhang
However, due to their reliance on K-means, these methods inherit its drawbacks when the cluster separability of encoder output is low, facing challenges from the Uniform Effect and Cluster Assimilation.
no code implementations • 2 Nov 2024 • Xu-Wen Wang, Tong Wang, Yang-Yu Liu
Advancements in artificial intelligence (AI) have transformed many scientific fields, with microbiology and microbiome research now experiencing significant breakthroughs through machine learning and deep learning applications.
1 code implementation • 27 Oct 2024 • Yiyang Sun, Tong Wang, Cynthia Rudin
We present cluster-based SEV and its variant, tree-based SEV, introduce a method that improves the credibility of explanations, and propose algorithms that optimize decision sparsity in machine learning models.
no code implementations • 6 Sep 2024 • Yangguang Chen, Tong Wang, Guanzhou Chen, Kun Zhu, Xiaoliang Tan, Jiaqi Wang, Wenchao Guo, Qing Wang, Xiaolong Luo, Xiaodong Zhang
To address these issues, we develop the BFA-YOLO model and the BFA-3D dataset in this study.
no code implementations • 30 Aug 2024 • Ronilo Ragodos, Tong Wang, Lu Feng, Yu Hu
Machine learning models have been increasingly used in business research.
no code implementations • 20 Aug 2024 • Tong Wang, Xiaochao Qu, Ting Liu
Scene text editing aims to modify text on images while keeping the style of the newly generated text consistent with the original.
no code implementations • 13 Aug 2024 • Tong Wang, K. Sudhir, Dat Hong
Unlike traditional knowledge distillation, where the "student" model learns directly from the "teacher" model's responses via fine-tuning, our interpretable "strategy" teaching approach involves the teacher providing strategies to improve the student's performance in various scenarios.
1 code implementation • 22 Jul 2024 • Yiran Yang, Xu Gao, Tong Wang, Xin Hao, Yifeng Shi, Xiao Tan, Xiaoqing Ye, Jingdong Wang
This module adjusts the feature distributions from both the camera and LiDAR, bringing them closer to the ground truth domain and minimizing differences.
1 code implementation • 15 Jul 2024 • Zhe Liu, Jinghua Hou, Xiaoqing Ye, Tong Wang, Jingdong Wang, Xiang Bai
We argue that the main challenges are twofold: 1) obtaining appropriate object queries is difficult due to the high sparsity and uneven distribution of point clouds; 2) implementing effective query interaction by exploiting the rich geometric structure of point clouds has not been fully explored.
1 code implementation • 15 Jul 2024 • Jinghua Hou, Tong Wang, Xiaoqing Ye, Zhe Liu, Shi Gong, Xiao Tan, Errui Ding, Jingdong Wang, Xiang Bai
Accurate depth information is crucial for enhancing the performance of multi-view 3D object detection.
no code implementations • 5 Jul 2024 • Tong Wang, Taotao Gu, Huan Deng, Hu Li, Xiaohui Kuang, Gang Zhao
As autonomous driving systems (ADS) advance towards higher levels of autonomy, orchestrating their safety verification becomes increasingly intricate.
no code implementations • 28 Jun 2024 • Guanzhou Chen, Kaiqi Zhang, Xiaodong Zhang, Hong Xie, Haobo Yang, Xiaoliang Tan, Tong Wang, Yule Ma, Qing Wang, Jinzhou Cao, Weihong Cui
The EXP-CASA model effectively improves the CASA model by using novel functions for estimating the fraction of absorbed photosynthetically active radiation (FPAR) and environmental stress, and by utilizing long-term observational data from FLUXNET and MODIS surface reflectance data.
no code implementations • 21 Apr 2024 • Tong Wang, Guanzhou Chen, Xiaodong Zhang, Chenxi Liu, Xiaoliang Tan, Jiaqi Wang, Chanjuan He, Wenlin Zhou
Addressing this gap, we propose a novel Lightweight Multimodal data Fusion Network (LMFNet) to accomplish the tasks of fusion and semantic segmentation of multimodal remote sensing images.
Ranked #1 on Semantic Segmentation on Potsdam
no code implementations • 9 Apr 2024 • Tong Wang, Ninad Kulkarni, Yanjun Qi
Assessing the factual consistency of automatically generated texts in relation to source context is crucial for developing reliable natural language generation applications.
no code implementations • 6 Apr 2024 • Ming Zhou, Weize Quan, Ziqi Zhou, Kai Wang, Tong Wang, Dong-Ming Yan
Motivated by these insights, we introduce a Text-oriented Cross-Attention Network (TCAN), emphasizing the predominant role of the text modality in MSA.
no code implementations • 26 Feb 2024 • Tong Wang, Jian Huang, Shuangge Ma
Deep networks are increasingly applied to a wide variety of data, including data with high-dimensional predictors.
1 code implementation • 15 Feb 2024 • Yiyang Sun, Zhi Chen, Vittorio Orlandi, Tong Wang, Cynthia Rudin
In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied.
no code implementations • 10 Feb 2024 • Yuecheng Li, Tong Wang, Chuan Chen, Jian Lou, Bin Chen, Lei Yang, Zibin Zheng
This implies that our FedCEO can effectively recover the disrupted semantic information by smoothing the global semantic space for different privacy settings and continuous training processes.
1 code implementation • 3 Feb 2024 • Juan-Ni Wu, Tong Wang, Li-Juan Tang, Hai-Long Wu, Ru-Qin Yu
Language models demonstrate fundamental abilities in syntax, semantics, and reasoning, though their performance often depends significantly on the inputs they process.
1 code implementation • 3 Jan 2024 • Qingyuan Yang, Guanzhou Chen, Xiaoliang Tan, Tong Wang, Jiaqi Wang, Xiaodong Zhang
Stereo matching and semantic segmentation are significant tasks in binocular satellite 3D reconstruction.
1 code implementation • 27 Dec 2023 • Xiaoliang Tan, Guanzhou Chen, Tong Wang, Jiaqi Wang, Xiaodong Zhang
The field of Remote Sensing (RS) widely employs Change Detection (CD) on very-high-resolution (VHR) images.
1 code implementation • 16 Dec 2023 • Kaiyou Song, Shan Zhang, Tong Wang
In this study, inspired by the way human beings grasp an image, i.e., focusing on the main object first, we present a semantic-aware autoregressive image modeling (SemAIM) method to tackle this challenge.
no code implementations • 14 Dec 2023 • Dat Hong, Tong Wang
This paper introduces Personalized Path Recourse, a novel method that generates recourse paths for a reinforcement learning agent.
no code implementations • 23 Oct 2023 • Ju Wu, Tong Wang, Min Ma
This paper investigates the finite-time adaptive fuzzy tracking control problem for a class of pure-feedback systems with full-state constraints.
no code implementations • 23 Oct 2023 • Ju Wu, Tong Wang
Furthermore, a low-pass filter driven by a newly-defined control input, is employed to generate the actual control input, which facilitates the design of backstepping control.
1 code implementation • 11 Oct 2023 • Qingyi Si, Tong Wang, Zheng Lin, Xu Zhang, Yanan Cao, Weiping Wang
This paper releases a powerful Chinese LLM that is comparable to ChatGLM.
no code implementations • 3 Oct 2023 • Tong Wang, Shuichi Kurabayashi
Recent advances in Neural Radiance Fields (NeRF) have demonstrated significant potential for representing 3D scene appearances as implicit neural networks, enabling the synthesis of high-fidelity novel views.
no code implementations • 28 Sep 2023 • He Zhang, Siyuan Liu, Jiacheng You, Chang Liu, Shuxin Zheng, Ziheng Lu, Tong Wang, Nanning Zheng, Bin Shao
Orbital-free density functional theory (OFDFT) is a quantum chemistry formulation that has a lower cost scaling than the prevailing Kohn-Sham DFT, which is increasingly desired for contemporary molecular research.
1 code implementation • NeurIPS 2023 • Haochen Wang, Junsong Fan, Yuxi Wang, Kaiyou Song, Tong Wang, Zhaoxiang Zhang
As it is empirically observed that Vision Transformers (ViTs) are quite insensitive to the order of input tokens, the need for an appropriate self-supervised pretext task that enhances the location awareness of ViTs is becoming evident.
no code implementations • 27 Jun 2023 • Shanshan Song, Tong Wang, Guohao Shen, Yuanyuan Lin, Jian Huang
Our approach simultaneously estimates a regression function and a conditional generator using a generative learning framework, where a conditional generator is a function that can generate samples from a conditional distribution.
1 code implementation • 4 Jan 2023 • Juan-Ni Wu, Tong Wang, Yue Chen, Li-Juan Tang, Hai-Long Wu, Ru-Qin Yu
Effective representation of molecules is a crucial factor affecting the performance of artificial intelligence models.
no code implementations • ICCV 2023 • Kaiyou Song, Shan Zhang, Zihao An, Zimeng Luo, Tong Wang, Jin Xie
In contrastive self-supervised learning, the common way to learn discriminative representation is to pull different augmented "views" of the same image closer while pushing all other images further apart, which has been proven to be effective.
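A minimal sketch of the pull-together / push-apart objective this describes, written as an InfoNCE-style loss in NumPy; the embeddings and temperature below are illustrative assumptions, not taken from the paper:

```python
# Sketch of a contrastive (InfoNCE-style) loss: two views of the same image are
# pulled together, all other images in the batch act as negatives.
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of N images."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                    # (N, N) similarity matrix
    # Diagonal entries are the positive pairs (two views of the same image).
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(info_nce(z1, z2))
```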
no code implementations • 25 Nov 2022 • Rania Abdelghani, Yen-Hsiang Wang, Xingdi Yuan, Tong Wang, Pauline Lucas, Hélène Sauzéon, Pierre-Yves Oudeyer
In this context, we propose to leverage advances in the natural language processing field (NLP) and investigate the efficiency of using a large language model (LLM) for automating the production of the pedagogical content of a curious question-asking (QA) training.
no code implementations • 23 Nov 2022 • Yusong Wang, Shaoning Li, Zun Wang, Xinheng He, Bin Shao, Tie-Yan Liu, Tong Wang
In this technical report, we provide our solution to the OGB-LSC 2022 Graph Regression Task.
2 code implementations • 6 Nov 2022 • Ronilo J. Ragodos, Tong Wang, Qihang Lin, Xun Zhou
To teach ProtoX about visual similarity, we pre-train an encoder using contrastive learning via self-supervised learning to recognize states as similar if they occur close together in time and receive the same action from the black-box agent.
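A minimal sketch of the positive-pair construction described here: two states count as similar if they occur within a small time window and the black-box agent takes the same action in both. The window size and toy trajectory are placeholders for illustration, not the authors' settings:

```python
# Build contrastive positive pairs from an agent trajectory: states that are
# close in time and share the same black-box action are treated as "similar".
def make_positive_pairs(states, actions, window=3):
    pairs = []
    for i in range(len(states)):
        for j in range(i + 1, min(i + 1 + window, len(states))):
            if actions[i] == actions[j]:
                pairs.append((states[i], states[j]))
    return pairs

states  = ["s0", "s1", "s2", "s3", "s4"]
actions = ["left", "left", "right", "left", "left"]
print(make_positive_pairs(states, actions))
```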
no code implementations • 23 Oct 2022 • Nicholas Wolczynski, Maytal Saar-Tsechansky, Tong Wang
The human's reconciliation costs and imperfect discretion behavior introduce the need to develop AI systems which (1) provide recommendations selectively, (2) leverage the human partner's ADB to maximize the team's decision accuracy while regularizing for reconciliation costs, and (3) are inherently interpretable.
no code implementations • 22 Sep 2022 • Xingdi Yuan, Tong Wang, Yen-Hsiang Wang, Emery Fine, Rania Abdelghani, Pauline Lucas, Hélène Sauzéon, Pierre-Yves Oudeyer
Large Language Models (LLMs) have in recent years demonstrated impressive prowess in natural language generation.
1 code implementation • Proceedings of the First International Conference on Automated Machine Learning 2022 • Trapit Bansal, Salaheddin Alzubi, Tong Wang, Jay-Yoon Lee, Andrew McCallum
Meta-Adapters perform competitively with state-of-the-art few-shot learning methods that require full fine-tuning, while only fine-tuning 0.6% of the parameters.
1 code implementation • 30 Aug 2022 • Kehan Wu, Yingce Xia, Yang Fan, Pan Deng, Haiguang Liu, Lijun Wu, Shufang Xie, Tong Wang, Tao Qin, Tie-Yan Liu
Structure-based drug design is drawing growing attention in computer-aided drug discovery.
1 code implementation • 20 Aug 2022 • Rui Meng, Tong Wang, Xingdi Yuan, Yingbo Zhou, Daqing He
Finally, we fine-tune the model with limited data with true labels to fully adapt it to the target domain.
no code implementations • 20 Aug 2022 • Tong Wang
Human rational thinking has a high degree of freedom and transcendence, and such problems cannot be expected to be studied merely by elaborating how the nervous system realizes them.
no code implementations • 13 Aug 2022 • Tong Wang, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, Ting Wang
Existing defenses are mainly built upon the observation that the backdoor trigger is usually of small size or affects the activation of only a few neurons.
2 code implementations • 16 Jun 2022 • Tong Wang, Guanyu Yang, Qijia He, Zhenquan Zhang, Junhua Wu
However, most existing methods 1) do not directly address the clustering task, since representation learning and the clustering process are separated; 2) depend too much on data augmentation, which greatly limits the capability of contrastive learning; 3) ignore the contrastive message for clustering tasks, which adversely degrades the clustering results.
no code implementations • 28 Mar 2022 • Zimeng Li, Shichao Zhu, Bin Shao, Tie-Yan Liu, Xiangxiang Zeng, Tong Wang
Drug-drug interaction (DDI) prediction provides a drug combination strategy for systemically effective treatment.
1 code implementation • ACL 2022 • He Bai, Tong Wang, Alessandro Sordoni, Peng Shi
Class-based language models (LMs) have been long devised to address context sparsity in $n$-gram LMs.
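For context, a minimal sketch of the classic class-based n-gram factorization that such models build on, P(w_i | w_{i-1}) = P(c(w_i) | c(w_{i-1})) * P(w_i | c(w_i)); the tiny vocabulary, classes, and probabilities are invented for illustration and are not the paper's model:

```python
# Classic class-based bigram: condition on word classes instead of raw words,
# then emit the word from its class. This mitigates context sparsity.
word2class = {"paris": "CITY", "london": "CITY", "eats": "VERB", "runs": "VERB"}
class_bigram = {("CITY", "VERB"): 0.6, ("VERB", "CITY"): 0.4}  # P(c_i | c_{i-1})
word_given_class = {"paris": 0.5, "london": 0.5, "eats": 0.5, "runs": 0.5}

def class_bigram_prob(prev_word, word):
    c_prev, c = word2class[prev_word], word2class[word]
    return class_bigram.get((c_prev, c), 0.0) * word_given_class[word]

print(class_bigram_prob("paris", "eats"))  # 0.6 * 0.5 = 0.3
```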
no code implementations • 11 Mar 2022 • Ashish B. George, Tong Wang, Sergei Maslov
To understand the community structure in these energy-limited environments, we developed a microbial community consumer-resource model incorporating energetic and thermodynamic constraints on an interconnected metabolic network.
1 code implementation • 3 Feb 2022 • Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, Tie-Yan Liu
Molecular conformation generation aims to generate three-dimensional coordinates of all the atoms in a molecule and is an important task in bioinformatics and pharmacology.
no code implementations • CVPR 2022 • Tong Wang, Yousong Zhu, Yingying Chen, Chaoyang Zhao, Bin Yu, Jinqiao Wang, Ming Tang
The decision boundary between any two categories is the angular bisector of their weight vectors.
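A brief sketch of the geometry behind this claim, under the assumption of bias-free class scores and equal-norm weight vectors (our notation, not necessarily the paper's):

```latex
% Assumption: scores are s_i(x) = w_i^T x with no bias and ||w_i|| = ||w_j||.
\[
  s_i(x) = w_i^\top x = \|w_i\|\,\|x\|\cos\theta_i(x), \qquad \|w_i\| = \|w_j\|,
\]
\[
  s_i(x) = s_j(x)
  \;\Longleftrightarrow\; \cos\theta_i(x) = \cos\theta_j(x)
  \;\Longleftrightarrow\; \theta_i(x) = \theta_j(x),
\]
% so the boundary is the set of inputs at equal angle to w_i and w_j,
% i.e. the angular bisector of the two weight vectors.
```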
no code implementations • 13 Dec 2021 • Zhengfei Kuang, Jiaman Li, Mingming He, Tong Wang, Yajie Zhao
To make the local features aware of the global context and improve their matching accuracy, we introduce DenseGAP, a new solution for efficient Dense correspondence learning with a Graph-structured neural network conditioned on Anchor Points.
1 code implementation • 22 Nov 2021 • Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang
We also evaluate FTROJAN against state-of-the-art defenses as well as several adaptive defenses that are designed on the frequency domain.
no code implementations • EMNLP 2021 • Trapit Bansal, Karthick Gunasekaran, Tong Wang, Tsendsuren Munkhdalai, Andrew McCallum
Meta-learning considers the problem of learning an efficient learning process that can leverage its past experience to accurately solve new tasks.
no code implementations • 14 Oct 2021 • Siyuan Liu, Yusong Wang, Tong Wang, Yifan Deng, Liang He, Bin Shao, Jian Yin, Nanning Zheng, Tie-Yan Liu
The identification of active binding drugs for target proteins (termed drug-target interaction prediction) is the key challenge in virtual screening, which plays an essential role in drug discovery.
1 code implementation • 9 Oct 2021 • Mu Yang, Shaojin Ding, Tianlong Chen, Tong Wang, Zhangyang Wang
This work presents a lifelong learning approach to training a multilingual Text-To-Speech (TTS) system, where each language is treated as an individual task and is learned sequentially and continually.
no code implementations • 4 Jun 2021 • Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, Tong Wang
We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision.
no code implementations • NAACL 2021 • Mingyue Shang, Tong Wang, Mihail Eric, Jiangning Chen, Jiyang Wang, Matthew Welch, Tiantong Deng, Akshay Grewal, Han Wang, Yue Liu, Yang Liu, Dilek Hakkani-Tur
In recent years, incorporating external knowledge for response generation in open-domain conversation systems has attracted great interest.
no code implementations • NAACL 2021 • Tong Wang, Jiangning Chen, Mohsen Malmir, Shuyan Dong, Xin He, Han Wang, Chengwei Su, Yue Liu, Yang Liu
In dialog systems, the Natural Language Understanding (NLU) component typically makes the interpretation decision (including domain, intent and slots) for an utterance before the mentioned entities are resolved.
2 code implementations • ACL 2021 • Rui Meng, Khushboo Thaker, Lei Zhang, Yue Dong, Xingdi Yuan, Tong Wang, Daqing He
Faceted summarization provides briefings of a document from different perspectives.
Ranked #1 on Unsupervised Extractive Summarization on FacetSum
no code implementations • 6 May 2021 • Tong Wang, Jingyi Yang, Yunyi Li, Boxiang Wang
We propose Partially Interpretable Estimators (PIE) which attribute a prediction to individual features via an interpretable model, while a (possibly) small part of the PIE prediction is attributed to the interaction of features via a black-box model, with the goal to boost the predictive performance while maintaining interpretability.
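A minimal sketch of the additive split this describes: an interpretable (here, linear) component carries most of the prediction and a black-box component models the residual interactions. The models and synthetic data below are placeholders, not the authors' implementation:

```python
# Fit an interpretable linear model first, then let a black-box model explain
# only the residual (interaction) part; the final prediction is their sum.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] - X[:, 1] + X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=500)

linear = LinearRegression().fit(X, y)                    # interpretable part
residual = y - linear.predict(X)
booster = GradientBoostingRegressor().fit(X, residual)   # black-box correction

def pie_predict(x):
    return linear.predict(x) + booster.predict(x)

print(pie_predict(X[:3]))
```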
no code implementations • 6 Apr 2021 • Ying Lin, Han Wang, Jiangning Chen, Tong Wang, Yue Liu, Heng Ji, Yang Liu, Premkumar Natarajan
For example, with "add milk to my cart", a customer may refer to a certain organic product, while some customers may want to re-order products they regularly purchase.
1 code implementation • CVPR 2021 • Tong Wang, Yousong Zhu, Chaoyang Zhao, Wei Zeng, Jinqiao Wang, Ming Tang
To address the problem of long-tail distribution for the large vocabulary object detection task, existing methods usually divide the whole categories into several groups and treat each group with different strategies.
no code implementations • 4 Feb 2021 • Chit Siong Lau, Jing Yee Chee, Yee Sin Ang, Shi Wun Tong, Liemao Cao, Zi-En Ooi, Tong Wang, Lay Kee Ang, Yan Wang, Manish Chhowalla, Kuan Eng Johnson Goh
Here, temperature-dependent transfer length measurements are performed on chemical vapour deposition grown single-layer and bilayer WS$_2$ devices with indium alloy contacts.
Materials Science • Mesoscale and Nanoscale Physics
no code implementations • 28 Jan 2021 • Alexei V. Tkachenko, Sergei Maslov, Tong Wang, Ahmed Elbanna, George N. Wong, Nigel Goldenfeld
It is well recognized that population heterogeneity plays an important role in the spread of epidemics.
no code implementations • 12 Jan 2021 • Shaosheng Xu, Jinde Cao, Yichao Cao, Tong Wang
Because the gradient descent method in deep learning raises a series of issues, this paper proposes a novel gradient-free deep learning structure.
no code implementations • 6 Jan 2021 • Yao Li, Tong Wang, Juanrong Zhang, Bin Shao, Haipeng Gong, Yusong Wang, Siyuan Liu, Tie-Yan Liu
We performed molecular dynamics simulation on the S protein with a focus on the function of its N-terminal domains (NTDs).
no code implementations • 17 Nov 2020 • Tong Wang, Maytal Saar-Tsechansky
We formulate a multi-objective optimization for building a surrogate model, where we simultaneously optimize for both predictive performance and bias.
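A minimal sketch of one way to scalarize the two objectives named here, scoring a surrogate by predictive loss plus a weighted bias penalty; the bias measure (a demographic-parity gap) and the weight lambda are illustrative assumptions, not the paper's formulation:

```python
# Combined objective: predictive loss + lambda * group-level bias of predictions.
import numpy as np

def surrogate_objective(y_true, y_pred, group, lam=1.0):
    accuracy_loss = np.mean((y_true - y_pred) ** 2)
    # demographic-parity gap between the two groups' average predictions
    bias = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
    return accuracy_loss + lam * bias

y_true = np.array([1, 0, 1, 0])
y_pred = np.array([0.9, 0.2, 0.7, 0.4])
group = np.array([0, 0, 1, 1])
print(surrogate_objective(y_true, y_pred, group))
```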
1 code implementation • NAACL 2021 • Rui Meng, Xingdi Yuan, Tong Wang, Sanqiang Zhao, Adam Trischler, Daqing He
Recent years have seen a flourishing of neural keyphrase generation (KPG) works, including the release of several large-scale datasets and a host of new models to tackle them.
no code implementations • 19 Jul 2020 • Xinwei Chen, Tong Wang, Barrett W. Thomas, Marlin W. Ulmer
The demand for same-day delivery (SDD) has increased rapidly in the last few years and has particularly boomed during the COVID-19 pandemic.
1 code implementation • 3 Jul 2020 • Dat Hong, Tong Wang, Stephen S. Baek
We propose a novel interpretable deep neural network for text classification, called ProtoryNet, based on a new concept of prototype trajectories.
1 code implementation • EMNLP 2020 • Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, Mohit Iyyer
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task, and we validate their effectiveness in experiments controlled for source and target data size.
no code implementations • 1 Apr 2020 • Fengling Li, Tong Wang, Lei Zhu, Zheng Zhang, Xinhua Wang
Unlike previous cross-modal hashing approaches, our learning framework jointly optimizes semantic preservation, which transforms deep features of multimedia data into binary hash codes, and semantic regression, which directly regresses the query modality representation to explicit labels.
no code implementations • 23 Feb 2020 • Azhar Hussain, Tong Wang, Cao Jiahua
We consider a system to optimize the duration of traffic signals using multi-agent deep reinforcement learning and Vehicle-to-Everything (V2X) communication.
no code implementations • 19 Feb 2020 • Chen Liao, Tong Wang, Sergei Maslov, Joao B. Xavier
We used the three models to study each community's limits of robustness to perturbations such as variations in resource supply, antibiotic treatments, and invasion by other "cheater" species.
no code implementations • 10 Feb 2020 • Danqing Pan, Tong Wang, Satoshi Hara
We present an interpretable companion model for any pre-trained black-box classifiers.
no code implementations • 9 Nov 2019 • Tong Wang, Fujie Jin, Yu Hu, Yuan Cheng
The prediction model and the interpretable insights can be applied to help fundraisers better promote their fundraising campaigns and can potentially help crowdfunding platforms provide more timely feedback to all fundraisers.
no code implementations • 23 Sep 2019 • Hassan Rafique, Tong Wang, Qihang Lin
Driven by an increasing need for model interpretability, interpretable models have become strong competitors for black-box models in many real applications.
1 code implementation • 9 Sep 2019 • Rui Meng, Xingdi Yuan, Tong Wang, Peter Brusilovsky, Adam Trischler, Daqing He
Recently, concatenating multiple keyphrases as a target sequence has been proposed as a new learning paradigm for keyphrase generation.
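A minimal sketch of this target-sequence format (often called One2Seq): keyphrases are joined by a separator token and terminated by an end token. The exact tokens and ordering here are illustrative choices, not necessarily those used in the paper:

```python
# Flatten a set of keyphrases into a single decoder target sequence.
SEP, EOS = "<sep>", "<eos>"

def build_target(keyphrases):
    return f" {SEP} ".join(keyphrases) + f" {EOS}"

print(build_target(["keyphrase generation", "sequence to sequence", "neural networks"]))
# -> "keyphrase generation <sep> sequence to sequence <sep> neural networks <eos>"
```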
1 code implementation • NeurIPS 2019 • Tsendsuren Munkhdalai, Alessandro Sordoni, Tong Wang, Adam Trischler
We augment recurrent neural networks with an external memory mechanism that builds upon recent progress in metalearning.
no code implementations • 10 May 2019 • Tong Wang, Qihang Lin
The interpretable model substitutes the black-box model on a subset of data where the black-box is overkill or nearly overkill, gaining transparency at no or low cost to predictive accuracy.
no code implementations • 13 Dec 2018 • Yunyi Li, Tong Wang
Our goal is to predict the location of the next crime in a crime series, based on the identified previous offenses in the series.
1 code implementation • NeurIPS 2018 • Tong Wang
We present the Multi-value Rule Set (MRS) for interpretable classification with feature-efficient presentations.
no code implementations • 30 Nov 2018 • Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, Tong Wang
We propose a possible solution to a public challenge posed by the Fair Isaac Corporation (FICO), which is to provide an explainable model for credit risk assessment.
1 code implementation • ACL 2020 • Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, Adam Trischler
With both previous and new evaluation metrics, our model outperforms strong baselines on all datasets.
1 code implementation • 6 Jul 2018 • Tong Wang, Veerajalandhar Allareddy, Sankeerth Rampa, Veerasathpurush Allareddy
We propose a Bayesian framework for formulating an MRS model and propose an efficient inference method for learning a maximum a posteriori solution, incorporating theoretically grounded bounds to iteratively reduce the search space and improve the search efficiency.
no code implementations • WS 2018 • Sandeep Subramanian, Tong Wang, Xingdi Yuan, Saizheng Zhang, Adam Trischler, Yoshua Bengio
We propose a two-stage neural model to tackle question generation from documents.
1 code implementation • 12 Feb 2018 • Tong Wang
This work addresses the situation where a black-box model with good predictive performance is chosen over its interpretable competitors, and we show interpretability is still achievable in this case.
no code implementations • 16 Oct 2017 • Tong Wang, Cynthia Rudin
The Bayesian model has tunable parameters that can characterize subgroups with various sizes, providing users with more flexible choices of models from the treatment efficient frontier.
no code implementations • 15 Oct 2017 • Tong Wang
MARS introduces a more generalized form of association rules that allows multiple values in a condition.
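A minimal sketch of what a multi-value condition looks like: a single condition may admit a set of values (e.g., color in {red, blue}) rather than exactly one. The rule and records below are made up for illustration:

```python
# A rule with multi-value conditions covers a record if, for every feature in
# the rule, the record's value falls in that feature's allowed set.
rule = {"color": {"red", "blue"}, "size": {"small"}}

def rule_covers(rule, record):
    return all(record[feature] in allowed for feature, allowed in rule.items())

print(rule_covers(rule, {"color": "blue", "size": "small", "shape": "round"}))   # True
print(rule_covers(rule, {"color": "green", "size": "small", "shape": "round"}))  # False
```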
no code implementations • LREC 2018 • Boyang Li, Beth Cardier, Tong Wang, Florian Metze
Stories are a vital form of communication in human culture; they are employed daily to persuade, to elicit sympathy, or to convey a message.
no code implementations • 8 Jul 2017 • Tong Wang, Ping Chen, Boyang Li
An important and difficult challenge in building computational models for narratives is the automatic evaluation of narrative quality.
no code implementations • 14 Jun 2017 • Sandeep Subramanian, Tong Wang, Xingdi Yuan, Saizheng Zhang, Yoshua Bengio, Adam Trischler
We propose a two-stage neural model to tackle question generation from documents.
no code implementations • 5 Jun 2017 • Tong Wang, Xingdi Yuan, Adam Trischler
We propose a generative machine comprehension model that learns jointly to ask and answer questions based on documents.
4 code implementations • WS 2017 • Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, Adam Trischler
We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers.
no code implementations • 21 Apr 2017 • Ping Chen, Fei Wu, Tong Wang, Wei Ding
In this paper, we will present some preliminary results on one especially useful and challenging problem in NLP system evaluation: how to pinpoint content differences between two text passages (especially for large passages such as articles and books).
2 code implementations • WS 2017 • Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs.
13 code implementations • 28 Nov 2016 • Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering.
1 code implementation • 27 Sep 2016 • Jipeng Qiang, Ping Chen, Tong Wang, Xindong Wu
Inferring topics from the overwhelming amount of short texts becomes a critical but challenging task for many content analysis applications, such as content characterization, user interest profiling, and emerging topic detection.
no code implementations • 13 Sep 2016 • Tong Wang, Ping Chen, Kevin Amaral, Jipeng Qiang
Text simplification (TS) aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning.
no code implementations • 6 Nov 2015 • Tong Wang, Cynthia Rudin
Or's of And's (OA) models consist of a small number of disjunctions of conjunctions, also called disjunctive normal form.
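A minimal sketch of evaluating such a disjunctive-normal-form model: predict positive if any conjunction of conditions is satisfied. The two example rules are invented, not learned by the paper's method:

```python
# Or's of And's: a record is positive if it satisfies at least one conjunction.
rules = [
    [("age", ">", 60), ("smoker", "==", True)],   # conjunction 1
    [("blood_pressure", ">", 140)],               # conjunction 2
]
OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

def predict(record):
    return any(all(OPS[op](record[f], v) for f, op, v in conj) for conj in rules)

print(predict({"age": 65, "smoker": True, "blood_pressure": 120}))   # True
print(predict({"age": 40, "smoker": False, "blood_pressure": 120}))  # False
```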
no code implementations • 28 Apr 2015 • Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, Perry MacNeille
In both cases, there are prior parameters that the user can set to encourage the model to have a desired size and shape, to conform with a domain-specific definition of interpretability.