no code implementations • ICML 2020 • Ting Chen, Lala Li, Yizhou Sun
Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings.
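As a minimal sketch of the idea (illustrative only, not the paper's method), an embedding layer is just a learnable table that maps each discrete symbol id to a continuous vector:

```python
import random

def make_embedding(vocab_size, dim, seed=0):
    """A minimal embedding table: one dim-dimensional vector per symbol id."""
    rng = random.Random(seed)
    return [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(vocab_size)]

def embed(table, ids):
    """Map a sequence of discrete symbol ids to their continuous vectors."""
    return [table[i] for i in ids]

table = make_embedding(vocab_size=10, dim=4)
vectors = embed(table, [3, 7, 3])
```

In a real model the table entries are trained by gradient descent; here they are random placeholders.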
no code implementations • Findings (EMNLP) 2021 • Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, Jason Baldridge
Both image-caption pairs and translation pairs provide the means to learn deep representations of and connections between languages.
no code implementations • 17 Dec 2024 • Xuanzhong Chen, Ye Jin, Xiaohao Mao, Lun Wang, Shuyang Zhang, Ting Chen
Rare diseases, despite their low individual incidence, collectively affect around 300 million people worldwide owing to the sheer number of distinct diseases.
1 code implementation • 19 Nov 2024 • Zhengyao Ding, Yujian Hu, Youyao Xu, Chengchen Zhao, Ziyu Li, Yiheng Mao, Haitao Li, Qian Li, Jing Wang, Yue Chen, Mengjia Chen, Longbo Wang, Xuesen Chu, Weichao Pan, Ziyi Liu, Fei Wu, HongKun Zhang, Ting Chen, Zhengxing Huang
Cardiovascular diseases (CVDs) present significant challenges for early and accurate diagnosis.
no code implementations • 12 Nov 2024 • Yuanbo Wen, Tao Gao, Ziqi Li, Jing Zhang, Kaihao Zhang, Ting Chen
Specifically, our model employs the contrastive language-image pre-training model (CLIP) to facilitate the training of our proposed latent prompt generators (LPGs), which represent three types of latent prompts to characterize the degradation type, degradation property and image caption.
1 code implementation • 16 Oct 2024 • Lingxiao Luo, Bingda Tang, Xuanzhong Chen, Rong Han, Ting Chen
For instance, most VLMs rely on a single method of visual grounding, whereas complex medical tasks demand more versatile approaches.
1 code implementation • 23 Sep 2024 • Jiachi Chen, Qingyuan Zhong, Yanlin Wang, Kaiwen Ning, Yongkun Liu, Zenan Xu, Zhe Zhao, Ting Chen, Zibin Zheng
Despite their benefits, LLMs also pose notable risks, including the potential to generate harmful content and being abused by malicious developers to create malicious code.
1 code implementation • 21 Aug 2024 • Rong Han, Xiaohong Liu, Tong Pan, Jing Xu, Xiaoyu Wang, Wuyang Lan, Zhenyu Li, Zixuan Wang, Jiangning Song, Guangyu Wang, Ting Chen
We propose a Co-Former to combine the cross-modal sequence and structure information and a bi-scope pre-training strategy for improving Co-Former's interaction understanding.
no code implementations • 24 Jul 2024 • Yuanbo Wen, Tao Gao, Ting Chen
In addition, we employ the rain-relevance discarding energy function (RDEF) and the rain-irrelevance preserving energy function (RPEF) to direct the reverse sampling procedure of a pre-trained diffusion model, effectively removing the rain streaks while preserving the image contents.
no code implementations • 8 Jul 2024 • Emaad Khwaja, Abdullah Rashwan, Ting Chen, Oliver Wang, Suraj Kothawade, Yeqing Li
We present a one-shot text-to-image diffusion model that can generate high-resolution images from natural language descriptions.
no code implementations • 2 Apr 2024 • Rong Han, Wenbing Huang, Lingxiao Luo, Xinyan Han, Jiaming Shen, Zhiqiang Zhang, Jun Zhou, Ting Chen
In this paper, we propose a neural network model to address multiple tasks jointly upon the input of 3D protein structures.
no code implementations • 8 Mar 2024 • Yazhe Li, Jorg Bornschein, Ting Chen
In this paper, we explore a new generative approach for learning visual representations.
1 code implementation • 9 Feb 2024 • Xuanzhong Chen, Xiaohao Mao, Qihan Guo, Lun Wang, Shuyang Zhang, Ting Chen
Meanwhile, we have compiled the largest open-source dataset on rare disease patients, establishing a benchmark for future studies in this domain.
1 code implementation • 2 Feb 2024 • Kalvin Chang, Nathaniel R. Robinson, Anna Cai, Ting Chen, Annie Zhang, David R. Mortensen
We describe a set of new methods to partially automate linguistic phylogenetic inference given (1) cognate sets with their respective protoforms and sound laws, (2) a mapping from phones to their articulatory features and (3) a typological database of sound changes.
no code implementations • 13 Dec 2023 • Yuanbo Wen, Tao Gao, Ziqi Li, Jing Zhang, Ting Chen
Haze obscures remote sensing images, hindering valuable information extraction.
1 code implementation • 12 Dec 2023 • Lingxiao Luo, Xuanzhong Chen, Bingda Tang, Xinsheng Chen, Rong Han, Chengpeng Hu, Yujiang Li, Ting Chen
In this work, we propose a universal foundation model for medical image analysis that processes images with heterogeneous spatial properties using a unified structure.
no code implementations • 30 Nov 2023 • Zhiwei Deng, Ting Chen, Yang Li
In this paper, we propose the Perceptual Group Tokenizer, a model that relies entirely on grouping operations to extract visual features and perform self-supervised representation learning: a series of grouping operations iteratively hypothesizes the context for pixels or superpixels to refine feature representations.
Ranked #25 on Self-Supervised Image Classification on ImageNet
Representation Learning Self-Supervised Image Classification +2
no code implementations • 10 Nov 2023 • Calvin Luo, Boqing Gong, Ting Chen, Chen Sun
Motivated by the recent success of multi-task transformers for visual recognition and language understanding, we propose a unified neural architecture for visual recognition and reasoning with a generic interface (e.g., tokens) for both.
no code implementations • 19 Sep 2023 • Yuanbo Wen, Tao Gao, Ziqi Li, Jing Zhang, Ting Chen
This module leverages dimension-wise queries that are independent of the input features and employs global context-aware attention (GCA) to capture essential features while avoiding the entanglement of redundant or irrelevant information.
no code implementations • 27 May 2023 • Kaize Ding, Albert Jiongqian Liang, Bryan Perozzi, Ting Chen, Ruoxi Wang, Lichan Hong, Ed H. Chi, Huan Liu, Derek Zhiyuan Cheng
Learning expressive representations for high-dimensional yet sparse features has been a longstanding problem in information retrieval.
1 code implementation • 22 May 2023 • Ting Chen, Lala Li
We employ two types of transformer layers: local layers operate on data tokens within each group, while global layers operate on a smaller set of introduced latent tokens.
1 code implementation • 6 Apr 2023 • Tao Gao, Yuanbo Wen, Kaihao Zhang, Peng Cheng, Ting Chen
Rain-by-snow weather removal is a specialized task in weather-degraded image restoration, aiming to eliminate coexisting rain streaks and snow particles.
2 code implementations • 26 Jan 2023 • Ting Chen
We empirically study the effect of noise scheduling strategies for denoising diffusion generative models.
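A rough sketch of the kind of schedules being compared, assuming the common convention that gamma(t) gives the signal fraction at time t in [0, 1] (the function names here are illustrative, not the paper's notation):

```python
import math

def linear_gamma(t):
    """Linear schedule: the signal level decays linearly in t."""
    return 1.0 - t

def cosine_gamma(t):
    """Cosine schedule: slower decay near t = 0 and t = 1."""
    return math.cos(0.5 * math.pi * t) ** 2

def noisy(x, t, eps, gamma):
    """Diffuse x to time t: sqrt(gamma) * x + sqrt(1 - gamma) * eps."""
    g = gamma(t)
    return math.sqrt(g) * x + math.sqrt(1.0 - g) * eps
```

Changing the schedule changes how noise levels are distributed over training, which is the knob the paper studies.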
2 code implementations • 22 Dec 2022 • Allan Jabri, David Fleet, Ting Chen
We show how to leverage recurrence by conditioning the latent tokens at each forward pass of the reverse diffusion process with those from prior computation, i.e., latent self-conditioning.
Ranked #5 on Video Prediction on Kinetics-600 12 frames, 64x64
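The recurrence idea can be sketched as a reverse-sampling loop that warm-starts each step's latents from the previous step; `denoise_step` and `toy_step` below are stand-ins, not the paper's network:

```python
def reverse_process(x, steps, denoise_step):
    """Run the reverse diffusion loop, passing latents from the previous
    step back into the network (latent self-conditioning)."""
    latents = None  # no conditioning available at the first step
    for t in reversed(range(steps)):
        x, latents = denoise_step(x, t, latents)
    return x

# A stand-in denoiser: shrinks x toward 0 and accumulates the steps it saw.
def toy_step(x, t, latents):
    history = (latents or []) + [t]
    return x * 0.5, history

out = reverse_process(8.0, 3, toy_step)
```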
no code implementations • 7 Dec 2022 • Xuanyu Shi, Shiyao Xie, Wenjia Wang, Ting Chen, Jian Du
Failure is common in clinical trials, and the informative failures reported in negative results indicate approaches that should not be taken again.
1 code implementation • CVPR 2023 • Xuan Zhang, Shiyu Li, Xi Li, Ping Huang, Jiulong Shan, Ting Chen
In this study, we propose an improved model called DeSTSeg, which integrates a pre-trained teacher network, a denoising student encoder-decoder, and a segmentation network into one framework.
Ranked #52 on Anomaly Detection on MVTec AD
1 code implementation • ICCV 2023 • Ting Chen, Lala Li, Saurabh Saxena, Geoffrey Hinton, David J. Fleet
Panoptic segmentation assigns semantic and instance ID labels to every pixel of an image.
7 code implementations • 8 Aug 2022 • Ting Chen, Ruixiang Zhang, Geoffrey Hinton
The main idea behind our approach is to first represent the discrete data as binary bits, and then train a continuous diffusion model to model these bits as real numbers, which we call analog bits.
Ranked #7 on Image Captioning on MS COCO
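A small sketch of the analog-bits encoding described above, assuming a simple big-endian bit layout and the usual shift-and-scale into {-1.0, +1.0}:

```python
def int_to_analog_bits(x, n_bits):
    """Represent an integer as n_bits 'analog bits' in {-1.0, +1.0}."""
    bits = [(x >> i) & 1 for i in reversed(range(n_bits))]
    return [2.0 * b - 1.0 for b in bits]

def analog_bits_to_int(bits):
    """Threshold real-valued bits at 0 and decode back to an integer."""
    out = 0
    for b in bits:
        out = (out << 1) | (1 if b > 0.0 else 0)
    return out
```

At sampling time the diffusion model produces real-valued bits, and thresholding recovers the discrete data.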
no code implementations • 3 Aug 2022 • Wenkai Li, Cheng Feng, Ting Chen, Jun Zhu
In this work, to tackle this important challenge, we first investigate the robustness of commonly used deep TSAD methods with contaminated training data, which provides a guideline for applying these methods when the provided training data are not guaranteed to be anomaly-free.
2 code implementations • 12 Jul 2022 • Wentse Chen, Shiyu Huang, Yuan Chiang, Tim Pearce, Wei-Wei Tu, Ting Chen, Jun Zhu
We propose Diversity-Guided Policy Optimization (DGPO), an on-policy algorithm that discovers multiple strategies for solving a given task.
1 code implementation • 15 Jun 2022 • Ting Chen, Saurabh Saxena, Lala Li, Tsung-Yi Lin, David J. Fleet, Geoffrey Hinton
Despite that, by formulating the output of each task as a sequence of discrete tokens with a unified interface, we show that one can train a neural network with a single model architecture and loss function on all these tasks, with no task-specific customization.
no code implementations • 8 Jun 2022 • Weijie He, Ting Chen
In the critic network, a supervised diagnosis model for disease prediction is incorporated to precisely estimate the state-value function.
1 code implementation • 23 May 2022 • Emmanuel Brempong Asiedu, Simon Kornblith, Ting Chen, Niki Parmar, Matthias Minderer, Mohammad Norouzi
We propose a decoder pretraining approach based on denoising, which can be combined with supervised pretraining of the encoder.
3 code implementations • 19 May 2022 • Shekoofeh Azizi, Laura Culp, Jan Freyberg, Basil Mustafa, Sebastien Baur, Simon Kornblith, Ting Chen, Patricia MacWilliams, S. Sara Mahdavi, Ellery Wulczyn, Boris Babenko, Megan Wilson, Aaron Loh, Po-Hsuan Cameron Chen, Yuan Liu, Pinal Bavishi, Scott Mayer McKinney, Jim Winkens, Abhijit Guha Roy, Zach Beaver, Fiona Ryan, Justin Krogue, Mozziyar Etemadi, Umesh Telang, Yun Liu, Lily Peng, Greg S. Corrado, Dale R. Webster, David Fleet, Geoffrey Hinton, Neil Houlsby, Alan Karthikesalingam, Mohammad Norouzi, Vivek Natarajan
These results suggest that REMEDIS can significantly accelerate the life-cycle of medical imaging AI development, thereby presenting an important step forward for medical imaging AI to deliver broad impact.
1 code implementation • 7 May 2022 • Yuanbo Wen, Tao Gao, Jing Zhang, Kaihao Zhang, Ting Chen
This approach comprises two key modules, a rain streaks removal network (R$^2$Net) focusing on accurate rain removal, and a details reconstruction network (DRNet) designed to recover the textural details of rain-free images.
no code implementations • 24 Nov 2021 • Shiqi Liu, Lu Wang, Jie Lian, Ting Chen, Cong Liu, Xuchen Zhan, Jintao Lu, Jie Liu, Ting Wang, Dong Geng, Hongwei Duan, Yuze Tian
Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and map-making tasks.
1 code implementation • NeurIPS 2021 • Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang
Contrastive learning approaches have achieved great success in learning visual representations with few labels of the target classes.
no code implementations • 15 Oct 2021 • Yao Qin, Chiyuan Zhang, Ting Chen, Balaji Lakshminarayanan, Alex Beutel, Xuezhi Wang
We show that patch-based negative augmentation consistently improves robustness of ViTs across a wide set of ImageNet based robustness benchmarks.
1 code implementation • 9 Oct 2021 • Shiyu Huang, Wenze Chen, Longfei Zhang, Shizhen Xu, Ziyang Li, Fengming Zhu, Deheng Ye, Ting Chen, Jun Zhu
To the best of our knowledge, TiKick is the first learning-based AI system that can take over the multi-agent Google Research Football full game, while previous work could either control a single agent or experiment on toy academic scenarios.
1 code implementation • 8 Oct 2021 • Shiyu Huang, Bin Wang, Dong Li, Jianye Hao, Ting Chen, Jun Zhu
In this work, we propose a new algorithm for circuit routing, named Ranking Cost, which innovatively combines search-based methods (i.e., the A* algorithm) and learning-based methods (i.e., Evolution Strategies) to form an efficient and trainable router.
6 code implementations • ICLR 2022 • Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey Hinton
We present Pix2Seq, a simple and generic framework for object detection.
Ranked #79 on Object Detection on COCO minival (using extra training data)
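The sequence interface can be sketched as follows; the bin count and the class-token offset are illustrative choices, not Pix2Seq's exact vocabulary:

```python
def box_to_tokens(box, n_bins=1000):
    """Quantize normalized [ymin, xmin, ymax, xmax] coords into discrete bins."""
    return [min(int(c * n_bins), n_bins - 1) for c in box]

def detection_to_sequence(boxes, class_ids, n_bins=1000):
    """Serialize detections as a flat token sequence: four coordinate tokens
    per box, then a class token (offset past the coordinate vocabulary)."""
    seq = []
    for box, cls in zip(boxes, class_ids):
        seq.extend(box_to_tokens(box, n_bins))
        seq.append(n_bins + cls)
    return seq

seq = detection_to_sequence([[0.1, 0.2, 0.5, 0.9]], [3])
```

A language-model-style decoder can then be trained to emit such sequences directly from image features.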
no code implementations • 10 Sep 2021 • Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, Jason Baldridge
Both image-caption pairs and translation pairs provide the means to learn deep representations of and connections between languages.
Ranked #1 on Semantic Image-Text Similarity on CxC
1 code implementation • NeurIPS 2021 • Long Zhao, Zizhao Zhang, Ting Chen, Dimitris N. Metaxas, Han Zhang
Attention-based models, exemplified by the Transformer, can effectively model long range dependency, but suffer from the quadratic complexity of the self-attention operation, making them difficult to adopt for high-resolution image generation based on Generative Adversarial Networks (GANs).
Ranked #2 on Image Generation on CelebA 256x256 (FID metric)
6 code implementations • 26 May 2021 • Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, Sercan O. Arik, Tomas Pfister
Hierarchical structures are popular in recent vision transformers; however, they require sophisticated designs and massive datasets to work well.
Ranked #89 on Image Classification on CIFAR-10
1 code implementation • 18 May 2021 • Wenkai Li, WenBo Hu, Ting Chen, Ning Chen, Cheng Feng
We also leverage a graph learning module to learn a sparse adjacency matrix to explicitly capture the stable interrelation structure among multiple time series channels for the interpretable pattern reconstruction of interrelated channels.
1 code implementation • ICLR 2021 • Wonkwang Lee, Whie Jung, Han Zhang, Ting Chen, Jing Yu Koh, Thomas Huang, Hyungsuk Yoon, Honglak Lee, Seunghoon Hong
Despite the recent advances in the literature, existing approaches are limited to moderately short-term prediction (less than a few seconds), while extrapolating to a longer future quickly leads to degradation of structure and content.
1 code implementation • ICCV 2021 • Shekoofeh Azizi, Basil Mustafa, Fiona Ryan, Zachary Beaver, Jan Freyberg, Jonathan Deaton, Aaron Loh, Alan Karthikesalingam, Simon Kornblith, Ting Chen, Vivek Natarajan, Mohammad Norouzi
Self-supervised pretraining followed by supervised fine-tuning has seen success in image recognition, especially when labeled examples are scarce, but has received limited attention in medical image analysis.
no code implementations • 1 Jan 2021 • Simon Kornblith, Honglak Lee, Ting Chen, Mohammad Norouzi
It is common to use the softmax cross-entropy loss to train neural networks on classification datasets where a single class label is assigned to each example.
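For reference, the single-label softmax cross-entropy loss can be computed as follows (a standard numerically stable formulation, not code from the paper):

```python
import math

def softmax_cross_entropy(logits, label):
    """Cross-entropy of a single-label example: -log softmax(logits)[label].
    Subtracting max(logits) keeps exp() numerically stable."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]
```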
no code implementations • 1 Jan 2021 • Shiyu Huang, Bin Wang, Dong Li, Jianye Hao, Jun Zhu, Ting Chen
In our method, we introduce a new set of variables called cost maps, which can help the A* router to find proper paths to achieve the global objective.
no code implementations • 2 Dec 2020 • Xiaoqi Li, Ting Chen, Xiapu Luo, Chenxu Wang
Because the locked cryptocurrencies can never be controlled by users, avoiding interaction with the accounts discovered by CLUE, and thus not repeating the same mistakes, can help users save money.
Cryptography and Security
no code implementations • 2 Dec 2020 • Weijie He, Xiaohao Mao, Chao Ma, Yu Huang, José Miguel Hernández-Lobato, Ting Chen
To address the challenge, we propose a non-RL Bipartite Scalable framework for Online Disease diAgnosis, called BSODA.
3 code implementations • NeurIPS 2021 • Ting Chen, Calvin Luo, Lala Li
We construct datasets with explicit and controllable competing features, and show that, for contrastive learning, a few bits of easy-to-learn shared features can suppress, and even fully prevent, the learning of other sets of competing features.
no code implementations • NeurIPS 2021 • Simon Kornblith, Ting Chen, Honglak Lee, Mohammad Norouzi
We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks, and the choice of loss has little effect when networks are fully fine-tuned on the new tasks.
1 code implementation • NeurIPS 2020 • Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness. In this work, we improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations.
4 code implementations • NeurIPS 2020 • Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
no code implementations • 21 Oct 2020 • Wang-Cheng Kang, Derek Zhiyuan Cheng, Tiansheng Yao, Xinyang Yi, Ting Chen, Lichan Hong, Ed H. Chi
Embedding learning of categorical features (e.g., user/item IDs) is at the core of various recommendation models including matrix factorization and neural collaborative filtering.
1 code implementation • 25 Jul 2020 • Tiansheng Yao, Xinyang Yi, Derek Zhiyuan Cheng, Felix Yu, Ting Chen, Aditya Menon, Lichan Hong, Ed H. Chi, Steve Tjoa, Jieqi Kang, Evan Ettinger
Our online results also verify our hypothesis that our framework indeed improves model performance even more on slices that lack supervision.
8 code implementations • NeurIPS 2020 • Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey Hinton
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
Self-Supervised Image Classification Semi-Supervised Image Classification
no code implementations • 4 Jun 2020 • Zhengli Zhao, Zizhao Zhang, Ting Chen, Sameer Singh, Han Zhang
We provide new state-of-the-art results for conditional generation on CIFAR-10 with both consistency loss and contrastive loss as additional regularizations.
no code implementations • 8 May 2020 • Manchao Zhang, Yi Xie, Jie Zhang, Weichen Wang, Chunwang Wu, Ting Chen, Wei Wu, Pingxing Chen
Decoherence induced by laser frequency noise is one of the most important obstacles in quantum information processing.
Quantum Physics
no code implementations • 20 Feb 2020 • Wang-Cheng Kang, Derek Zhiyuan Cheng, Ting Chen, Xinyang Yi, Dong Lin, Lichan Hong, Ed H. Chi
In this paper, we seek to learn highly compact embeddings for large-vocab sparse features in recommender systems (recsys).
92 code implementations • ICML 2020 • Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
Ranked #4 on Contrastive Learning on imagenet-1k
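The contrastive objective at SimCLR's core (NT-Xent) can be sketched in simplified form; this assumes z[2k] and z[2k+1] are the two augmented views of example k, and omits the projection head and large-batch details:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(z, temperature=0.5):
    """NT-Xent loss over 2N embeddings. Each view's positive is its
    partner (index i ^ 1); all other 2N - 2 embeddings are negatives."""
    n = len(z)
    losses = []
    for i in range(n):
        j = i ^ 1  # index of the positive partner
        denom = sum(math.exp(cosine(z[i], z[m]) / temperature)
                    for m in range(n) if m != i)
        pos = math.exp(cosine(z[i], z[j]) / temperature)
        losses.append(-math.log(pos / denom))
    return sum(losses) / n
```

When the two views of each example agree and differ from all others, the loss approaches its minimum.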
no code implementations • ICLR 2020 • Jinlong Liu, Guoqing Jiang, Yunzhi Bai, Ting Chen, Huayan Wang
As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers have explored many aspects of why they generalize well.
no code implementations • ICLR 2020 • Shiyu Huang, Hang Su, Jun Zhu, Ting Chen
Partially Observable Markov Decision Processes (POMDPs) are popular and flexible models for real-world decision-making applications that demand the information from past observations to make optimal decisions.
no code implementations • NeurIPS 2020 • Katherine L. Hermann, Ting Chen, Simon Kornblith
By taking less aggressive random crops at training time and applying simple, naturalistic augmentation (color distortion, noise, and blur), we train models that classify ambiguous images by shape a majority of the time, and outperform baselines on out-of-distribution test sets.
Ranked #9 on Object Recognition on shape bias
no code implementations • IJCNLP 2019 • Xinzhu Lin, Xiahui He, Qin Chen, Huaixiao Tou, Zhongyu Wei, Ting Chen
In this paper, we first construct a dialogue symptom diagnosis dataset based on an online medical forum with a large amount of dialogues between patients and doctors.
no code implementations • 25 Sep 2019 • Ting Chen, Lala Li, Yizhou Sun
Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings.
2 code implementations • 26 Aug 2019 • Ting Chen, Lala Li, Yizhou Sun
Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings.
1 code implementation • ACL 2019 • Ziniu Hu, Ting Chen, Kai-Wei Chang, Yizhou Sun
Existing approaches for learning word embeddings often assume there are sufficient occurrences for each word in the corpus, such that the representation of words can be accurately estimated from their contexts.
no code implementations • 31 May 2019 • Ziniu Hu, Changjun Fan, Ting Chen, Kai-Wei Chang, Yizhou Sun
With the proposed pre-training procedure, the generic structural information is learned and preserved; thus, the pre-trained GNN requires less labeled data and fewer domain-specific features to achieve high performance on different downstream tasks.
1 code implementation • 11 May 2019 • Ting Chen, Song Bian, Yizhou Sun
In this work, we propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction.
Ranked #1 on Graph Classification on RE-M12K
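The two-part dissection can be sketched on a toy graph with scalar node features; the mean-aggregation filter and sum readout below are illustrative choices, not the paper's specific architectures:

```python
def graph_filter(adj, feats):
    """One round of graph filtering: average each node's neighbors
    (including itself) to produce new node features."""
    n = len(feats)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] + [i]
        out.append(sum(feats[j] for j in nbrs) / len(nbrs))
    return out

def set_readout(feats):
    """A permutation-invariant set function: sum-pool the node features."""
    return sum(feats)

adj = [[0, 1], [1, 0]]  # two nodes joined by an edge
graph_vec = set_readout(graph_filter(adj, [1.0, 3.0]))
```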
1 code implementation • 1 Apr 2019 • Yunsheng Bai, Hao Ding, Yang Qiao, Agustin Marinovic, Ken Gu, Ting Chen, Yizhou Sun, Wei Wang
We introduce a novel approach to graph-level representation learning, which is to embed an entire graph into a vector space where the embeddings of two graphs preserve their graph-graph proximity.
Ranked #1 on Graph Classification on Web
no code implementations • ICLR 2019 • Shun Liao, Ting Chen, Tian Lin, Denny Zhou, Chong Wang
In this paper, we present a novel softmax inference speedup method, Doubly Sparse Softmax (DS-Softmax), that leverages sparse mixture of sparse experts to efficiently retrieve top-k classes.
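A toy sketch of the doubly sparse idea (expert gating is simplified here; this is not the paper's exact algorithm): pick one expert, then take the top-k classes inside it, so the full output softmax is never materialized:

```python
import math

def ds_softmax_topk(logits_by_expert, gate_logits, k=2):
    """Pick the highest-scoring expert, then return the top-k classes
    within that expert only, with probabilities normalized inside it."""
    best = max(range(len(gate_logits)), key=lambda e: gate_logits[e])
    expert = logits_by_expert[best]
    idx = sorted(range(len(expert)), key=lambda c: expert[c], reverse=True)[:k]
    z = sum(math.exp(l) for l in expert)
    return [(best, c, math.exp(expert[c]) / z) for c in idx]
```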
4 code implementations • CVPR 2019 • Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby
In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, and take a step towards bridging the gap between conditional and unconditional GANs.
Ranked #6 on Image Generation on CelebA-HQ 128x128
no code implementations • 27 Oct 2018 • Ting Chen, Xiaohua Zhai, Neil Houlsby
To counter forgetting, we encourage the discriminator to maintain useful representations by adding a self-supervision.
2 code implementations • ICLR 2019 • Ting Chen, Mario Lucic, Neil Houlsby, Sylvain Gelly
Training Generative Adversarial Networks (GANs) is notoriously challenging.
no code implementations • NIPS Workshop CDNNRIA 2018 • Ting Chen, Ji Lin, Tian Lin, Song Han, Chong Wang, Denny Zhou
Modern deep neural networks have a large number of weights, which makes them difficult to deploy on computation-constrained devices such as mobile phones.
3 code implementations • WSDM 2019 • Yunsheng Bai, Hao Ding, Song Bian, Ting Chen, Yizhou Sun, Wei Wang
Our model achieves better generalization on unseen graphs, and in the worst case runs in quadratic time with respect to the number of nodes in two graphs.
Ranked #1 on Graph Similarity on IMDb
no code implementations • ACL 2018 • Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuanjing Huang, Kam-Fai Wong, Xiangying Dai
In this paper, we make a move to build a dialogue system for automatic diagnosis.
1 code implementation • ICML 2018 • Ting Chen, Martin Renqiang Min, Yizhou Sun
Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a "one-hot" encoding of the discrete symbols.
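The stated equivalence is easy to verify directly; the tiny vocabulary and weight values below are made up for illustration:

```python
def one_hot(i, n):
    """One-hot row vector with a 1.0 at position i."""
    return [1.0 if j == i else 0.0 for j in range(n)]

def matvec(vec, matrix):
    """Row-vector times matrix."""
    return [sum(v * row[c] for v, row in zip(vec, matrix))
            for c in range(len(matrix[0]))]

# A 3-symbol vocabulary with 2-dimensional embeddings.
W = [[0.1, 0.2],
     [0.3, 0.4],
     [0.5, 0.6]]

lookup = W[2]                       # direct table lookup
product = matvec(one_hot(2, 3), W)  # equivalent one-hot matmul
```

Practical embedding layers implement the lookup directly, since multiplying by a one-hot vector wastes computation.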
no code implementations • 22 Apr 2018 • Anahita Hosseini, Ting Chen, Wenjun Wu, Yizhou Sun, Majid Sarrafzadeh
To the best of our knowledge, this is the first study to use Heterogeneous Information Network for modeling clinical data and disease diagnosis.
no code implementations • 8 Nov 2017 • Ting Chen, Martin Renqiang Min, Yizhou Sun
Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a "one-hot" encoding of the discrete symbols.
no code implementations • 23 Jun 2017 • Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong
In this paper, we propose a general neural network-based recommendation framework, which subsumes several existing state-of-the-art recommendation algorithms, and address the efficiency issue by investigating sampling strategies in the stochastic gradient descent training for the framework.
no code implementations • 4 Jun 2017 • Ting Chen, Liangjie Hong, Yue Shi, Yizhou Sun
While latent factors of items can be learned effectively from user interaction data, in many cases, such data is not available, especially for newly emerged items.
no code implementations • 24 Dec 2016 • Yupeng Gu, Ting Chen, Yizhou Sun, Bingyu Wang
The problem of ideology detection is to study the latent (political) placement for people, which is traditionally studied on politicians according to their voting behaviors.
Social and Information Networks
1 code implementation • 8 Dec 2016 • Ting Chen, Yizhou Sun
To address the challenges, we propose a task-guided and path-augmented heterogeneous network embedding model.
no code implementations • 26 Aug 2016 • Ting Chen, Lu-An Tang, Yizhou Sun, Zhengzhang Chen, Kai Zhang
Anomaly detection plays an important role in modern data-driven security applications, such as detecting suspicious access to a socket from a process.
no code implementations • 10 Aug 2015 • Ning Chen, Jun Zhu, Jianfei Chen, Ting Chen
Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of both linear and nonlinear SVMs.