no code implementations • 20 Oct 2014 • Chang Liu, Yi Xu
We propose a filter method for unsupervised feature selection based on the Confidence Machine.
no code implementations • 29 Oct 2014 • Xudong Liu, Bin Zhang, Ting Zhang, Chang Liu
Rating prediction is a basic problem in recommender systems, and one of the most widely used methods is Factorization Machines (FM).
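For context, an FM scores a feature vector with a global bias, linear terms, and pairwise interactions factorized through latent vectors; the pairwise sum can be computed in O(kn) time with a well-known algebraic identity. A minimal sketch (the weights `w0`, `w`, `V` below are illustrative, not taken from the paper):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization machine: y = w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j.
    Uses the identity sum_{i<j} <V_i, V_j> x_i x_j
      = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]."""
    linear = w0 + x @ w
    xv = x @ V                                    # shape (k,): per-factor weighted sums
    pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return linear + pairwise

# Toy check against the naive O(n^2) pairwise sum.
rng = np.random.default_rng(0)
n, k = 5, 3
x = rng.normal(size=n)
w0, w, V = 0.1, rng.normal(size=n), rng.normal(size=(n, k))
naive = w0 + x @ w + sum(V[i] @ V[j] * x[i] * x[j]
                         for i in range(n) for j in range(i + 1, n))
assert np.isclose(fm_predict(x, w0, w, V), naive)
```

The vectorized trick is what makes FMs practical on the sparse, high-dimensional inputs typical of rating prediction.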
1 code implementation • 17 Jun 2016 • Chang Liu, Yu Cao, Yan Luo, Guanling Chen, Vinod Vokkarane, Yunsheng Ma
We applied our proposed approach to two real-world food image data sets (UEC-256 and Food-101) and achieved impressive results.
no code implementations • 7 Aug 2016 • Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea
The effectiveness of supervised learning techniques has made them ubiquitous in research and practice.
no code implementations • NeurIPS 2016 • Xinyun Chen, Chang Liu, Richard Shin, Dawn Song, Mingcheng Chen
Automatic translation from natural language descriptions into programs is a longstanding challenging problem.
2 code implementations • 8 Nov 2016 • Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song
In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels.
no code implementations • NeurIPS 2016 • Chang Liu, Jun Zhu, Yang Song
We propose two stochastic gradient MCMC methods for sampling from Bayesian posterior distributions defined on Riemann manifolds with a known geodesic flow, e.g., hyperspheres.
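On a hypersphere the geodesic flow has a closed form, which is what makes such updates exact: starting at a point x with tangent velocity v, the geodesic is x(t) = x·cos(|v|t) + (v/|v|)·sin(|v|t). A quick numerical sketch of just this flow (not the authors' full sampler):

```python
import numpy as np

def sphere_geodesic(x, v, t):
    """Geodesic on the unit sphere from x (|x| = 1) with tangent velocity v (v @ x = 0)."""
    s = np.linalg.norm(v)
    return x * np.cos(s * t) + (v / s) * np.sin(s * t)

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])               # tangent to the sphere at x
y = sphere_geodesic(x, v, 0.3)
assert np.isclose(np.linalg.norm(y), 1.0)   # the flow never leaves the sphere
```

Because the update stays exactly on the manifold, no projection or reparameterization error accumulates.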
no code implementations • 11 Dec 2016 • Xiaolin Wu, Xi Zhang, Chang Liu
This article is a sequel to our earlier work [25].
no code implementations • 18 Feb 2017 • Chang Liu, Fuchun Sun, Changhu Wang, Feng Wang, Alan Yuille
In this way, the sequential representation of an image can be naturally translated to a sequence of words, as the target sequence of the RNN model.
2 code implementations • 22 Feb 2017 • Feng Wang, Xiang Xiang, Chang Liu, Trac D. Tran, Austin Reiter, Gregory D. Hager, Harry Quon, Jian Cheng, Alan L. Yuille
In this way, the expression intensity regression task can benefit from the rich feature representations trained on a huge amount of data for face verification.
no code implementations • 28 Mar 2017 • Shun Yang, Wenshuo Wang, Chang Liu, Kevin Deng, J. Karl Hedrick
We collect a large set of data using The Open Racing Car Simulator (TORCS) and classify the image features into three categories (sky-related, roadside-related, and road-related features). We then design two experimental frameworks to investigate the importance of each single feature for training a CNN controller. The first framework uses the training data with all three features included to train a controller, which is then tested with data that has one feature removed to evaluate the feature's effects.
no code implementations • ICLR 2018 • Xinyun Chen, Chang Liu, Dawn Song
In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500x longer than the training samples.
no code implementations • 23 Jun 2017 • Wenshuo Wang, Chang Liu, Ding Zhao
For projects that cost millions of dollars, it is critical to determine the right amount of data needed.
no code implementations • 20 Jul 2017 • Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry, Thomas L. Griffiths, Anca D. Dragan
In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users' objectives as they go.
1 code implementation • 22 Aug 2017 • Xiaojun Xu, Chang Liu, Qian Feng, Heng Yin, Le Song, Dawn Song
The problem of cross-platform binary code similarity detection aims to detect whether two binary functions from different platforms are similar.
no code implementations • 11 Sep 2017 • Sattar Vakili, Qing Zhao, Chang Liu, Chen-Nee Chuah
We consider the problem of detecting a few targets among a large number of hierarchical data streams.
no code implementations • CVPR 2018 • Xiaojun Xu, Xinyun Chen, Chang Liu, Anna Rohrbach, Trevor Darrell, Dawn Song
Our work sheds new light on understanding adversarial attacks on vision systems which have a language component and shows that attention, bounding box localization, and compositional internal structures are vulnerable to adversarial attacks.
no code implementations • ICML 2018 • Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, Bo Zhang
Stein variational gradient descent (SVGD) is a recently proposed particle-based Bayesian inference method, which has attracted a lot of interest due to its remarkable approximation ability and particle efficiency compared to traditional variational inference and Markov Chain Monte Carlo methods.
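The SVGD update moves a set of particles with a kernel-weighted average of the target's score plus a repulsive kernel-gradient term that keeps particles spread out. A minimal 1-D sketch against a standard-normal target (step size, bandwidth, and particle count here are illustrative):

```python
import numpy as np

def svgd_step(x, score, eps=0.2, h=1.0):
    """One SVGD update for 1-D particles with RBF kernel k(a, b) = exp(-(a-b)^2 / (2h)):
    phi(x_i) = (1/n) * sum_j [k(x_j, x_i) * score(x_j) + d/dx_j k(x_j, x_i)]."""
    diff = x[:, None] - x[None, :]              # diff[i, j] = x_i - x_j
    K = np.exp(-diff ** 2 / (2 * h))            # symmetric kernel matrix
    # attraction (driven by the score) + repulsion (kernel gradient)
    phi = (K @ score(x) + (diff / h * K).sum(axis=1)) / len(x)
    return x + eps * phi

# Toy run: target N(0, 1), so score(x) = -x; particles start far from the mode.
x = np.linspace(2.0, 3.0, 20)
for _ in range(300):
    x = svgd_step(x, lambda t: -t)
```

After the run the particle cloud has drifted toward the target mean while staying dispersed, which is the "particle efficiency" the abstract refers to.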
13 code implementations • ICLR 2018 • Xiaojun Xu, Chang Liu, Dawn Song
Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations.
1 code implementation • 30 Nov 2017 • Chang Liu, Jun Zhu
The benefits are twofold: (i) for inference tasks in Euclidean spaces, RSVGD has the advantage over SVGD of utilizing information geometry, and (ii) for inference tasks on Riemann manifolds, RSVGD brings the unique advantages of SVGD to the Riemannian world.
2 code implementations • 15 Dec 2017 • Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song
In this work, we consider a new type of attack, called a backdoor attack, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that they can easily circumvent the system by leveraging the backdoor.
no code implementations • 19 Jan 2018 • He-Liang Huang, Xi-Lin Wang, Peter P. Rohde, Yi-Han Luo, You-Wei Zhao, Chang Liu, Li Li, Nai-Le Liu, Chao-Yang Lu, Jian-Wei Pan
Topological data analysis offers a robust way to extract useful information from noisy, unstructured data by identifying its underlying structure.
no code implementations • 6 Feb 2018 • Chang Liu, Jessica B. Hamrick, Jaime F. Fisac, Anca D. Dragan, J. Karl Hedrick, S. Shankar Sastry, Thomas L. Griffiths
The study of human-robot interaction is fundamental to the design and use of robotics in real-world applications.
no code implementations • ICLR 2018 • Xinyun Chen, Chang Liu, Dawn Song
We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step.
no code implementations • 12 Feb 2018 • Chang Liu, Xiangrui Zeng, Ruogu Lin, Xiaodan Liang, Zachary Freyberg, Eric Xing, Min Xu
Cellular Electron Cryo-Tomography (CECT) is a powerful imaging technique for the 3D visualization of cellular structure and organization at submolecular resolution.
no code implementations • 14 Feb 2018 • Jaime F. Fisac, Chang Liu, Jessica B. Hamrick, S. Shankar Sastry, J. Karl Hedrick, Thomas L. Griffiths, Anca D. Dragan
We introduce $t$-predictability: a measure that quantifies the accuracy and confidence with which human observers can predict the remaining robot plan from the overall task goal and the observed initial $t$ actions in the plan.
no code implementations • 20 Feb 2018 • Qiuyuan Huang, Li Deng, Dapeng Wu, Chang Liu, Xiaodong He
This paper proposes a new architecture - Attentive Tensor Product Learning (ATPL) - to represent grammatical structures in deep learning models.
no code implementations • 22 Feb 2018 • Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models, a common type of machine-learning model.
no code implementations • 5 Mar 2018 • Chang Liu, Xiaolin Wu, Xiao Shu
All existing image enhancement methods, such as HDR tone mapping, cannot recover A/D quantization losses due to insufficient or excessive lighting (underflow and overflow problems).
1 code implementation • 1 Apr 2018 • Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms.
2 code implementations • 13 May 2018 • Qi-Zhi Cai, Min Du, Chang Liu, Dawn Song
The existence of adversarial examples hinders such applications.
no code implementations • 16 May 2018 • Chang Liu, Xiangrui Zeng, Kaiwen Wang, Qiang Guo, Min Xu
Cellular Electron Cryo-Tomography (CECT) is a powerful 3D imaging tool for studying the native structure and organization of macromolecules inside single cells.
1 code implementation • 4 Jul 2018 • Chang Liu, Jingwei Zhuo, Pengyu Cheng, Ruiyi Zhang, Jun Zhu, Lawrence Carin
Particle-based variational inference methods (ParVIs) have gained attention in the Bayesian inference literature, for their capacity to yield flexible and accurate approximations.
no code implementations • ECCV 2018 • Chang Liu, Wei Ke, Fei Qin, Qixiang Ye
Hinted by this, we formalize a Linear Span framework and propose the Linear Span Network (LSN), modified by Linear Span Units (LSUs), which minimizes the reconstruction error of the convolutional network.
1 code implementation • 5 Nov 2018 • Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. Grosges,
Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. Gregory Pauloski,
Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. Wang,
Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze
This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.
no code implementations • 12 Nov 2018 • Sophia Collet, Robert Dadashi, Zahi N. Karam, Chang Liu, Parinaz Sobhani, Yevgeniy Vahlis, Ji Chao Zhang
In this work, two approaches for private model aggregation are proposed that enable the transfer of knowledge from existing models trained on other companies' datasets to a new company with limited labeled data while protecting each client company's underlying individual sensitive information.
no code implementations • 12 Dec 2018 • Zhe Li, Caiwen Ding, Siyue Wang, Wujie Wen, Youwei Zhuo, Chang Liu, Qinru Qiu, Wenyao Xu, Xue Lin, Xuehai Qian, Yanzhi Wang
It is a challenging task to have real-time, efficient, and accurate hardware RNN implementations because of the high sensitivity to imprecision accumulation and the requirement of special activation function implementations.
Automatic Speech Recognition (ASR) +3
no code implementations • 21 Dec 2018 • Chang Liu, Zhaowei Shang, Anyong Qin
To address this issue, here we propose a novel deep residual learning model that combines the dilated residual convolution and multi-scale convolution groups.
1 code implementation • 1 Feb 2019 • Chang Liu, Jingwei Zhuo, Jun Zhu
It is known that the Langevin dynamics used in MCMC is the gradient flow of the KL divergence on the Wasserstein space, which helps convergence analysis and inspires recent particle-based variational inference methods (ParVIs).
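The Langevin dynamics in question is the SDE dx = ∇log p(x) dt + √2 dW; its Euler discretization gives the unadjusted Langevin algorithm, and the distribution of its iterates follows the Wasserstein gradient flow of KL(q‖p). A toy sketch sampling a standard normal (step size and chain length are illustrative):

```python
import numpy as np

def ula(score, x0, eps=0.05, steps=20000, seed=0):
    """Unadjusted Langevin algorithm: x <- x + eps * score(x) + sqrt(2 * eps) * noise."""
    rng = np.random.default_rng(seed)
    x, samples = float(x0), []
    for _ in range(steps):
        x = x + eps * score(x) + np.sqrt(2 * eps) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)

# Target N(0, 1) has score(x) = -x; discard burn-in before checking moments.
s = ula(lambda x: -x, x0=3.0)
kept = s[2000:]
```

The chain's empirical mean and standard deviation approach 0 and 1, up to the O(eps) discretization bias that motivates the Metropolis-adjusted variants.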
no code implementations • 10 Mar 2019 • Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, Timothy Sherwood, Yuan Xie
As neural networks continue their reach into nearly every aspect of software operations, the details of those networks become an increasingly sensitive subject.
Cryptography and Security • Hardware Architecture
no code implementations • 29 Mar 2019 • Weiwei Zong, Joon Lee, Chang Liu, Eric Carver, Aharon Feldman, Branislava Janic, Mohamed Elshaikh, Milan Pantelic, David Hearshen, Indrin Chetty, Benjamin Movsas, Ning Wen
Deep learning models have had a great success in disease classifications using large data pools of skin cancer images or lung X-rays.
no code implementations • 4 Apr 2019 • Zhenzhen Dai, Eric Carver, Chang Liu, Joon Lee, Aharon Feldman, Weiwei Zong, Milan Pantelic, Mohamed Elshaikh, Ning Wen
Prostate cancer (PCa) is the most common cancer in men in the United States.
1 code implementation • CVPR 2019 • Fang Wan, Chang Liu, Wei Ke, Xiangyang Ji, Jianbin Jiao, Qixiang Ye
Weakly supervised object detection (WSOD) is a challenging task when provided with image category supervision but required to simultaneously learn object locations and object detectors.
no code implementations • ICLR 2019 • Xinyun Chen, Chang Liu, Dawn Song
Most existing neural program synthesis approaches employ an encoder-decoder architecture, which uses an encoder to compute the embedding of the given input-output examples, as well as a decoder to generate the program from the embedding following a given syntax.
1 code implementation • 13 May 2019 • Huichu Zhang, Siyuan Feng, Chang Liu, Yaoyao Ding, Yichen Zhu, Zihan Zhou, Wei-Nan Zhang, Yong Yu, Haiming Jin, Zhenhui Li
The most commonly used open-source traffic simulator, SUMO, is however not scalable to large road networks and large traffic flows, which hinders the study of reinforcement learning on traffic scenarios.
Multi-agent Reinforcement Learning • reinforcement-learning +1
2 code implementations • 17 May 2019 • Hao Liu, Chang Liu, Jason T. L. Wang, Haimin Wang
The essence of our approach is to model data samples in an AR as time series and use LSTMs to capture temporal information of the data samples.
no code implementations • CVPR 2019 • Chang Liu, Fang Wan, Wei Ke, Zhuowei Xiao, Yuan Yao, Xiaosong Zhang, Qixiang Ye
The weight sharing scheme and spatial pooling operations in Convolutional Neural Networks (CNNs) introduce semantic correlation to neighboring pixels on feature maps and therefore deteriorate their pixel-wise classification performance.
no code implementations • WS 2019 • Xinze Guo, Chang Liu, Xiaolong Li, Yiran Wang, Guoliang Li, Feng Wang, Zhitao Xu, Liuyi Yang, Li Ma, Changliang Li
This paper describes the Kingsoft AI Lab's submission to the WMT2019 news translation shared task.
no code implementations • 27 Aug 2019 • Eddie S. J. Du, Chang Liu, David H. Wayne
The ability to accurately predict the fit of fashion items and recommend the correct size is key to reducing merchandise returns in e-commerce.
no code implementations • 30 Aug 2019 • Chang Liu, Yi Dong, Han Yu, Zhiqi Shen, Zhanning Gao, Pan Wang, Changgong Zhang, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Video contents have become a critical tool for promoting products in E-commerce.
4 code implementations • NeurIPS 2019 • Xiaosong Zhang, Fang Wan, Chang Liu, Rongrong Ji, Qixiang Ye
In this study, we propose a learning-to-match approach to break IoU restriction, allowing objects to match anchors in a flexible manner.
Ranked #125 on Object Detection on COCO test-dev
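The "IoU restriction" the abstract refers to is the standard hand-crafted rule that assigns an object the anchors whose intersection-over-union with its ground-truth box exceeds a fixed threshold. A minimal sketch of that baseline criterion (which the paper's learning-to-match approach replaces; the boxes and threshold below are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Hand-crafted assignment: keep anchors with IoU >= 0.5 against the ground truth.
anchors = [(0, 0, 10, 10), (5, 5, 15, 15), (20, 20, 30, 30)]
gt = (4, 4, 14, 14)
matched = [a for a in anchors if iou(a, gt) >= 0.5]   # -> [(5, 5, 15, 15)]
```

Objects with slender or unusual shapes match few anchors under this rule, which motivates learning the matching instead.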
no code implementations • 13 Sep 2019 • Xianlong Zeng, Soheil Moosavinasab, En-Ju D Lin, Simon Lin, Razvan Bunescu, Chang Liu
Efficient representation of patients is very important in the healthcare domain and can help with many tasks such as medical risk prediction.
1 code implementation • 24 Sep 2019 • Dongling Xiao, Chang Liu, Qi. Wang, Chao Wang, Xin Zhang
For general supervised deep learning classification algorithms, the pixel-by-pixel algorithm achieves precise yet inefficient classification with a small number of labeled pixels, whereas the pixel mapping algorithm achieves efficient yet edge-rough classification with more prior labels required.
no code implementations • 25 Sep 2019 • Chang Liu, Yanan Xu, Yanmin Zhu
In this paper, we study the problem of inferring fine-grained bike demands anywhere in a new city before the deployment of bikes.
no code implementations • 5 Oct 2019 • Pengyu Cheng, Chang Liu, Chunyuan Li, Dinghan Shen, Ricardo Henao, Lawrence Carin
The Straight-Through (ST) estimator is a widely used technique for back-propagating gradients through discrete random variables.
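The ST trick is: apply the hard discretization in the forward pass, but pretend it was the identity in the backward pass so gradients flow to the underlying continuous values. A dependency-free sketch of the forward/backward pair (the function names are illustrative):

```python
import numpy as np

def st_binarize_forward(logits):
    """Forward pass: hard 0/1 binarization, b = 1[logits > 0]."""
    return (logits > 0).astype(float)

def st_binarize_backward(grad_b):
    """Backward pass: straight-through, i.e. db/dlogits is treated as identity."""
    return grad_b

x = np.array([-1.5, 0.2, 3.0])
b = st_binarize_forward(x)                              # -> [0., 1., 1.]
g = st_binarize_backward(np.array([0.1, -0.2, 0.3]))    # gradient passes unchanged
```

The true gradient of the threshold is zero almost everywhere, so without the straight-through surrogate the discrete variable would block all learning signal.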
no code implementations • 16 Oct 2019 • Wenqiang Xu, Yanjun Fu, Yuchen Luo, Chang Liu, Cewu Lu
Fine-grained recognition task deals with sub-category classification problem, which is important for real-world applications.
1 code implementation • International Conference on Computer Vision Workshops 2019 • Dawei Du, Pengfei Zhu, Longyin Wen, Xiao Bian, Haibin Lin, QinGhua Hu, Tao Peng, Jiayu Zheng, Xinyao Wang, Yue Zhang, Liefeng Bo, Hailin Shi, Rui Zhu, Aashish Kumar, Aijin Li, Almaz Zinollayev, Anuar Askergaliyev, Arne Schumann, Binjie Mao, Byeongwon Lee, Chang Liu, Changrui Chen, Chunhong Pan, Chunlei Huo, Da Yu, Dechun Cong, Dening Zeng, Dheeraj Reddy Pailla, Di Li, Dong Wang, Donghyeon Cho, Dongyu Zhang, Furui Bai, George Jose, Guangyu Gao, Guizhong Liu, Haitao Xiong, Hao Qi, Haoran Wang, Heqian Qiu, Hongliang Li, Huchuan Lu, Ildoo Kim, Jaekyum Kim, Jane Shen, Jihoon Lee, Jing Ge, Jingjing Xu, Jingkai Zhou, Jonas Meier, Jun Won Choi, Junhao Hu, Junyi Zhang, Junying Huang, Kaiqi Huang, Keyang Wang, Lars Sommer, Lei Jin, Lei Zhang
Results of 33 object detection algorithms are presented.
no code implementations • 1 Nov 2019 • Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Yu Kang, Qi Guo, Zidong Du, Yunji Chen
Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers.
no code implementations • 23 Dec 2019 • Minoru Kusaba, Chang Liu, Yukinori Koyama, Kiyoyuki Terakura, Ryo Yoshida
In 1869, the first draft of the periodic table was published by Russian chemist Dmitri Mendeleev.
1 code implementation • 2 Jan 2020 • Dezhao Luo, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, Weiping Wang
As a proxy task, it converts rich self-supervised representations into video clip operations (options), which enhances the flexibility and reduces the complexity of representation learning.
Ranked #11 on Self-supervised Video Retrieval on HMDB51
no code implementations • 3 Feb 2020 • Di Huang, Xishan Zhang, Rui Zhang, Tian Zhi, Deyuan He, Jiaming Guo, Chang Liu, Qi Guo, Zidong Du, Shaoli Liu, Tianshi Chen, Yunji Chen
In this paper, we propose a novel Decomposable Winograd Method (DWM), which breaks through the limitation of the original Winograd minimal filtering algorithm to wide and general convolutions.
3 code implementations • 22 Feb 2020 • Hao Liu, Chang Liu, Jason T. L. Wang, Haimin Wang
We present two recurrent neural networks (RNNs), one based on gated recurrent units and the other based on long short-term memory, for predicting whether an active region (AR) that produces an M- or X-class flare will also produce a coronal mass ejection (CME).
no code implementations • 28 Feb 2020 • Yao Mu, Shengbo Eben Li, Chang Liu, Qi Sun, Bingbing Nie, Bo Cheng, Baiyu Peng
This paper presents a mixed reinforcement learning (mixed RL) algorithm by simultaneously using dual representations of environmental dynamics to search the optimal policy with the purpose of improving both learning accuracy and training speed.
1 code implementation • 10 Mar 2020 • Shen Gao, Xiuying Chen, Chang Liu, Li Liu, Dongyan Zhao, Rui Yan
Stickers with vivid and engaging expressions are becoming increasingly popular in online messaging apps, and some works are dedicated to automatically selecting sticker responses by matching text labels of stickers with previous utterances.
2 code implementations • CVPR 2020 • Shi-Xue Zhang, Xiaobin Zhu, Jie-Bo Hou, Chang Liu, Chun Yang, Hongfa Wang, Xu-Cheng Yin
In this paper, we propose a novel unified relational reasoning graph network for arbitrary shape text detection.
no code implementations • 19 Apr 2020 • Chao Qu, Hui Li, Chang Liu, Junwu Xiong, James Zhang, Wei Chu, Weiqiang Wang, Yuan Qi, Le Song
We propose a collaborative multi-agent reinforcement learning algorithm named variational policy propagation (VPP) to learn a joint policy through the interactions over agents.
Multi-agent Reinforcement Learning • reinforcement-learning +2
1 code implementation • CVPR 2020 • Guanlin Li, Shuya Ding, Jun Luo, Chang Liu
Whereas adversarial training is employed as the main defence strategy against specific adversarial samples, it has limited generalization capability and incurs excessive time complexity.
no code implementations • 8 May 2020 • Hao Liu, Yan Xu, Jiasheng Wang, Ju Jing, Chang Liu, Jason T. L. Wang, Haimin Wang
By learning the latent patterns in the training data prepared by the physics-based ME tool, the proposed CNN method is able to infer vector magnetic fields from the Stokes profiles of GST/NIRIS.
Solar and Stellar Astrophysics
5 code implementations • 11 May 2020 • Lingbo Yang, Chang Liu, Pan Wang, Shanshe Wang, Peiran Ren, Siwei Ma, Wen Gao
Existing face restoration research typically relies on either the degradation prior or explicit guidance labels for training, which often results in limited generalization ability over real-world images with heterogeneous degradations and rich background contents.
10 code implementations • ECCV 2020 • Mingqing Xiao, Shuxin Zheng, Chang Liu, Yaolong Wang, Di He, Guolin Ke, Jiang Bian, Zhouchen Lin, Tie-Yan Liu
High-resolution digital images are usually downscaled to fit various display screens or to save the cost of storage and bandwidth, while post-upscaling is adopted to recover the original resolutions or the details in zoom-in images.
1 code implementation • 17 May 2020 • Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, Rui Yan
We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language.
no code implementations • CVPR 2020 • Gaurav Mittal, Chang Liu, Nikolaos Karianakis, Victor Fragoso, Mei Chen, Yun Fu
To reduce HPO time, we present HyperSTAR (System for Task Aware Hyperparameter Recommendation), a task-aware method to warm-start HPO for deep neural networks.
no code implementations • 26 May 2020 • Lingbo Yang, Pan Wang, Chang Liu, Zhanning Gao, Peiran Ren, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Xian-Sheng Hua, Wen Gao
Human pose transfer (HPT) is an emerging research topic with huge potential in fashion design, media production, online advertising and virtual reality.
no code implementations • 4 Jun 2020 • Weihao Jiang, Zhaozhi Xie, Yaoyi Li, Chang Liu, Hongtao Lu
Many of these applications need to perform real-time and efficient prediction for semantic segmentation with a lightweight network.
no code implementations • 20 Jun 2020 • Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
This paper investigates capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks.
1 code implementation • 20 Jun 2020 • Yuan Yao, Chang Liu, Dezhao Luo, Yu Zhou, Qixiang Ye
The generative perception model acts as a feature decoder to focus on comprehending high temporal resolution and short-term representation by introducing a motion-attention mechanism.
no code implementations • 22 Jun 2020 • Gelu Nita, Manolis Georgoulis, Irina Kitiashvili, Viacheslav Sadykov, Enrico Camporeale, Alexander Kosovichev, Haimin Wang, Vincent Oria, Jason Wang, Rafal Angryk, Berkay Aydin, Azim Ahmadzadeh, Xiaoli Bai, Timothy Bastian, Soukaina Filali Boubrahimi, Bin Chen, Alisdair Davey, Sheldon Fereira, Gregory Fleishman, Dale Gary, Andrew Gerrard, Gregory Hellbourg, Katherine Herbert, Jack Ireland, Egor Illarionov, Natsuha Kuroda, Qin Li, Chang Liu, Yuexin Liu, Hyomin Kim, Dustin Kempton, Ruizhe Ma, Petrus Martens, Ryan McGranaghan, Edward Semones, John Stefan, Andrey Stejko, Yaireska Collado-Vega, Meiqi Wang, Yan Xu, Sijie Yu
The authors of this white paper met on 16-17 January 2020 at the New Jersey Institute of Technology, Newark, NJ, for a 2-day workshop that brought together a group of heliophysicists, data providers, expert modelers, and computer/data scientists.
no code implementations • 22 Jun 2020 • Yaolong Wang, Mingqing Xiao, Chang Liu, Shuxin Zheng, Tie-Yan Liu
Specifically, ILC introduces an invertible encoding module to replace the encoder-decoder structure to produce a low-dimensional informative latent representation, while transforming the lost information into an auxiliary latent variable that won't be further coded or stored.
1 code implementation • 6 Jul 2020 • Yifei Zhang, Chang Liu, Yu Zhou, Wei Wang, Weiping Wang, Qixiang Ye
In this work, we propose a novel clustering based method, which, by iteratively excluding class inconsistent samples during progressive cluster formation, alleviates the impact of noise samples in a simple-yet-effective manner.
1 code implementation • 7 Jul 2020 • Yunjie Tian, Chang Liu, Lingxi Xie, Jianbin Jiao, Qixiang Ye
The search cost of neural architecture search (NAS) has been largely reduced by weight-sharing methods.
1 code implementation • 17 Jul 2020 • Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu
However, it remains challenging to determine which method is suitable for a given application since they are built with certain priors or bias.
1 code implementation • 2 Aug 2020 • Guanlin Li, Chang Liu, Han Yu, Yanhong Fan, Libang Zhang, Zongyue Wang, Meiqin Wang
Information about system characteristics such as power consumption, electromagnetic leaks and sound can be exploited by the side-channel attack to compromise the system.
1 code implementation • ECCV 2020 • Boyu Yang, Chang Liu, Bohao Li, Jianbin Jiao, Qixiang Ye
Few-shot segmentation is challenging because objects within the support and query images could significantly differ in appearance and pose.
4 code implementations • 27 Aug 2020 • Haodi Jiang, Jiasheng Wang, Chang Liu, Ju Jing, Hao Liu, Jason T. L. Wang, Haimin Wang
Deep learning has drawn a lot of interest in recent years due to its effectiveness in processing big and complex observational data gathered from diverse instruments.
1 code implementation • 3 Sep 2020 • Chang Liu, Xuemeng Liu, Derrick Wing Kwan Ng, Jinhong Yuan
To this end, we first develop a versatile DReL-based channel estimation framework where a deep residual network (DRN)-based MMSE estimator is derived in terms of Bayesian philosophy.
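As background for the MMSE estimator mentioned above, the classical Bayesian form (standard textbook material, not this paper's specific DRN-based derivation) is the posterior mean of the channel given the observation; for jointly Gaussian variables it reduces to a linear estimator:

```latex
% Bayesian MMSE estimator: the posterior mean of the channel h given observation y
\hat{h}_{\mathrm{MMSE}}(y) = \mathbb{E}[h \mid y] = \int h \, p(h \mid y) \, \mathrm{d}h,
\qquad p(h \mid y) = \frac{p(y \mid h)\, p(h)}{\int p(y \mid h')\, p(h')\, \mathrm{d}h'}.

% For jointly Gaussian (h, y), the posterior mean is linear in y:
\hat{h}_{\mathrm{MMSE}}(y) = \mu_h + C_{hy} C_{yy}^{-1} \left( y - \mu_y \right).
```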
1 code implementation • 4 Sep 2020 • Yasser Abduallah, Jason T. L. Wang, Yang Nie, Chang Liu, Haimin Wang
Solar flare prediction plays an important role in understanding and forecasting space weather.
no code implementations • 4 Sep 2020 • Chang Liu, Jiahui Sun, Haiming Jin, Meng Ai, Qun Li, Cheng Zhang, Kehua Sheng, Guobin Wu, XiaoHu Qie, Xinbing Wang
Thus, in this paper, we exploit adaptive dispatching intervals to boost the platform's profit under a guarantee of the maximum passenger waiting time.
no code implementations • 11 Sep 2020 • Jianan Li, Jimei Yang, Jianming Zhang, Chang Liu, Christina Wang, Tingfa Xu
In this paper, we introduce Attribute-conditioned Layout GAN to incorporate the attributes of design elements for graphic layout generation by forcing both the generator and the discriminator to meet attribute conditions.
no code implementations • 11 Sep 2020 • Chang Liu, Zhiqiang Wei, Derrick Wing Kwan Ng, Jinhong Yuan, Ying-Chang Liang
To eliminate the requirement of channel estimation and to improve the system performance, in this paper, we adopt a deep transfer learning (DTL) approach to implicitly extract the features of channel and directly recover tag symbols.
no code implementations • 16 Sep 2020 • Chang Liu, Weijie Yuan, Zhiqiang Wei, Xuemeng Liu, Derrick Wing Kwan Ng
Unmanned aerial vehicle (UAV)-assisted communication becomes a promising technique to realize the beyond fifth generation (5G) wireless networks, due to the high mobility and maneuverability of UAVs which can adapt to heterogeneous requirements of different applications.
no code implementations • 17 Sep 2020 • Chang Liu, Huichu Zhang, Wei-Nan Zhang, Guanjie Zheng, Yong Yu
The heavy traffic congestion problem has always been a concern for modern cities.
no code implementations • 8 Oct 2020 • Yunfan Jiang, Jingjing Si, Rui Zhang, Godwin Enemali, Bin Zhou, Hugh McCann, Chang Liu
Chemical Species Tomography (CST) has been widely used for in situ imaging of critical parameters, e.g., species concentration and temperature, in reactive flows.
1 code implementation • NeurIPS 2021 • Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output.
no code implementations • 4 Nov 2020 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid spurious correlation, we propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
1 code implementation • 8 Nov 2020 • Chang Liu, Yunjie Tian, Jianbin Jiao, Qixiang Ye
Conventional networks for object skeleton detection are usually hand-crafted.
no code implementations • 10 Nov 2020 • Chang Liu, Xuemeng Liu, Zhiqiang Wei, Derrick Wing Kwan Ng, Jinhong Yuan, Ying-Chang Liang
Existing tag signal detection algorithms inevitably suffer from a high bit error rate (BER) due to the difficulties in estimating the channel state information (CSI).
no code implementations • 10 Nov 2020 • Chang Liu, Wenzhong Yan, Ankur Mehta
Based on an equivalent plate model, we develop and validate analytical formulas for the behavioral specifications of OADLC mechanisms; the analytical formulas can be described as expressions of design parameters.
Robotics
no code implementations • 19 Nov 2020 • Yuanqiang Cai, Chang Liu, Weiqiang Wang, Qixiang Ye
With only bounding-box annotations in the spatial domain, existing video scene text detection (VSTD) benchmarks lack temporal relation of text instances among video frames, which hinders the development of video text-related applications.
no code implementations • 20 Nov 2020 • Godwin Enemali, Rui Zhang, Hugh McCann, Chang Liu
Although a fully parallel data acquisition (DAQ) and signal processing system can achieve these functionalities with maximised temporal response, it leads to a highly complex, expensive and power-consuming instrumentation system with high potential for inconsistency between the sampled beams due to the electronics alone.
no code implementations • 29 Nov 2020 • Yan He, Jifang Qiu, Chang Liu, Yue Liu, Jian Wu
The latest theoretical advances in the field of unlimited sampling framework (USF) show the potential to avoid clipping problems of analog-to-digital converters (ADC).
1 code implementation • 1 Dec 2020 • Chang Liu, Xuemeng Liu, Derrick Wing Kwan Ng, Jinhong Yuan
Channel estimation is of great importance in realizing practical intelligent reflecting surface-assisted multi-user communication (IRS-MC) systems.
no code implementations • SEMEVAL 2020 • Chang Liu, Dong Yu
We demonstrate the effectiveness of our approaches, achieving an F1 of 0.95 on subtask 1 while using only a subset of the given training set to fine-tune the BERT model; our official submission achieves an F1 of 0.802, which ranks us 16th in the competition.
no code implementations • 2 Dec 2020 • Tangqing Cao, Wenqi Guo, Wang Lu, Yunfei Xue, Wenjun Lu, Jing Su, Christian H. Liebscher, Chang Liu, Gerhard Dehm
Such a softening behavior can be related to the interaction of dislocations with short-range clustering.
Materials Science
no code implementations • 7 Dec 2020 • Chang Liu, Yixing Huang, Joscha Maier, Laura Klein, Marc Kachelrieß, Andreas Maier
For organ-specific AEC, a preliminary CT reconstruction is necessary to estimate organ shapes for dose optimization, where only a few projections are allowed for real-time reconstruction.
2 code implementations • 16 Dec 2020 • Chang Liu, Zetian Jiang, Runzhong Wang, Junchi Yan, Lingxiao Huang, Pinyan Lu
As such, the agent can finish inlier matching in a timely manner once the affinity score stops growing; otherwise, an additional parameter, i.e., the number of inliers, is needed to avoid matching outliers.
no code implementations • 1 Jan 2021 • Chang Liu, Kai Li, Yun Fu
Unsupervised domain adaptation (UDA) aims to make predictions for unlabeled data in a target domain using labeled data available from a source domain.
no code implementations • 4 Jan 2021 • Ya'nan Wang, Zhuqing Jiang, Chang Liu, Kai Li, Aidong Men, Haiying Wang
This paper proposes a neural network for multi-level low-light image enhancement, which is user-friendly to meet various requirements by selecting different images as brightness reference.
no code implementations • 6 Jan 2021 • Bernhard Kliem, Jeongwoo Lee, Rui Liu, Stephen M. White, Chang Liu, Satoshi Masuda
We present evidence that a magnetic flux rope was formed before a coronal mass ejection (CME) and its associated long-duration flare during a pair of preceding confined eruptions and associated impulsive flares in a compound event in NOAA Active Region 12371.
Solar and Stellar Astrophysics
no code implementations • 20 Jan 2021 • Zhuqing Jiang, Chang Liu, Ya'nan Wang, Kai Li, Aidong Men, Haiying Wang, Haiyong Luo
With the goal of tuning up the brightness, low-light image enhancement enjoys numerous applications, such as surveillance, remote sensing and computational photography.
no code implementations • 22 Jan 2021 • Chang Liu, Henghui Ding, Xudong Jiang
In this paper, we argue that recovering these microscopic details relies on low-level but high-definition texture features.
no code implementations • 10 Feb 2021 • Rui Zhang, Jingjing Si, Godwin Enemali, Yong Bao, Chang Liu
The proposed scheme was validated both numerically and experimentally using a CST sensor with 32 laser beams and a variety of computational tomographic algorithms.
1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu
Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.
no code implementations • 3 Mar 2021 • Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu
Being expensive and time-consuming to collect massive COVID-19 image samples to train deep classification models, transfer learning is a promising approach by transferring knowledge from the abundant typical pneumonia datasets for COVID-19 image classification.
no code implementations • 5 Mar 2021 • Chang Liu, Xiaoguang Li, Guohao Cai, Zhenhua Dong, Hong Zhu, Lifeng Shang
It is still an open question to leverage various types of information under the BERT framework.
no code implementations • 6 Mar 2021 • Jeremy Beauchamp, Razvan Bunescu, Cindy Marling, Zhongen Li, Chang Liu
In this work, we invert the "what-if" scenario and introduce a similar architecture based on chaining two LSTMs that can be trained to make either insulin or carbohydrate recommendations aimed at reaching a desired BG level in the future.
2 code implementations • CVPR 2021 • Bohao Li, Boyu Yang, Chang Liu, Feng Liu, Rongrong Ji, Qixiang Ye
Few-shot object detection has made substantial progress by representing novel class objects using the feature representation learned upon a set of base class objects.
Ranked #14 on Few-Shot Object Detection on MS-COCO (10-shot)
no code implementations • 16 Mar 2021 • Chang Liu, Lixin Fan, Kam Woh Ng, Yilun Jin, Ce Ju, Tianyu Zhang, Chee Seng Chan, Qiang Yang
This paper proposes a novel ternary hash encoding for learning-to-hash methods, which provides a principled and more efficient coding scheme that outperforms state-of-the-art binary hashing counterparts.
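The abstract does not detail the encoding itself, so the pure-Python sketch below illustrates one plausible ternary scheme; the function names, the threshold `tau`, and the rule that the 0 trit is excluded from distance computation are all illustrative assumptions, not the paper's method:

```python
def ternary_quantize(features, tau=0.3):
    """Quantize real-valued features to trits in {-1, 0, +1}.

    Values within [-tau, +tau] map to 0 (an "uncertain" trit),
    which the distance function below treats as a don't-care.
    """
    return [0 if abs(f) <= tau else (1 if f > 0 else -1) for f in features]

def ternary_distance(a, b):
    """Count positions where both trits are decided (nonzero) and disagree."""
    return sum(1 for x, y in zip(a, b) if x != 0 and y != 0 and x != y)

code = ternary_quantize([0.9, -0.1, -0.8, 0.4])   # -> [1, 0, -1, 1]
query = ternary_quantize([0.7, 0.6, -0.2, 0.5])   # -> [1, 1, 0, 1]
print(ternary_distance(code, query))              # no decided disagreements
```

Compared with binary codes, the extra 0 state lets ambiguous feature dimensions abstain from matching rather than contribute noise to the Hamming distance.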
no code implementations • 17 Mar 2021 • Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min Zhang, Rui Yan
To fill the gap between these up-to-date methods and the real-world applications, we incorporate user-specific dialogue history into the response selection and propose a personalized hybrid matching network (PHMN).
no code implementations • 22 Mar 2021 • Chang Liu, Xiaojuan Qi, Edmund Lam, Ngai Wong
Neuromorphic event cameras, which capture the optical changes of a scene, have drawn increasing attention due to their high speed and low power consumption.
no code implementations • 22 Mar 2021 • Hua Wei, Chacha Chen, Chang Liu, Guanjie Zheng, Zhenhui Li
Simulation of real-world traffic can help validate transportation policies.
1 code implementation • CVPR 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
The existence of noisy labels in real-world data negatively impacts the performance of deep learning models.
no code implementations • 6 Apr 2021 • Boyu Yang, Mingbao Lin, Binghao Liu, Mengying Fu, Chang Liu, Rongrong Ji, Qixiang Ye
By tentatively expanding network nodes, LEC-Net enlarges the representation capacity of features, alleviating feature drift of old network from the perspective of model regularization.
1 code implementation • ICCV 2021 • Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, Yun Fu
This paper studies Semi-Supervised Domain Adaptation (SSDA), a practical yet under-investigated research topic that aims to learn a model of good performance using unlabeled samples and a few labeled samples in the target domain, with the help of labeled samples from a source domain.
no code implementations • 26 Apr 2021 • Jie Chen, Jie Liu, Chang Liu, Jian Zhang, Bing Han
To overcome this issue and to further improve the recognition performance, we adopt a deep learning approach for underwater target recognition and propose a LOFAR spectrum enhancement (LSE)-based underwater target recognition scheme, which consists of preprocessing, offline training, and online testing.
no code implementations • 18 May 2021 • Chang Liu, Guanjie Zheng, Zhenhui Li
Therefore, in this paper, we propose to learn the human routing model, which is one of the most essential parts of the traffic simulator.
no code implementations • 5 Jun 2021 • Jingjing Si, Guoliang Li, Yinbo Cheng, Rui Zhang, Godwin Enemali, Chang Liu
As an in situ combustion diagnostic tool, Tunable Diode Laser Absorption Spectroscopy (TDLAS) tomography has been widely used for imaging of two-dimensional temperature distributions in reactive flows.
1 code implementation • ICLR 2022 • Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu
Denoising diffusion probabilistic models have been recently proposed to generate high-quality samples by estimating the gradient of the data density.
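The "gradient of the data density" above refers to the score function; a standard denoising-diffusion formulation (generic DDPM/score-matching notation as background, not this paper's specific objective) is:

```latex
% Score function: gradient of the log data density, approximated by a network
s_\theta(x_t, t) \approx \nabla_{x_t} \log p_t(x_t).

% DDPM forward corruption and noise-prediction training objective:
x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,
\quad \epsilon \sim \mathcal{N}(0, I),
\qquad
\mathcal{L} = \mathbb{E}_{x_0, \epsilon, t}
\left\| \epsilon - \epsilon_\theta(x_t, t) \right\|^2,

% with the score recovered from the trained noise predictor:
\nabla_{x_t} \log p_t(x_t) \approx
-\frac{\epsilon_\theta(x_t, t)}{\sqrt{1 - \bar{\alpha}_t}}.
```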
no code implementations • 18 Jun 2021 • Chang Liu, Xiaolin Wu
Nighttime photographers are often troubled by light pollution from unwanted artificial lights.
2 code implementations • CVPR 2021 • Zonghao Guo, Chang Liu, Xiaosong Zhang, Jianbin Jiao, Xiangyang Ji, Qixiang Ye
Detecting oriented and densely packed objects remains challenging for spatial feature aliasing caused by the intersection of reception fields between objects.
Ranked #34 on Object Detection In Aerial Images on DOTA (using extra training data)
2 code implementations • ICLR 2022 • Jiaxin Shi, Chang Liu, Lester Mackey
We introduce a new family of particle evolution samplers suitable for constrained domains and non-Euclidean geometries.
no code implementations • 23 Jun 2021 • Xianlong Zeng, Simon Lin, Chang Liu
The claims data, containing medical codes, services information, and incurred expenditure, can be a good resource for estimating an individual's health condition and medical risk level.
no code implementations • 24 Jun 2021 • Xianlong Zeng, Simon Lin, Chang Liu
In addition, our framework showed a great generalizability potential to transfer learned knowledge from one institution to another, paving the way for future healthcare model pre-training across institutions.
1 code implementation • 24 Jun 2021 • Xianlong Zeng, Fanghao Song, Zhongen Li, Krerkkiat Chusap, Chang Liu
Our method can be divided into three stages: 1) a neighborhood generation stage, which generates instances based on the given sample; 2) a classification stage, which yields classifications on the generated instances to carve out the local decision boundary and delineate the model behavior; and 3) a human-in-the-loop stage, which involves a human in refining and exploring the neighborhood of interest.
BIG-bench Machine Learning Explainable artificial intelligence +1
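Stages 1 and 2 of the pipeline above can be sketched in plain Python; the stand-in model, the sample count, and the Gaussian perturbation scale are hypothetical choices for illustration, not values from the paper:

```python
import random

def generate_neighborhood(sample, n=200, scale=0.1, seed=0):
    """Stage 1: generate perturbed instances around a given sample."""
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, scale) for x in sample] for _ in range(n)]

def classify_neighborhood(instances, model):
    """Stage 2: label each generated instance, carving out the model's
    local decision boundary around the original sample."""
    return [(inst, model(inst)) for inst in instances]

# Hypothetical stand-in model: a fixed linear decision boundary.
model = lambda x: int(x[0] + x[1] > 1.0)
labeled = classify_neighborhood(generate_neighborhood([0.5, 0.5]), model)
frac = sum(lbl for _, lbl in labeled) / len(labeled)
print(f"{frac:.2f} of neighbors fall on the positive side")
```

A sample sitting on the boundary (here, `x[0] + x[1] == 1.0`) yields a roughly even label split in its neighborhood, which is exactly the region a human would then refine in stage 3.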
no code implementations • 25 Jun 2021 • Tianle Yue, Hang Yang, Zongliang Du, Chang Liu, Khalil I. Elkhodary, Shan Tang, Xu Guo
During offline training, a mapping function is built between high and low resolution representations of a given design domain.
1 code implementation • NeurIPS 2021 • Chang Liu, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu
This is motivated by the observation that deep generative models, in addition to a likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ for extracting representation, but they rely on a usually uninformative prior distribution $p(z)$ to define a joint distribution, which may render problems like posterior collapse and manifold mismatch.
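In standard VAE notation (generic background, not this paper's construction), the two ways of defining a joint distribution contrasted above are:

```latex
% Generative joint: uninformative prior times likelihood model
p_\theta(x, z) = p(z)\, p_\theta(x \mid z).

% Inference-side joint: data distribution times inference model,
% with the induced aggregate posterior over z
q_\phi(x, z) = p_{\mathrm{data}}(x)\, q_\phi(z \mid x),
\qquad
q_\phi(z) = \int p_{\mathrm{data}}(x)\, q_\phi(z \mid x)\, \mathrm{d}x.

% Posterior collapse and manifold mismatch relate to the gap
% between the aggregate posterior q_\phi(z) and the fixed prior p(z).
```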
no code implementations • 16 Jul 2021 • Haodi Jiang, Ju Jing, Jiasheng Wang, Chang Liu, Qin Li, Yan Xu, Jason T. L. Wang, Haimin Wang
Our method consists of a data pre-processing component that prepares training data from a threshold-based tool, a deep learning model implemented as a Bayesian convolutional neural network for probabilistic image segmentation with uncertainty quantification to predict fibrils, and a post-processing component containing a fibril-fitting algorithm to determine fibril orientations.
no code implementations • 3 Aug 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Noisy labels are commonly found in real-world data, which cause performance degradation of deep neural networks.
1 code implementation • ICCV 2021 • Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
We introduce transformer and multi-head attention to build a network with an encoder-decoder attention mechanism architecture that "queries" the given image with the language expression.
Generalized Referring Expression Comprehension Generalized Referring Expression Segmentation +1
no code implementations • ICML Workshop AML 2021 • Wenzhao Xiang, Chang Liu, Shibao Zheng
Traditional adversarial examples are typically generated by adding perturbation noise to the input image within a small matrix norm.
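A common instance of such norm-bounded perturbation is the fast gradient sign method (FGSM); the sketch below applies it to a toy linear model with an analytic gradient, where the weights, input, and epsilon are illustrative assumptions:

```python
def fgsm_perturb(x, grad, eps=0.03):
    """Fast-gradient-sign step: move each input coordinate by +/-eps
    along the loss gradient, staying inside an L-infinity ball of radius eps."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy "model": loss = -w.x for the true class, so grad_x(loss) = -w.
w = [0.5, -1.0, 2.0]
x = [0.2, 0.4, 0.1]
grad = [-wi for wi in w]
x_adv = fgsm_perturb(x, grad, eps=0.03)
print([round(v, 2) for v in x_adv])  # [0.17, 0.43, 0.07]
```

Each coordinate moves by exactly epsilon, so the perturbation is imperceptible in the L-infinity sense while maximally increasing the (linearized) loss.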
no code implementations • 26 Aug 2021 • Chang Liu, Weijie Yuan, Shuangyang Li, Xuemeng Liu, Husheng Li, Derrick Wing Kwan Ng, Yonghui Li
Specifically, the convolution and LSTM modules are successively adopted in the proposed HCL-Net to exploit the spatial and temporal dependencies of communication channels to further improve the learning performance.
no code implementations • 13 Sep 2021 • Wenzhao Xiang, Hang Su, Chang Liu, Yandong Guo, Shibao Zheng
As designers of artificial intelligence try to outwit hackers, both sides continue to home in on AI's inherent vulnerabilities.
no code implementations • CVPR 2022 • Chang Liu, Xiang Yu, Yi-Hsuan Tsai, Ramin Moslemi, Masoud Faraki, Manmohan Chandraker, Yun Fu
Convolutional Neural Networks have achieved remarkable success in face recognition, in part due to the abundant availability of data.
no code implementations • 29 Sep 2021 • Qiwei Ye, Yuxuan Song, Chang Liu, Fangyun Wei, Tao Qin, Tie-Yan Liu
Stochastic policies have been widely applied for their good properties in exploration and uncertainty quantification.
Ranked #1 on MuJoCo Games on Ant-v3
no code implementations • 30 Sep 2021 • Zijian Zhu, Hang Su, Chang Liu, Wenzhao Xiang, Shibao Zheng
Fortunately, most existing adversarial patches can be outwitted, disabled and rejected by a simple classification network called an adversarial patch detector, which distinguishes adversarial patches from original images.
no code implementations • 1 Oct 2021 • Chongyang Tao, Jiazhan Feng, Chang Liu, Juntao Li, Xiubo Geng, Daxin Jiang
For this task, the adoption of pre-trained language models (such as BERT) has led to remarkable progress in a number of benchmarks.
no code implementations • 11 Oct 2021 • Chang Liu
The principal knows the reward of the task and provides information to the agent over time in order to motivate effort.
no code implementations • 14 Oct 2021 • Chang Liu, Hairong Tang, Luyan Ji, Yongchao Zhao
Based on the mapping results, we analyzed the changes of Miyun Reservoir from 1984 to 2020 and the driving factors of them.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
1 code implementation • NeurIPS 2021 • Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu
Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations.
no code implementations • 1 Nov 2021 • Chang Liu, Chen Gao, Depeng Jin, Yong Li
We first conduct information propagation on two sub-graphs to learn the representations of POIs and users.
no code implementations • 1 Nov 2021 • Haoji Liu, Weichao Zhuang, Guodong Yin, Rongcan Li, Chang Liu, Shanxing Zhou
We first formulate the optimal merging control problem, which includes the constraints of safety and vehicle dynamics, with the objectives of minimizing travel time and energy consumption.
1 code implementation • 16 Nov 2021 • Hengzhi Pei, Kan Ren, Yuqing Yang, Chang Liu, Tao Qin, Dongsheng Li
In this paper, we propose a novel generative framework for RTS data - RTSGAN to tackle the aforementioned challenges.
no code implementations • 20 Nov 2021 • Long Gao, Chang Liu, Dooman Arefan, Ashok Panigrahy, Shandong Wu
These methods mainly focus on capturing either compact or descriptive features, where the information in the samples of a given class is not sufficiently utilized.
no code implementations • 20 Nov 2021 • Long Gao, Chang Liu, Dooman Arefan, Ashok Panigrahy, Margarita L. Zuley, Shandong Wu
To address this challenge, we propose a medical-knowledge-guided one-class classification approach that leverages domain-specific knowledge of classification tasks to boost the model's performance.
1 code implementation • 29 Nov 2021 • Mengnan Shi, Chang Liu, Qixiang Ye, Jianbin Jiao
Gating modules have been widely explored in dynamic network pruning to reduce the run-time computational cost of deep neural networks while preserving the representation of features.
1 code implementation • NeurIPS 2021 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid such a spurious correlation, we propose \textbf{La}tent \textbf{C}ausal \textbf{I}nvariance \textbf{M}odels (LaCIM) that specifies the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only causal factor for prediction.
no code implementations • 19 Jan 2022 • Zhongyuan Guo, Hong Zheng, Changhui You, Tianyu Wang, Chang Liu
We first analyze the production principle of anti-counterfeiting QR code, and convert the identification of copy forgery to device category forensics, and then a Dual-Branch Multi-Scale Feature Fusion network is proposed.
1 code implementation • 26 Jan 2022 • Minoru Kusaba, Chang Liu, Ryo Yoshida
The prediction of energetically stable crystal structures formed by a given chemical composition is a central problem in solid-state physics.
1 code implementation • 3 Feb 2022 • Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, Tie-Yan Liu
Molecular conformation generation aims to generate three-dimensional coordinates of all the atoms in a molecule and is an important task in bioinformatics and pharmacology.
no code implementations • 9 Feb 2022 • Jie Chen, Chang Liu, Jiawu Xie, Jie An, Nan Huang
In particular, this method overcomes the limitations of existing methods: it not only achieves good results in multivariate separation, but also effectively separates signals mixed with 40 dB Gaussian noise.
no code implementations • 26 Feb 2022 • Vikram Shree, Carlos Diaz-Ruiz, Chang Liu, Bharath Hariharan, Mark Campbell
This paper focuses on the problem of decentralized pedestrian tracking using a sensor network.
no code implementations • 28 Feb 2022 • Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu
This technical note describes the recent updates of Graphormer, including architecture design modifications, and the adaption to 3D molecular dynamics simulation.
no code implementations • 1 Mar 2022 • Qi Zhang, Chang Liu, Stephen Wu, Ryo Yoshida
The design variables consist of a set of reactants in a reaction network and its network topology.
no code implementations • 6 Mar 2022 • Jiayi Zhang, Chang Liu, Junchi Yan, Xijun Li, Hui-Ling Zhen, Mingxuan Yuan
This paper surveys the trend of leveraging machine learning to solve mixed integer programming (MIP) problems.
3 code implementations • 9 Mar 2022 • Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu
This technical note describes the recent updates of Graphormer, including architecture design modifications, and the adaption to 3D molecular dynamics simulation.
no code implementations • 10 Mar 2022 • Chang Liu, Chun Yang, Hai-Bo Qin, Xiaobin Zhu, Cheng-Lin Liu, Xu-Cheng Yin
Scene text recognition is a popular topic and is widely used in industry.
no code implementations • 29 Mar 2022 • Chang Liu, Xiaoyan Qian, Xiaojuan Qi, Edmund Y. Lam, Siew-Chong Tan, Ngai Wong
While a few previous studies tried to automatically generate 3D bounding boxes from weak labels such as 2D boxes, the quality is sub-optimal compared to human annotators.
no code implementations • 6 Apr 2022 • Wenhan Cao, Jingliang Duan, Shengbo Eben Li, Chen Chen, Chang Liu, Yu Wang
Both the primal and dual estimators are learned from data using supervised learning techniques, and the explicit sample size is provided, which enables us to guarantee the quality of each learned estimator in terms of feasibility and optimality.
1 code implementation • CVPR 2022 • Chang Liu, Chun Yang, Xu-Cheng Yin
Contextual information can be decomposed into temporal information and linguistic information.
1 code implementation • 20 Apr 2022 • Tianyu Cui, Gaopeng Gou, Gang Xiong, Zhen Li, Mingxin Cui, Chang Liu
To do this, we propose an IPv6 address correlation model - SiamHAN.
1 code implementation • 21 Apr 2022 • Tianyu Cui, Gaopeng Gou, Gang Xiong, Chang Liu, Peipei Fu, Zhen Li
6GAN forces multiple generators to train with a multi-class discriminator and an alias detector to generate non-aliased active targets with different addressing pattern types.
no code implementations • 26 Apr 2022 • Chang Liu, Xudong Jiang, Henghui Ding
In this work, we propose a novel framework that simultaneously detects the target-of-interest via feature propagation and generates a fine-grained segmentation mask.
no code implementations • 28 Apr 2022 • Sijia Li, Gaopeng Gou, Chang Liu, Chengshang Hou, Zhenzhen Li, Gang Xiong
In this paper, we propose a Temporal Transaction Aggregation Graph Network (TTAGN) to enhance phishing scams detection performance on Ethereum.
1 code implementation • 13 May 2022 • Xingchen Zhao, Chang Liu, Anthony Sicilia, Seong Jae Hwang, Yun Fu
Thus, it is still possible that those methods can overfit to source domains and perform poorly on target domains.
no code implementations • 25 May 2022 • Eduardo Pérez-Pellitero, Sibi Catley-Chandar, Richard Shaw, Aleš Leonardis, Radu Timofte, Zexin Zhang, Cen Liu, Yunbo Peng, Yue Lin, Gaocheng Yu, Jin Zhang, Zhe Ma, Hongbin Wang, Xiangyu Chen, Xintao Wang, Haiwei Wu, Lin Liu, Chao Dong, Jiantao Zhou, Qingsen Yan, Song Zhang, Weiye Chen, Yuhang Liu, Zhen Zhang, Yanning Zhang, Javen Qinfeng Shi, Dong Gong, Dan Zhu, Mengdi Sun, Guannan Chen, Yang Hu, Haowei Li, Baozhu Zou, Zhen Liu, Wenjie Lin, Ting Jiang, Chengzhi Jiang, Xinpeng Li, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Juan Marín-Vega, Michael Sloth, Peter Schneider-Kamp, Richard Röttger, Chunyang Li, Long Bao, Gang He, Ziyao Xu, Li Xu, Gen Zhan, Ming Sun, Xing Wen, Junlin Li, Shuang Feng, Fei Lei, Rui Liu, Junxiang Ruan, Tianhong Dai, Wei Li, Zhan Lu, Hengyan Liu, Peian Huang, Guangyu Ren, Yonglin Luo, Chang Liu, Qiang Tu, Fangya Li, Ruipeng Gang, Chenghua Li, Jinjing Li, Sai Ma, Chenming Liu, Yizhen Cao, Steven Tel, Barthelemy Heyrman, Dominique Ginhac, Chul Lee, Gahyeon Kim, Seonghyun Park, An Gia Vien, Truong Thanh Nhat Mai, Howoon Yoon, Tu Vo, Alexander Holston, Sheir Zaheer, Chan Y. Park
The challenge is composed of two tracks with an emphasis on fidelity and complexity constraints: in Track 1, participants are asked to optimize objective fidelity scores while imposing a low-complexity constraint (i.e., solutions cannot exceed a given number of operations).
no code implementations • 2 Jun 2022 • Chang Liu, Zhen-Hua Ling, Ling-Hui Chen
This paper proposes a multilingual speech synthesis method which combines unsupervised phonetic representations (UPR) and supervised phonetic representations (SPR) to avoid reliance on the pronunciation dictionaries of target languages.
Automatic Speech Recognition Automatic Speech Recognition (ASR) +2
1 code implementation • 9 Jun 2022 • Si Shen, Jiangfeng Liu, Litao Lin, Ying Huang, Lin Zhang, Chang Liu, Yutong Feng, Dongbo Wang
The academic literature of social sciences records human civilization and studies human social problems.
no code implementations • 25 Jun 2022 • HongBing Zhang, Xinyi Liu, Chang Liu, HongTao Fan, YaJing Li, Xinyun Zhu
The proposed function is generalized to tensor cases, yielding the tensor MLCP and the weighted tensor $L_\gamma$-norm.
no code implementations • 4 Jul 2022 • Chang Liu, Yugong Luo, Pengfei Li, Chunhui Xing, Weiwei Kong
To deal with this problem, this paper introduces a two-dimensional maneuver management framework with a fault-tolerant mechanism on the basis of the proposed hierarchical architecture for the platoon control system.
1 code implementation • 4 Jul 2022 • Chang Liu, Gang Yang, Shuo Wang, Hangxu Wang, Yunhua Zhang, Yutao Wang
We employ the powerful feature extraction capability of Transformer (PVTv2) to extract global semantic information from RGB data and design a lightweight CNN backbone (LWDepthNet) to extract spatial structure information from depth data without pre-training.
no code implementations • 6 Jul 2022 • Zhennan Wang, Kehan Li, Runyi Yu, Yian Zhao, Pengchong Qiao, Chang Liu, Fan Xu, Xiangyang Ji, Guoli Song, Jie Chen
In this paper, we analyze batch normalization from the perspective of discriminability and find the disadvantages ignored by previous studies: the difference in $l_2$ norms of sample features can hinder batch normalization from obtaining more distinguished inter-class features and more compact intra-class features.
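To see the interaction between batch normalization and per-sample $l_2$ norms that the authors analyze, a minimal textbook batch norm (without learnable scale/shift, and not the paper's proposed remedy) can be run on a toy batch:

```python
import math

def batch_norm(batch, eps=1e-5):
    """Standard batch normalization: standardize each feature
    dimension to zero mean and unit variance over the batch."""
    dims = len(batch[0])
    out = [row[:] for row in batch]  # copy so the input batch is untouched
    for d in range(dims):
        col = [row[d] for row in batch]
        mu = sum(col) / len(col)
        var = sum((v - mu) ** 2 for v in col) / len(col)
        for row in out:
            row[d] = (row[d] - mu) / math.sqrt(var + eps)
    return out

l2 = lambda v: math.sqrt(sum(x * x for x in v))
batch = [[0.1, 0.2], [5.0, 4.0], [0.2, 0.1], [4.0, 5.0]]
normed = batch_norm(batch)
print([round(l2(v), 2) for v in batch])   # widely spread sample norms
print([round(l2(v), 2) for v in normed])  # norms pulled close together
```

Samples with very different $l_2$ norms before normalization end up with nearly identical norms afterward, which illustrates how norm differences interact with the per-dimension standardization the paper studies.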
1 code implementation • 20 Jul 2022 • Chang Liu, Xiaoyan Qian, Binxiao Huang, Xiaojuan Qi, Edmund Lam, Siew-Chong Tan, Ngai Wong
By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, versus the state-of-the-art autolabeler.
1 code implementation • 29 Aug 2022 • Chang Liu, Yujie Zhong, Andrew Zisserman, Weidi Xie
In this paper, we consider the problem of generalised visual object counting, with the goal of developing a computational model for counting the number of objects from arbitrary semantic categories, using an arbitrary number of "exemplars", i.e., zero-shot or few-shot counting.
Ranked #3 on Object Counting on FSC147
no code implementations • 26 Sep 2022 • Chang Liu, Xuemeng Liu, Shuangyang Li, Weijie Yuan, Derrick Wing Kwan Ng
Predictive beamforming design is an essential task in realizing high-mobility integrated sensing and communication (ISAC), which highly depends on the accuracy of the channel prediction (CP), i. e., predicting the angular parameters of users.
no code implementations • 5 Oct 2022 • Wenhan Cao, Chang Liu, Zhiqian Lan, Shengbo Eben Li, Wei Pan, Angelo Alessandri
The accuracy of moving horizon estimation (MHE) suffers significantly in the presence of measurement outliers.
no code implementations • 7 Oct 2022 • Chang Liu, Terence Jie Chua, Jun Zhao
Therefore, we formulate a joint learning and communication optimization problem to minimize total model parameter communication and computation delay, by optimizing local iteration counts and edge iteration counts.
1 code implementation • Pattern Recognition and Computer Vision 2022 • Le Zhang, Qi Feng, Yao Lu, Chang Liu, and Guangming Lu
Attention mechanisms can effectively improve the performance of the mobile networks with a limited computational complexity cost.
1 code implementation • 9 Oct 2022 • Mingqing Xiao, Shuxin Zheng, Chang Liu, Zhouchen Lin, Tie-Yan Liu
To be specific, we develop invertible models to generate valid degraded images and meanwhile transform the distribution of lost contents to the fixed distribution of a latent variable during the forward degradation.
no code implementations • CVPR 2023 • Kehan Li, Zhennan Wang, Zesen Cheng, Runyi Yu, Yian Zhao, Guoli Song, Chang Liu, Li Yuan, Jie Chen
Recently, self-supervised large-scale visual pre-training models have shown great promise in representing pixel-level semantic relationships, significantly promoting the development of unsupervised dense prediction tasks, e.g., unsupervised semantic segmentation (USS).
no code implementations • 13 Oct 2022 • Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu
The normalizing layer has become one of the basic configurations of deep learning models, but it still suffers from computational inefficiency, interpretability difficulties, and low generality.
no code implementations • CVPR 2023 • Pengchong Qiao, Zhidan Wei, Yu Wang, Zhennan Wang, Guoli Song, Fan Xu, Xiangyang Ji, Chang Liu, Jie Chen
Semi-supervised learning (SSL) essentially pursues class boundary exploration with less dependence on human annotations.
1 code implementation • 28 Oct 2022 • Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
We propose a Vision-Language Transformer (VLT) framework for referring segmentation to facilitate deep interactions among multi-modal information and enhance the holistic understanding to vision-language features.
Ranked #3 on Referring Video Object Segmentation on MeViS
Referring Expression Segmentation Referring Video Object Segmentation
no code implementations • 28 Oct 2022 • Chang Liu, Yuwen Yang, Xun Cai, Yue Ding, Hongtao Lu
Federated learning (FL) faces three major difficulties: cross-domain data, heterogeneous models, and non-i.i.d. data.
no code implementations • 28 Oct 2022 • Ligen Shi, Chang Liu, Di He, Xing Zhao, Jun Qiu
A major challenge for matching-based depth estimation is to prevent mismatches in occlusion and smooth regions.
1 code implementation • 1 Nov 2022 • Chang Liu, Yuwen Yang, Zhe Xie, Hongtao Lu, Yue Ding
2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs because a subgraph contains fewer nodes but richer information such as position, neighbor, and structure.
no code implementations • 2 Nov 2022 • Yifei Zhang, Chang Liu, Yu Zhou, Weiping Wang, Qixiang Ye, Xiangyang Ji
In this paper, we present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations, i.e., global distribution relation and local interpolation relation, into the CSL framework in a plug-and-play fashion.
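The CSL framework that ReCo plugs into is built on the standard InfoNCE contrastive objective. The following is a minimal numpy sketch of that base objective only, under the usual assumption of L2-normalized embeddings of two augmented views; ReCo's relation-aware terms are not reproduced here.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    # z1, z2: L2-normalized embeddings of two augmented views, shape (N, D).
    # Row i of z1 treats z2[i] as its positive and every other row of z2
    # as a negative, so the loss is cross-entropy on the similarity matrix.
    logits = (z1 @ z2.T) / tau                    # (N, N) scaled similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def normalize(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

rng = np.random.default_rng(0)
z = normalize(rng.standard_normal((16, 32)))
# A lightly perturbed copy stands in for a second augmented view.
view2 = normalize(z + 0.05 * rng.standard_normal((16, 32)))
loss_aligned = info_nce(z, view2)
loss_random = info_nce(z, normalize(rng.standard_normal((16, 32))))
```

When the two views agree (small perturbation), the diagonal dominates the similarity matrix and the loss is small; with unrelated embeddings it approaches log N, which is the sanity check most CSL implementations use.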
no code implementations • 5 Nov 2022 • Yu Yang, Wing Yin Cheung, Chang Liu, Xiangyang Ji
Multiview self-supervised representation learning is rooted in exploring semantic consistency across data with complex intra-class variation.
1 code implementation • 6 Nov 2022 • Yu Yang, Xiaotian Cheng, Chang Liu, Hakan Bilen, Xiangyang Ji
In recent years, generative adversarial networks (GANs) have been an actively studied topic and shown to successfully produce high-quality realistic images in various domains.