Search Results for author: Huan Wang

Found 112 papers, 49 papers with code

Don't Judge by the Look: Towards Motion Coherent Video Representation

1 code implementation 14 Mar 2024 Yitian Zhang, Yue Bai, Huan Wang, Yizhou Wang, Yun Fu

Current training pipelines in object recognition neglect Hue Jittering during data augmentation because it not only introduces appearance changes detrimental to classification, but is also inefficient to implement in practice.

Data Augmentation Object Recognition +2

AgentLite: A Lightweight Library for Building and Advancing Task-Oriented LLM Agent System

1 code implementation 23 Feb 2024 Zhiwei Liu, Weiran Yao, JianGuo Zhang, Liangwei Yang, Zuxin Liu, Juntao Tan, Prafulla K. Choubey, Tian Lan, Jason Wu, Huan Wang, Shelby Heinecke, Caiming Xiong, Silvio Savarese

Thus, we open-source a new AI agent library, AgentLite, which simplifies this process by offering a lightweight, user-friendly platform for innovating LLM agent reasoning, architectures, and applications with ease.

AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning

2 code implementations 23 Feb 2024 JianGuo Zhang, Tian Lan, Rithesh Murthy, Zhiwei Liu, Weiran Yao, Juntao Tan, Thai Hoang, Liangwei Yang, Yihao Feng, Zuxin Liu, Tulika Awalgaonkar, Juan Carlos Niebles, Silvio Savarese, Shelby Heinecke, Huan Wang, Caiming Xiong

It meticulously standardizes and unifies these trajectories into a consistent format, streamlining the creation of a generic data loader optimized for agent training.

Causal Layering via Conditional Entropy

no code implementations 19 Jan 2024 Itai Feigenbaum, Devansh Arpit, Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Silvio Savarese

Under appropriate assumptions and conditioning, we can separate the sources or sinks from the remainder of the nodes by comparing their conditional entropy to the unconditional entropy of their noise.

Causal Discovery
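The separation test this abstract describes boils down to comparing two empirical entropy estimates. As a minimal illustration (my own sketch, not the authors' code; `entropy` and `conditional_entropy` are hypothetical helper names), the two quantities can be estimated from discrete samples like so:

```python
from collections import Counter
import math

def entropy(xs):
    """Empirical Shannon entropy (in nats) of a discrete sample."""
    n = len(xs)
    return -sum((c / n) * math.log(c / n) for c in Counter(xs).values())

def conditional_entropy(xs, ys):
    """Empirical H(X | Y) = sum_y p(y) * H(X | Y = y)."""
    n = len(ys)
    return sum(
        (cy / n) * entropy([x for x, y in zip(xs, ys) if y == yv])
        for yv, cy in Counter(ys).items()
    )

# Conditioning never increases entropy: H(X | Y) <= H(X).
xs = [0, 0, 1, 1, 0, 1, 0, 1]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
assert conditional_entropy(xs, ys) <= entropy(xs) + 1e-12
```

The paper's actual criterion compares such conditional entropies against the unconditional entropy of a node's noise term under its stated assumptions; this sketch only shows the estimators being compared.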

Editing Arbitrary Propositions in LLMs without Subject Labels

no code implementations 15 Jan 2024 Itai Feigenbaum, Devansh Arpit, Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Silvio Savarese

On datasets of binary propositions derived from the CounterFact dataset, we show that our method -- without access to subject labels -- performs close to state-of-the-art L&E methods that do have access to subject labels.

Language Modelling Large Language Model +1

Brain-Inspired Spiking Neural Networks for Industrial Fault Diagnosis: A Survey, Challenges, and Opportunities

no code implementations 13 Nov 2023 Huan Wang, Yan-Fu Li, Konstantinos Gryllias

To address these limitations, the third-generation Spiking Neural Network (SNN), founded on principles of brain-inspired computing, has surfaced as a promising alternative.

MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning

1 code implementation 4 Nov 2023 Bingchang Liu, Chaoyu Chen, Cong Liao, Zi Gong, Huan Wang, Zhichao Lei, Ming Liang, Dajun Chen, Min Shen, Hailian Zhou, Hang Yu, Jianguo Li

Code LLMs have emerged as a specialized research field, with remarkable studies dedicated to enhancing models' coding capabilities through fine-tuning on pre-trained models.

Multi-Task Learning

How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations

no code implementations 16 Oct 2023 Tianyu Guo, Wei Hu, Song Mei, Huan Wang, Caiming Xiong, Silvio Savarese, Yu Bai

Through extensive probing and a new pasting experiment, we further reveal several mechanisms within the trained transformers, such as concrete copying behaviors on both the inputs and the representations, linear ICL capability of the upper layers alone, and a post-ICL representation selection mechanism in a harder mixture setting.

In-Context Learning

Latent Graph Inference with Limited Supervision

no code implementations NeurIPS 2023 Jianglin Lu, Yi Xu, Huan Wang, Yue Bai, Yun Fu

We begin by defining the pivotal nodes as $k$-hop starved nodes, which can be identified based on a given adjacency matrix.
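Finding the nodes within $k$ hops of a given node is a standard graph computation. A small sketch of that step (my own, using adjacency lists rather than the paper's adjacency matrix; the actual starved-node criterion is more involved):

```python
def within_k_hops(adj, src, k):
    """Nodes reachable from `src` in at most k hops, given adjacency lists."""
    frontier, seen = {src}, {src}
    for _ in range(k):
        # Expand the frontier by one hop, skipping already-visited nodes.
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return seen

# Path graph 0 -> 1 -> 2: node 2 is two hops from node 0.
adj = {0: [1], 1: [2], 2: []}
assert within_k_hops(adj, 0, 1) == {0, 1}
assert within_k_hops(adj, 0, 2) == {0, 1, 2}
```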

Enhancing Performance on Seen and Unseen Dialogue Scenarios using Retrieval-Augmented End-to-End Task-Oriented System

no code implementations 16 Aug 2023 JianGuo Zhang, Stephen Roller, Kun Qian, Zhiwei Liu, Rui Meng, Shelby Heinecke, Huan Wang, Silvio Savarese, Caiming Xiong

End-to-end task-oriented dialogue (TOD) systems have achieved promising performance by leveraging sophisticated natural language understanding and natural language generation capabilities of pre-trained models.

Natural Language Understanding Retrieval +1

Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization

no code implementations 4 Aug 2023 Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, JianGuo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese

This demonstrates that using policy gradient optimization to improve language agents, of which we believe our work is one of the first examples, is promising and can be applied to optimize other models in the agent architecture to enhance agent performance over time.

Language Modelling

DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI

1 code implementation 19 Jul 2023 JianGuo Zhang, Kun Qian, Zhiwei Liu, Shelby Heinecke, Rui Meng, Ye Liu, Zhou Yu, Huan Wang, Silvio Savarese, Caiming Xiong

Despite advancements in conversational AI, language models encounter challenges in handling diverse conversational tasks, and existing dialogue dataset collections often lack diversity and comprehensiveness.

Few-Shot Learning Language Modelling +1

Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight

no code implementations 6 Jul 2023 Jiacheng Guo, Minshuo Chen, Huan Wang, Caiming Xiong, Mengdi Wang, Yu Bai

This paper studies the sample-efficiency of learning in Partially Observable Markov Decision Processes (POMDPs), a challenging problem in reinforcement learning that is known to be exponentially hard in the worst-case.

Zero-shot Item-based Recommendation via Multi-task Product Knowledge Graph Pre-Training

no code implementations 12 May 2023 Ziwei Fan, Zhiwei Liu, Shelby Heinecke, JianGuo Zhang, Huan Wang, Caiming Xiong, Philip S. Yu

This paper presents a novel paradigm for the Zero-Shot Item-based Recommendation (ZSIR) task, which pre-trains a model on product knowledge graph (PKG) to refine the item features from PLMs.

Recommendation Systems

ChatGPT-Like Large-Scale Foundation Models for Prognostics and Health Management: A Survey and Roadmaps

no code implementations 10 May 2023 Yan-Fu Li, Huan Wang, Muxia Sun

Prognostics and health management (PHM) technology plays a critical role in industrial production and equipment maintenance by identifying and predicting possible equipment failures and damages, thereby allowing necessary maintenance measures to be taken to enhance equipment service life and reliability while reducing production costs and downtime.

Management Natural Language Understanding

Towards More Robust and Accurate Sequential Recommendation with Cascade-guided Adversarial Training

no code implementations 11 Apr 2023 Juntao Tan, Shelby Heinecke, Zhiwei Liu, Yongjun Chen, Yongfeng Zhang, Huan Wang

Two properties unique to the nature of sequential recommendation models may impair their robustness: the cascade effects induced during training and the model's tendency to rely too heavily on temporal information.

Sequential Recommendation

Frame Flexible Network

2 code implementations CVPR 2023 Yitian Zhang, Yue Bai, Chang Liu, Huan Wang, Sheng Li, Yun Fu

To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different frames to adjust its computation, but also reduces the memory costs of storing multiple models significantly.

Video Recognition

ABC: Attention with Bilinear Correlation for Infrared Small Target Detection

1 code implementation 18 Mar 2023 Peiwen Pan, Huan Wang, Chenyi Wang, Chang Nie

Infrared small target detection (ISTD) has a wide range of applications in early warning, rescue, and guidance.

HIVE: Harnessing Human Feedback for Instructional Visual Editing

1 code implementation 16 Mar 2023 Shu Zhang, Xinyi Yang, Yihao Feng, Can Qin, Chia-Chih Chen, Ning Yu, Zeyuan Chen, Huan Wang, Silvio Savarese, Stefano Ermon, Caiming Xiong, Ran Xu

Incorporating human feedback has been shown to be crucial to align text generated by large language models to human preferences.

Text-based Image Editing

Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution

2 code implementations ICCV 2023 Jiamian Wang, Huan Wang, Yulun Zhang, Yun Fu, Zhiqiang Tao

Second, existing pruning methods generally operate upon a pre-trained network for the sparse structure determination, making it hard to get rid of dense model training in the traditional SR paradigm.

Image Super-Resolution Network Pruning

On the Unlikelihood of D-Separation

no code implementations 10 Mar 2023 Itai Feigenbaum, Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Devansh Arpit

We then provide an analytic average case analysis of the PC Algorithm for causal discovery, as well as a variant of the SGS Algorithm we call UniformSGS.

Causal Discovery

Image as Set of Points

2 code implementations 2 Mar 2023 Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu

Context clusters (CoCs) view an image as a set of unorganized points and extract features via simplified clustering algorithm.

Clustering

Improved Online Conformal Prediction via Strongly Adaptive Online Learning

2 code implementations 15 Feb 2023 Aadyot Bhatnagar, Huan Wang, Caiming Xiong, Yu Bai

We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously, and approximately valid coverage.

Conformal Prediction Image Classification +4
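For context, online conformal methods of this kind build on a simple recursion from adaptive conformal inference that adjusts the working miscoverage level after each round. A hedged sketch of that baseline recursion (not the authors' strongly adaptive algorithm; `aci_update` is a name of my choosing):

```python
def aci_update(alpha_t, target_alpha, miscovered, lr=0.01):
    """One step of the adaptive conformal inference recursion: shrink the
    working miscoverage level alpha after a miscovered round (wider future
    prediction sets), and grow it after a covered round (tighter sets)."""
    err = 1.0 if miscovered else 0.0
    return alpha_t + lr * (target_alpha - err)

alpha = aci_update(0.1, 0.1, miscovered=True)
assert alpha < 0.1   # miscoverage pushes toward a more conservative level
assert aci_update(0.1, 0.1, miscovered=False) > 0.1
```

Averaged over many rounds, the update steers the empirical miscoverage rate toward `target_alpha`; the paper's contribution is achieving this adaptively over all interval lengths at once.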

Lower Bounds for Learning in Revealing POMDPs

no code implementations 2 Feb 2023 Fan Chen, Huan Wang, Caiming Xiong, Song Mei, Yu Bai

However, the fundamental limits for learning in revealing POMDPs are much less understood, with existing lower bounds being rather preliminary and having substantial gaps from the current best upper bounds.

Reinforcement Learning (RL)

Local Contrast and Global Contextual Information Make Infrared Small Object Salient Again

2 code implementations 28 Jan 2023 Chenyi Wang, Huan Wang, Peiwen Pan

On the other hand, FFC can gain image-level receptive fields and extract global information while preventing small objects from being overwhelmed. Experiments on several public datasets demonstrate that our method significantly outperforms the state-of-the-art ISOS models, and can provide useful guidelines for designing better ISOS deep models.

object-detection Small Object Detection

Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning

2 code implementations 12 Jan 2023 Huan Wang, Can Qin, Yue Bai, Yun Fu

The state of neural network pruning has been noticed to be unclear and even confusing for a while, largely due to "a lack of standardized benchmarks and metrics" [3].

Fairness Network Pruning

A Close Look at Spatial Modeling: From Attention to Convolution

1 code implementation 23 Dec 2022 Xu Ma, Huan Wang, Can Qin, Kunpeng Li, Xingchen Zhao, Jie Fu, Yun Fu

Vision Transformers have shown great promise recently for many vision tasks due to the insightful architecture design and attention mechanism.

Instance Segmentation object-detection +2

Real-Time Neural Light Field on Mobile Devices

1 code implementation CVPR 2023 Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makoviichuk, Sergey Tulyakov, Jian Ren

Nevertheless, to reach a similar rendering quality as NeRF, the network in NeLF is designed with intensive computation, which is not mobile-friendly.

Neural Rendering Novel View Synthesis

Look More but Care Less in Video Recognition

1 code implementation 18 Nov 2022 Yitian Zhang, Yue Bai, Huan Wang, Yi Xu, Yun Fu

To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation.

Action Recognition Video Recognition

Parameter-Efficient Masking Networks

1 code implementation 13 Oct 2022 Yue Bai, Huan Wang, Xu Ma, Yitian Zhang, Zhiqiang Tao, Yun Fu

We validate the potential of PEMN learning masks on random weights with limited unique values and test its effectiveness for a new compression paradigm based on different network architectures.

Model Compression

Generating Negative Samples for Sequential Recommendation

no code implementations 7 Aug 2022 Yongjun Chen, Jia Li, Zhiwei Liu, Nitish Shirish Keskar, Huan Wang, Julian McAuley, Caiming Xiong

Due to the dynamics of users' interests and model updates during training, considering randomly sampled items from a user's non-interacted item set as negatives can be uninformative.

Sequential Recommendation
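The baseline this abstract critiques, uniform sampling from a user's non-interacted items, is easy to state in code. A sketch for contrast (`sample_negatives` is my own helper name; the paper's proposed sampler is deliberately more informative than this):

```python
import random

def sample_negatives(num_items, interacted, num_neg, rng=None):
    """Uniformly sample `num_neg` item ids the user never interacted with."""
    rng = rng or random.Random(0)
    forbidden = set(interacted)
    candidates = [i for i in range(num_items) if i not in forbidden]
    return rng.sample(candidates, num_neg)

negs = sample_negatives(num_items=10, interacted=[1, 3, 5], num_neg=4)
assert len(negs) == 4 and not set(negs) & {1, 3, 5}
```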

Trainability Preserving Neural Pruning

1 code implementation 25 Jul 2022 Huan Wang, Yun Fu

Moreover, results on ImageNet-1K with ResNets suggest that TPP consistently performs more favorably against other top-performing structured pruning approaches.

Network Pruning

Policy Optimization for Markov Games: Unified Framework and Faster Convergence

no code implementations 6 Jun 2022 Runyu Zhang, Qinghua Liu, Huan Wang, Caiming Xiong, Na Li, Yu Bai

Next, we show that this framework instantiated with the Optimistic Follow-The-Regularized-Leader (OFTRL) algorithm at each state (and smooth value updates) can find an $\mathcal{\widetilde{O}}(T^{-5/6})$ approximate NE in $T$ iterations, and a similar algorithm with slightly modified value update rule achieves a faster $\mathcal{\widetilde{O}}(T^{-1})$ convergence rate.

Multi-agent Reinforcement Learning

STN: Scalable Tensorizing Networks via Structure-Aware Training and Adaptive Compression

no code implementations 30 May 2022 Chang Nie, Huan Wang, Lu Zhao

Deep neural networks (DNNs) have delivered a remarkable performance in many tasks of computer vision.

Tensor Decomposition

R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis

1 code implementation 31 Mar 2022 Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, Sergey Tulyakov

On the other hand, Neural Light Field (NeLF) presents a more straightforward representation over NeRF in novel view synthesis -- the rendering of a pixel amounts to one single forward pass without ray-marching.

Novel View Synthesis

CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis

5 code implementations 25 Mar 2022 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong

To democratize this, we train and release a family of large language models up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open source the training library JAXFORMER.

Code Generation Language Modelling +2

Dual Lottery Ticket Hypothesis

1 code implementation ICLR 2022 Yue Bai, Huan Wang, Zhiqiang Tao, Kunpeng Li, Yun Fu

In this work, we regard the winning ticket from LTH as the subnetwork which is in trainable condition and its performance as our benchmark, then go from a complementary direction to articulate the Dual Lottery Ticket Hypothesis (DLTH): Randomly selected subnetworks from a randomly initialized dense network can be transformed into a trainable condition and achieve admirable performance compared with LTH -- random tickets in a given lottery pool can be transformed into winning tickets.

Efficient and Differentiable Conformal Prediction with General Function Classes

1 code implementation ICLR 2022 Yu Bai, Song Mei, Huan Wang, Yingbo Zhou, Caiming Xiong

Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly over existing approaches in several applications such as prediction intervals with improved length, minimum-volume prediction sets for multi-output regression, and label prediction sets for image classification.

Conformal Prediction Image Classification +2

Semi-supervised Domain Adaptive Structure Learning

1 code implementation 12 Dec 2021 Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Yun Fu

Semi-supervised domain adaptation (SSDA) is quite a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.

Domain Adaptation Representation Learning +1

Aligned Structured Sparsity Learning for Efficient Image Super-Resolution

1 code implementation NeurIPS 2021 Yulun Zhang, Huan Wang, Can Qin, Yun Fu

To address the above issues, we propose aligned structured sparsity learning (ASSL), which introduces a weight normalization layer and applies $L_2$ regularization to the scale parameters for sparsity.

Image Super-Resolution Knowledge Distillation +3

Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation

1 code implementation NeurIPS 2021 Can Qin, Handong Zhao, Lichen Wang, Huan Wang, Yulun Zhang, Yun Fu

For slow learning of graph similarity, this paper proposes a novel early-fusion approach by designing a co-attention-based feature fusion network on multilevel GNN features.

Anomaly Detection Graph Similarity +3

Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization

1 code implementation 21 Oct 2021 Devansh Arpit, Huan Wang, Yingbo Zhou, Caiming Xiong

We first show that this chaotic behavior exists even along the training optimization trajectory of a single model, and propose a simple model averaging protocol that both significantly boosts domain generalization and diminishes the impact of stochasticity by improving the rank correlation between the in-domain validation accuracy and out-domain test accuracy, which is crucial for reliable early stopping.

Domain Generalization Model Selection
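The averaging protocol described here amounts to a uniform average of parameter snapshots collected along one training trajectory. A minimal sketch (plain dicts stand in for real checkpoints; this is my own illustration, not the authors' implementation):

```python
def average_checkpoints(checkpoints):
    """Uniform average of parameter dicts saved along one training run."""
    keys = checkpoints[0].keys()
    return {k: sum(c[k] for c in checkpoints) / len(checkpoints) for k in keys}

# Toy "checkpoints": two parameter snapshots from the same run.
ckpts = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]
assert average_checkpoints(ckpts) == {"w": 2.0, "b": 1.0}
```

With real models the same average would be taken tensor-by-tensor over `state_dict`-style checkpoints; the point of the paper is that this simple average both boosts out-of-domain accuracy and stabilizes model selection.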

Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent Space Distribution Matching in WAE

no code implementations 19 Oct 2021 Devansh Arpit, Aadyot Bhatnagar, Huan Wang, Caiming Xiong

Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution.

Contrastive Learning Representation Learning

Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles

no code implementations 19 Oct 2021 Bram Wallace, Devansh Arpit, Huan Wang, Caiming Xiong

Pretraining convolutional neural networks via self-supervision, and applying them in transfer learning, is an incredibly fast-growing field that is rapidly and iteratively improving performance across practically all image domains.

Transfer Learning

Continuous Conditional Random Field Convolution for Point Cloud Segmentation

1 code implementation 12 Oct 2021 Fei Yang, Franck Davoine, Huan Wang, Zhong Jin

Furthermore, we build an encoder-decoder network based on the proposed continuous CRF graph convolution (CRFConv), in which the CRFConv embedded in the decoding layers can restore the details of high-level features that were lost in the encoding stage to enhance the location ability of the network, thereby benefiting segmentation.

Image Segmentation Point Cloud Segmentation +2

Rethinking Again the Value of Network Pruning -- A Dynamical Isometry Perspective

no code implementations 29 Sep 2021 Huan Wang, Can Qin, Yue Bai, Yun Fu

Several recent works questioned the value of inheriting weight in structured neural network pruning because they empirically found training from scratch can match or even outperform finetuning a pruned model.

Network Pruning

Structured Pruning Meets Orthogonality

no code implementations 29 Sep 2021 Huan Wang, Yun Fu

In this paper, we present \emph{orthogonality preserving pruning} (OPP), a regularization-based structured pruning method that maintains the dynamical isometry during pruning.

Network Pruning

Understanding the Success of Knowledge Distillation -- A Data Augmentation Perspective

no code implementations 29 Sep 2021 Huan Wang, Suhas Lohit, Michael Jeffrey Jones, Yun Fu

We achieve new state-of-the-art accuracy by using the original KD loss armed with stronger augmentation schemes, compared to existing state-of-the-art methods that employ more advanced distillation losses.

Active Learning Data Augmentation +1

Multi-Tensor Network Representation for High-Order Tensor Completion

no code implementations 9 Sep 2021 Chang Nie, Huan Wang, Zhihui Lai

In particular, each component can be represented as multilinear connections over several latent factors and naturally mapped to a specific tensor network (TN) topology.

Tensor Decomposition Vocal Bursts Intensity Prediction

WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU

3 code implementations 31 Aug 2021 Tian Lan, Sunil Srinivasa, Huan Wang, Stephan Zheng

We present WarpDrive, a flexible, lightweight, and easy-to-use open-source RL framework that implements end-to-end deep multi-agent RL on a single GPU (Graphics Processing Unit), built on PyCUDA and PyTorch.

Decision Making Multi-agent Reinforcement Learning +2

Rethinking Adam: A Twofold Exponential Moving Average Approach

1 code implementation 22 Jun 2021 Yizhou Wang, Yue Kang, Can Qin, Huan Wang, Yi Xu, Yulun Zhang, Yun Fu

The intuition is that gradient with momentum contains more accurate directional information and therefore its second moment estimation is a more favorable option for learning rate scaling than that of the raw gradient.

Understanding the Under-Coverage Bias in Uncertainty Estimation

no code implementations NeurIPS 2021 Yu Bai, Song Mei, Huan Wang, Caiming Xiong

Estimating the data uncertainty in regression tasks is often done by learning a quantile function or a prediction interval of the true label conditioned on the input.

regression

Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning

no code implementations NeurIPS 2021 Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai

This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL.

Offline RL Open-Ended Question Answering +2

Evaluating State-of-the-Art Classification Models Against Bayes Optimality

1 code implementation NeurIPS 2021 Ryan Theisen, Huan Wang, Lav R. Varshney, Caiming Xiong, Richard Socher

Moreover, we show that by varying the temperature of the learned flow models, we can generate synthetic datasets that closely resemble standard benchmark datasets, but with almost any desired Bayes error.

Dynamical Isometry: The Missing Ingredient for Neural Network Pruning

no code implementations 12 May 2021 Huan Wang, Can Qin, Yue Bai, Yun Fu

This paper is meant to explain it through the lens of dynamical isometry [42].

Network Pruning

Recent Advances on Neural Network Pruning at Initialization

2 code implementations 11 Mar 2021 Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu

Neural network pruning typically removes connections or neurons from a pretrained converged model, while a new pruning paradigm, pruning at initialization (PaI), attempts to prune a randomly initialized network.

Benchmarking Network Pruning

Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games

no code implementations NeurIPS 2021 Yu Bai, Chi Jin, Huan Wang, Caiming Xiong

Real world applications such as economics and policy making often involve solving multi-agent games with two unique features: (1) The agents are inherently asymmetric and partitioned into leaders and followers; (2) The agents have different reward functions, thus the game is general-sum.

Local Calibration: Metrics and Recalibration

no code implementations 22 Feb 2021 Rachel Luo, Aadyot Bhatnagar, Yu Bai, Shengjia Zhao, Huan Wang, Caiming Xiong, Silvio Savarese, Stefano Ermon, Edward Schmerling, Marco Pavone

In this work, we propose the local calibration error (LCE) to span the gap between average and individual reliability.

Decision Making Fairness

Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification

no code implementations 15 Feb 2021 Yu Bai, Song Mei, Huan Wang, Caiming Xiong

Modern machine learning models with high accuracy are often miscalibrated -- the predicted top probability does not reflect the actual accuracy, and tends to be over-confident.

Binary Classification

Automatic Segmentation of Organs-at-Risk from Head-and-Neck CT using Separable Convolutional Neural Network with Hard-Region-Weighted Loss

1 code implementation 3 Feb 2021 Wenhui Lei, Haochen Mei, Zhengwentai Sun, Shan Ye, Ran Gu, Huan Wang, Rui Huang, Shichuan Zhang, Shaoting Zhang, Guotai Wang

Despite the state-of-the-art performance achieved by Convolutional Neural Networks (CNNs) for automatic segmentation of OARs, existing methods do not provide uncertainty estimation of the segmentation results for treatment planning, and their accuracy is still limited by several factors, including the low contrast of soft tissues in CT, highly imbalanced sizes of OARs, and large inter-slice spacing.

Computed Tomography (CT) Segmentation

Use or Misuse of NMR to Test Molecular Mobility during Chemical Reaction

no code implementations 28 Jan 2021 Huan Wang, Tian Huang, Steve Granick

With raw NMR spectra available in a public depository, we confirm boosted mobility during the click chemical reaction (Science 2020, 369, 537) regardless of the order of magnetic field gradient (linearly-increasing, linearly-decreasing, random sequence).

Soft Condensed Matter

Improved Uncertainty Post-Calibration via Rank Preserving Transforms

no code implementations 1 Jan 2021 Yu Bai, Tengyu Ma, Huan Wang, Caiming Xiong

In this paper, we propose Neural Rank Preserving Transforms (NRPT), a new post-calibration method that adjusts the output probabilities of a trained classifier using a calibrator of higher capacity, while maintaining its prediction accuracy.

text-classification Text Classification

Momentum Contrastive Autoencoder

no code implementations 1 Jan 2021 Devansh Arpit, Aadyot Bhatnagar, Huan Wang, Caiming Xiong

Quantitatively, we show that our algorithm achieves a new state-of-the-art FID of 54.36 on CIFAR-10, and performs competitively with existing models on CelebA in terms of FID score.

Contrastive Learning Representation Learning

Neural Bayes: A Generic Parameterization Method for Unsupervised Learning

no code implementations 1 Jan 2021 Devansh Arpit, Huan Wang, Caiming Xiong, Richard Socher, Yoshua Bengio

Disjoint Manifold Separation: Neural Bayes allows us to formulate an objective which can optimally label samples from disjoint manifolds present in the support of a continuous distribution.

Clustering Representation Learning

Context Reasoning Attention Network for Image Super-Resolution

no code implementations ICCV 2021 Yulun Zhang, Donglai Wei, Can Qin, Huan Wang, Hanspeter Pfister, Yun Fu

However, the basic convolutional layer in CNNs is designed to extract local patterns, lacking the ability to model global context.

Image Super-Resolution

Szegő kernel asymptotics on some non-compact complete CR manifolds

no code implementations 21 Dec 2020 Chin-Yu Hsiao, George Marinescu, Huan Wang

We establish Szegő kernel asymptotic expansions on non-compact strictly pseudoconvex complete CR manifolds with transversal CR $\mathbb{R}$-action under certain natural geometric conditions.

Complex Variables Differential Geometry

An Event Correlation Filtering Method for Fake News Detection

no code implementations 10 Dec 2020 Hao Li, Huan Wang, Guanghua Liu

To improve the detection performance of fake news, we take advantage of the event correlations of news and propose an event correlation filtering method (ECFM) for fake news detection, mainly consisting of the news characterizer, the pseudo label annotator, the event credibility updater, and the news entropy selector.

Fake News Detection Pseudo Label

Multi-head Knowledge Distillation for Model Compression

no code implementations 5 Dec 2020 Huan Wang, Suhas Lohit, Michael Jones, Yun Fu

We add loss terms for training the student that measure the dissimilarity between student and teacher outputs of the auxiliary classifiers.

Image Classification Knowledge Distillation +1

Unsupervised Paraphrasing with Pretrained Language Models

no code implementations EMNLP 2021 Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong

To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token for the next generation step.

Blocking Language Modelling +3
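The blocking rule described here, suppressing the source token that immediately follows the one just emitted, can be sketched as follows (my own simplification of Dynamic Blocking; `blocked_next_tokens` is a hypothetical helper operating on token lists):

```python
def blocked_next_tokens(source, generated):
    """Tokens to suppress at the next decoding step: if the last generated
    token occurs in the source sequence, block every token that immediately
    follows one of its occurrences there."""
    if not generated:
        return set()
    last = generated[-1]
    return {source[i + 1] for i, tok in enumerate(source[:-1]) if tok == last}

source = ["the", "cat", "sat", "on", "the", "mat"]
# After emitting "the", the decoder may not copy "cat" or "mat" next.
assert blocked_next_tokens(source, ["a", "the"]) == {"cat", "mat"}
assert blocked_next_tokens(source, ["dog"]) == set()
```

In a real decoder the returned set would be mapped to vocabulary ids whose logits are masked out before sampling the next token.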

How Important is the Train-Validation Split in Meta-Learning?

no code implementations 12 Oct 2020 Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham Kakade, Huan Wang, Caiming Xiong

A common practice in meta-learning is to perform a train-validation split (\emph{train-val method}) where the prior adapts to the task on one split of the data, and the resulting predictor is evaluated on another split.

Meta-Learning

Towards Understanding Hierarchical Learning: Benefits of Neural Representations

no code implementations NeurIPS 2020 Minshuo Chen, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, Richard Socher

When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that neural representation can achieve improved sample complexities compared with the raw input: For learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimension, neural representation requires only $\tilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\tilde{O}(d^{p-1})$.

Collaborative Distillation for Ultra-Resolution Universal Style Transfer

1 code implementation CVPR 2020 Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang

In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer to reduce the convolutional filters.

Knowledge Distillation Style Transfer

Neural Bayes: A Generic Parameterization Method for Unsupervised Representation Learning

1 code implementation 20 Feb 2020 Devansh Arpit, Huan Wang, Caiming Xiong, Richard Socher, Yoshua Bengio

Disjoint Manifold Labeling: Neural Bayes allows us to formulate an objective which can optimally label samples from disjoint manifolds present in the support of a continuous distribution.

Clustering Representation Learning

Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width

no code implementations 10 Feb 2020 Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, Richard Socher

We propose \emph{Taylorized training} as an initiative towards better understanding neural network training at finite width.

Contradictory Structure Learning for Semi-supervised Domain Adaptation

1 code implementation 6 Feb 2020 Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Yun Fu

Current adversarial adaptation methods attempt to align the cross-domain features, whereas two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.

Clustering Domain Adaptation +1

Global Capacity Measures for Deep ReLU Networks via Path Sampling

no code implementations 22 Oct 2019 Ryan Theisen, Jason M. Klusowski, Huan Wang, Nitish Shirish Keskar, Caiming Xiong, Richard Socher

Classical results on the statistical complexity of linear models have commonly identified the norm of the weights $\|w\|$ as a fundamental capacity measure.

Generalization Bounds Multi-class Classification

Miss Detection vs. False Alarm: Adversarial Learning for Small Object Segmentation in Infrared Images

no code implementations ICCV 2019 Huan Wang, Luping Zhou, Lei Wang

The adversarial training of the two models naturally produces a delicate balance of miss detection (MD) and false alarm (FA), and low rates for both could be achieved at Nash equilibrium.
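For concreteness, the two error rates being balanced are the miss-detection and false-alarm rates of a binary mask (a plain numpy sketch of the metrics, not the paper's adversarial models):

```python
import numpy as np

def md_fa_rates(pred, gt):
    """Miss-detection rate (positives predicted negative) and
    false-alarm rate (negatives predicted positive) for binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    md = np.sum(gt & ~pred) / max(gt.sum(), 1)
    fa = np.sum(~gt & pred) / max((~gt).sum(), 1)
    return float(md), float(fa)

# Toy 1-D "masks": ground truth has 4 positive pixels
gt   = np.array([0, 1, 1, 1, 1, 0, 0, 0], bool)
pred = np.array([0, 1, 1, 0, 0, 1, 0, 0], bool)
md, fa = md_fa_rates(pred, gt)
print(md, fa)   # 2 of 4 positives missed (0.5), 1 of 4 negatives flagged (0.25)
```

Driving either rate to zero alone is trivial (predict everything, or predict nothing); the adversarial setup penalizes both at once.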

Generative Adversarial Network Segmentation +1

On the Generalization Gap in Reparameterizable Reinforcement Learning

no code implementations 29 May 2019 Huan Wang, Stephan Zheng, Caiming Xiong, Richard Socher

For this problem class, estimating the expected return is efficient and the trajectory can be computed deterministically given peripheral random variables, which enables us to study reparameterizable RL using supervised learning and transfer learning theory.
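The reparameterization the abstract relies on can be illustrated with a one-dimensional Gaussian "policy": once the action is written as a deterministic function of the parameters and a peripheral noise variable, the expected return and its gradient can be estimated like a supervised objective (everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_return_and_grad(mu, sigma, n_samples=100_000):
    """Monte-Carlo estimate of E[r(a)] and d/d mu E[r(a)] for a = mu + sigma*eps."""
    eps = rng.normal(size=n_samples)       # "peripheral" random variables
    a = mu + sigma * eps                   # action is deterministic given eps
    reward = -(a - 2.0) ** 2               # toy quadratic reward, peak at a = 2
    # Pathwise gradient: d reward / d mu = -2 (a - 2) since d a / d mu = 1
    grad_mu = np.mean(-2.0 * (a - 2.0))
    return float(np.mean(reward)), float(grad_mu)

ret, g = expected_return_and_grad(mu=0.0, sigma=0.5)
print(ret, g)   # gradient points toward mu = 2
```

With mu = 0 and sigma = 0.5, the true values are E[r] = -(sigma^2 + 4) = -4.25 and grad = 4, which the pathwise estimator recovers with low variance.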

Learning Theory reinforcement-learning +2

Triplet Distillation for Deep Face Recognition

1 code implementation 11 May 2019 Yushu Feng, Huan Wang, Daniel T. Yi, Roland Hu

Convolutional neural networks (CNNs) have achieved great success in face recognition, which unfortunately comes at the cost of massive computation and storage consumption.

Face Recognition

Multi-Task Learning for Semantic Parsing with Cross-Domain Sketch

no code implementations ICLR 2019 Huan Wang, Yuxiang Hu, Li Dong, Feijun Jiang, Zaiqing Nie

Semantic parsing, which maps a natural language sentence into a formal machine-readable representation of its meaning, is highly constrained by the limited annotated training data.

Multi-Task Learning Semantic Parsing +1

Structured Pruning for Efficient ConvNets via Incremental Regularization

no code implementations NIPS Workshop CDNNRIA 2018 Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss.
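A toy sketch of the incremental idea: the penalty on filters judged unimportant grows over time, driving their norms toward zero before they are finally removed (illustrative numpy with made-up constants, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "conv layer": 8 filters of 10 weights; the last 4 start near zero.
W = rng.normal(size=(8, 10))
W[4:] *= 0.05

lam = np.zeros(8)                        # per-filter regularization strength
for step in range(100):
    norms = np.linalg.norm(W, axis=1)
    weak = np.argsort(norms)[:4]         # rank filters by L2 importance
    lam[weak] += 0.05                    # incrementally grow their penalty
    # Proximal step of the per-filter L2 penalty: shrink toward zero
    W *= np.maximum(0.0, 1.0 - 0.05 * lam)[:, None]

pruned = np.linalg.norm(W, axis=1) < 1e-3
print(int(pruned.sum()))                 # the 4 weak filters end up pruned
```

The gradual ramp is the point: important filters never accumulate a penalty, so their weights are left intact while the weak ones fade out smoothly instead of being cut abruptly.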

Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method

no code implementations NIPS Workshop CDNNRIA 2018 Yuxin Zhang, Huan Wang, Yang Luo, Lu Yu, Haoji Hu, Hangguan Shan, Tony Q. S. Quek

Despite enjoying extensive applications in video analysis, three-dimensional convolutional neural networks (3D CNNs) are restricted by their massive computation and storage consumption.

Model Compression Network Pruning

Shubnikov-de Haas and de Haas-van Alphen oscillations in topological semimetal CaAl4

no code implementations 15 Nov 2018 Sheng Xu, Jian-Feng Zhang, Yi-Yan Wang, Lin-Lin Sun, Huan Wang, Yuan Su, Xiao-Yan Wang, Kai Liu, Tian-Long Xia

An electron-type quasi-2D Fermi surface is found by the angle-dependent Shubnikov-de Haas oscillations, de Haas-van Alphen oscillations and the first-principles calculations.

Materials Science Mesoscale and Nanoscale Physics

Identifying Generalization Properties in Neural Networks

no code implementations ICLR 2019 Huan Wang, Nitish Shirish Keskar, Caiming Xiong, Richard Socher

In particular, we prove that model generalization ability is related to the Hessian, the higher-order "smoothness" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters.
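One common proxy for the Hessian-based "smoothness" the abstract refers to is the largest Hessian eigenvalue of the loss at the trained parameters. A toy illustration on a quadratic loss, where the Hessian is known exactly (illustrative only, not the paper's metric):

```python
import numpy as np

# Quadratic toy loss L(w) = 0.5 * w^T A w, whose Hessian is exactly A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])

def loss(w):
    return 0.5 * w @ A @ w

def numerical_hessian(f, w, h=1e-4):
    """Second-order central differences; exact (up to rounding) for quadratics."""
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i) - f(w + e_j) + f(w)) / h**2
    return H

w0 = np.array([0.5, -1.0])
H = numerical_hessian(loss, w0)
sharpness = np.linalg.eigvalsh(H)[-1]    # largest eigenvalue as a sharpness proxy
print(sharpness)                         # (5 + sqrt(5)) / 2 for this A
```

Flatter minima (smaller top eigenvalue) are the regime such Hessian-based generalization bounds favor.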

Structured Pruning for Efficient ConvNets via Incremental Regularization

1 code implementation 25 Apr 2018 Huan Wang, Qiming Zhang, Yuehai Wang, Yu Lu, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance degradation.

Network Pruning

Adaptive Dropout with Rademacher Complexity Regularization

no code implementations ICLR 2018 Ke Zhai, Huan Wang

We propose a novel framework to adaptively adjust the dropout rates for the deep neural network based on a Rademacher complexity bound.
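Only the complexity side of that trade-off is sketched below: keep probabilities shrink in proportion to a Rademacher-style penalty on each unit's weight norm. The data-fit gradient that the real method balances against is omitted, and all constants are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(16, 8))             # toy layer, one weight vector per unit
unit_norms = np.linalg.norm(W, axis=1)

p = np.full(16, 0.9)                     # per-unit keep (retain) probabilities
lr, c = 0.02, 0.5                        # step size and penalty weight
for _ in range(5):
    # Rademacher-style penalty ~ sum_i p_i * ||w_i||: its gradient in p_i is
    # c * ||w_i||, so high-norm units have their keep probability cut faster.
    # (The omitted data-fit term would push useful units back up.)
    p = np.clip(p - lr * c * unit_norms, 0.05, 0.995)

i_hi, i_lo = np.argmax(unit_norms), np.argmin(unit_norms)
print(p[i_hi], p[i_lo])                  # high-capacity unit is kept less often
```

The net effect is a dropout rate per unit that tracks how much capacity the unit contributes to the complexity bound.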

Document Classification

Structured Probabilistic Pruning for Convolutional Neural Network Acceleration

2 code implementations 20 Sep 2017 Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, Structured Probabilistic Pruning (SPP) introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities.
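A minimal sketch of the sampling idea, with a made-up magnitude-based rule for updating the probabilities (the paper's actual update tracks importance over training):

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=100)                 # toy weights
prune_prob = np.full(100, 0.5)           # pruning probability per weight

for step in range(200):
    # Sample a mask: weight i is pruned this step with probability prune_prob[i]
    mask = rng.random(100) >= prune_prob
    # (a training step with the masked weights w * mask would happen here)
    # Nudge probabilities: low-magnitude weights become more likely to be pruned
    rank = np.argsort(np.argsort(np.abs(w)))   # 0 = smallest magnitude
    target = 1.0 - rank / 99.0                 # small weight -> high prune prob
    prune_prob += 0.05 * (target - prune_prob)
    prune_prob = np.clip(prune_prob, 0.0, 1.0)

i_big, i_small = np.argmax(np.abs(w)), np.argmin(np.abs(w))
print(prune_prob[i_big], prune_prob[i_small])
```

Because no weight is removed permanently, a weight that was masked early can still return if its probability drifts back down; that reversibility is the contrast with deterministic pruning.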

Transfer Learning

Exploiting Color Name Space for Salient Object Detection

no code implementations 27 Mar 2017 Jing Lou, Huan Wang, Longtao Chen, Fenglei Xu, Qingyuan Xia, Wei Zhu, Mingwu Ren

In this paper, we investigate the contribution of color names to the task of salient object detection.

Object object-detection +2

A Batchwise Monotone Algorithm for Dictionary Learning

no code implementations 31 Jan 2015 Huan Wang, John Wright, Daniel Spielman

Unlike the state-of-the-art dictionary learning algorithms which impose sparsity constraints on a sample-by-sample basis, we instead treat the samples as a batch, and impose the sparsity constraint on the batch as a whole.
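The batch-versus-per-sample distinction can be sketched as a global top-k constraint on the entire code matrix, inside a simple alternating scheme (illustrative, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

dim, n_atoms, n_samples, k_total = 10, 20, 50, 150
D = rng.normal(size=(dim, n_atoms))      # dictionary: one atom per column
X = rng.normal(size=(dim, n_samples))    # batch of samples, one per column

def batch_sparse_codes(D, X, k_total):
    """Keep the k_total largest-magnitude coefficients of the WHOLE code
    matrix (batchwise sparsity) rather than a fixed number per sample."""
    Z = np.linalg.lstsq(D, X, rcond=None)[0]
    thresh = np.sort(np.abs(Z), axis=None)[-k_total]
    return np.where(np.abs(Z) >= thresh, Z, 0.0)

errs = []
for _ in range(10):
    Z = batch_sparse_codes(D, X, k_total)
    errs.append(float(np.linalg.norm(X - D @ Z)))
    # Dictionary update: least-squares fit of D to the current sparse codes
    D = np.linalg.lstsq(Z.T, X.T, rcond=None)[0].T

counts = np.count_nonzero(Z, axis=0)     # nonzeros allocated to each sample
print(counts.min(), counts.max())
```

Note the consequence of the batchwise constraint: the number of nonzero coefficients varies across samples, with hard-to-represent samples claiming more of the global budget, which a per-sample constraint cannot do.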

Dictionary Learning
