no code implementations • 28 Apr 2025 • Dandan Chen Kaptur, Yue Huang, Xuejun Ryan Ji, Yanhui Guo, Bradley Kaptur
This study examined GPT-4 and Kimi, two Large Language Models (LLMs), for use in systematic reviews.
no code implementations • 31 Mar 2025 • Zhongnan Cai, Yingying Wang, Yunlong Lin, Hui Zheng, Ge Meng, Zixu Lin, Jiaxin Xie, Junbin Lu, Yue Huang, Xinghao Ding
To address these challenges, we propose Pan-LUT, a novel learnable look-up table (LUT) framework for pan-sharpening that strikes a balance between performance and computational efficiency for high-resolution remote sensing images.
1 code implementation • 24 Mar 2025 • Luyao Tang, Yuxuan Yuan, Chaoqi Chen, Zeyu Zhang, Yue Huang, Kun Zhang
Although foundation models (FMs) are touted as powerful, their generalization ability degrades significantly when faced with distribution shifts, weak supervision, or malicious attacks in the open world.
no code implementations • 19 Mar 2025 • Qihui Zhang, Munan Ning, Zheyuan Liu, Yanbo Wang, Jiayi Ye, Yue Huang, Shuo Yang, Xiao Chen, Yibing Song, Li Yuan
Multimodal Large Language Models (MLLMs) have emerged to tackle the challenges of Visual Question Answering (VQA), sparking a new research focus on conducting objective evaluations of these models.
1 code implementation • 5 Mar 2025 • Linyu Fan, Che Wang, Ming Ye, Qizhi Yang, Zejun Wu, Xinghao Ding, Yue Huang, Jianfeng Bao, Shuhui Cai, Congbo Cai
Data-centric artificial intelligence (AI) has remarkably advanced medical imaging, with emerging methods using synthetic data to address data scarcity while introducing synthetic-to-real gaps.
no code implementations • 19 Feb 2025 • Yicheng Lang, Kehan Guo, Yue Huang, Yujun Zhou, Haomin Zhuang, Tianyu Yang, Yao Su, Xiangliang Zhang
Due to the widespread use of LLMs and the rising critical ethical and safety concerns, LLM unlearning methods have been developed to remove harmful knowledge and undesirable capabilities.
no code implementations • 16 Feb 2025 • Jiaxiang Wang, Haote Xu, Xiaolu Chen, Haodi Xu, Yue Huang, Xinghao Ding, Xiaotong Tu
Additionally, based on the characteristics of point cloud data, we propose a pseudo 3D anomaly generation method (Ano3D) to improve the model's detection capabilities in an unsupervised setting.
1 code implementation • 3 Feb 2025 • Dawei Li, Renliang Sun, Yue Huang, Ming Zhong, Bohan Jiang, Jiawei Han, Xiangliang Zhang, Wei Wang, Huan Liu
All of these findings imply that preference leakage is a widespread and challenging problem in the area of LLM-as-a-judge.
no code implementations • 21 Dec 2024 • Jieyi Wang, Yue Huang, Zeming Liu, Dexuan Xu, Chuan Wang, Xiaoming Shi, Ruiyuan Guan, Hongxing Wang, Weihua Yue, Yu Huang
Online psychological counseling dialogue systems are trending, offering a convenient and accessible alternative to traditional in-person therapy.
no code implementations • 26 Nov 2024 • Dongping Chen, Ruoxi Chen, Shu Pu, Zhaoyi Liu, Yanru Wu, Caixi Chen, Benlin Liu, Yue Huang, Yao Wan, Pan Zhou, Ranjay Krishna
While compositional approaches that combine separate language and image models show a 111% improvement over unified models at the holistic level, their performance remains suboptimal at both block and image levels.
no code implementations • 30 Oct 2024 • Yue Huang, Zhengqing Yuan, Yujun Zhou, Kehan Guo, Xiangqi Wang, Haomin Zhuang, Weixiang Sun, Lichao Sun, Jindong Wang, Yanfang Ye, Xiangliang Zhang
To address this, we introduce TrustSim, an evaluation dataset covering 10 CSS-related topics, to systematically investigate the reliability of the LLM simulation.
1 code implementation • 28 Oct 2024 • Han Bao, Yue Huang, Yanbo Wang, Jiayi Ye, Xiangqi Wang, Xiuying Chen, Yue Zhao, Tianyi Zhou, Mohamed Elhoseiny, Xiangliang Zhang
Large Vision-Language Models (LVLMs) have become essential for advancing the integration of visual and linguistic information.
no code implementations • 17 Oct 2024 • Yue Huang, Zhaoxian Wu, Shiqian Ma, Qing Ling
Stochastic approximation (SA) that involves multiple coupled sequences, known as multiple-sequence SA (MSSA), finds diverse applications in the fields of signal processing and machine learning.
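The coupled-sequence structure of MSSA can be illustrated with a toy two-sequence scheme on different timescales. This is a generic sketch only, not the paper's algorithm; the mean fields, step-size schedules, and noise level below are made up for illustration:

```python
import numpy as np

def two_sequence_sa(steps=5000, noise=0.1, seed=0):
    """Toy multiple-sequence SA: two coupled scalar iterates driven by
    noisy mean fields, updated with different step-size schedules.
    The unique fixed point of the noiseless dynamics is (0, 0)."""
    rng = np.random.default_rng(seed)
    x, y = 1.0, 1.0
    for k in range(1, steps + 1):
        a_k = 1.0 / k          # slowly-updated sequence
        b_k = 1.0 / k ** 0.6   # faster timescale: y tracks x quickly
        x += a_k * (y - x + noise * rng.standard_normal())
        y += b_k * (x / 2.0 - y + noise * rng.standard_normal())
    return x, y
```

With diminishing step sizes both iterates drift toward the joint fixed point, which is the behavior MSSA analyses make precise.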
no code implementations • 3 Oct 2024 • Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, Nitesh V Chawla, Xiangliang Zhang
LLM-as-a-Judge has been widely utilized as an evaluation method in various benchmarks and serves as a source of supervised reward signals in model training.
no code implementations • 27 Sep 2024 • Yunlong Lin, Zhenqi Fu, Kairun Wen, Tian Ye, Sixiang Chen, Ge Meng, Yingying Wang, Yue Huang, Xiaotong Tu, Xinghao Ding
As diffusion models are sensitive to noise, diffusion priors are introduced to achieve high-performance noise suppression.
1 code implementation • 29 Aug 2024 • Luyao Tang, Yuxuan Yuan, Chaoqi Chen, Kunze Huang, Xinghao Ding, Yue Huang
Foundation models have made incredible strides in achieving zero-shot or few-shot generalization, leveraging prompt engineering to mimic the problem-solving approach of human intelligence.
1 code implementation • 7 Aug 2024 • Luyao Tang, Yuxuan Yuan, Chaoqi Chen, Xinghao Ding, Yue Huang
In this paper, we propose a novel and holistic framework based on causality, named InPer, designed to enhance model generalization by incorporating causal intervention during training and causal perturbation during testing.
no code implementations • 23 Jul 2024 • Yuanwei Wu, Yue Huang, Yixin Liu, Xiang Li, Pan Zhou, Lichao Sun
In our study, we introduce AutoJailbreak, an innovative automatic jailbreak technique inspired by prompt optimization.
no code implementations • 10 Jul 2024 • Zhenyu Kuang, Hongyang Zhang, Lidong Cheng, Yinhao Liu, Yue Huang, Xinghao Ding
To solve this complex and common problem, this paper proposes the two-stage Multi-expert Knowledge Confrontation and Collaboration (MiKeCoCo) method, which incorporates multiple experts with unique perspectives into Contrastive Language-Image Pretraining (CLIP) and fully leverages high-level semantic knowledge for comprehensive feature representation.
1 code implementation • 27 Jun 2024 • Siyuan Wu, Yue Huang, Chujie Gao, Dongping Chen, Qihui Zhang, Yao Wan, Tianyi Zhou, Xiangliang Zhang, Jianfeng Gao, Chaowei Xiao, Lichao Sun
Large Language Models (LLMs) such as GPT-4 and Llama3 have significantly impacted various fields by enabling high-quality synthetic data generation and reducing dependence on expensive human-generated datasets.
no code implementations • 25 Jun 2024 • Yuan Li, Yue Huang, Hongyi Wang, Xiangliang Zhang, James Zou, Lichao Sun
Inspired by psychometrics, this paper presents a framework for investigating psychology in LLMs, including psychological dimension identification, assessment dataset curation, and assessment with results validation.
no code implementations • 20 Jun 2024 • Yue Huang, Chenrui Fan, Yuan Li, Siyuan Wu, Tianyi Zhou, Xiangliang Zhang, Lichao Sun
This paper introduces a method to enhance the multilingual performance of LLMs by aggregating knowledge from diverse languages.
1 code implementation • 19 Jun 2024 • Yue Huang, Jingyu Tang, Dongping Chen, Bingda Tang, Yao Wan, Lichao Sun, Philip S. Yu, Xiangliang Zhang
Recently, Large Language Models (LLMs) have garnered significant attention for their exceptional natural language processing capabilities.
1 code implementation • 16 Jun 2024 • Dongping Chen, Yue Huang, Siyuan Wu, Jingyu Tang, Liuyi Chen, Yilin Bai, Zhigang He, Chenlong Wang, Huichi Zhou, Yiqiang Li, Tianshuo Zhou, Yue Yu, Chujie Gao, Qihui Zhang, Yi Gui, Zhen Li, Yao Wan, Pan Zhou, Jianfeng Gao, Lichao Sun
We evaluate the capabilities of current state-of-the-art MLLMs, including Image LLMs and Video LLMs, in understanding various types of GUI content, especially dynamic and sequential content.
1 code implementation • 1 Jun 2024 • Chujie Gao, Siyuan Wu, Yue Huang, Dongping Chen, Qihui Zhang, Zhengyan Fu, Yao Wan, Lichao Sun, Xiangliang Zhang
Subsequently, we present two approaches to augmenting honesty and helpfulness in LLMs: a training-free enhancement and a fine-tuning-based improvement.
1 code implementation • 26 Mar 2024 • Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong
In this work, we propose JudgeDeceiver, an optimization-based prompt injection attack to LLM-as-a-Judge.
1 code implementation • 27 Feb 2024 • Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, Lifang He, Lichao Sun
Sora is a text-to-video generative AI model, released by OpenAI in February 2024.
2 code implementations • 31 Jan 2024 • Yuan Li, Yue Huang, Yuli Lin, Siyuan Wu, Yao Wan, Lichao Sun
Do large language models (LLMs) exhibit any forms of awareness similar to humans?
2 code implementations • 11 Jan 2024 • Qihui Zhang, Chujie Gao, Dongping Chen, Yue Huang, Yixin Huang, Zhenyang Sun, Shilin Zhang, Weiye Li, Zhengyan Fu, Yao Wan, Lichao Sun
With the rapid development and widespread application of Large Language Models (LLMs), the use of Machine-Generated Text (MGT) has become increasingly common, bringing with it potential risks, especially in terms of quality and integrity in fields like news, education, and science.
Ranked #2 on Binary text classification on MixSet (Binary)
2 code implementations • 10 Jan 2024 • Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao liu, Heng Ji, Hongyi Wang, huan zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions.
1 code implementation • 30 Nov 2023 • Xiao Liu, Xuanyu Lei, Shengyuan Wang, Yue Huang, Zhuoer Feng, Bosi Wen, Jiale Cheng, Pei Ke, Yifan Xu, Weng Lam Tam, Xiaohan Zhang, Lichao Sun, Xiaotao Gu, Hongning Wang, Jing Zhang, Minlie Huang, Yuxiao Dong, Jie Tang
Alignment has become a critical step for instruction-tuned Large Language Models (LLMs) to become helpful assistants.
no code implementations • 8 Oct 2023 • Yue Huang, Lichao Sun
The rampant spread of fake news has adversely affected society, resulting in extensive research on curbing its spread.
no code implementations • ICCV 2023 • Chaoqi Chen, Luyao Tang, Leitian Tao, Hong-Yu Zhou, Yue Huang, Xiaoguang Han, Yizhou Yu
Despite notable performance on in-domain test points, it is non-trivial for deep neural networks to attain satisfactory accuracy when deployed in the open world, where novel domains and object classes often occur.
1 code implementation • 4 Oct 2023 • Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun
However, in scenarios where LLMs serve as intelligent agents, as seen in applications like AutoGPT and MetaGPT, LLMs are expected to engage in intricate decision-making processes that involve deciding whether to employ a tool and selecting the most suitable tool(s) from a collection of available tools to fulfill user requests.
1 code implementation • 8 Sep 2023 • JinYuan Wang, Hai Zhao, Zhong Wang, Zeyang Zhu, Jinhao Xie, Yong Yu, Yongjian Fei, Yue Huang, Dawei Cheng
In recent years, advances in pre-trained language models (PLMs) have sparked considerable research interest and achieved promising performance on dense passage retrieval, which aims to retrieve relevant passages from a massive corpus for a given question.
no code implementations • 4 Jul 2023 • Zhijie Rao, Jingcai Guo, Luyao Tang, Yue Huang, Xinghao Ding, Song Guo
In this paper, we introduce Semantic Reasoning with Compound Domains (SRCD) for Single-DGOD.
Ranked #4 on Robust Object Detection on DWD
no code implementations • 20 Jun 2023 • Yue Huang, Qihui Zhang, Philip S. Yu, Lichao Sun
Through the implementation of TrustGPT, this research aims to enhance our understanding of the performance of conversation generation models and promote the development of language models that are more ethical and socially responsible.
1 code implementation • IEEE Journal of Oceanic Engineering 2023 • Zhenqi Fu, Ruizhe Chen, Yue Huang, En Cheng, Xinghao Ding, Kai-Kuang Ma
Specifically, we design a new data augmentation strategy to randomly change the degradation and camouflage attributes of the original objects.
Ranked #3 on Image Segmentation on RMAS
1 code implementation • 20 Mar 2023 • Zhengliang Liu, Yue Huang, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Yiwei Li, Peng Shu, Fang Zeng, Lichao Sun, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li
The digitization of healthcare has facilitated the sharing and re-using of medical data but has also raised concerns about confidentiality and privacy.
no code implementations • 20 Jan 2023 • Zoé Berenger, Loïc Denis, Florence Tupin, Laurent Ferro-Famil, Yue Huang
Synthetic aperture radar tomographic imaging reconstructs the three-dimensional reflectivity of a scene from a set of coherent acquisitions performed in an interferometric configuration.
1 code implementation • CVPR 2023 • Zhenqi Fu, Yan Yang, Xiaotong Tu, Yue Huang, Xinghao Ding, Kai-Kuang Ma
Those solutions, however, often fail in revealing image details due to the limited information in a single image and the poor adaptability of handcrafted priors.
Ranked #3 on Low-Light Image Enhancement on VV
no code implementations • 30 Nov 2022 • Yiyang Liu, Chenxin Li, Xiaotong Tu, Xinghao Ding, Yue Huang
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher model to promote a smaller student model.
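The teacher-to-student transfer described above is, in its standard form, a loss between temperature-softened output distributions. A minimal sketch of that classic soft-label distillation loss (Hinton-style; not this paper's specific method, and the function names are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in the standard formulation."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)))
```

The loss is zero when the student matches the teacher exactly and positive otherwise, which is what drives the student toward the teacher's "dark knowledge".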
no code implementations • 14 Oct 2022 • Chaoqi Chen, Luyao Tang, Feng Liu, Gangming Zhao, Yue Huang, Yizhou Yu
Domain generalization (DG) enables generalizing a learning machine from multiple seen source domains to an unseen target one.
1 code implementation • 20 Jul 2022 • Zhenqi Fu, Wu Wang, Yue Huang, Xinghao Ding, Kai-Kuang Ma
After that, we adopt a consensus process to predict a deterministic result based on a set of samples from the distribution.
2 code implementations • 12 Jul 2022 • Chenxin Li, Mingbao Lin, Zhiyuan Ding, Nie Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Liujuan Cao
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher network to strengthen a smaller student.
no code implementations • 6 Jun 2022 • Chaoqi Chen, Jiongcheng Li, Hong-Yu Zhou, Xiaoguang Han, Yue Huang, Xinghao Ding, Yizhou Yu
However, both the global and local alignment approaches fail to capture the topological relations among different foreground objects as the explicit dependencies and interactions between and within domains are neglected.
no code implementations • 22 Apr 2022 • Changxing Jing, Yan Huang, Yihong Zhuang, Liyan Sun, Yue Huang, Zhenlong Xiao, Xinghao Ding
This paper shows that it is possible to achieve flexible personalization after the convergence of the global model by introducing representation learning.
no code implementations • 17 Apr 2022 • Haote Xu, Yunlong Zhang, Liyan Sun, Chenxin Li, Yue Huang, Xinghao Ding
Data augmentation based methods construct pseudo-healthy images by "pasting" fake lesions on real healthy ones, and a network is trained to predict healthy images in a supervised manner.
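The "pasting" idea above, in its simplest 2-D form, amounts to stamping a synthetic lesion onto a healthy image so the paste mask doubles as a free supervision label. A generic sketch of that augmentation (not the pipeline from this paper; the blob shape and intensity are arbitrary choices for illustration):

```python
import numpy as np

def paste_fake_lesion(healthy, center, radius, intensity=0.8, seed=0):
    """Construct a pseudo-anomalous image by 'pasting' a synthetic bright
    blob onto a healthy 2-D image. Returns the image and the paste mask,
    which can serve directly as a segmentation target."""
    rng = np.random.default_rng(seed)
    img = healthy.copy()
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    img[mask] = intensity + 0.05 * rng.standard_normal(mask.sum())
    return img, mask
```

A network trained to map `img` back to `healthy` then learns to "erase" anomalies in a supervised manner.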
no code implementations • 31 Mar 2022 • Guanxing Zhou, Hao Liang, Xinghao Ding, Yue Huang, Xiaotong Tu, Saqlain Abbas
Acoustic source localization has been applied in different fields, such as aeronautics and ocean science, generally using data from multiple-microphone arrays to reconstruct the source location.
1 code implementation • 29 Mar 2022 • Yunlong Zhang, Xin Lin, Yihong Zhuang, Liyan Sun, Yue Huang, Xinghao Ding, Guisheng Wang, Lin Yang, Yizhou Yu
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
no code implementations • 1 Nov 2021 • Huangxing Lin, Yihong Zhuang, Delu Zeng, Yue Huang, Xinghao Ding, John Paisley
Specifically, we treat the output of the network as a "prior" that we denoise again after "re-noising".
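The denoise/re-noise/denoise loop above can be sketched with any denoiser in place of the trained network. Here a 1-D moving average stands in for the network, and the noise level is an assumed parameter; this illustrates the iterate only, not the paper's training scheme:

```python
import numpy as np

def box_denoise(x, k=5):
    # Stand-in denoiser: 1-D moving average. Any learned denoiser could
    # take its place; this choice is purely illustrative.
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def renoise_refine(noisy, sigma=0.1, seed=0):
    """Treat a first denoised estimate as a 'prior', re-noise it, and
    denoise it again, as in the iterate described above."""
    rng = np.random.default_rng(seed)
    prior = box_denoise(noisy)                       # first pass
    renoised = prior + sigma * rng.standard_normal(prior.shape)
    return box_denoise(renoised)                     # second pass
```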
no code implementations • 13 Jun 2021 • Chenxin Li, Qi Qi, Xinghao Ding, Yue Huang, Dong Liang, Yizhou Yu
In this paper, we propose a novel DG scheme of episodic training with task augmentation on medical imaging classification.
no code implementations • 11 Jun 2021 • Jiajun Fan, Changnan Xiao, Yue Huang
Deep Q Network (DQN) opened the door to deep reinforcement learning (DRL) by combining deep learning (DL) with reinforcement learning (RL), and its authors observed that the distribution of the acquired data changes during training.
Ranked #1 on Atari Games on Atari 2600 Freeway
no code implementations • 31 May 2021 • Chenxin Li, Wenao Ma, Liyan Sun, Xinghao Ding, Yue Huang, Guisheng Wang, Yizhou Yu
In this paper, to address the above issues, we propose a hierarchical deep network in which an attention mechanism localizes the low-contrast capillary regions guided by the whole vessels and enhances the spatial activation in those areas for the sub-type vessels.
no code implementations • 31 Mar 2021 • Wu Wang, Yue Huang, Xinghao Ding
However, in real applications, the observation models involved are often complicated and unknown, which leads to a serious performance drop for many advanced HIF methods.
1 code implementation • CVPR 2021 • Chaoqi Chen, Zebiao Zheng, Yue Huang, Xinghao Ding, Yizhou Yu
Motivated by this, we propose an Implicit Instance-Invariant Network (I3Net), which is tailored for adapting one-stage detectors and implicitly learns instance-invariant features via exploiting the natural characteristics of deep features in different layers.
no code implementations • 16 Mar 2021 • Chenxin Li, Yunlong Zhang, Zhehan Liang, Wenao Ma, Yue Huang, Xinghao Ding
In this paper, we propose a novel vessel-mixing based consistency regularization framework, for cross-domain learning in retinal A/V classification.
no code implementations • 16 Mar 2021 • Chenxin Li, Yunlong Zhang, Jiongcheng Li, Yue Huang, Xinghao Ding
In this paper, to alleviate this issue, we introduce the semantic space of healthy anatomy in the process of modeling healthy-data distribution.
no code implementations • 12 Mar 2021 • Clément Rambour, Loïc Denis, Florence Tupin, Hélène Oriot, Yue Huang, Laurent Ferro-Famil
This segmentation process can be included within the 3-D reconstruction framework in order to improve the recovery of urban surfaces.
1 code implementation • 1 Feb 2021 • Zhenqi Fu, Xiaopeng Lin, Wu Wang, Yue Huang, Xinghao Ding
Specifically, we apply whitening to de-correlate activations across spatial dimensions for each instance in a mini-batch.
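Per-instance whitening as described above can be sketched as a ZCA transform of one instance's feature map, computed over its spatial positions. This is the generic technique, not necessarily the exact operator used in the paper:

```python
import numpy as np

def instance_whiten(feat, eps=1e-5):
    """ZCA-whiten one instance's feature map so that its channels are
    de-correlated across spatial positions. feat: (C, H, W) array."""
    c, h, w = feat.shape
    x = feat.reshape(c, -1)
    x = x - x.mean(axis=1, keepdims=True)        # center each channel
    cov = x @ x.T / (h * w)                      # C x C spatial covariance
    vals, vecs = np.linalg.eigh(cov + eps * np.eye(c))
    zca = vecs @ np.diag(vals ** -0.5) @ vecs.T  # ZCA whitening matrix
    return (zca @ x).reshape(c, h, w)
```

After the transform, the channel covariance of the instance is (approximately) the identity, which removes instance-specific style correlations.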
1 code implementation • 1 Feb 2021 • Zhenqi Fu, Xueyang Fu, Yue Huang, Xinghao Ding
Our approach, termed Twice Mixing, is motivated by the observation that a mid-quality image can be generated by mixing a high-quality image with its low-quality version.
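The mixing operation behind that observation is a simple convex combination of the high-quality image and its degraded version. A sketch of the mixing step only (the ranking-based training objective that makes Twice Mixing useful is omitted here):

```python
import numpy as np

def twice_mix(high, low, lam):
    """Synthesize a mid-quality sample by linearly mixing a high-quality
    image with its low-quality version; lam in [0, 1] controls quality,
    so samples with larger lam should rank higher."""
    assert 0.0 <= lam <= 1.0
    return lam * high + (1.0 - lam) * low
```

Drawing two mixing ratios per pair yields ordered training samples whose relative quality is known for free.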
1 code implementation • 1 Jan 2021 • Dongyang Zhao, Yue Huang, Changnan Xiao, Yue Li, Shihong Deng
To address the problem brought by the environment, we propose a Meta Soft Hierarchical reinforcement learning framework (MeSH), in which each low-level sub-policy focuses on a specific sub-task and the high-level policy automatically learns to utilize the low-level sub-policies through meta-gradients.
Deep Reinforcement Learning • Hierarchical Reinforcement Learning • +3
no code implementations • ICCV 2021 • Chaoqi Chen, Jiongcheng Li, Zebiao Zheng, Yue Huang, Xinghao Ding, Yizhou Yu
Domain Adaptive Object Detection (DAOD) relieves the reliance on large-scale annotated data by transferring the knowledge learned from a labeled source domain to a new unlabeled target domain.
no code implementations • 10 Dec 2020 • Liyan Sun, Chenxin Li, Xinghao Ding, Yue Huang, Guisheng Wang, Yizhou Yu
Motivated by the spatial consistency and regularity in medical images, we developed an efficient global correlation module to capture the correlation between a support and a query image and incorporated it into a deep network we call the global correlation network.
no code implementations • 30 Nov 2020 • Huangxing Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Yizhou Yu, Xiaoqing Liu, John Paisley
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
1 code implementation • 23 Oct 2020 • Liyan Sun, Jianxiong Wu, Xinghao Ding, Yue Huang, Guisheng Wang, Yizhou Yu
We further propose a localization branch, realized via an aggregation of high-level features in a deep decoder, to predict the locations of organs and lesions, which enriches the student segmentor with precise localization information.
1 code implementation • 8 Aug 2020 • Yunlong Zhang, Changxing Jing, Huangxing Lin, Chaoqi Chen, Yue Huang, Xinghao Ding, Yang Zou
Second, we further consider that the predictions of target samples belonging to the hard class are vulnerable to perturbations.
Semi-supervised Domain Adaptation • Unsupervised Domain Adaptation
1 code implementation • CVPR 2020 • Chaoqi Chen, Zebiao Zheng, Xinghao Ding, Yue Huang, Qi Dou
Recent advances in adaptive object detection have achieved compelling results in virtue of adversarial feature adaptation to mitigate the distributional shifts along the detection pipeline.
no code implementations • 3 Dec 2019 • Huangxing Lin, Weihong Zeng, Xinghao Ding, Xueyang Fu, Yue Huang, John Paisley
Using the new image pair, the denoising network learns to generate clean and high-quality images from noisy observations.
1 code implementation • 30 Nov 2019 • Huangxing Lin, Weihong Zeng, Xinghao Ding, Yue Huang, Chenxi Huang, John Paisley
The uncertainty of the descent path helps the model avoid saddle points and bad local minima.
no code implementations • 28 Oct 2019 • Jiexiang Wang, Hongyu Huang, Chaoqi Chen, Wenao Ma, Yue Huang, Xinghao Ding
Automatic and accurate segmentation of the ventricles and myocardium from multi-sequence cardiac MRI (CMR) is crucial for the diagnosis and treatment management of patients suffering from myocardial infarction (MI).
no code implementations • 1 Jul 2019 • Chaoqi Chen, Weiping Xie, Tingyang Xu, Yu Rong, Wenbing Huang, Xinghao Ding, Yue Huang, Junzhou Huang
In this paper, we propose an Unsupervised Adversarial Graph Alignment (UAGA) framework to learn a cross-graph alignment between two embedding spaces of different graphs in a fully unsupervised fashion (i.e., no existing anchor links and no users' personal profiles or attribute information are available).
no code implementations • 9 Apr 2019 • Huangxing Lin, Yanlong Li, Xinghao Ding, Weihong Zeng, Yue Huang, John Paisley
We present a supervised technique for learning to remove rain from images without using synthetic rain software.
no code implementations • 24 Nov 2018 • Huangxing Lin, Xueyang Fu, Changxing Jing, Xinghao Ding, Yue Huang
Existing methods for single-image raindrop removal either have poor robustness or suffer from heavy parameter burdens.
no code implementations • CVPR 2019 • Chaoqi Chen, Weiping Xie, Wenbing Huang, Yu Rong, Xinghao Ding, Yue Huang, Tingyang Xu, Junzhou Huang
Unsupervised domain adaptation (UDA) transfers knowledge from a label-rich source domain to a fully-unlabeled target domain.
Ranked #8 on Domain Adaptation on SVHN-to-MNIST
no code implementations • 21 Nov 2018 • Xueyang Fu, Qi Qi, Yue Huang, Xinghao Ding, Feng Wu, John Paisley
We propose a simple yet effective deep tree-structured fusion model based on feature aggregation for the deraining problem.
no code implementations • 25 Oct 2018 • Liyan Sun, Jiexiang Wang, Yue Huang, Xinghao Ding, Hayit Greenspan, John Paisley
Providing a "normal" counterpart to a medical image can offer useful side information for medical imaging tasks like lesion segmentation or classification, as validated by our experiments.
no code implementations • 16 May 2018 • Xueyang Fu, Borong Liang, Yue Huang, Xinghao Ding, John Paisley
In this paper, we propose a lightweight pyramid of networks (LPNet) for single image deraining.
no code implementations • 15 May 2018 • Xinghao Ding, Zhirui Lin, Fujin He, Yu Wang, Yue Huang
The estimation of crowd count in images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning.
no code implementations • 6 May 2018 • Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley
The need for fast acquisition and automatic analysis of MRI data is growing in the age of big data.
no code implementations • 10 Apr 2018 • Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley
In multi-contrast magnetic resonance imaging (MRI), compressed sensing theory can accelerate imaging by sampling fewer measurements within each contrast.
no code implementations • ECCV 2018 • Zhiwen Fan, Liyan Sun, Xinghao Ding, Yue Huang, Congbo Cai, John Paisley
In this paper, we proposed a segmentation-aware deep fusion network called SADFN for compressed sensing MRI.
no code implementations • 27 Mar 2018 • Liyan Sun, Zhiwen Fan, Xinghao Ding, Congbo Cai, Yue Huang, John Paisley
Compressed sensing (CS) theory assures us that we can accurately reconstruct magnetic resonance images using fewer k-space measurements than the Nyquist sampling rate requires.
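The sub-Nyquist sampling setting above has a standard baseline: zero-filled reconstruction from a randomly undersampled k-space, which CS-MRI methods then improve upon. A minimal illustrative sketch (the sampling pattern and keep ratio are arbitrary assumptions, not the paper's protocol):

```python
import numpy as np

def undersampled_recon(img, keep=0.3, seed=0):
    """Zero-filled reconstruction from randomly undersampled k-space:
    transform to k-space, discard a fraction of measurements, and invert.
    Returns the reconstruction and the sampling mask."""
    rng = np.random.default_rng(seed)
    k = np.fft.fft2(img)
    mask = rng.random(img.shape) < keep   # keep ~`keep` of k-space
    return np.real(np.fft.ifft2(k * mask)), mask
```

The aliasing artifacts this baseline produces are exactly what CS priors and learned reconstructions are designed to remove.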
no code implementations • 23 Mar 2018 • Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley
Existing CS-MRI algorithms can serve as the template module for guiding the reconstruction.
no code implementations • ICCV 2017 • Junfeng Yang, Xueyang Fu, Yuwen Hu, Yue Huang, Xinghao Ding, John Paisley
We incorporate domain-specific knowledge to design our PanNet architecture by focusing on the two aims of the pan-sharpening problem: spectral and spatial preservation.
no code implementations • CVPR 2017 • Xueyang Fu, Jia-Bin Huang, Delu Zeng, Yue Huang, Xinghao Ding, John Paisley
We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN).
no code implementations • CVPR 2016 • Xueyang Fu, Delu Zeng, Yue Huang, Xiao-Ping Zhang, Xinghao Ding
We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image.
no code implementations • 17 Mar 2016 • Tong Zhao, Lin Li, Xinghao Ding, Yue Huang, Delu Zeng
In this letter, an effective image saliency detection method is proposed by constructing novel spaces to model the background and redefining the distance of salient patches from the background.
no code implementations • ICCV 2015 • Yiyong Jiang, Xinghao Ding, Delu Zeng, Yue Huang, John Paisley
Our objective incorporates the L1/2 norm in a way that leverages recent computationally efficient methods, and the L1 norm, for which the alternating direction method of multipliers (ADMM) can be used.
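What makes the L1 term ADMM-friendly is its closed-form proximal operator, the soft-thresholding (shrinkage) step. A sketch of that single update (the L1/2 term mentioned above requires a different, half-thresholding operator not shown here):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero
    by t and clip at zero. This is the per-iteration sparse update
    inside ADMM for L1-regularized objectives."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```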
no code implementations • 28 Sep 2013 • Chuan Shi, Xiangnan Kong, Yue Huang, Philip S. Yu, Bin Wu
Similarity search is an important function in many applications, and it usually focuses on measuring the similarity between objects of the same type.
no code implementations • 12 Feb 2013 • Yue Huang, John Paisley, Qin Lin, Xinghao Ding, Xueyang Fu, Xiao-Ping Zhang
The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables.