1 code implementation • 13 Feb 2024 • Chongyang Gao, Kezhen Chen, Jinmeng Rao, Baochen Sun, Ruibo Liu, Daiyi Peng, Yawen Zhang, Xiaoyuan Guo, Jie Yang, VS Subrahmanian
In this paper, we introduce a novel parameter-efficient MoE method, MoE-LoRA with Layer-wise Expert Allocation (MoLA), for Transformer-based models, where each model layer has the flexibility to employ a varying number of LoRA experts.
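The layer-wise allocation idea can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `LoRAExpert`, `MoLALayer`, and router names are hypothetical, top-1 routing is used for brevity, and the increasing expert counts per layer are just one possible allocation scheme.

```python
import numpy as np

class LoRAExpert:
    """A single low-rank adapter: the update B @ A with rank r << d."""
    def __init__(self, d_in, d_out, r, rng):
        self.A = rng.normal(scale=0.01, size=(r, d_in))  # down-projection
        self.B = np.zeros((d_out, r))                    # up-projection (zero init)

    def __call__(self, x):
        return x @ self.A.T @ self.B.T

class MoLALayer:
    """Hypothetical MoE-LoRA layer: a frozen base weight plus n_experts
    LoRA experts selected by a simple top-1 router (sketch only)."""
    def __init__(self, d, n_experts, r=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d, d))              # frozen base weight
        self.router = rng.normal(size=(n_experts, d)) # gating weights
        self.experts = [LoRAExpert(d, d, r, rng) for _ in range(n_experts)]

    def __call__(self, x):
        k = int(np.argmax(x @ self.router.T))         # top-1 routing
        return x @ self.W.T + self.experts[k](x)

# Layer-wise allocation: each layer may hold a different number of experts.
expert_counts = [2, 2, 4, 8]                          # assumed scheme, deeper = more
model = [MoLALayer(d=16, n_experts=n) for n in expert_counts]
h = np.ones(16)
for layer in model:
    h = layer(h)
print(h.shape)  # (16,)
```

The point of the sketch is only that `n_experts` is a per-layer knob rather than a global constant, which is the flexibility the abstract describes.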
no code implementations • 7 Sep 2023 • Jiaying Lu, Jinmeng Rao, Kezhen Chen, Xiaoyuan Guo, Yawen Zhang, Baochen Sun, Carl Yang, Jie Yang
Large Vision-Language Models (LVLMs) offer remarkable benefits for a variety of vision-language tasks.
no code implementations • 25 Aug 2023 • Zhifang Liu, Baochen Sun, Xue-Cheng Tai, Qi Wang, Huibin Chang
A host of numerical experiments are conducted to show that the new algorithm produces good results with much-improved efficiency compared to other state-of-the-art algorithms for the EE model.
no code implementations • 31 May 2023 • Xiaoyuan Guo, Kezhen Chen, Jinmeng Rao, Yawen Zhang, Baochen Sun, Jie Yang
To train LOWA, we propose a hybrid vision-language training strategy to learn object detection and recognition with class names as well as attribute information.
no code implementations • 18 Mar 2023 • Thanh Vu, Baochen Sun, Bodi Yuan, Alex Ngai, Yueqi Li, Jan-Michael Frahm
Data mixing augmentations have seen well-documented success in image classification tasks.
4 code implementations • 6 Dec 2016 • Baochen Sun, Jiashi Feng, Kate Saenko
In contrast to subspace manifold methods, it aligns the original feature distributions of the source and target domains, rather than the bases of lower-dimensional subspaces.
Ranked #8 on Domain Adaptation on Office-Caltech
9 code implementations • 6 Jul 2016 • Baochen Sun, Kate Saenko
CORAL is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation.
Ranked #4 on Domain Generalization on NICO Animal
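The CORAL objective for deep networks penalizes the distance between the second-order statistics of source and target feature batches. A minimal sketch, using NumPy rather than a deep-learning framework and the commonly stated 1/(4d^2) scaling; variable names are illustrative:

```python
import numpy as np

def coral_loss(source, target):
    """Squared Frobenius distance between the feature covariances of a
    source batch and a target batch, scaled by 1/(4 d^2)."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)  # (d, d) source covariance
    ct = np.cov(target, rowvar=False)  # (d, d) target covariance
    return np.sum((cs - ct) ** 2) / (4 * d * d)

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 8))
tgt = 2.0 * rng.normal(size=(64, 8))   # different scale -> nonzero loss
print(coral_loss(src, src))            # 0.0 for identical batches
print(coral_loss(src, tgt) > 0)        # True
```

In a network this term would be added to the classification loss so that learned features carry matched covariance across domains.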
1 code implementation • 17 Nov 2015 • Baochen Sun, Jiashi Feng, Kate Saenko
Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions.
Ranked #4 on Domain Adaptation on Synth Digits-to-SVHN
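The "frustratingly easy" linear alignment can be sketched directly: whiten the source features with the inverse square root of their covariance, then re-color them with the square root of the target covariance, so the transformed source matches the target's second-order statistics. A minimal NumPy sketch with an assumed small regularizer `eps` for numerical stability:

```python
import numpy as np

def coral_transform(Xs, Xt, eps=1e-5):
    """Whiten source features with Cs^{-1/2}, then re-color with Ct^{1/2},
    aligning second-order statistics via one linear transformation."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def sqrtm(C, inv=False):
        # Matrix (inverse) square root via eigendecomposition; C is SPD.
        w, V = np.linalg.eigh(C)
        p = -0.5 if inv else 0.5
        return (V * (w ** p)) @ V.T

    return Xs @ sqrtm(Cs, inv=True) @ sqrtm(Ct)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(200, 5))
Xt = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
Xs_aligned = coral_transform(Xs, Xt)
# After alignment the source covariance approximately equals the target's.
print(np.allclose(np.cov(Xs_aligned, rowvar=False),
                  np.cov(Xt, rowvar=False), atol=1e-2))
```

A classifier trained on the aligned source features can then be applied to the target domain without any labeled target data.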
no code implementations • 9 Apr 2015 • Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko
Deep convolutional neural networks learn extremely powerful image representations, yet most of that power is hidden in the millions of deep-layer parameters.
no code implementations • ICCV 2015 • Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko
Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain.
no code implementations • 27 Nov 2013 • Ayan Chakrabarti, Ying Xiong, Baochen Sun, Trevor Darrell, Daniel Scharstein, Todd Zickler, Kate Saenko
To produce images that are suitable for display, tone-mapping is widely used in digital cameras to map linear color measurements into narrow gamuts with limited dynamic range.
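For intuition, the kind of mapping described can be illustrated with a simple global gamma curve; this is not the paper's camera model, only a generic example of compressing linear measurements into a limited display range:

```python
import numpy as np

def tone_map(linear, gamma=2.2):
    """Illustrative global tone-mapping: clip high-dynamic-range linear
    values, then apply a gamma curve to compress them into [0, 1]."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

linear = np.array([0.0, 0.01, 0.25, 1.0, 4.0])  # linear radiometric values
display = tone_map(linear)
print(display)  # nonlinear values in [0, 1]; inputs above 1 clip to 1
```

Real in-camera pipelines use proprietary, often spatially varying curves, which is what makes inverting tone-mapping to recover linear measurements nontrivial.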