no code implementations • 31 Dec 2024 • Yixuan Li, Can Chen, Jiajun Li, Jiahui Duan, Xiongwei Han, Tao Zhong, Vincent Chau, Weiwei Wu, Wanyuan Wang
By exploiting the correlation between the structure of a Mixed-Integer Linear Programming (MILP) instance and its solution, Machine Learning (ML) has become a promising approach for solving large-scale MILP problems.
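A common way to exploit this structure–solution correlation is to encode each MILP instance as a bipartite graph over variables and constraints and feed it to a graph network. The sketch below illustrates that standard encoding for the form min c^T x s.t. Ax <= b; it is an assumption about the general technique, not necessarily this paper's exact representation.

```python
import numpy as np

def milp_to_bipartite(A, b, c):
    """Encode a MILP (min c^T x  s.t.  A x <= b) as a bipartite graph:
    one node per variable, one node per constraint, and an edge for
    every nonzero coefficient A[i, j]."""
    assert A.shape == (b.size, c.size)
    var_feats = c.reshape(-1, 1)          # variable-node feature: objective coefficient
    con_feats = b.reshape(-1, 1)          # constraint-node feature: right-hand side
    rows, cols = np.nonzero(A)
    edge_index = np.stack([rows, cols])   # edge (constraint i, variable j)
    edge_feats = A[rows, cols].reshape(-1, 1)
    return var_feats, con_feats, edge_index, edge_feats
```

This bipartite encoding (popularized by Gasse et al., 2019) is what lets a GNN consume instances of arbitrary size and sparsity.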
no code implementations • 27 Dec 2024 • Jiang Liu, Bolin Li, Haoyuan Li, Tianwei Lin, Wenqiao Zhang, Tao Zhong, Zhelun Yu, Jinghao Wei, Hao Cheng, Hao Jiang, Zheqi Lv, Juncheng Li, Siliang Tang, Yueting Zhuang
Efficient multimodal large language models (EMLLMs), in contrast to multimodal large language models (MLLMs), reduce model size and computational costs and are often deployed on resource-constrained devices.
no code implementations • 26 Nov 2024 • Teng Wang, Wing-Yin Yu, Zhenqi He, Zehua Liu, Xiongwei Han, Hailei Gong, Han Wu, Wei Shi, Ruifeng She, Fangzhou Zhu, Tao Zhong
LLMs exhibit advanced reasoning capabilities, offering the potential to transform natural language questions into mathematical models.
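As a rough illustration of that translation step, the hypothetical sketch below builds a prompt that asks an LLM to emit a structured optimization model; `call_llm` is a placeholder for any chat-completion client and is not part of this paper.

```python
import json

# Hypothetical prompt template: ask the LLM for a machine-readable model.
PROMPT = """Translate the question into a linear program.
Return JSON with keys "variables", "objective", and "constraints".

Question: {question}
"""

def question_to_model(question: str, call_llm) -> dict:
    """`call_llm` is any function mapping a prompt string to a reply string."""
    reply = call_llm(PROMPT.format(question=question))
    return json.loads(reply)   # e.g. {"variables": ["x1", "x2"], ...}
```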
1 code implementation • 28 Aug 2024 • Fangxun Shu, Yue Liao, Le Zhuo, Chenning Xu, Lei Zhang, Guanghao Zhang, Haonan Shi, Long Chen, Tao Zhong, Wanggui He, Siming Fu, Haoyuan Li, Bolin Li, Zhelun Yu, Si Liu, Hongsheng Li, Hao Jiang
We introduce LLaVA-MoD, a novel framework designed to enable the efficient training of small-scale Multimodal Language Models (s-MLLM) by distilling knowledge from large-scale MLLM (l-MLLM).
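For intuition, here is a generic soft-label distillation loss of the kind used to transfer knowledge from a large teacher to a small student. This is a hedged baseline sketch only; LLaVA-MoD's full recipe (sparse mixture-of-experts student, staged distillation) is more involved.

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions; the T*T factor keeps gradient scale consistent."""
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * T * T
```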
1 code implementation • 5 May 2024 • Zhixiang Chi, Li Gu, Tao Zhong, Huan Liu, Yuanhao Yu, Konstantinos N Plataniotis, Yang Wang
In this work, we propose an approach that operates on top of the pre-computed features of the foundation model; a minimal sketch of this frozen-feature setup is given after the leaderboard entry below.
Ranked #2 on Domain Generalization on DomainNet
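A minimal sketch of the frozen-feature setup mentioned above: embeddings are pre-computed once by the foundation model, and only a small head is trained on top. The feature dimension and the 345-class DomainNet head are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FeatureHead(nn.Module):
    """Lightweight head trained on frozen, pre-computed foundation features."""
    def __init__(self, dim=512, num_classes=345):   # 345 = DomainNet classes
        super().__init__()
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats):          # feats: [batch, dim], no backbone grads
        return self.head(feats)

head = FeatureHead()
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)  # only the head is updated
```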
no code implementations • 11 Jan 2024 • Xijun Li, Fangzhou Zhu, Hui-Ling Zhen, Weilin Luo, Meng Lu, Yimin Huang, Zhenan Fan, Zirui Zhou, Yufei Kuang, Zhihai Wang, Zijie Geng, Yang Li, Haoyang Liu, Zhiwu An, Muming Yang, Jianshu Li, Jie Wang, Junchi Yan, Defeng Sun, Tao Zhong, Yong Zhang, Jia Zeng, Mingxuan Yuan, Jianye Hao, Jun Yao, Kun Mao
To this end, we present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI Solver, which aims to mitigate the scarcity of real-world mathematical programming instances and to surpass the capabilities of traditional optimization techniques.
1 code implementation • 8 Oct 2022 • Tao Zhong, Zhixiang Chi, Li Gu, Yang Wang, Yuanhao Yu, Jin Tang
Most existing methods train a single model on multiple source domains, and the same trained model is then applied to all unseen target domains; a sketch of this pooled-training baseline is given after the leaderboard entry below.
Ranked #31 on Domain Generalization on DomainNet
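The pooled-training baseline described above amounts to plain empirical risk minimization over the concatenated source domains; all names below are placeholders for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader

def train_pooled(model, source_datasets, epochs=10, lr=1e-3):
    """Single-model baseline: pool all source domains, train with ERM."""
    loader = DataLoader(ConcatDataset(source_datasets),
                        batch_size=64, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model   # the same weights are then applied to every target domain
```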
no code implementations • CVPR 2021 • Shao-Ping Lu, Rong Wang, Tao Zhong, Paul L. Rosin
Many attempts have been made to hide information in images, where the main challenge is increasing the payload capacity without the container image being detected as carrying a hidden message.
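To make "payload capacity" concrete, the sketch below shows the classical least-significant-bit (LSB) baseline, which embeds one payload bit per channel value. It is illustrative only, unrelated to this paper's method, and easily caught by modern steganalysis.

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a bit array into the least significant bit of each channel
    value, i.e. a capacity of 1 bit per byte of image data."""
    flat = image.flatten().astype(np.uint8)          # copy of the container
    assert bits.size <= flat.size, "payload exceeds capacity"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits.astype(np.uint8)
    return flat.reshape(image.shape)
```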
1 code implementation • 16 Oct 2020 • Tao Zhong, Wonjik Kim, Masayuki Tanaka, Masatoshi Okutomi
It has a two-branch structure, i.e., a spatial segmentation branch and a temporal velocity estimation branch.
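A hypothetical minimal layout for such a two-branch design: a shared encoder feeding a per-pixel segmentation head and a per-pixel velocity-regression head. Layer sizes and heads below are assumptions for illustration, not the paper's architecture.

```python
import torch.nn as nn

class TwoBranchNet(nn.Module):
    """Shared encoder with a spatial (segmentation) branch and a
    temporal (velocity) branch."""
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # per-pixel class logits
        self.vel_head = nn.Conv2d(64, 2, 1)             # per-pixel (vx, vy)

    def forward(self, x):
        h = self.encoder(x)
        return self.seg_head(h), self.vel_head(h)
```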
no code implementations • 4 Jul 2020 • Yue Sun, Kun Gao, Zhengwang Wu, Zhihao Lei, Ying Wei, Jun Ma, Xiaoping Yang, Xue Feng, Li Zhao, Trung Le Phan, Jitae Shin, Tao Zhong, Yu Zhang, Lequan Yu, Caizi Li, Ramesh Basnet, M. Omair Ahmad, M. N. S. Swamy, Wenao Ma, Qi Dou, Toan Duc Bui, Camilo Bermudez Noguera, Bennett Landman, Ian H. Gotlib, Kathryn L. Humphreys, Sarah Shultz, Longchuan Li, Sijie Niu, Weili Lin, Valerie Jewells, Gang Li, Dinggang Shen, Li Wang
Deep learning-based methods have achieved state-of-the-art performance; however, one major limitation is that learning-based methods may suffer from the multi-site issue: models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols or scanners.