1 code implementation • 21 May 2025 • Tong Zheng, Lichang Chen, Simeng Han, R. Thomas McCoy, Heng Huang
To fill this gap, we propose Mixture-of-Thought (MoT), a framework that enables LLMs to reason across three complementary modalities: natural language, code, and a newly introduced symbolic modality, truth-table, which systematically enumerates logical cases and partially mitigates key failure modes in natural language reasoning.
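As a rough illustration of what a truth-table modality enumerates (a minimal sketch, not the MoT implementation; the variable names and the `premise`/`conclusion` callables are hypothetical):

```python
from itertools import product

def truth_table(variables, premise, conclusion):
    """Enumerate every truth assignment and check whether the premise entails the conclusion."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        rows.append((assignment, premise(assignment), conclusion(assignment)))
    # Entailment holds if the conclusion is true in every row where the premise is true.
    entails = all(c for _, p, c in rows if p)
    return rows, entails

# Example: does (A and B) entail (A or B)?
rows, entails = truth_table(
    ["A", "B"],
    premise=lambda a: a["A"] and a["B"],
    conclusion=lambda a: a["A"] or a["B"],
)
print(entails)  # True
```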
1 code implementation • 9 Mar 2025 • Yingfeng Luo, Tong Zheng, Yongyu Mu, Bei Li, Qinghong Zhang, Yongqi Gao, Ziqiang Xu, Peinan Feng, Xiaoqian Liu, Tong Xiao, Jingbo Zhu
The field of neural machine translation (NMT) has changed with the advent of large language models (LLMs).
no code implementations • 26 Feb 2025 • Zhengmian Hu, Tong Zheng, Vignesh Viswanathan, Ziyi Chen, Ryan A. Rossi, Yihan Wu, Dinesh Manocha, Heng Huang
For a fixed draft sampling method, the optimal acceptance rate is the solution to an optimal transport problem, but the complexity of this problem makes it difficult to compute the optimal acceptance rate or to measure the gap between existing verification algorithms and the theoretical upper bound.
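For context, the standard single-draft verification rule used in speculative decoding, whose acceptance rate the optimal-transport view bounds, can be sketched as follows (illustrative only; `p_target` and `q_draft` are toy distributions, not the paper's setup):

```python
import numpy as np

def verify_token(x, p_target, q_draft, rng=np.random.default_rng()):
    """Accept drafted token x with probability min(1, p(x)/q(x));
    otherwise resample from the renormalized residual max(p - q, 0).
    When x ~ q, the expected acceptance probability equals 1 - 0.5 * ||p - q||_1."""
    accept_prob = min(1.0, p_target[x] / q_draft[x])
    if rng.random() < accept_prob:
        return x, True
    residual = np.maximum(p_target - q_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p_target), p=residual), False

p = np.array([0.6, 0.3, 0.1])   # toy target-model distribution
q = np.array([0.4, 0.4, 0.2])   # toy draft-model distribution
print(verify_token(0, p, q))
```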
no code implementations • 16 Feb 2025 • Tong Zheng, Yan Wen, Huiwen Bao, Junfeng Guo, Heng Huang
The emergence of Large Language Models (LLMs) has advanced multilingual machine translation (MMT), yet the Curse of Multilinguality (CoM) remains a major challenge.
no code implementations • 2 Dec 2024 • Weiqiao Shan, Long Meng, Tong Zheng, Yingfeng Luo, Bei Li, Junxin Wang, Tong Xiao, Jingbo Zhu
Large language models (LLMs) exhibit exceptional performance across various downstream tasks.
no code implementations • 5 Nov 2024 • Bei Li, Tong Zheng, Rui Wang, Jiahao Liu, Qingyan Guo, Junliang Guo, Xu Tan, Tong Xiao, Jingbo Zhu, Jingang Wang, Xunliang Cai
First, we introduce a predictor-corrector learning framework, consisting of a high-order predictor and a multistep corrector, to minimize truncation errors.
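Viewing residual updates as steps of an ODE solver, a predictor-corrector step can be sketched as below; this is a toy Heun-style illustration, not the paper's high-order predictor or multistep corrector, and `f` stands for an arbitrary block:

```python
import torch

def predictor_corrector_step(x, f, dt=1.0):
    """One Heun-style predictor-corrector update.

    Predictor: an explicit Euler step x_pred = x + dt * f(x).
    Corrector: average the slopes at x and x_pred to reduce truncation error.
    """
    x_pred = x + dt * f(x)                      # predictor (Euler)
    return x + dt * 0.5 * (f(x) + f(x_pred))    # corrector (trapezoidal rule)

# Toy usage with a linear "block".
f = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)
print(predictor_corrector_step(x, f).shape)  # torch.Size([2, 8])
```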
no code implementations • 29 Oct 2024 • Zhengmian Hu, Tong Zheng, Heng Huang
Authorship attribution aims to identify the origin or author of a document.
no code implementations • 11 Oct 2024 • Qingchuan Ma, Shiao Wang, Tong Zheng, Xiaodong Dai, Yifeng Wang, Qingquan Yang, Xiao Wang
This study addresses the critical challenge of predicting the Q-distribution in the long-term stable nuclear fusion task, a key component for advancing clean energy solutions.
no code implementations • 4 Mar 2024 • Tong Zheng, Shusaku Sone, Yoshitaka Ushiku, Yuki Oba, Jiaxin Ma
This paper presents a Tri-branch Neural Fusion (TNF) approach designed for classifying multimodal medical images and tabular data.
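A minimal sketch of fusing an image branch with a tabular branch (module names, sizes, and the concatenation-based fusion head are assumptions for illustration; the actual TNF architecture is described in the paper):

```python
import torch
import torch.nn as nn

class ImageTabularFusion(nn.Module):
    """Toy image + tabular fusion classifier (illustrative only)."""
    def __init__(self, tabular_dim=16, num_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B, 8)
        )
        self.tabular_branch = nn.Sequential(
            nn.Linear(tabular_dim, 8), nn.ReLU(),         # -> (B, 8)
        )
        self.fusion_head = nn.Linear(16, num_classes)     # concatenated features -> logits

    def forward(self, image, tabular):
        feats = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=-1)
        return self.fusion_head(feats)

model = ImageTabularFusion()
logits = model(torch.randn(4, 1, 32, 32), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 2])
```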
1 code implementation • 26 Oct 2023 • Yuxin Zuo, Bei Li, Chuanhao Lv, Tong Zheng, Tong Xiao, Jingbo Zhu
This paper presents an in-depth study of multimodal machine translation (MMT), examining the prevailing understanding that MMT systems exhibit decreased sensitivity to visual information when text inputs are complete.
1 code implementation • 23 Oct 2023 • Tong Zheng, Bei Li, Huiwen Bao, Jiale Wang, Weiqiao Shan, Tong Xiao, Jingbo Zhu
In this work, we emphasize the importance of hidden dimensions in designing lightweight FFNs, a factor often overlooked in previous architectures.
Ranked #23 on Machine Translation on WMT2014 English-German
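For reference, the hidden dimension of a standard Transformer FFN directly controls its parameter count; a minimal sketch with hypothetical sizes (not the paper's lightweight design):

```python
import torch.nn as nn

def ffn(d_model=512, d_hidden=2048):
    """Standard position-wise FFN; shrinking d_hidden is the usual lever for a
    lightweight FFN (parameter count is roughly 2 * d_model * d_hidden)."""
    return nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))

small, big = ffn(d_hidden=512), ffn(d_hidden=2048)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(small), count(big))  # ~0.5M vs ~2.1M parameters
```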
2 code implementations • 20 Dec 2022 • Tong Zheng, Bei Li, Huiwen Bao, Tong Xiao, Jingbo Zhu
Two principles, the complementary principle and the consensus principle, are widely acknowledged in the multi-view learning literature.
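As a rough sketch of how these two principles are often operationalized (illustrative, not the paper's method): consensus encourages agreement between views, while complementarity keeps view-specific information, e.g. by concatenating the two representations:

```python
import torch
import torch.nn.functional as F

def multiview_losses(z1, z2):
    """Toy treatment of two view embeddings z1, z2 of shape (B, D).

    consensus: penalize disagreement between the two views.
    complementary: downstream heads consume the concatenation, so
    view-specific information is preserved rather than averaged away.
    """
    consensus = F.mse_loss(z1, z2)
    fused = torch.cat([z1, z2], dim=-1)   # keep both views
    return consensus, fused

z1, z2 = torch.randn(4, 32), torch.randn(4, 32)
loss, fused = multiview_losses(z1, z2)
print(loss.item(), fused.shape)  # scalar, torch.Size([4, 64])
```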
1 code implementation • 19 Jun 2022 • Bei Li, Tong Zheng, Yi Jing, Chengbo Jiao, Tong Xiao, Jingbo Zhu
In this work, we define those scales in different linguistic units, including sub-words, words and phrases.
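A minimal sketch of building coarser-grained representations from sub-word embeddings (the span boundaries and mean pooling are assumptions for illustration; the paper defines its own scales):

```python
import torch

def pool_spans(subword_embs, spans):
    """Mean-pool sub-word embeddings over spans to get word- or phrase-level vectors.

    subword_embs: (T, D) tensor of sub-word representations.
    spans: list of (start, end) index pairs, end exclusive.
    """
    return torch.stack([subword_embs[s:e].mean(dim=0) for s, e in spans])

embs = torch.randn(6, 16)                            # 6 sub-words
words = pool_spans(embs, [(0, 2), (2, 3), (3, 6)])   # 3 words
phrase = pool_spans(embs, [(0, 6)])                  # 1 phrase covering all sub-words
print(words.shape, phrase.shape)  # torch.Size([3, 16]) torch.Size([1, 16])
```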
no code implementations • 9 Jan 2022 • Masahiro Oda, Tong Zheng, Yuichiro Hayashi, Yoshito Otake, Masahiro Hashimoto, Toshiaki Akashi, Shigeki Aoki, Kensaku Mori
We utilize the scale uncertainty among various receptive field sizes of a segmentation FCN to obtain infection regions.
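A toy sketch of one way to measure scale uncertainty, assumed here to be the per-voxel variance of predictions made at several input scales (the `predict` function and the exact definition are hypothetical; the paper gives its own formulation):

```python
import numpy as np

def scale_uncertainty(predict, volume, scales=(0.5, 1.0, 2.0)):
    """Run a (hypothetical) segmentation function at several scales and measure
    per-voxel disagreement as the variance of the outputs.

    `predict(volume, scale)` is assumed to return a probability map with the
    same shape as `volume` regardless of the internal scale used.
    """
    probs = np.stack([predict(volume, s) for s in scales])
    return probs.var(axis=0)   # high variance -> scale-sensitive (uncertain) voxels

# Toy predictor: the scale only changes the smoothing; the shape is preserved.
toy_predict = lambda v, s: 1.0 / (1.0 + np.exp(-v / s))
print(scale_uncertainty(toy_predict, np.random.randn(8, 8, 8)).shape)  # (8, 8, 8)
```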
no code implementations • 20 Oct 2020 • Tong Zheng, Hirohisa Oda, Masahiro Oda, Shota Nakamura, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Kensaku Mori
Unsupervised SR methods that do not need paired LR and HR images are therefore required.
no code implementations • 7 Apr 2020 • Tong Zheng, Hirohisa Oda, Takayasu Moriya, Takaaki Sugino, Shota Nakamura, Masahiro Oda, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Kensaku Mori
This paper presents a super-resolution (SR) method trained on an unpaired dataset of clinical CT and micro CT volumes.
no code implementations • 30 Dec 2019 • Tong Zheng, Hirohisa Oda, Takayasu Moriya, Shota Nakamura, Masahiro Oda, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Kensaku Mori
This paper introduces a new multi-modality loss function for GAN-based super-resolution that maintains image structure and intensity when training on an unpaired dataset of clinical CT and micro CT volumes.
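A rough sketch of combining adversarial, intensity-preserving, and structure-preserving terms (the weights and the specific terms are assumptions, and `reference` stands for whatever the generator output is compared against in the unpaired setting, e.g. the input or a cycle reconstruction; the paper defines its own multi-modality loss):

```python
import torch
import torch.nn.functional as F

def combined_loss(sr, reference, disc_score, w_adv=1.0, w_int=10.0, w_struct=10.0):
    """Toy GAN-based SR loss.

    adversarial: push the generator to fool the discriminator.
    intensity:   L1 on voxel intensities.
    structure:   L1 on spatial gradients, a simple proxy for structure.
    """
    adv = F.binary_cross_entropy_with_logits(disc_score, torch.ones_like(disc_score))
    intensity = F.l1_loss(sr, reference)
    grads = lambda x: (x[..., 1:, :] - x[..., :-1, :], x[..., :, 1:] - x[..., :, :-1])
    gx_s, gy_s = grads(sr)
    gx_r, gy_r = grads(reference)
    structure = F.l1_loss(gx_s, gx_r) + F.l1_loss(gy_s, gy_r)
    return w_adv * adv + w_int * intensity + w_struct * structure

print(combined_loss(torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32), torch.randn(1, 1)))
```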
1 code implementation • 12 May 2019 • Jun Zhang, Tong Zheng, Shengping Zhang, Meng Wang
First, the contextual net with a center-surround architecture extracts local contextual features from image patches, and generates initial illuminant estimates and the corresponding color-corrected patches.
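For illustration, once an illuminant estimate is available, color-correcting a patch amounts to dividing out the illuminant per channel, a standard von Kries style correction (the network that produces the estimate is the paper's; this sketch only shows the correction step):

```python
import numpy as np

def correct_patch(patch, illuminant):
    """Apply a simple per-channel (von Kries) correction.

    patch:      (H, W, 3) RGB image patch with values in [0, 1].
    illuminant: (3,) estimated RGB illuminant, assumed nonzero.
    """
    illuminant = illuminant / np.linalg.norm(illuminant)   # keep only the chromatic direction
    corrected = patch / illuminant                          # broadcast over channels
    return corrected / corrected.max()                      # rescale back to [0, 1]

patch = np.random.rand(8, 8, 3)
print(correct_patch(patch, np.array([0.8, 0.6, 0.4])).shape)  # (8, 8, 3)
```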