1 code implementation • 17 Dec 2024 • Mingjia Shi, Yuhao Zhou, Ruiji Yu, Zekai Li, Zhiyuan Liang, Xuanlei Zhao, Xiaojiang Peng, Tanmay Rajpurohit, Shanmukha Ramakrishna Vedantam, Wangbo Zhao, Kai Wang, Yang You
Re-training the token-reduced model enhances Mamba's performance by effectively rebuilding the key knowledge.
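A minimal sketch of the general idea, not the paper's code: tokens are dropped by a simple importance score and the reduced model is then fine-tuned so it can recover the lost knowledge. The pruning criterion, stand-in backbone, and keep ratio are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def reduce_tokens(x: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    # x: (batch, seq_len, dim); keep the tokens with the largest L2 norm.
    scores = x.norm(dim=-1)                                  # (batch, seq_len)
    k = max(1, int(x.size(1) * keep_ratio))
    idx = scores.topk(k, dim=1).indices.sort(dim=1).values
    return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))

# Stand-in sequence model; in the paper this would be a Mamba backbone.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 10))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def retrain_step(x: torch.Tensor, y: torch.Tensor) -> float:
    x_reduced = reduce_tokens(x)                   # token reduction
    logits = model(x_reduced).mean(dim=1)          # pool over the kept tokens
    loss = nn.functional.cross_entropy(logits, y)  # re-training objective
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```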
1 code implementation • 16 Oct 2024 • Zhaoyang Wang, Weilei He, Zhiyuan Liang, Xuchao Zhang, Chetan Bansal, Ying Wei, Weitong Zhang, Huaxiu Yao
To address this issue, we first formulate and analyze the generalized iterative preference fine-tuning framework for self-rewarding language models.
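A minimal sketch of one round of such a loop, assuming a self-rewarding setup in which the model scores its own samples, the best and worst samples form a preference pair, and a DPO-style objective updates the policy; none of the function names below are from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Standard DPO objective on (chosen, rejected) log-probs vs. a frozen reference.
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -F.logsigmoid(margin).mean()

def self_reward_round(policy_logps, ref_logps, self_scores):
    # policy_logps, ref_logps: (num_samples,) sequence log-probs for one prompt.
    # self_scores: (num_samples,) rewards the model assigned to its own samples.
    chosen, rejected = self_scores.argmax(), self_scores.argmin()
    return dpo_loss(policy_logps[chosen], policy_logps[rejected],
                    ref_logps[chosen], ref_logps[rejected])

# Toy usage with made-up log-probs and self-assigned rewards.
loss = self_reward_round(torch.tensor([-12.0, -15.0, -9.5]),
                         torch.tensor([-11.0, -14.0, -10.0]),
                         torch.tensor([0.2, 0.1, 0.8]))
```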
1 code implementation • CVPR 2022 • Zhiyuan Liang, Tiancai Wang, Xiangyu Zhang, Jian Sun, Jianbing Shen
The tree energy loss is effective and easy to incorporate into existing frameworks by combining it with a traditional segmentation loss.
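A minimal sketch of that combination, under assumptions: `tree_energy_loss` below is only a crude image-smoothness placeholder standing in for the paper's tree-based affinity term, and the weighting and `ignore_index` convention are illustrative.

```python
import torch
import torch.nn.functional as F

def tree_energy_loss(logits: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    # Placeholder smoothness term: penalize prediction changes where the
    # image itself is smooth (a stand-in for the tree-based affinity).
    probs = logits.softmax(dim=1)
    dp = (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean(dim=1)
    di = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(dim=1)
    return (dp * torch.exp(-di)).mean()

def total_loss(logits, sparse_labels, image, lam: float = 0.4):
    # Partial cross-entropy on labeled pixels plus the auxiliary affinity term.
    seg = F.cross_entropy(logits, sparse_labels, ignore_index=255)
    return seg + lam * tree_energy_loss(logits, image)
```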
1 code implementation • CVPR 2021 • Tianfei Zhou, Wenguan Wang, Zhiyuan Liang, Jianbing Shen
On existing public benchmarks, face forgery detection techniques have achieved great success.
no code implementations • CVPR 2020 • Tao Li, Zhiyuan Liang, Sanyuan Zhao, Jiahao Gong, Jianbing Shen
For the global error, we first transform category-wise features into a high-level graph model with coarse-grained structural information, and then decouple the high-level graph to reconstruct the category features.
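A minimal sketch of the graph step, assuming the published description only: per-category feature vectors become graph nodes, a learnable adjacency mixes them for coarse-grained structure, and a projection decouples the graph back into per-category features. The module and its layers are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CategoryGraphHead(nn.Module):
    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_classes))  # learnable graph over categories
        self.propagate = nn.Linear(dim, dim)             # coarse-grained structural mixing
        self.decouple = nn.Linear(dim, dim)               # back to per-category features

    def forward(self, category_feats: torch.Tensor) -> torch.Tensor:
        # category_feats: (num_classes, dim) pooled per-category features.
        mixed = torch.softmax(self.adj, dim=-1) @ self.propagate(category_feats)
        return self.decouple(torch.relu(mixed))            # reconstructed category features
```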