2 code implementations • 5 Oct 2024 • Houcheng Jiang, Junfeng Fang, Tianyu Zhang, An Zhang, Ruipeng Wang, Tao Liang, Xiang Wang
This work explores sequential model editing in large language models (LLMs), a critical task of continuously modifying internal knowledge within LLMs through multi-round editing, where each round incorporates updates or corrections to adjust the model's outputs without costly retraining.
1 code implementation • 16 Jun 2024 • Xiaoxiao Ma, Mohan Zhou, Tao Liang, Yalong Bai, Tiejun Zhao, Huaian Chen, Yi Jin
We present STAR, a text-to-image model that employs a scale-wise auto-regressive paradigm.
no code implementations • 16 Jun 2022 • Lianyang Ma, Yu Yao, Tao Liang, Tongliang Liu
On the whole, the "multi-scale" mechanism exploits the different levels of semantic information in each modality, which are then used for fine-grained cross-modal interactions.
no code implementations • CVPR 2022 • Tao Liang, Guosheng Lin, Mingyang Wan, Tianrui Li, Guojun Ma, Fengmao Lv
Through the proposed MI2P unit, we can inject the language information into the vision backbone by attending the word-wise textual features to different visual channels, as well as inject the visual information into the language backbone by attending the channel-wise visual features to different textual words.
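The bidirectional injection described above can be sketched as plain cross-attention in both directions. This is an illustrative approximation only, not the paper's exact MI2P unit: the shapes, the scaled-dot-product scoring, and the residual-style addition are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: C visual channels, N textual words, shared dim d.
rng = np.random.default_rng(0)
C, N, d = 8, 5, 16
visual = rng.standard_normal((C, d))   # channel-wise visual features
textual = rng.standard_normal((N, d))  # word-wise textual features

# Inject language into vision: each visual channel attends over words.
attn_v = softmax(visual @ textual.T / np.sqrt(d))  # (C, N) weights
visual_enriched = visual + attn_v @ textual        # (C, d)

# Inject vision into language: each word attends over visual channels.
attn_t = softmax(textual @ visual.T / np.sqrt(d))  # (N, C) weights
textual_enriched = textual + attn_t @ visual       # (N, d)
```

Each attention row is a distribution over the other modality, so every visual channel receives a word-weighted summary of the text and vice versa.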
no code implementations • 26 Jul 2021 • Weiwei Liao, Tao Liang
A novel method for computing reachable sets is proposed in this paper.
no code implementations • 7 Apr 2021 • Jin Liu, Peng Chen, Tao Liang, Zhaoxing Li, Cai Yu, Shuqiao Zou, Jiao Dai, Jizhong Han
Face reenactment is a challenging task, as it is difficult to maintain accurate expression, pose and identity simultaneously.
no code implementations • ICCV 2021 • Tao Liang, Guosheng Lin, Lei Feng, Yan Zhang, Fengmao Lv
To this end, both the marginal distribution and the elements with high-confidence correlations are aligned over the common space of the query and key vectors, which are computed from the different modalities.
no code implementations • 16 Jun 2020 • Tao Liang, Wenya Wang, Fengmao Lv
Specifically, the aspect category information is used to construct pivot knowledge for transfer, under the assumption that the interactions between sentence-level aspect categories and token-level aspect terms are invariant across domains.