no code implementations • 15 Mar 2025 • Yihao Wang, Raphael Memmesheimer, Sven Behnke
The availability of large language models and open-vocabulary object perception methods enables more flexibility for domestic service robots.
1 code implementation • 20 Feb 2025 • Mengyang Sun, Yihao Wang, Tao Feng, Dan Zhang, Yifan Zhu, Jie Tang
To streamline the fine-tuning of foundation models, Low-Rank Adapters (LoRAs) have been widely adopted across various fields, including instruction tuning and domain adaptation.
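The low-rank adapter idea can be sketched in a few lines. This is a minimal illustration (not the paper's code, and independent of any particular framework): a frozen base weight `W` is augmented by a trainable low-rank update `B @ A` with rank `r` much smaller than the layer dimensions, scaled by a hyperparameter `alpha`.

```python
import numpy as np

# Minimal LoRA sketch (illustrative, not the paper's implementation).
d_in, d_out, r = 16, 8, 2
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0
alpha = 4.0                                 # scaling hyperparameter

def lora_forward(x):
    # Base path plus scaled low-rank update; because B starts at zero,
    # the adapted layer initially matches the pretrained one exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
```

Only `A` and `B` are trained, so the number of tunable parameters is `r * (d_in + d_out)` instead of `d_in * d_out`, which is what makes LoRA cheap to fine-tune and to store per task.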
no code implementations • 8 Feb 2025 • Yihao Wang, Lingxiao Li, Yifan Tang, Ru Zhang, Jianyi Liu
Furthermore, ITSmark can also customize the watermark embedding position and proportion according to user needs, making embedding more flexible.
no code implementations • 1 Jan 2025 • Jiaxin Song, Xinyu Wang, Yihao Wang, Yifan Tang, Ru Zhang, Jianyi Liu, Gongshen Liu
While manual content moderation is still prevalent, the overwhelming volume of content and the psychological strain on human moderators underscore the need for automated toxic speech detection.
no code implementations • 29 Nov 2024 • Yihao Wang, Marcus Klasson, Matias Turkulainen, Shuzhe Wang, Juho Kannala, Arno Solin
Gaussian splatting enables fast novel view synthesis in static 3D environments.
no code implementations • 2 Nov 2024 • Yang Yan, Yihao Wang, Chi Zhang, Wenyuan Hou, Kang Pan, Xingkai Ren, Zelun Wu, Zhixin Zhai, Enyun Yu, Wenwu Ou, Yang Song
In this study, we introduce a novel paradigm named Large Language Models for Post-Ranking in search engines (LLM4PR), which leverages the capabilities of LLMs to accomplish the post-ranking task in search engines.
no code implementations • 3 Sep 2024 • Yihao Wang, Ru Zhang, Yifan Tang, Jianyi Liu
With the evolution of generative linguistic steganography techniques, conventional steganalysis falls short in robustly quantifying the alterations induced by steganography, thereby complicating detection.
no code implementations • 28 Aug 2024 • Wei Chen, Zhiyuan Li, Shuo Xin, Yihao Wang
Our work contributes to the development of more sustainable and scalable language models for on-device applications, addressing the critical need for energy-efficient and responsive AI technologies in resource-constrained environments while maintaining the accuracy to understand long contexts.
no code implementations • 28 Jul 2024 • Yihao Wang, Lizhi Chen, Zhong Qian, Peifeng Li
To address this issue, we construct a dataset named Official-NV, comprising officially published news videos.
no code implementations • 22 Jul 2024 • Peng Cheng, Huimu Wang, Jinyuan Zhao, Yihao Wang, Enqiang Xu, Yu Zhao, Zhuojian Xiao, Songlin Wang, Guoyu Tang, Lin Liu, Sulong Xu
Existing learning-to-rank methods neglect the long-term value of traffic allocation, whereas reinforcement learning approaches struggle to balance multiple objectives and face cold-start difficulties in real-world data environments.
no code implementations • 6 Jun 2024 • Yifan Tang, Yihao Wang, Ru Zhang, Jianyi Liu
In this mode, LSGC removes the LS-task "description" and uses "causalLM" LLMs to extract steganographic features.
no code implementations • 28 Jan 2024 • Yihao Wang, Ruiqi Song, Lingxiao Li, Ru Zhang, Jianyi Liu
Based on the invisible characteristics of LS, we give three constraints that need to be met to design the function.
no code implementations • 3 Nov 2023 • Yihao Wang, Ruiqi Song, Lingxiao Li, Yifan Tang, Ru Zhang, Jianyi Liu
The extracted features are mapped to high-dimensional user features by the deep-learning model of the method being improved.
no code implementations • CVPR 2023 • Yihao Wang, Zhigang Wang, Bin Zhao, Dong Wang, Mulin Chen, Xuelong Li
In contrast, we propose a purely passive method to track a person walking in an invisible room by only observing a relay wall, which is more in line with real application scenarios, e.g., security.
1 code implementation • Findings (ACL) 2022 • Rui Cao, Yihao Wang, Yuxin Liang, Ling Gao, Jie Zheng, Jie Ren, Zheng Wang
We define a maximum traceable distance metric, through which we learn to what extent the text contrastive learning benefits from the historical information of negative samples.
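The intuition behind a traceable-distance bound can be sketched as follows. This is a hedged illustration (names and the threshold are hypothetical, not the paper's definition): negative embeddings cached from earlier training steps remain useful for contrastive learning only while the encoder has not drifted too far, so cached negatives older than a maximum distance in steps are discarded.

```python
from collections import deque

import numpy as np

# Illustrative sketch of bounding how far back cached negatives may come
# from; the threshold and bookkeeping are assumptions for demonstration.
MAX_TRACEABLE_DISTANCE = 3  # reuse negatives from at most 3 steps back

queue = deque()  # entries: (step_produced, embedding)

def usable_negatives(current_step):
    # Drop cached negatives older than the traceable distance, then
    # return the embeddings that are still considered reliable.
    while queue and current_step - queue[0][0] > MAX_TRACEABLE_DISTANCE:
        queue.popleft()
    return [emb for _, emb in queue]

rng = np.random.default_rng(0)
for step in range(10):
    negs = usable_negatives(step)     # negatives eligible at this step
    queue.append((step, rng.standard_normal(4)))
```

Under this scheme the pool of negatives at any step is capped by the traceable distance, which mirrors the paper's question of how much historical information negative samples can contribute.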
no code implementations • 20 Oct 2021 • Yihao Wang, Ling Gao, Jie Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao
In detail, we train a DNN model (termed the pre-model) to predict which object detection model to use for the incoming task and to which edge server to offload it, based on physical characteristics of the image task (e.g., brightness, saturation).
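The scheduling idea can be sketched with cheap image statistics. The feature definitions, thresholds, and model/server names below are illustrative stand-ins for the learned pre-model, not the authors' code:

```python
import numpy as np

# Illustrative sketch: derive cheap physical features from an image and
# use a simple rule as a stand-in for the learned pre-model that picks
# a detector and an edge server. All thresholds are hypothetical.
def image_features(img):
    # img: H x W x 3 array with values in [0, 1]
    brightness = img.mean()
    # Crude saturation proxy: per-pixel channel spread, averaged.
    saturation = (img.max(axis=2) - img.min(axis=2)).mean()
    return brightness, saturation

def schedule(img):
    b, s = image_features(img)
    model = "heavy-detector" if s > 0.3 else "light-detector"
    server = "edge-0" if b > 0.5 else "edge-1"
    return model, server

dark_img = np.zeros((4, 4, 3))  # dark, unsaturated image
```

In the actual system a trained DNN would replace the hand-written rule, but the key design choice is the same: routing decisions use features that cost far less to compute than running a detector.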
no code implementations • 7 Oct 2021 • Yihao Wang
Based on the snapshot ensemble, we present a new method that is easier to implement: unlike the original snapshot ensemble, which seeks local minima, our snapshot ensemble focuses on the last few iterations of training and stores the sets of parameters from them.
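The described variant can be sketched as follows. This is a hedged illustration with a toy update rule (the training step and prediction function are stand-ins, not the paper's setup): keep only the parameter vectors from the last K iterations and average their predictions.

```python
from collections import deque

import numpy as np

# Sketch of a last-iterations snapshot ensemble: instead of collecting
# snapshots at local minima (original snapshot ensemble), keep the
# parameter sets from the K most recent training iterations.
K = 3
snapshots = deque(maxlen=K)  # automatically evicts older snapshots

def train_step(theta):
    # Toy stand-in for one optimization step.
    return theta - 0.1 * theta

theta = np.ones(4)
for _ in range(10):
    theta = train_step(theta)
    snapshots.append(theta.copy())

def ensemble_predict(x):
    # Average the stored snapshots' predictions (here: linear scores).
    return np.mean([snap @ x for snap in snapshots], axis=0)
```

Because no learning-rate schedule restarts are needed, this variant adds almost no cost on top of ordinary training: it is just K extra parameter copies and an averaged forward pass.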
1 code implementation • 15 Mar 2021 • Xuequan Lu, Yihao Wang, Sheldon Fung, Xue Qing
In this paper, we identify two main bottlenecks: (1) the lack of a publicly available imaging dataset covering diverse species of nematodes (especially species found only in natural environments), which requires considerable human resources in field work and experts in taxonomy, and (2) the lack of a standard benchmark of state-of-the-art deep learning techniques on this dataset, which demands a background in computer science.
no code implementations • 23 Dec 2019 • Yihao Wang, Katsushi Hashimoto, Toru Tomimatsu, Yoshiro Hirayama
While the disorder-induced quantum Hall (QH) effect has been studied previously, the effect of disorder potential on microscopic features of the integer QH effect remains unclear, particularly for the incompressible (IC) strip.
Mesoscale and Nanoscale Physics