1 code implementation • 22 Sep 2024 • Navid Ayoobi, Lily Knab, Wen Cheng, David Pantoja, Hamidreza Alikhani, Sylvain Flamant, Jin Kim, Arjun Mukherjee
In response to the identified shortcomings of existing AI text detectors, we present a countermeasure that improves robustness against this form of manipulation.
1 code implementation • 20 Aug 2024 • Wen Cheng, Ke Sun, Xinyu Zhang, Wei Wang
The rapid development of large language models (LLMs) has significantly advanced code completion capabilities, giving rise to a new generation of LLM-based Code Completion Tools (LCCTs).
no code implementations • 5 Jul 2024 • Han Wang, Yuman Nie, Yun Li, Hongjie Liu, Min Liu, Wen Cheng, Yaoxiong Wang
Event-based cameras, inspired by the biological retina, have evolved into cutting-edge sensors distinguished by their minimal power requirements, negligible latency, superior temporal resolution, and expansive dynamic range.
1 code implementation • 7 Mar 2024 • Shichen Dong, Wen Cheng, Jiayu Qin, Wei Wang
The emergence of LLMs has ignited a fresh surge of breakthroughs in NLP applications, particularly in domains such as question-answering systems and text generation.
no code implementations • 26 Aug 2023 • Jian Zhu, Wen Cheng, Yu Cui, Chang Tang, Yuyang Dai, Yong Li, Lingfang Zeng
Hash representation learning of multi-view heterogeneous data is the key to improving the accuracy of multimedia retrieval.
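Hash representation learning maps data to compact binary codes so that retrieval reduces to fast Hamming-distance comparisons. As a minimal generic sketch of that idea (sign-based binarization and brute-force Hamming lookup are illustrative assumptions here, not the paper's specific method):

```python
import numpy as np

def to_binary_codes(embeddings):
    """Binarize real-valued vectors: positive components -> 1, else 0."""
    return (np.asarray(embeddings) > 0).astype(np.uint8)

def hamming_search(query_code, database_codes, k=2):
    """Return indices of the k database codes closest in Hamming distance."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")[:k]

# Toy example: 4-bit codes for three database items and one query.
db = to_binary_codes([[0.9, -0.2, 0.4, -0.7],
                      [-0.5, 0.1, -0.3, 0.8],
                      [0.7, -0.1, 0.6, -0.2]])
q = to_binary_codes([[1.0, -0.4, 0.5, -0.9]])[0]
print(hamming_search(q, db))  # indices of nearest items first
```

Because codes are binary, Hamming distances can be computed with bitwise operations, which is what makes hashing attractive for large-scale multimedia retrieval.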
no code implementations • 25 Jul 2023 • Xutian Deng, Junnan Jiang, Wen Cheng, Miao Li
As medical ultrasound becomes a prevailing examination approach, robotic ultrasound systems can facilitate the scanning process and relieve professional sonographers of repetitive and tedious work.
no code implementations • 22 Mar 2023 • Wen Cheng, Shichen Dong, Wei Wang
This paper describes our submission to ICASSP 2023 MUG Challenge Track 4, Keyphrase Extraction, which aims to extract keyphrases most relevant to the conference theme from conference materials.
no code implementations • 8 May 2020 • Hian Hian See, Brian Lim, Si Li, Haicheng Yao, Wen Cheng, Harold Soh, Benjamin C. K. Tee
We anticipate that our ST-MNIST dataset will be of interest to, and useful for, the neuromorphic and robotics research communities.