no code implementations • 25 Nov 2024 • Wang Bill Zhu, Deqing Fu, Kai Sun, Yi Lu, Zhaojiang Lin, Seungwhan Moon, Kanika Narang, Mustafa Canim, Yue Liu, Anuj Kumar, Xin Luna Dong
We hypothesize that a user's visual history, with images reflecting their daily life, offers valuable insights into their interests and preferences and can be leveraged for personalization.
1 code implementation • 13 Nov 2024 • Xin Jin, Qianqian Qiao, Yi Lu, Huaye Wang, Heng Huang, Shan Gao, Jianfei Liu, Rui Li
Datasets play a pivotal role in training visual models, facilitating the development of abstract understandings of visual features through diverse image samples and multidimensional attributes.
1 code implementation • 18 Sep 2024 • Yi Lu, Jing Nathan Yan, Songlin Yang, Justin T. Chiu, Siyu Ren, Fei Yuan, Wenting Zhao, Zhiyong Wu, Alexander M. Rush
Broad textual understanding and in-context learning require language models that utilize full document contexts.
no code implementations • 18 Sep 2024 • Xin Qi, Ruibo Fu, Zhengqi Wen, Tao Wang, Chunyu Qiang, JianHua Tao, Chenxing Li, Yi Lu, Shuchen Shi, Zhiyong Wang, Xiaopeng Wang, Yuankun Xie, Yukun Liu, Xuefei Liu, Guanjun Li
In recent years, speech diffusion models have advanced rapidly.
no code implementations • 26 Aug 2024 • Cherag Aroraa, Tracy Holloway King, Jayant Kumar, Yi Lu, Sanat Sharma, Arvind Srikantan, David Uvalle, Josep Valls-Vargas, Harsha Vardhan
As user content and queries become increasingly multi-modal, the need for effective multi-modal search systems has grown.
1 code implementation • 20 Aug 2024 • Yuankun Xie, Chenxu Xiong, Xiaopeng Wang, Zhiyong Wang, Yi Lu, Xin Qi, Ruibo Fu, Yukun Liu, Zhengqi Wen, JianHua Tao, Guanjun Li, Long Ye
Currently, Audio Language Models (ALMs) are rapidly advancing due to the developments in large language models and audio neural codecs.
no code implementations • 7 Jul 2024 • Ruibo Fu, Xin Qi, Zhengqi Wen, JianHua Tao, Tao Wang, Chunyu Qiang, Zhiyong Wang, Yi Lu, Xiaopeng Wang, Shuchen Shi, Yukun Liu, Xuefei Liu, Shuai Zhang
The results indicate that the ASRRL method significantly outperforms traditional fine-tuning approaches, achieving higher speaker similarity and better overall speech quality with limited reference speeches.
no code implementations • 25 Jun 2024 • Yongliang Wu, Bozheng Li, Jiawang Cao, Wenbo Zhu, Yi Lu, Weiheng Chi, Chuyun Xie, Haolin Zheng, Ziyue Su, Jay Wu, Xu Yang
The Long-form Video Question-Answering task requires comprehending and analyzing extended video content to answer questions accurately, using both temporal and contextual information.
no code implementations • 22 Jun 2024 • Xiaopeng Wang, Yi Lu, Xin Qi, Zhiyong Wang, Yuankun Xie, Shuchen Shi, Ruibo Fu
The objective of the challenge is to establish a multi-speaker, multi-lingual Indic Text-to-Speech system with voice cloning capabilities, covering seven Indian languages with both male and female speakers.
1 code implementation • 15 Jun 2024 • Ruibo Fu, Shuchen Shi, Hongming Guo, Tao Wang, Chunyu Qiang, Zhengqi Wen, JianHua Tao, Xin Qi, Yi Lu, Xiaopeng Wang, Zhiyong Wang, Yukun Liu, Xuefei Liu, Shuai Zhang, Guanjun Li
Despite advancements in AIGC technologies for text and image generation, Foley audio dubbing remains rudimentary due to difficulties in cross-modal scene matching and content correlation.
no code implementations • 12 Jun 2024 • Yi Lu, Yuankun Xie, Ruibo Fu, Zhengqi Wen, JianHua Tao, Zhiyong Wang, Xin Qi, Xuefei Liu, Yongwei Li, Yukun Liu, Xiaopeng Wang, Shuchen Shi
To effectively detect LLM-based deepfake audio, we focus on the core of the generation process, the conversion from neural codec to waveform.
1 code implementation • 8 May 2024 • Yuankun Xie, Yi Lu, Ruibo Fu, Zhengqi Wen, Zhiyong Wang, JianHua Tao, Xin Qi, Xiaopeng Wang, Yukun Liu, Haonan Cheng, Long Ye, Yi Sun
With the proliferation of Audio Language Model (ALM) based deepfake audio, there is an urgent need for generalized detection methods.
no code implementations • 5 May 2024 • Xin Jin, Qianqian Qiao, Yi Lu, Shan Gao, Heng Huang, Guangdong Li
This dataset solely comprises overall scores for high-quality artistic images.
no code implementations • 14 Apr 2024 • Yi Lu, Jianguo Wang, Huihua Xie
This paper develops a generalized framework for identifying causal impacts in a reduced-form manner under kinked settings when agents can manipulate their choices around the threshold.
1 code implementation • 1 Apr 2024 • wei he, Shichun Liu, Jun Zhao, Yiwen Ding, Yi Lu, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang
The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID.
1 code implementation • 18 Feb 2024 • Jun Zhao, Can Zu, Hao Xu, Yi Lu, wei he, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang
Large language models (LLMs) have demonstrated impressive performance in understanding language and executing complex reasoning tasks.
1 code implementation • 16 Feb 2024 • Yi Lu, Xin Zhou, wei he, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang
Attending to the full sentence in every head generalizes poorly to longer sequences due to out-of-distribution (OOD) issues; instead, we let each head process an in-distribution length by selecting and attending to important context chunks.
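The selection idea can be illustrated with a small numpy sketch (a hypothetical `chunked_attention` helper, not the paper's implementation): the query scores each fixed-size chunk of context, and attention is computed only over positions in the top-scoring chunks, so each head sees an in-distribution length.

```python
import numpy as np

def chunked_attention(q, K, V, chunk_size=4, top_k=2):
    """Attend only to the top-k most relevant context chunks.

    q: (d,) query vector; K, V: (n, d) key/value matrices.
    Illustrative sketch: a chunk's relevance is its best q.k score.
    """
    n, d = K.shape
    n_chunks = n // chunk_size
    scores = K @ q / np.sqrt(d)  # (n,) scaled dot-product scores
    chunk_scores = scores[: n_chunks * chunk_size].reshape(n_chunks, chunk_size)
    best = np.argsort(chunk_scores.max(axis=1))[-top_k:]  # top-k chunk indices
    idx = np.concatenate(
        [np.arange(c * chunk_size, (c + 1) * chunk_size) for c in sorted(best)]
    )
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()  # softmax over the selected positions only
    return w @ V[idx]
```

Only the selected positions enter the softmax, so the attention distribution always spans at most `top_k * chunk_size` tokens regardless of the total context length.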
no code implementations • 11 Jan 2024 • Chenghao Li, Chaoning Zhang, Boheng Zeng, Yi Lu, Pengbo Shi, Qingzi Chen, Jirui Liu, Lingyun Zhu, Yang Yang, Heng Tao Shen
These findings highlight the effectiveness of LKCA in bridging local and global feature modeling, offering a practical and robust solution for real-world applications with limited data and resources.
no code implementations • 2 Nov 2023 • Xin Zhou, Yi Lu, Ruotian Ma, Tao Gui, Qi Zhang, Xuanjing Huang
Specifically, we introduce ``security vectors'', a few new parameters that can be separated from the LLM, to ensure the LLM's responses are consistent with the harmful behavior.
no code implementations • 1 Feb 2023 • Chenglong Wang, Yi Lu, Yongyu Mu, Yimin Hu, Tong Xiao, Jingbo Zhu
Knowledge distillation addresses the problem of transferring knowledge from a teacher model to a student model.
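The standard soft-label form of this transfer, a temperature-scaled KL divergence between teacher and student output distributions, can be sketched as follows (`distillation_loss` is an illustrative helper, not code from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) at temperature T, scaled by T^2
    (Hinton-style soft-label distillation)."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

A higher temperature flattens both distributions, so the student is trained to match the teacher's relative preferences over wrong classes, not just its argmax.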
1 code implementation • 28 Dec 2022 • Bayram Cevdet Akdeniz, Oleksandr Frei, Espen Hagen, Tahir Tekin Filiz, Sandeep Karthikeyan, Joelle Pasman, Andreas Jangmo, Jacob Bergsted, John R. Shorter, Richard Zetterberg, Joeri Meijsen, Ida Elken Sonderby, Alfonso Buil, Martin Tesli, Yi Lu, Patrick Sullivan, Ole Andreassen, Eivind Hovig
The analyses can be done by running these auto-generated scripts with the software containers.
no code implementations • 8 Dec 2022 • Yi Lu, Hui Chen, Jukka Talvitie, Henk Wymeersch, Mikko Valkama
Reconfigurable intelligent surfaces (RISs) are expected to be a key component enabling the mobile network evolution towards a flexible and intelligent 6G wireless platform.
1 code implementation • 5 Oct 2022 • Hyunji Lee, Jaeyoung Kim, Hoyeon Chang, Hanseok Oh, Sohee Yang, Vlad Karpukhin, Yi Lu, Minjoon Seo
Because the generative retrieval model depends solely on the information encoded in its model parameters, without external memory, its information capacity is limited and fixed.
no code implementations • 5 Jul 2022 • Yuan Liang, Bingjie Yu, Xiaojian Zhang, Yi Lu, Linchuan Yang
To this end, this study applies difference-in-differences (i.e., a regression-based causal inference approach) to empirically evaluate the effects of the congestion tax policy on ridesourcing demand and traffic congestion in Chicago.
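The core 2x2 difference-in-differences logic behind such an evaluation can be sketched as follows (`did_estimate` is an illustrative helper; the study's actual specification is a regression with additional controls):

```python
import numpy as np

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 difference-in-differences estimate:
    (treated post - treated pre) - (control post - control pre).
    The control group's change absorbs the common time trend."""
    return (np.mean(treat_post) - np.mean(treat_pre)) - (
        np.mean(ctrl_post) - np.mean(ctrl_pre)
    )
```

For example, if treated-area demand falls from 10 to 8 trips while the control area is flat at 10, the estimated policy effect is -2 trips.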
no code implementations • 16 Mar 2022 • Cheng Ge, Yi Lu, Jia Qu, Liangxu Xie, Feng Wang, Hong Zhang, Ren Kong, Shan Chang
De novo peptide sequencing from mass spectrometry data is an important method for protein identification.
no code implementations • 1 Feb 2022 • Shayne Longpre, Julia Reisler, Edward Greg Huang, Yi Lu, Andrew Frank, Nikhil Ramesh, Chris DuBois
Studies of active learning traditionally assume the target and source data stem from a single domain.
no code implementations • 29 Sep 2021 • Changhuai Chen, Xile Shen, Mengyu Ye, Yi Lu, Jun Che, ShiLiang Pu
We find that the background class should be treated differently from the classes of interest during training.
no code implementations • 15 Jan 2021 • Yi Lu, Mike Koivisto, Jukka Talvitie, Elizaveta Rastorgueva-Foi, Toni Levanen, Elena Simona Lohan, Mikko Valkama
The fifth generation (5G) mobile networks with enhanced connectivity and positioning capabilities play an increasingly important role in the development of automated vehicle-to-everything (V2X) and other advanced industrial Internet of Things (IoT) systems.
1 code implementation • 19 Oct 2020 • Jie Lian, Jingyu Liu, Yizhou Yu, Mengyuan Ding, Yaoci Lu, Yi Lu, Jie Cai, Deshou Lin, Miao Zhang, Zhe Wang, Kai He, Yijie Yu
The thoracic abnormality detection challenge is organized by the Deepwise AI Lab.
no code implementations • NAACL 2021 • Shayne Longpre, Yi Lu, Christopher DuBois
In the context of question answering, we investigate competing hypotheses for the existence of MPPIs, including poor posterior calibration of neural models, lack of pretraining, and "dataset bias" (where a model learns to attend to spurious, non-generalizable cues in the training data).
2 code implementations • 30 Jul 2020 • Shayne Longpre, Yi Lu, Joachim Daiber
Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets.
no code implementations • 15 Jul 2020 • Junwen Chen, Yi Lu, Yaran Chen, Dongbin Zhao, Zhonghua Pang
A good object segmentation should contain clear contours and complete regions.
no code implementations • ICLR 2019 • Du Su, Ali Yekkehkhany, Yi Lu, Wenmiao Lu
We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of abstraction and embedding steps.
1 code implementation • Radiology 2020 • Lin Li, Lixin Qin, Zeguo Xu, Youbing Yin, Xin Wang, Bin Kong, Junjie Bai, Yi Lu, Zhenghan Fang, Qi Song, Kunlin Cao, Daliang Liu, Guisheng Wang, Qizhong Xu, Xisheng Fang, Shiqin Zhang, Juan Xia, Jun Xia
Materials and Methods: In this retrospective, multi-center study, a deep learning model, COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT exams for the detection of COVID-19.
no code implementations • 2 Jan 2020 • Yi Lu, Yaran Chen, Dongbin Zhao, Jianxin Chen
Then we apply a graph convolutional network to solve this graph node classification problem.
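A single graph-convolution layer of the kind used for node classification can be sketched as follows (an illustrative `gcn_layer` helper using the common symmetric normalization of Kipf and Welling, not the paper's code):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix; H: (n, f_in) node features;
    W: (f_in, f_out) learned weights.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees incl. self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking a couple of such layers and applying a softmax over the final features yields per-node class scores, which is the standard GCN node-classification setup.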
no code implementations • WS 2019 • Shayne Longpre, Yi Lu, Zhucheng Tu, Chris DuBois
To produce a domain-agnostic question answering model for the Machine Reading Question Answering (MRQA) 2019 Shared Task, we investigate the relative benefits of large pre-trained language models, various data sampling strategies, as well as query and context paraphrases generated by back-translation.
no code implementations • WS 2019 • Lung-Hao Lee, Yi Lu, Po-Han Chen, Po-Lei Lee, Kuo-Kai Shyu
This study describes the model design of the NCUEE system for the MEDIQA challenge at the ACL-BioNLP 2019 workshop.
no code implementations • 25 Mar 2019 • Zhihui Guo, Junjie Bai, Yi Lu, Xin Wang, Kunlin Cao, Qi Song, Milan Sonka, Youbing Yin
The proposed method generates well-positioned centerlines, exhibits fewer missing branches, and is more robust in the presence of minor imperfections in the object segmentation mask.
no code implementations • 29 Jan 2019 • Bin Kong, Xin Wang, Junjie Bai, Yi Lu, Feng Gao, Kunlin Cao, Qi Song, Shaoting Zhang, Siwei Lyu, Youbing Yin
In order to address these limitations, we present tree-structured ConvLSTM models for tree-structured image analysis tasks which can be trained end-to-end.
no code implementations • 21 Dec 2018 • Eric Wu, Bin Kong, Xin Wang, Junjie Bai, Yi Lu, Feng Gao, Shaoting Zhang, Kunlin Cao, Qi Song, Siwei Lyu, Youbing Yin
The hierarchical attention components of the residual attention subnet force our network to focus on the key components of the X-ray images and generate the final predictions as well as the associated visual supports, which is similar to the assessment procedure of clinicians.
no code implementations • LREC 2014 • Liang Tian, Derek F. Wong, Lidia S. Chao, Paulo Quaresma, Francisco Oliveira, Yi Lu, Shuo Li, Yiming Wang, Long-Yue Wang
This paper describes the acquisition of large-scale, high-quality parallel corpora for English and Chinese.