no code implementations • 24 Oct 2024 • Jing Peng, Yucheng Wang, Yu Xi, Xu Li, Xizhuo Zhang, Kai Yu
The paper further delves into training strategies for Speech LLMs, proposes potential solutions based on these findings, and offers valuable insights and references for future research in this domain, as well as for LLM applications in multimodal contexts.
Automatic Speech Recognition (ASR) +3
no code implementations • 13 Oct 2024 • Guorui Lu, Jing Peng, Bingyuan Huang, Chang Gao, Todor Stefanov, Yong Hao, Qinyu Chen
SlimSeiz operates in two stages: the first stage selects the optimal channel set for seizure prediction using machine learning algorithms, and the second stage employs a lightweight neural network based on convolution and Mamba for prediction.
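The two-stage design lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration of such a pipeline, not the paper's implementation: channel selection is approximated with a per-channel random-forest score (the paper's actual criterion may differ), and the Mamba block is stubbed with a GRU stand-in.

```python
# Hypothetical two-stage seizure-prediction pipeline in the spirit of SlimSeiz.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

def select_channels(eeg, labels, k=4):
    """Stage 1: rank EEG channels by single-channel classifier accuracy.
    eeg: (n_windows, n_channels, n_samples). Scoring on training data here
    is a simplification; a held-out split would be used in practice."""
    scores = []
    for c in range(eeg.shape[1]):
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        x = eeg[:, c, :]
        clf.fit(x, labels)
        scores.append(clf.score(x, labels))
    return np.argsort(scores)[-k:]  # indices of the k best channels

class SlimPredictor(nn.Module):
    """Stage 2: lightweight 1-D conv front end; a GRU stands in for the
    Mamba (state-space) block purely for illustration."""
    def __init__(self, in_channels, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        self.seq = nn.GRU(hidden, hidden, batch_first=True)  # Mamba stand-in
        self.head = nn.Linear(hidden, 2)  # pre-ictal vs. inter-ictal

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)  # (batch, time', hidden)
        out, _ = self.seq(h)
        return self.head(out[:, -1])      # logits from the last time step
```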
no code implementations • 6 Oct 2024 • Zhuoyan Li, Chen Liang, Jing Peng, Ming Yin
In this paper, we conduct an experimental study to understand whether and how disclosing the level and type of AI assistance in the writing process affects people's perceptions of writing produced under this paradigm, including their evaluations of its quality and their rankings of different writings.
no code implementations • 25 Jan 2024 • Patrick Lee, Alain Chirino Trujillo, Diana Cuevas Plancarte, Olumide Ebenezer Ojo, Xinyi Liu, Iyanuoluwa Shode, Yuan Zhao, Jing Peng, Anna Feldman
This study investigates the computational processing of euphemisms, a universal linguistic phenomenon, across multiple languages.
no code implementations • 28 Nov 2023 • Yizhuo Cai, Bo Lei, Qianying Zhao, Jing Peng, Min Wei, Yushun Zhang, Xing Zhang
In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication-efficiency optimization of federated learning for the computing and network convergence of 6G networks, proposing methods that adapt the training process to different network conditions and to the computing power of the devices participating in federated learning.
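One generic way to cut uplink traffic in such settings is to sparsify model updates before transmission. The sketch below shows top-k gradient sparsification as an illustrative technique only; the paper's actual decision methods for 6G computing and network convergence are not reproduced here.

```python
# Generic communication-efficiency illustration: top-k update sparsification.
import torch

def topk_sparsify(tensor, ratio=0.01):
    """Keep only the largest-magnitude entries of an update before uploading.
    Uplink cost drops to roughly k values plus their indices."""
    flat = tensor.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = flat.abs().topk(k)
    sparse = torch.zeros_like(flat)
    sparse[indices] = flat[indices]
    return sparse.view_as(tensor)
```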
no code implementations • 31 May 2023 • Patrick Lee, Iyanuoluwa Shode, Alain Chirino Trujillo, Yuan Zhao, Olumide Ebenezer Ojo, Diana Cuevas Plancarte, Anna Feldman, Jing Peng
Transformers have been shown to work well for the task of English euphemism disambiguation, in which a potentially euphemistic term (PET) is classified as euphemistic or non-euphemistic in a particular context.
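As a rough illustration of the task setup (the model choice, span-marking scheme, and example sentence below are assumptions, not the authors' recipe), a PET can be paired with its context and classified by a standard transformer encoder:

```python
# Hedged sketch: binary PET disambiguation with a transformer encoder.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = non-euphemistic, 1 = euphemistic

context = "After a long illness, he finally passed away last night."
# Encode the context and the candidate PET as a sentence pair so the model
# knows which span to disambiguate.
inputs = tok(context, "passed away", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # untrained probabilities; fine-tune on labeled PETs first
```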
1 code implementation • 18 May 2023 • Iyanuoluwa Shode, David Ifeoluwa Adelani, Jing Peng, Anna Feldman
Leveraging transfer learning, we compare the performance of cross-domain adaptation from the Twitter domain and cross-lingual adaptation from English.
no code implementations • 23 Nov 2022 • Patrick Lee, Anna Feldman, Jing Peng
This paper presents The Shared Task on Euphemism Detection for the Third Workshop on Figurative Language Processing (FigLang 2022) held in conjunction with EMNLP 2022.
no code implementations • 10 Nov 2022 • Jing Peng, Pengyu Wei, Zuo Quan Xu
This paper studies a continuous-time optimal portfolio selection problem in the complete market for a behavioral investor whose preference is of the prospect type with probability distortion.
no code implementations • 5 Oct 2022 • Qiong Zhang, Jing Peng, Xin Zhang, Aline Talhouk, Gang Niu, Xiaoxiao Li
In federated learning (FL), classifiers (e.g., deep networks) are trained on datasets from multiple data centers without exchanging data across them, which improves the sample efficiency.
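The no-data-exchange setup the abstract describes can be illustrated with a textbook FedAvg round, sketched below; this is the generic aggregation scheme, not the specific method proposed in the paper.

```python
# Generic FedAvg sketch: each center trains locally and only model
# parameters are exchanged, never raw data.
import copy
import torch

def fedavg_round(global_model, clients, local_epochs=1, lr=0.1):
    states, weights = [], []
    for loader in clients:                       # one DataLoader per center
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(local(x), y)
                loss.backward()
                opt.step()
        states.append(local.state_dict())
        weights.append(len(loader.dataset))
    # Parameter average weighted by local dataset size.
    total = sum(weights)
    avg = {k: sum(w / total * s[k].float() for s, w in zip(states, weights))
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```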
1 code implementation • NAACL (unimplicit) 2022 • Patrick Lee, Martha Gavidia, Anna Feldman, Jing Peng
This paper presents a linguistically driven proof of concept for finding potentially euphemistic terms, or PETs.
no code implementations • LREC 2022 • Martha Gavidia, Patrick Lee, Anna Feldman, Jing Peng
Euphemisms prove to be a difficult topic, not only because they are subject to language change, but also because humans may not agree on what is a euphemism and what is not.
no code implementations • 23 Jan 2020 • Kei Yin Ng, Anna Feldman, Jing Peng
The crowdsourcing results suggest that while humans tend to see censored blogposts as more controversial and more likely to trigger action in real life than the uncensored counterparts, they in general cannot make a better guess than our model when it comes to 'reading the mind' of the censors in deciding whether a blogpost should be censored.
no code implementations • 26 Aug 2019 • Hankz Hankui Zhuo, Jing Peng, Subbarao Kambhampati
Our approach takes as input a set of plan traces with disordered actions and noise and outputs action models that can best explain the plan traces.
no code implementations • WS 2019 • Kei Yin Ng, Anna Feldman, Jing Peng, Chris Leberknight
According to Freedom House's annual Freedom on the Net report, more than half the world's Internet users now live in a place where the Internet is censored or restricted.
no code implementations • 15 Mar 2019 • Wei Feng, Wentao Liu, Tong Li, Jing Peng, Chen Qian, Xiaolin Hu
Human-object interaction (HOI) recognition and pose estimation are two closely related tasks.
no code implementations • COLING 2018 • Kei Yin Ng, Anna Feldman, Jing Peng, Chris Leberknight
This paper investigates censorship from a linguistic perspective.
1 code implementation • EMNLP 2014 • Jing Peng, Anna Feldman, Ekaterina Vylomova
Our starting point is that words in a given text segment, such as a paragraph, that are high-ranking representatives of a common topic of discussion are less likely to be a part of an idiomatic expression.
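The intuition can be made concrete with a toy sketch: fit a topic model, score how representative a word is of a segment's dominant topic, and treat a low score as weak evidence of idiomaticity. Everything below (the use of LDA, the scoring function, the two toy sentences) is an illustrative assumption, not the authors' model.

```python
# Toy illustration: topic-representativeness as an idiomaticity cue.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stocks rallied and sales hit the roof this quarter",
        "the mechanic repaired the roof of the car after the crash"]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

def topic_score(doc_idx, word):
    """How representative `word` is of the document's dominant topic."""
    topic = lda.transform(X[doc_idx]).argmax()
    vocab = vec.vocabulary_
    if word not in vocab:
        return 0.0
    return lda.components_[topic, vocab[word]] / lda.components_[topic].sum()

# A low topic score for a content word hints the phrase may be idiomatic.
print(topic_score(0, "roof"), topic_score(1, "roof"))
```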
no code implementations • COLING 2016 • Jing Peng, Anna Feldman
Some expressions can be ambiguous between idiomatic and literal interpretations depending on the context they occur in, e.g., 'sales hit the roof' vs. 'hit the roof of the car'.