no code implementations • SIGDIAL (ACL) 2021 • Ye Tian, Tim Nieradzik, Sepehr Jalali, Da-Shan Shiu
Analysis of sentence embeddings of disfluent and fluent sentence pairs reveals that the deeper the layer, the more similar their representations (Experiment 2).
no code implementations • EACL (AdaptNLP) 2021 • Jezabel Garcia, Federica Freddi, Jamie McGowan, Tim Nieradzik, Feng-Ting Liao, Ye Tian, Da-Shan Shiu, Alberto Bernacchia
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related.
Cross-Lingual Natural Language Inference
Cross-Lingual Transfer
no code implementations • 14 Nov 2023 • Ye Tian, Xinwei Zhang, Zhiqiang Tan
Consider a semi-supervised setting with a labeled dataset of binary responses and predictors and an unlabeled dataset with only the predictors.
1 code implementation • 24 Oct 2023 • Dominic Petrak, Nafise Sadat Moosavi, Ye Tian, Nikolai Rozanov, Iryna Gurevych
Learning from free-text human feedback is essential for dialog systems, but annotated data is scarce and usually covers only a small fraction of error types known in conversational AI.
no code implementations • 23 Oct 2023 • Ye Tian, Haolei Weng, Yang Feng
While supervised federated learning approaches have enjoyed significant success, the domain of unsupervised federated learning remains relatively underexplored.
1 code implementation • 12 Oct 2023 • Jiaqi Li, Guilin Qi, Chuanyi Zhang, Yongrui Chen, Yiming Tan, Chenlong Xia, Ye Tian
First, we retrieve the relevant embedding from the knowledge graph by utilizing group relations in the metadata, and then integrate it with other modalities.
1 code implementation • 2 Oct 2023 • Shangshang Yang, Xiaoshan Yu, Ye Tian, Xueming Yan, Haiping Ma, Xingyi Zhang
Knowledge tracing (KT) aims to trace students' knowledge states by predicting whether students answer exercises correctly.
no code implementations • 18 Sep 2023 • Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu
Large Language Models (LLMs) have revolutionized natural language processing, yet aligning these models with human values and preferences using RLHF remains a significant challenge.
1 code implementation • ICCV 2023 • Ke Xu, Lei Han, Ye Tian, Shangshang Yang, Xingyi Zhang
In this paper, we explore a one-shot network quantization regime, named Elastic Quantization Neural Networks (EQ-Net), which aims to train a robust weight-sharing quantization supernet.
no code implementations • 9 Aug 2023 • Ye Tian, Mengyu Yang, Lanshan Zhang, Zhizhen Zhang, Yang Liu, Xiaohui Xie, Xirong Que, Wendong Wang
To this end, inspired by human cognition, we propose a novel recognition paradigm of "View while Moving" for efficient long-untrimmed video recognition.
1 code implementation • 4 Aug 2023 • Ling Yang, Ye Tian, Minkai Xu, Zhongyi Liu, Shenda Hong, Wei Qu, Wentao Zhang, Bin Cui, Muhan Zhang, Jure Leskovec
To address this issue, we propose to learn a new powerful graph representation space by directly labeling nodes' diverse local structures for GNN-to-MLP distillation.
no code implementations • 28 Jul 2023 • Zhizhen Zhang, Xiaohui Xie, Mengyu Yang, Ye Tian, Yong Jiang, Yong Cui
Social Media Popularity Prediction has drawn a lot of attention because of its profound impact on many different applications, such as recommendation systems and multimedia advertising.
no code implementations • 21 Jul 2023 • Zehan Zhu, Ye Tian, Yan Huang, Jinming Xu, Shibo He
Perfect synchronization in distributed machine learning problems is inefficient and even impossible due to the existence of latency, packet losses, and stragglers.
1 code implementation • 10 Jul 2023 • Shangshang Yang, Haiping Ma, Cheng Zhen, Ye Tian, Limiao Zhang, Yaochu Jin, Xingyi Zhang
Then, we propose multi-objective genetic programming (MOGP) to explore the NAS task's search space by maximizing model performance and interpretability.
no code implementations • 11 Jun 2023 • YiFan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
To address the practical challenges of tackling complex instructions, we propose RestGPT, which exploits the power of LLMs and conducts a coarse-to-fine online planning mechanism to enhance the abilities of task decomposition and API selection.
no code implementations • 19 May 2023 • Ye Tian, Zhengshuo Li
TPS and ADNs can bidirectionally deliver base-point power and provide frequency regulation support, which extends the existing reserve assumption in ITD dispatch and enhances the operational security of the ITD system.
1 code implementation • 31 Mar 2023 • Ye Tian, Yuqi Gu, Yang Feng
With a known intrinsic dimension, we propose two algorithms that are adaptive to the similarity structure and robust to outlier tasks under both MTL and TL settings.
no code implementations • 7 Mar 2023 • Ye Tian, Zhengshuo Li, Wenchuan Wu, Miao Fan
The issues of uncertainty and frequency security could become significantly serious in power systems with the high penetration of volatile inverter-based renewables (IBRs).
no code implementations • 30 Sep 2022 • Ye Tian, Haolei Weng, Yang Feng
Unsupervised learning has been widely used in many real-world applications.
no code implementations • 8 Aug 2022 • Libin Liang, Ye Tian, Ge Cheng
The study of infinite-width neural networks is important for a better understanding of neural networks in practical applications.
no code implementations • 18 Jul 2022 • MingBin Xu, Congzheng Song, Ye Tian, Neha Agrawal, Filip Granqvist, Rogier Van Dalen, Xiao Zhang, Arturo Argueta, Shiyi Han, Yaqiao Deng, Leo Liu, Anmol Walia, Alex Jin
Our goal is to train a large neural network language model (NNLM) on compute-constrained devices while preserving privacy using FL and DP.
1 code implementation • 13 Jul 2022 • Qiang Li, Zhaoliang Yao, Jingjing Wang, Ye Tian, Pengju Yang, Di Xie, ShiLiang Pu
Based on this dataset, we propose a method to obtain the blur scores only with the pairwise rank labels as supervision.
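Pairwise rank labels of this kind are commonly turned into a training signal with a margin-based ranking loss. The sketch below is a generic illustration, not the paper's actual objective; the function name, margin value, and toy scores are assumptions:

```python
def pairwise_rank_loss(score_sharp, score_blurry, margin=1.0):
    """Hinge-style pairwise ranking loss: for a labeled pair, the
    blurrier image should receive the higher blur score by a margin."""
    return max(0.0, margin - (score_blurry - score_sharp))

# Toy scores from a hypothetical blur-scoring model:
# correctly ordered with enough margin -> zero loss
assert pairwise_rank_loss(0.2, 1.5) == 0.0
# correctly ordered but within the margin -> positive loss
assert pairwise_rank_loss(0.9, 1.1) > 0.0
```

Summing this loss over all annotated pairs lets a model learn absolute blur scores even though only relative (rank) supervision is available.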
no code implementations • 26 Jun 2022 • Junhao Zhang, Vishwanatha M. Rao, Ye Tian, Yanting Yang, Nicolas Acosta, Zihan Wan, Pin-Yu Lee, Chloe Zhang, Lawrence S. Kegeles, Scott A. Small, Jia Guo
Our findings corroborate that schizophrenia is associated with widespread alterations in subcortical brain structure and that subcortical structural information provides prominent features for diagnostic classification.
1 code implementation • 21 Jan 2022 • Vishwanatha M. Rao, Zihan Wan, Soroush Arabshahi, David J. Ma, Pin-Yu Lee, Ye Tian, Xuzhe Zhang, Andrew F. Laine, Jia Guo
Transformers have demonstrated success in natural image segmentation and have recently been applied to 3D medical image segmentation tasks due to their ability to capture long-distance relationships in the input where the local receptive fields of CNNs struggle.
no code implementations • 8 Nov 2021 • Ye Tian, Yang Feng
In this work, we study the multi-class NP problem by connecting it to the CS problem and propose two algorithms.
no code implementations • 24 Oct 2021 • Ye Tian, Gesualdo Scutari, Tianyu Cao, Alexander Gasnikov
To reduce the number of communication rounds needed to reach a given solution accuracy, we propose a preconditioned, accelerated distributed method.
2 code implementations • 29 Sep 2021 • Ye Tian, Xiangxiang Chu, Hongpeng Wang
However, the transformer can model the global context easily.
1 code implementation • 10 Aug 2021 • Shangshang Yang, Ye Tian, Xiaoshu Xiang, Shichen Peng, Xingyi Zhang
Evolutionary neural architecture search (ENAS) has recently received increasing attention for its effectiveness in finding high-quality neural architectures; however, it incurs a high computational cost because the architecture encoded by each individual must be trained for complete epochs during evaluation.
1 code implementation • 19 Jul 2021 • Dawei Du, Longyin Wen, Pengfei Zhu, Heng Fan, QinGhua Hu, Haibin Ling, Mubarak Shah, Junwen Pan, Ali Al-Ali, Amr Mohamed, Bakour Imene, Bin Dong, Binyu Zhang, Bouchali Hadia Nesma, Chenfeng Xu, Chenzhen Duan, Ciro Castiello, Corrado Mencar, Dingkang Liang, Florian Krüger, Gennaro Vessio, Giovanna Castellano, Jieru Wang, Junyu Gao, Khalid Abualsaud, Laihui Ding, Lei Zhao, Marco Cianciotta, Muhammad Saqib, Noor Almaadeed, Omar Elharrouss, Pei Lyu, Qi Wang, Shidong Liu, Shuang Qiu, Siyang Pan, Somaya Al-Maadeed, Sultan Daud Khan, Tamer Khattab, Tao Han, Thomas Golda, Wei Xu, Xiang Bai, Xiaoqing Xu, Xuelong Li, Yanyun Zhao, Ye Tian, Yingnan Lin, Yongchao Xu, Yuehan Yao, Zhenyu Xu, Zhijian Zhao, Zhipeng Luo, Zhiwei Wei, Zhiyuan Zhao
Crowd counting on the drone platform is an interesting topic in computer vision, which brings new challenges such as small object inference, background clutter and wide viewpoint.
no code implementations • 29 May 2021 • Ye Tian, Yang Feng
In this work, we study the transfer learning problem under high-dimensional generalized linear models (GLMs), which aim to improve the fit on target data by borrowing information from useful source data.
no code implementations • 22 May 2021 • Ye Tian, Xingyi Zhang, Cheng He, Kay Chen Tan, Yaochu Jin
In the past three decades, a large number of metaheuristics have been proposed and have shown high performance in solving complex optimization problems.
no code implementations • 21 May 2021 • Philipp Ennen, Yen-Ting Lin, Ali Girayhan Ozbay, Ferdinando Insalata, Maolin Li, Ye Tian, Sepehr Jalali, Da-Shan Shiu
In light of the recent success of data-driven approaches, we propose the novel future bridging NLG (FBNLG) concept for dialogue systems and simulators.
no code implementations • 10 May 2021 • Min Li, Yu Li, Ye Tian, Li Jiang, Qiang Xu
This paper presents AppealNet, a novel edge/cloud collaborative architecture that runs deep learning (DL) tasks more efficiently than state-of-the-art solutions.
no code implementations • 6 May 2021 • Anish Karpurapu, Adam Krekorian, Ye Tian, Leslie M. Collins, Ravi Karra, Aaron Franklin, Boyla O. Mainsah
Since a sequence of prior doses and INR measurements better captures the variability in individual warfarin response, we hypothesized that longitudinal dose-response data would improve maintenance dose predictions.
no code implementations • 8 Mar 2021 • Jezabel R. Garcia, Federica Freddi, Feng-Ting Liao, Jamie McGowan, Tim Nieradzik, Da-Shan Shiu, Ye Tian, Alberto Bernacchia
We show that TreeMAML improves the state-of-the-art results for cross-lingual Natural Language Inference.
Cross-Lingual Natural Language Inference
Cross-Lingual Transfer
1 code implementation • 7 Feb 2021 • Ye Tian, Yang Feng
Variable screening methods have been shown to be effective in dimension reduction under the ultra-high dimensional setting.
no code implementations • 4 Jan 2021 • Heng Lu, Chen Yang, Ye Tian, Jun Lu, Fanqi Xu, FengNan Chen, Yan Ying, Kevin G. Schädler, Chinhua Wang, Frank H. L. Koppens, Antoine Reserbat-Plantey, Joel Moser
With it we characterize the lowest frequency mode of a FLG resonator by measuring its frequency response as a function of position on the membrane.
Mesoscale and Nanoscale Physics
no code implementations • 2 Jan 2021 • Ye Tian
We propose a novel multilayer correlated topic model (MCTM) to analyze how the main ideas are inherited and varied between a document and its different segments, which helps to understand an article's structure.
no code implementations • 1 Jan 2021 • Jezabel Garcia, Federica Freddi, Jamie McGowan, Tim Nieradzik, Da-Shan Shiu, Ye Tian, Alberto Bernacchia
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related, and sharing information between unrelated tasks might hurt performance.
no code implementations • 21 Sep 2020 • Yu Li, Min Li, Bo Luo, Ye Tian, Qiang Xu
The key to enabling such lightweight checking is that the smaller neural network only needs to produce approximate results for the initial task without sacrificing fault coverage much.
no code implementations • 15 Sep 2020 • Zhiwei Wei, Chenzhen Duan, Xinghao Song, Ye Tian, Hongpeng Wang
Specifically, we propose a scale adaptive module, which dynamically adjusts chip size to balance object scale, narrowing scale variation in training.
no code implementations • 10 Aug 2020 • Oliver Maier, Steven H. Baete, Alexander Fyrdahl, Kerstin Hammernik, Seb Harrevelt, Lars Kasper, Agah Karakuzu, Michael Loecher, Franz Patzig, Ye Tian, Ke Wang, Daniel Gallichan, Martin Uecker, Florian Knoll
The reference implementations were in good agreement, both visually and in terms of image similarity metrics.
1 code implementation • 13 Jul 2020 • Cong Chen, Shouyang Dong, Ye Tian, Kunlin Cao, Li Liu, Yuanhao Guo
(1) The teacher model serves a dual role as teacher and student, such that the teacher's predictions on unlabeled images may be very close to those of the student, which limits the upper bound of the student.
no code implementations • 16 Jun 2020 • Ye Tian, Yang Feng
In addition, we show that in a high-dimensional framework, the number of random subspaces needs to be very large to guarantee that a subspace covering signals is selected.
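To see why the number of random subspaces must be large, note that the chance that a uniformly drawn size-k subset of p features contains all s signal features is hypergeometric: C(p-s, k-s) / C(p, k). A minimal sketch of this count (the values of p, k, s below are illustrative, not taken from the paper):

```python
from math import comb

def cover_probability(p, k, s):
    """Probability that a uniform random size-k subset of p features
    contains all s signal features: C(p-s, k-s) / C(p, k)."""
    if k < s:
        return 0.0
    return comb(p - s, k - s) / comb(p, k)

# With p = 1000 features, subspaces of size k = 20, and s = 5 signals,
# a single random subspace almost never covers every signal
# (the probability is on the order of 1e-9), so on average hundreds of
# millions of subspaces are needed before one covers all signals.
prob = cover_probability(1000, 20, 5)
```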
1 code implementation • 8 Apr 2020 • Cheng Li, Ye Tian
We believe that PLMs can also be used to solve the relation extraction problem, but it is necessary to establish a specially designed downstream task model or even loss function for dealing with complicated relations.
Ranked #3 on Relation Extraction on SemEval-2010 Task 8
1 code implementation • 23 Oct 2019 • Jinming Xu, Ye Tian, Ying Sun, Gesualdo Scutari
This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that uses only gradient information and gossip communications.
no code implementations • ACL 2020 • Wei Wang, Ye Tian, Jiquan Ngiam, Yinfei Yang, Isaac Caswell, Zarana Parekh
Most data selection research in machine translation focuses on improving a single domain.
no code implementations • 27 May 2019 • Ye Tian, Li Yang, Wei Wang, Jing Zhang, Qing Tang, Mili Ji, Yang Yu, Yu Li, Hong Yang, Airong Qian
Traditionally, the most indispensable diagnostic procedure for cervical squamous carcinoma is histopathological assessment, which is performed under a microscope by a pathologist.
2 code implementations • 21 Feb 2019 • Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon
Lingvo is a Tensorflow framework offering a complete solution for collaborative deep learning research, with a particular focus towards sequence-to-sequence models.
no code implementations • 13 Feb 2019 • Chen Sun, Ye Tian, Liang Gao, Yishuai Niu, Tianlong Zhang, Hua Li, Yuqing Zhang, Zengqi Yue, Nicole Delepine-Gilon, Jin Yu
Machine learning has been used to develop the model.
no code implementations • 7 Nov 2018 • Ye Tian, Weiping Zhang
In this paper, we propose an effective THresholding method based on ORder Statistic, called THORS, to convert an arbitrary scoring-type classifier, which can induce a continuous cumulative distribution function of the score, into a cost-sensitive one.
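As a generic illustration of cost-sensitive thresholding (an empirical stand-in, not THORS's order-statistic rule; the cost values, data, and function name below are made up), one can pick the score cutoff that minimizes held-out misclassification cost:

```python
import numpy as np

def cost_minimizing_threshold(scores, labels, c_fp=1.0, c_fn=5.0):
    """Pick the score threshold minimizing empirical misclassification
    cost on held-out data: predict positive when score >= t."""
    best_t, best_cost = None, float("inf")
    for t in np.sort(np.unique(scores)):
        pred = scores >= t
        cost = (c_fp * np.sum(pred & (labels == 0))
                + c_fn * np.sum(~pred & (labels == 1)))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Synthetic scores: negatives ~ N(0, 1), positives ~ N(2, 1).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
labels = np.concatenate([np.zeros(500, int), np.ones(500, int)])

# Because false negatives cost 5x more, the chosen threshold sits
# below the cost-symmetric point of 1.0.
t = cost_minimizing_threshold(scores, labels)
```

THORS's contribution, per the abstract, is choosing such a threshold via order statistics of the scores rather than by the brute-force search shown here.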
no code implementations • WS 2018 • Isabel Groves, Ye Tian, Ioannis Douratsos
The currently most popular method for automatic Natural Language Generation (NLG) evaluation is comparing generated text with human-written reference sentences using automatic metrics, which has drawbacks in reliability and scalability.
no code implementations • COLING 2018 • Thiago Galery, Efstathios Charitos, Ye Tian
The system presented here took part in the 2018 Trolling, Aggression and Cyberbullying shared task (Forest and Trees team) and uses a Gated Recurrent Neural Network architecture (Cho et al., 2014) in an attempt to assess whether combining pre-trained English and Hindi fastText (Mikolov et al., 2018) word embeddings as a representation of the sequence input would improve classification performance.
Aggression Identification
Multi-Label Text Classification
no code implementations • 24 Jul 2018 • Lin Shao, Ye Tian, Jeannette Bohg
We show that our method generalizes well on real-world data achieving visually better segmentation results.
no code implementations • 16 May 2018 • Yunhan Zhao, Ye Tian, Charless Fowlkes, Wei Shen, Alan Yuille
Experimental results verify that our approach significantly improves the ability of deep networks to resist large variations between training and testing data and achieves classification accuracy improvements on several benchmark datasets, including MNIST, affNIST, SVHN, CIFAR-10 and miniImageNet.
no code implementations • WS 2017 • Ye Tian, Thiago Galery, Giulio Dulcinati, Emilia Molimpakis, Chao Sun
FB reactions (e.g. "Love" and "Angry") indicate the readers' overall sentiment, against which we can investigate the types of emojis used in the comments under different reaction profiles.
no code implementations • 4 Jan 2017 • Ye Tian, Ran Cheng, Xingyi Zhang, Yaochu Jin
To address these issues, in this paper we have developed a MATLAB platform for evolutionary multi-objective optimization, called PlatEMO, which includes more than 50 multi-objective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators.
no code implementations • LREC 2016 • Julian Hough, Ye Tian, Laura de Ruiter, Simon Betz, Spyros Kousidis, David Schlangen, Jonathan Ginzburg
We present the DUEL corpus, consisting of 24 hours of natural, face-to-face, loosely task-directed dialogue in German, French and Mandarin Chinese.