no code implementations • EACL (AdaptNLP) 2021 • Jezabel Garcia, Federica Freddi, Jamie McGowan, Tim Nieradzik, Feng-Ting Liao, Ye Tian, Da-Shan Shiu, Alberto Bernacchia
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related.
Cross-Lingual Natural Language Inference, Cross-Lingual Transfer +1
no code implementations • SIGDIAL (ACL) 2021 • Ye Tian, Tim Nieradzik, Sepehr Jalali, Da-Shan Shiu
Analysis of sentence embeddings of disfluent and fluent sentence pairs reveals that the deeper the layer, the more similar their representations (exp2).
no code implementations • 15 Oct 2024 • Sijie Cheng, Kechen Fang, Yangyang Yu, Sicheng Zhou, Bohao Li, Ye Tian, Tingguang Li, Lei Han, Yang Liu
In conclusion, VidEgoThink reflects a research trend towards employing MLLMs for egocentric vision, akin to human capabilities, enabling active observation and interaction in complex real-world environments.
no code implementations • 9 Oct 2024 • Xiyao Wang, Linfeng Song, Ye Tian, Dian Yu, Baolin Peng, Haitao Mi, Furong Huang, Dong Yu
Monte Carlo Tree Search (MCTS) has recently emerged as a powerful technique for enhancing the reasoning capabilities of LLMs.
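Monte Carlo Tree Search itself follows a generic four-step loop: selection via an upper confidence bound, expansion, rollout, and backpropagation. The sketch below runs that loop on a toy string-building problem standing in for a reasoning trace; the function names, toy domain, and reward are illustrative assumptions, not details from the paper.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state      # partial solution (e.g., a reasoning trace)
        self.parent = parent
        self.children = {}      # action -> Node
        self.visits = 0
        self.value = 0.0        # sum of rollout rewards

def ucb(child, parent_visits, c=1.4):
    # Upper Confidence Bound: exploit mean value, explore rarely-tried children.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts(root_state, actions, step, reward, horizon, iters=400, seed=0):
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the node is fully expanded and non-terminal.
        while len(node.children) == len(actions) and len(node.state) < horizon:
            node = max(node.children.values(), key=lambda ch: ucb(ch, node.visits))
        # 2. Expansion: add one untried action.
        if len(node.state) < horizon:
            a = rng.choice([a for a in actions if a not in node.children])
            node.children[a] = Node(step(node.state, a), parent=node)
            node = node.children[a]
        # 3. Rollout: random completion to the horizon.
        state = node.state
        while len(state) < horizon:
            state = step(state, rng.choice(actions))
        r = reward(state)
        # 4. Backpropagation: update statistics along the path.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # Return the most-visited root action.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy domain: build a 3-character string; reward favours the digit "9".
act = mcts("", actions="0369", step=lambda s, a: s + a,
           reward=lambda s: s.count("9") / 3, horizon=3)
```

In an LLM setting, `step` would append a reasoning step sampled from the model and `reward` would come from an outcome or process verifier; here both are toy stand-ins.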
1 code implementation • 3 Oct 2024 • Zhaowei Wang, Hongming Zhang, Tianqing Fang, Ye Tian, Yue Yang, Kaixin Ma, Xiaoman Pan, Yangqiu Song, Dong Yu
In this paper, we study a new task of navigating to diverse target objects in a large number of scene types.
no code implementations • 9 Sep 2024 • Yifan Jia, Yanbin Wang, Jianguo Sun, Yiwei Liu, Zhang Sheng, Ye Tian
To address these challenges, we propose TLMG4Eth that combines a transaction language model with graph-based methods to capture semantic, similarity, and structural features of transaction data in Ethereum.
no code implementations • 2 Sep 2024 • Tianxu Liu, Yanbin Wang, Jianguo Sun, Ye Tian, Yanyu Huang, Tao Xue, Peiyue Li, Yiwei Liu
As blockchain technology rapidly evolves, the demand for enhanced efficiency, security, and scalability grows. Transformer models, as powerful deep learning architectures, have shown unprecedented potential in addressing various blockchain challenges.
no code implementations • 28 Aug 2024 • Dian Yu, Baolin Peng, Ye Tian, Linfeng Song, Haitao Mi, Dong Yu
There is a growing trend of teaching large language models (LLMs) to solve mathematical problems through coding.
no code implementations • 28 Jul 2024 • Yuewen Mei, Tong Nie, Jian Sun, Ye Tian
Hence, Fault Injection (FI) testing is conducted by practitioners to evaluate the safety level of HAVs.
no code implementations • 15 Jul 2024 • Shuo Yang, Zhengshuo Li, Ye Tian
This model comprehensively considers the flexible devices in the FDN and the impact of uncertainty in photovoltaic power generation and load.
1 code implementation • 11 Jul 2024 • Jackson Hamel, Ming-Jun Lai, Zhaiming Shen, Ye Tian
Our methods are shown to be highly effective at classifying images.
no code implementations • 10 Jul 2024 • Yichun Ye, He Zhang, Ye Tian, Jian Sun, Karl Meinke
To solve it, we devise a method to represent, generate, and reweight the distribution of risky rare events.
no code implementations • 30 Jun 2024 • Yuheng Zhang, Dian Yu, Baolin Peng, Linfeng Song, Ye Tian, Mingyue Huo, Nan Jiang, Haitao Mi, Dong Yu
Specifically, we formulate the problem as a two-player game and propose a novel online algorithm, iterative Nash policy optimization (INPO).
no code implementations • 29 Jun 2024 • Ante Wang, Linfeng Song, Ye Tian, Baolin Peng, Dian Yu, Haitao Mi, Jinsong Su, Dong Yu
Recent research suggests that tree search algorithms (e.g., Monte Carlo Tree Search) can dramatically boost LLM performance on complex mathematical reasoning tasks.
no code implementations • 20 Jun 2024 • Ye Tian, Peng Wu, Zhiqiang Tan
In this paper, we present an inference framework for estimating regression coefficients in conditional mean models within both SSL and CSTL settings, while allowing for the misspecification of conditional mean models.
no code implementations • 10 Jun 2024 • Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, YiPeng Zhang, Haitao Mi, Helen Meng
Motivated by the remarkable success of the Feynman Technique in efficient human learning, we introduce Self-Tuning, a learning framework aimed at improving an LLM's ability to effectively acquire new knowledge from raw documents through self-teaching.
1 code implementation • 6 Jun 2024 • Ye Tian, Ling Yang, Haotian Yang, Yuan Gao, Yufan Deng, Jingmin Chen, Xintao Wang, Zhaochen Yu, Xin Tao, Pengfei Wan, Di Zhang, Bin Cui
Diffusion models have demonstrated great success in text-to-video (T2V) generation.
no code implementations • 30 Apr 2024 • Chenyu Jiang, Ye Tian, Zhen Jia, Shuai Zheng, Chuan Wu, Yida Wang
The Mixture-of-Expert (MoE) technique plays a crucial role in expanding the size of DNN model parameters.
no code implementations • 18 Apr 2024 • Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, Dong Yu
Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involve complex reasoning and planning.
Ranked #1 on GSM8K
no code implementations • 20 Mar 2024 • Mengyu Yang, Ye Tian, Lanshan Zhang, Xiao Liang, Xuming Ran, Wendong Wang
Recently, prompt-based methods have emerged as a new alternative `parameter-efficient fine-tuning' paradigm, which only fine-tunes a small number of additional parameters while keeping the original model frozen.
no code implementations • 17 Mar 2024 • Mengchu Li, Ye Tian, Yang Feng, Yi Yu
By investigating the minimax rates and identifying the costs of privacy for these problems, we show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy.
no code implementations • 14 Mar 2024 • Ante Wang, Linfeng Song, Ye Tian, Baolin Peng, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu
Calibration, which establishes the correlation between accuracy and model confidence, is important for LLM development.
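The accuracy-confidence correlation this entry refers to is commonly quantified with expected calibration error (ECE): bin predictions by confidence and average the per-bin gap between confidence and accuracy. A minimal stdlib-only sketch of that standard metric (not necessarily the paper's own measure):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: visit-weighted average gap between confidence and accuracy per bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)   # confidence assumed in [0, 1]
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# A perfectly calibrated toy example: 0.8-confidence predictions, 80% correct.
ece = expected_calibration_error([0.8] * 5, [1, 1, 1, 1, 0])
```

A well-calibrated model drives this gap toward zero; for the toy data above the ECE is exactly 0.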
no code implementations • 28 Feb 2024 • Lifeng Jin, Baolin Peng, Linfeng Song, Haitao Mi, Ye Tian, Dong Yu
The most common training pipeline for large language models includes pretraining, finetuning and aligning phases, with their respective resulting models, such as the pretrained model and the finetuned model.
no code implementations • 23 Feb 2024 • Ante Wang, Linfeng Song, Baolin Peng, Ye Tian, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu
Experiments on Biographies show that our method can effectively improve the factuality of generations with simple and intuitive prompts across different scales of LLMs.
2 code implementations • 20 Feb 2024 • Xinchen Zhang, Ling Yang, Yaqi Cai, Zhaochen Yu, Kai-Ni Wang, Jiake Xie, Ye Tian, Minkai Xu, Yong Tang, Yujiu Yang, Bin Cui
In this paper, we propose RealCompo, a new training-free and transfer-friendly text-to-image generation framework, which aims to leverage the respective advantages of text-to-image models and spatial-aware image diffusion models (e.g., layout, keypoints, and segmentation maps) to enhance both the realism and compositionality of the generated images.
no code implementations • 14 Feb 2024 • Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin, Linfeng Song, Haitao Mi, Helen Meng
Despite showing increasingly human-like abilities, large language models (LLMs) often struggle with factual inaccuracies, i.e., "hallucinations", even when they hold relevant knowledge.
1 code implementation • CVPR 2024 • Yiqi Shi, Duo Liu, Liguo Zhang, Ye Tian, Xuezhi Xia, Xiaojing Fu
This paper presents a novel zero-shot method for jointly denoising and enhancing real-world low-light images.
1 code implementation • 27 Dec 2023 • Weijun Chen, Heyuan Wang, Ye Tian, Shijie Guan, Ning Liu
Additionally, adopting a frequency-based perspective can effectively mitigate the influence of noise within MTS data, which helps capture more genuine dependencies.
no code implementations • 14 Nov 2023 • Ye Tian, Xinwei Zhang, Zhiqiang Tan
Consider a semi-supervised setting with a labeled dataset of binary responses and predictors and an unlabeled dataset with only the predictors.
1 code implementation • 24 Oct 2023 • Dominic Petrak, Nafise Sadat Moosavi, Ye Tian, Nikolai Rozanov, Iryna Gurevych
Learning from free-text human feedback is essential for dialog systems, but annotated data is scarce and usually covers only a small fraction of error types known in conversational AI.
no code implementations • 23 Oct 2023 • Ye Tian, Haolei Weng, Yang Feng
While supervised federated learning approaches have enjoyed significant success, the domain of unsupervised federated learning remains relatively underexplored.
1 code implementation • 12 Oct 2023 • Jiaqi Li, Guilin Qi, Chuanyi Zhang, Yongrui Chen, Yiming Tan, Chenlong Xia, Ye Tian
First, we retrieve the relevant embedding from the knowledge graph by utilizing group relations in metadata, and then integrate it with other modalities.
1 code implementation • NeurIPS 2023 • Shangshang Yang, Xiaoshan Yu, Ye Tian, Xueming Yan, Haiping Ma, Xingyi Zhang
Knowledge tracing (KT) aims to trace students' knowledge states by predicting whether students answer exercises correctly.
no code implementations • 18 Sep 2023 • Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu
Large Language Models (LLMs) have revolutionized natural language processing, yet aligning these models with human values and preferences using RLHF remains a significant challenge.
1 code implementation • ICCV 2023 • Ke Xu, Lei Han, Ye Tian, Shangshang Yang, Xingyi Zhang
In this paper, we explore a one-shot network quantization regime, named Elastic Quantization Neural Networks (EQ-Net), which aims to train a robust weight-sharing quantization supernet.
no code implementations • 9 Aug 2023 • Ye Tian, Mengyu Yang, Lanshan Zhang, Zhizhen Zhang, Yang Liu, Xiaohui Xie, Xirong Que, Wendong Wang
To this end, inspired by human cognition, we propose a novel recognition paradigm of "View while Moving" for efficient long-untrimmed video recognition.
1 code implementation • 4 Aug 2023 • Ling Yang, Ye Tian, Minkai Xu, Zhongyi Liu, Shenda Hong, Wei Qu, Wentao Zhang, Bin Cui, Muhan Zhang, Jure Leskovec
To address this issue, we propose to learn a new powerful graph representation space by directly labeling nodes' diverse local structures for GNN-to-MLP distillation.
no code implementations • 28 Jul 2023 • Zhizhen Zhang, Xiaohui Xie, Mengyu Yang, Ye Tian, Yong Jiang, Yong Cui
Social Media Popularity Prediction has drawn a lot of attention because of its profound impact on many different applications, such as recommendation systems and multimedia advertising.
no code implementations • 21 Jul 2023 • Zehan Zhu, Ye Tian, Yan Huang, Jinming Xu, Shibo He
Perfect synchronization in distributed machine learning problems is inefficient and even impossible due to the existence of latency, packet losses, and stragglers.
1 code implementation • 10 Jul 2023 • Shangshang Yang, Haiping Ma, Cheng Zhen, Ye Tian, Limiao Zhang, Yaochu Jin, Xingyi Zhang
Then, we propose multi-objective genetic programming (MOGP) to explore the NAS task's search space by maximizing model performance and interpretability.
no code implementations • 11 Jun 2023 • YiFan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
To address the practical challenges of tackling complex instructions, we propose RestGPT, which exploits the power of LLMs and conducts a coarse-to-fine online planning mechanism to enhance the abilities of task decomposition and API selection.
no code implementations • 19 May 2023 • Ye Tian, Zhengshuo Li
TPS and ADNs can deliver base point power bidirectionally and provide frequency regulation support bidirectionally, capabilities that extend the existing reserve assumption in ITD dispatch and enhance the operational security of the ITD system.
1 code implementation • 31 Mar 2023 • Ye Tian, Yuqi Gu, Yang Feng
Assuming a known intrinsic dimension, we propose a penalized empirical risk minimization method and a spectral method that are adaptive to the similarity structure and robust to outlier tasks.
no code implementations • 7 Mar 2023 • Ye Tian, Zhengshuo Li, Wenchuan Wu, Miao Fan
The issues of uncertainty and frequency security can become significantly more serious in power systems with high penetration of volatile inverter-based renewables (IBRs).
no code implementations • 30 Sep 2022 • Ye Tian, Haolei Weng, Lucy Xia, Yang Feng
Unsupervised learning has been widely used in many real-world applications.
no code implementations • 8 Aug 2022 • Libin Liang, Ye Tian, Ge Cheng
Studying infinite-width neural networks is important for better understanding neural networks in practical applications.
no code implementations • 18 Jul 2022 • MingBin Xu, Congzheng Song, Ye Tian, Neha Agrawal, Filip Granqvist, Rogier Van Dalen, Xiao Zhang, Arturo Argueta, Shiyi Han, Yaqiao Deng, Leo Liu, Anmol Walia, Alex Jin
Our goal is to train a large neural network language model (NNLM) on compute-constrained devices while preserving privacy using FL and DP.
1 code implementation • 13 Jul 2022 • Qiang Li, Zhaoliang Yao, Jingjing Wang, Ye Tian, Pengju Yang, Di Xie, ShiLiang Pu
Based on this dataset, we propose a method to obtain the blur scores only with the pairwise rank labels as supervision.
no code implementations • 26 Jun 2022 • Junhao Zhang, Vishwanatha M. Rao, Ye Tian, Yanting Yang, Nicolas Acosta, Zihan Wan, Pin-Yu Lee, Chloe Zhang, Lawrence S. Kegeles, Scott A. Small, Jia Guo
Our findings corroborate that schizophrenia is associated with widespread alterations in subcortical brain structure, and that subcortical structural information provides prominent features for diagnostic classification.
1 code implementation • 21 Jan 2022 • Vishwanatha M. Rao, Zihan Wan, Soroush Arabshahi, David J. Ma, Pin-Yu Lee, Ye Tian, Xuzhe Zhang, Andrew F. Laine, Jia Guo
Transformers have demonstrated success in natural image segmentation and have recently been applied to 3D medical image segmentation tasks due to their ability to capture long-distance relationships in the input where the local receptive fields of CNNs struggle.
no code implementations • 8 Nov 2021 • Ye Tian, Yang Feng
In this work, we tackle the multi-class NP problem by establishing a connection with the CS problem via strong duality and propose two algorithms.
no code implementations • 24 Oct 2021 • Ye Tian, Gesualdo Scutari, Tianyu Cao, Alexander Gasnikov
In order to reduce the number of communications needed to reach a solution accuracy, we propose a preconditioned, accelerated distributed method.
2 code implementations • 29 Sep 2021 • Ye Tian, Xiangxiang Chu, Hongpeng Wang
However, the transformer can model the global context easily.
1 code implementation • 10 Aug 2021 • Shangshang Yang, Ye Tian, Xiaoshu Xiang, Shichen Peng, Xingyi Zhang
Evolutionary neural architecture search (ENAS) has recently received increasing attention for effectively finding high-quality neural architectures; however, it incurs a high computational cost because the architecture encoded by each individual must be trained for complete epochs during evaluation.
1 code implementation • 19 Jul 2021 • Dawei Du, Longyin Wen, Pengfei Zhu, Heng Fan, QinGhua Hu, Haibin Ling, Mubarak Shah, Junwen Pan, Ali Al-Ali, Amr Mohamed, Bakour Imene, Bin Dong, Binyu Zhang, Bouchali Hadia Nesma, Chenfeng Xu, Chenzhen Duan, Ciro Castiello, Corrado Mencar, Dingkang Liang, Florian Krüger, Gennaro Vessio, Giovanna Castellano, Jieru Wang, Junyu Gao, Khalid Abualsaud, Laihui Ding, Lei Zhao, Marco Cianciotta, Muhammad Saqib, Noor Almaadeed, Omar Elharrouss, Pei Lyu, Qi Wang, Shidong Liu, Shuang Qiu, Siyang Pan, Somaya Al-Maadeed, Sultan Daud Khan, Tamer Khattab, Tao Han, Thomas Golda, Wei Xu, Xiang Bai, Xiaoqing Xu, Xuelong Li, Yanyun Zhao, Ye Tian, Yingnan Lin, Yongchao Xu, Yuehan Yao, Zhenyu Xu, Zhijian Zhao, Zhipeng Luo, Zhiwei Wei, Zhiyuan Zhao
Crowd counting on the drone platform is an interesting topic in computer vision, which brings new challenges such as small object inference, background clutter and wide viewpoint.
no code implementations • 29 May 2021 • Ye Tian, Yang Feng
In this work, we study the transfer learning problem under high-dimensional generalized linear models (GLMs), where we aim to improve the fit on target data by borrowing information from useful source data.
no code implementations • 22 May 2021 • Ye Tian, Xingyi Zhang, Cheng He, Kay Chen Tan, Yaochu Jin
In the past three decades, a large number of metaheuristics have been proposed and have shown high performance in solving complex optimization problems.
no code implementations • 21 May 2021 • Philipp Ennen, Yen-Ting Lin, Ali Girayhan Ozbay, Ferdinando Insalata, Maolin Li, Ye Tian, Sepehr Jalali, Da-Shan Shiu
In light of the recent success of data-driven approaches, we propose the novel future bridging NLG (FBNLG) concept for dialogue systems and simulators.
no code implementations • 10 May 2021 • Min Li, Yu Li, Ye Tian, Li Jiang, Qiang Xu
This paper presents AppealNet, a novel edge/cloud collaborative architecture that runs deep learning (DL) tasks more efficiently than state-of-the-art solutions.
no code implementations • 6 May 2021 • Anish Karpurapu, Adam Krekorian, Ye Tian, Leslie M. Collins, Ravi Karra, Aaron Franklin, Boyla O. Mainsah
Since a sequence of prior doses and INR values better captures the variability in individual warfarin response, we hypothesized that longitudinal dose-response data would improve maintenance dose predictions.
no code implementations • 8 Mar 2021 • Jezabel R. Garcia, Federica Freddi, Feng-Ting Liao, Jamie McGowan, Tim Nieradzik, Da-Shan Shiu, Ye Tian, Alberto Bernacchia
We show that TreeMAML improves the state-of-the-art results for cross-lingual Natural Language Inference.
Cross-Lingual Natural Language Inference, Cross-Lingual Transfer +2
1 code implementation • 7 Feb 2021 • Ye Tian, Yang Feng
Variable screening methods have been shown to be effective in dimension reduction under the ultra-high dimensional setting.
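In the spirit of classical variable screening (e.g., sure independence screening), features can be ranked by absolute marginal correlation with the response, keeping only the top d. The sketch below illustrates that generic recipe under stated assumptions; it is not this paper's screening method, and all names are illustrative.

```python
import math

def marginal_corr(x, y):
    """Pearson correlation of one feature column x with the response y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    # A constant column carries no signal; treat its correlation as 0.
    return sxy / (sx * sy) if sx > 0 and sy > 0 else 0.0

def screen(X, y, d):
    """Return indices of the d features with the largest |marginal correlation|.

    X is a list of feature columns (one list per feature); y is the response.
    """
    scores = [abs(marginal_corr(col, y)) for col in X]
    order = sorted(range(len(X)), key=lambda j: scores[j], reverse=True)
    return sorted(order[:d])

# Toy data: feature 0 drives y; features 1-2 are noise-like.
y = [1.0, 2.0, 3.0, 4.0]
X = [[1.0, 2.0, 3.0, 4.0],      # perfectly correlated with y
     [1.0, -1.0, 1.0, -1.0],    # weakly correlated alternation
     [0.5, 0.5, 0.5, 0.5]]      # constant -> score 0
kept = screen(X, y, d=1)
```

In an ultra-high-dimensional setting the point is that this marginal ranking costs only O(np), so it can cut p down to a manageable size before any joint model is fit.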
no code implementations • 4 Jan 2021 • Heng Lu, Chen Yang, Ye Tian, Jun Lu, Fanqi Xu, FengNan Chen, Yan Ying, Kevin G. Schädler, Chinhua Wang, Frank H. L. Koppens, Antoine Reserbat-Plantey, Joel Moser
With it we characterize the lowest frequency mode of a FLG resonator by measuring its frequency response as a function of position on the membrane.
Mesoscale and Nanoscale Physics
no code implementations • 2 Jan 2021 • Ye Tian
We propose a novel multilayer correlated topic model (MCTM) to analyze how main ideas are inherited and vary between a document and its different segments, which helps in understanding an article's structure.
no code implementations • 1 Jan 2021 • Jezabel Garcia, Federica Freddi, Jamie McGowan, Tim Nieradzik, Da-Shan Shiu, Ye Tian, Alberto Bernacchia
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related, and sharing information between unrelated tasks might hurt performance.
no code implementations • 21 Sep 2020 • Yu Li, Min Li, Bo Luo, Ye Tian, Qiang Xu
The key to enabling such lightweight checking is that the smaller neural network only needs to produce approximate results for the initial task without significantly sacrificing fault coverage.
no code implementations • 15 Sep 2020 • Zhiwei Wei, Chenzhen Duan, Xinghao Song, Ye Tian, Hongpeng Wang
Specifically, we propose a scale adaptive module, which dynamically adjusts chip size to balance object scale, narrowing scale variation in training.
no code implementations • 10 Aug 2020 • Oliver Maier, Steven H. Baete, Alexander Fyrdahl, Kerstin Hammernik, Seb Harrevelt, Lars Kasper, Agah Karakuzu, Michael Loecher, Franz Patzig, Ye Tian, Ke Wang, Daniel Gallichan, Martin Uecker, Florian Knoll
The reference implementations were in good agreement, both visually and in terms of image similarity metrics.
1 code implementation • 13 Jul 2020 • Cong Chen, Shouyang Dong, Ye Tian, Kunlin Cao, Li Liu, Yuanhao Guo
(1) The teacher model serves a dual role as teacher and student, such that the teacher's predictions on unlabeled images may be very close to those of the student, which limits the student's upper bound.
no code implementations • 16 Jun 2020 • Ye Tian, Yang Feng
In addition, we show that in a high-dimensional framework, the number of random subspaces needs to be very large to guarantee that a subspace covering signals is selected.
1 code implementation • 8 Apr 2020 • Cheng Li, Ye Tian
We believe that PLMs can also be used to solve the relation extraction problem, but a specially designed downstream task model, or even a dedicated loss function, is needed to deal with complicated relations.
Ranked #3 on Relation Extraction on SemEval-2010 Task-8
1 code implementation • 23 Oct 2019 • Jinming Xu, Ye Tian, Ying Sun, Gesualdo Scutari
This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that use only gradient information and gossip communications.
no code implementations • ACL 2020 • Wei Wang, Ye Tian, Jiquan Ngiam, Yinfei Yang, Isaac Caswell, Zarana Parekh
Most data selection research in machine translation focuses on improving a single domain.
no code implementations • 27 May 2019 • Ye Tian, Li Yang, Wei Wang, Jing Zhang, Qing Tang, Mili Ji, Yang Yu, Yu Li, Hong Yang, Airong Qian
Traditionally, the most indispensable diagnostic procedure for cervical squamous carcinoma is histopathological assessment, performed under a microscope by a pathologist.
2 code implementations • 21 Feb 2019 • Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon
Lingvo is a Tensorflow framework offering a complete solution for collaborative deep learning research, with a particular focus towards sequence-to-sequence models.
no code implementations • 13 Feb 2019 • Chen Sun, Ye Tian, Liang Gao, Yishuai Niu, Tianlong Zhang, Hua Li, Yuqing Zhang, Zengqi Yue, Nicole Delepine-Gilon, Jin Yu
Machine learning has been used to develop the model.
no code implementations • 7 Nov 2018 • Ye Tian, Weiping Zhang
In this paper, we propose an effective THresholding method based on ORder Statistic, called THORS, to convert an arbitrary scoring-type classifier, which can induce a continuous cumulative distribution function of the score, into a cost-sensitive one.
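The general thresholding idea can be illustrated by scanning the order statistics of validation scores and picking the cut-off that minimizes empirical misclassification cost. This is a hedged sketch of that generic recipe, not the THORS procedure itself; the function name, scores, and cost values are all illustrative.

```python
def cost_sensitive_threshold(scores, labels, c_fp, c_fn):
    """Choose a threshold from the observed scores (i.e., their order
    statistics) that minimizes empirical cost: c_fp per false positive,
    c_fn per false negative. Predict class 1 when score >= threshold."""
    candidates = sorted(set(scores)) + [float("inf")]   # inf => predict all 0
    best_t, best_cost = None, float("inf")
    for t in candidates:
        cost = 0.0
        for s, y in zip(scores, labels):
            pred = 1 if s >= t else 0
            if pred == 1 and y == 0:
                cost += c_fp          # false positive
            elif pred == 0 and y == 1:
                cost += c_fn          # false negative
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# False negatives are 5x costlier, so the chosen threshold leans low,
# accepting some false positives to avoid missing positives.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]
labels = [0,   1,   0,    1,   0,    1]
t, cost = cost_sensitive_threshold(scores, labels, c_fp=1.0, c_fn=5.0)
```

Restricting candidates to the observed order statistics is what makes the search finite: between two consecutive scores every threshold induces the same predictions, so nothing is lost.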
no code implementations • WS 2018 • Isabel Groves, Ye Tian, Ioannis Douratsos
The current most popular method for automatic Natural Language Generation (NLG) evaluation is comparing generated text with human-written reference sentences using a metrics system, which has drawbacks around reliability and scalability.
no code implementations • COLING 2018 • Thiago Galery, Efstathios Charitos, Ye Tian
The system presented here took part in the 2018 Trolling, Aggression and Cyberbullying shared task (Forest and Trees team) and uses a Gated Recurrent Neural Network architecture (Cho et al., 2014) in an attempt to assess whether combining pre-trained English and Hindi fastText (Mikolov et al., 2018) word embeddings as a representation of the sequence input would improve classification performance.
Aggression Identification, Multi-Label Text Classification +1
no code implementations • 24 Jul 2018 • Lin Shao, Ye Tian, Jeannette Bohg
We show that our method generalizes well on real-world data, achieving visually better segmentation results.
no code implementations • 16 May 2018 • Yunhan Zhao, Ye Tian, Charless Fowlkes, Wei Shen, Alan Yuille
Experimental results verify that our approach significantly improves the ability of deep networks to resist large variations between training and testing data and achieves classification accuracy improvements on several benchmark datasets, including MNIST, affNIST, SVHN, CIFAR-10 and miniImageNet.
no code implementations • WS 2017 • Ye Tian, Thiago Galery, Giulio Dulcinati, Emilia Molimpakis, Chao Sun
FB reactions (e.g., "Love" and "Angry") indicate the readers' overall sentiment, against which we can investigate the types of emojis used in the comments under different reaction profiles.
no code implementations • 4 Jan 2017 • Ye Tian, Ran Cheng, Xingyi Zhang, Yaochu Jin
To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization called PlatEMO, which includes more than 50 multi-objective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators.
no code implementations • LREC 2016 • Julian Hough, Ye Tian, Laura de Ruiter, Simon Betz, Spyros Kousidis, David Schlangen, Jonathan Ginzburg
We present the DUEL corpus, consisting of 24 hours of natural, face-to-face, loosely task-directed dialogue in German, French and Mandarin Chinese.