no code implementations • EMNLP 2020 • Jinghui Yan, Yining Wang, Lu Xiang, Yu Zhou, Chengqing Zong
Medical entity normalization, which links medical mentions in the text to entities in knowledge bases, is an important research topic in medical natural language processing.
no code implementations • 21 Sep 2024 • Jinman Zhao, Zifan Qian, Linbo Cao, Yining Wang, Yitian Ding
Role-play in the Large Language Model (LLM) is a crucial technique that enables models to adopt specific perspectives, enhancing their ability to generate contextually relevant and accurate responses.
no code implementations • 8 Jul 2024 • Xi Chen, Mo Liu, Yining Wang, Yuan Zhou
In this paper, we consider a multi-stage dynamic assortment optimization problem with multinomial logit (MNL) choice modeling under resource knapsack constraints.
no code implementations • 2 Jul 2024 • Zixiang Wang, Hao Yan, Yining Wang, Zhengjia Xu, Zhuoyue Wang, Zhizhong Wu
We use Deep Q Network (DQN) and Proximal Policy Optimization (PPO) models to optimize the path planning and decision-making process through continuous interaction between the robot and the environment, with real-time reward feedback.
no code implementations • 28 Jun 2024 • Qian Yu, Yining Wang, Baihe Huang, Qi Lei, Jason D. Lee
Optimization of convex functions under stochastic zeroth-order feedback has been a major and challenging question in online learning.
no code implementations • 13 Jun 2024 • Yining Wang, Wanli Ni, Wenqiang Yi, Xiaodong Xu, Ping Zhang, Arumugam Nallanathan
Simulation results verify the superiority of the proposed FedCL framework compared to other distributed learning benchmarks in terms of task performance and robustness under different numbers of clients and channel conditions, especially in low signal-to-noise ratio and highly heterogeneous data scenarios.
no code implementations • CVPR 2024 • Yining Wang, Junjie Sun, Chenyue Wang, Mi Zhang, Min Yang
Recent studies have noted an intriguing phenomenon termed Neural Collapse: when a neural network establishes the right correlation between the feature space and the training targets, its last-layer features, together with the classifier weights, collapse into a stable and symmetric structure.
no code implementations • 15 Apr 2024 • Jiachun Li, David Simchi-Levi, Yining Wang
The contextual bandit with linear reward functions is one of the most extensively studied models in bandit and online learning research.
no code implementations • 6 Apr 2024 • Sentao Miao, Yining Wang
This paper proposes a practically efficient algorithm with optimal theoretical regret which solves the classical network revenue management (NRM) problem with unknown, nonparametric demand.
no code implementations • 1 Mar 2024 • Jinman Zhao, Yitian Ding, Chen Jia, Yining Wang, Zifan Qian
We investigate the outputs of the GPT series of LLMs in various languages using our three measurement methods.
no code implementations • 21 Feb 2024 • YuHeng Chen, Pengfei Cao, Yubo Chen, Yining Wang, Shengping Liu, Kang Liu, Jun Zhao
Large language models (LLMs) store extensive factual knowledge, but the underlying mechanisms remain unclear.
no code implementations • 28 Nov 2023 • Xi Chen, David Simchi-Levi, Yining Wang
This paper introduces a novel contextual bandit algorithm for personalized pricing under utility fairness constraints in scenarios with uncertain demand, achieving an optimal regret upper bound.
no code implementations • 17 Oct 2023 • Rudolf L. M. van Herten, Nils Hampe, Richard A. P. Takx, Klaas Jan Franssen, Yining Wang, Dominika Suchá, José P. Henriques, Tim Leiner, R. Nils Planken, Ivana Išgum
This requires analysis of the coronary lumen and plaque.
no code implementations • NeurIPS 2023 • Qian Yu, Yining Wang, Baihe Huang, Qi Lei, Jason D. Lee
We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity.
no code implementations • 13 Jun 2023 • Jining Wang, Delai Qiu, YouMing Liu, Yining Wang, Chuan Chen, Zibin Zheng, Yuren Zhou
We extend several KGE models with the method, resulting in substantial performance improvements on widely-used benchmark datasets.
1 code implementation • 2 Jun 2023 • Shravan Nayak, Surangika Ranathunga, Sarubi Thillainathan, Rikki Hung, Anthony Rinaldi, Yining Wang, Jonah Mackey, Andrew Ho, En-Shiun Annie Lee
In this paper, we show that intermediate-task fine-tuning (ITFT) of PMSS models is extremely beneficial for domain-specific NMT, especially when target domain data is limited/unavailable and the considered languages are missing or under-represented in the PMSS model.
no code implementations • 1 Jan 2023 • Wenjing Zhang, Yining Wang, Mingzhe Chen, Tao Luo, Dusit Niyato
In this paper, a semantic communication framework for image transmission is developed.
no code implementations • 30 Nov 2022 • Yining Wang, Xumeng Gong, Shaochuan Li, Bing Yang, YiWu Sun, Chuan Shi, Yangang Wang, Cheng Yang, Hui Li, Le Song
Its improvements in both accuracy and efficiency make it a valuable tool for de novo antibody design and could drive further progress in immuno-theory.
no code implementations • 11 Oct 2022 • Yining Wang
In this paper we study the non-stationary stochastic optimization question with bandit feedback and dynamic regret measures.
1 code implementation • 16 Sep 2022 • Zhiping Xiao, Jeffrey Zhu, Yining Wang, Pei Zhou, Wen Hong Lam, Mason A. Porter, Yizhou Sun
We examine a variety of applications and we thereby demonstrate the effectiveness of our PEM model.
no code implementations • 17 Aug 2022 • Yining Wang, Mingzhe Chen, Tao Luo, Walid Saad, Dusit Niyato, H. Vincent Poor, Shuguang Cui
Hence, the BS must select an appropriate resource block for each user as well as determine and transmit part of the semantic information to the users.
no code implementations • 22 Jul 2022 • Xi Chen, Jiameng Lyu, Yining Wang, Yuan Zhou
We introduce the regularized revenue, i.e., the total revenue with a balancing regularization, as our objective to incorporate fair resource-consumption balancing into the revenue-maximization goal.
no code implementations • 30 Apr 2022 • Yifan Yan, Xudong Pan, Yining Wang, Mi Zhang, Min Yang
On $9$ state-of-the-art white-box watermarking schemes and a broad set of industry-level DNN architectures, our attack is the first to reduce the embedded identity message in the protected models to near-random.
no code implementations • Neurocomputing 2022 • Hanchen Wang, Yining Wang, Jianfeng Li, Tao Luo
This degree difference between equivalent entities poses a great challenge for entity alignment.
no code implementations • 3 Oct 2021 • Xi Chen, Quanquan Liu, Yining Wang
In particular, for a sequence of arriving context vectors, with each context associated with an underlying value, the decision-maker either makes a query at a certain point or skips the context.
no code implementations • 10 Sep 2021 • Xi Chen, Sentao Miao, Yining Wang
In recent decades, advances in information technology and the abundance of personal data have facilitated the application of algorithmic personalized pricing.
1 code implementation • 29 Jan 2021 • Yining Wang, Mingzhe Chen, Zhaohui Yang, Walid Saad, Tao Luo, Shuguang Cui, H. Vincent Poor
The problem is formulated as an optimization problem whose goal is to maximize the reliability of the VR network by selecting the appropriate VAPs to be turned on and controlling the user association with SBSs.
no code implementations • 5 Jan 2021 • Xi Chen, Yanjun Han, Yining Wang
The adversarial combinatorial bandit with general non-linear reward is an important open problem in the bandit literature, and it is still unclear whether there is a significant gap from the case of linear reward, stochastic bandit, or semi-bandit feedback.
no code implementations • 11 Dec 2020 • Yusha Liu, Yining Wang, Aarti Singh
We also study adaptation to unknown function smoothness over a continuous scale of H\"older spaces indexed by $\alpha$, with a bandit model selection approach applied with our proposed two-layer algorithms.
no code implementations • 27 Sep 2020 • Xi Chen, David Simchi-Levi, Yining Wang
In this paper, we consider a dynamic pricing problem over $T$ time periods with an \emph{unknown} demand function of posted price and personalized information.
no code implementations • 7 Sep 2020 • Yining Wang, He Wang
First, we prove that a natural re-solving heuristic attains $O(1)$ regret compared to the value of the optimal policy.
no code implementations • 4 Sep 2020 • Yining Wang, Yi Chen, Ethan X. Fang, Zhaoran Wang, Runze Li
We consider the stochastic contextual bandit problem under the high dimensional linear model.
no code implementations • WS 2020 • Qian Wang, Yuchen Liu, Cong Ma, Yu Lu, Yining Wang, Long Zhou, Yang Zhao, Jiajun Zhang, Cheng-qing Zong
This paper describes CASIA's system for the IWSLT 2020 open-domain translation task.
no code implementations • 16 Mar 2020 • Yining Wang, Xi Chen, Xiangyu Chang, Dongdong Ge
In this paper, using the problem of demand function prediction in dynamic pricing as the motivating example, we study the problem of constructing accurate confidence intervals for the demand function.
no code implementations • ICLR 2021 • Yining Wang, Ruosong Wang, Simon S. Du, Akshay Krishnamurthy
We design a new provably efficient algorithm for episodic reinforcement learning with generalized linear function approximation.
no code implementations • 28 Nov 2019 • Yining Wang, Mingzhe Chen, Zhaohui Yang, Tao Luo, Walid Saad
Using GRUs and CNNs, the UAVs can model the long-term historical illumination distribution and predict the future illumination distribution.
no code implementations • IJCNLP 2019 • Yining Wang, Jiajun Zhang, Long Zhou, Yuchen Liu, Cheng-qing Zong
In this paper, we introduce a novel interactive approach to translate a source language into two different languages simultaneously and interactively.
no code implementations • 9 Oct 2019 • Xi Chen, Akshay Krishnamurthy, Yining Wang
We establish both upper and lower bounds on the regret, and show that our policy is optimal up to a logarithmic factor in $T$ when the assortment capacity is constant.
no code implementations • 17 Sep 2019 • Yining Wang, Mingzhe Chen, Zhaohui Yang, Xue Hao, Tao Luo, Walid Saad
This problem is formulated as an optimization problem whose goal is to minimize the total transmit power while meeting the illumination and communication requirements of users.
no code implementations • 5 Sep 2019 • Kefan Dong, Jian Peng, Yining Wang, Yuan Zhou
Our learning algorithm, Adaptive Value-function Elimination (AVE), is inspired by the policy elimination algorithm proposed in (Jiang et al., 2017), known as OLIVE.
1 code implementation • IJCNLP 2019 • Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, Cheng-qing Zong
Moreover, we propose to further improve NCLS by incorporating two related tasks, monolingual summarization and machine translation, into the training process of CLS under multi-task learning.
no code implementations • ACL 2019 • Yining Wang, Long Zhou, Jiajun Zhang, FeiFei Zhai, Jingfang Xu, Cheng-qing Zong
We verify our methods on various translation scenarios, including one-to-many, many-to-many and zero-shot.
no code implementations • 4 May 2019 • Yingkai Li, Yining Wang, Xi Chen, Yuan Zhou
Linear contextual bandit is an important class of sequential decision making problems with a wide range of applications to recommender systems, online advertising, healthcare, and many other machine learning related tasks.
no code implementations • 30 Mar 2019 • Yingkai Li, Yining Wang, Yuan Zhou
We study the linear contextual bandit problem with finite action sets.
no code implementations • NeurIPS 2018 • Simon S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan R. Salakhutdinov, Aarti Singh
We show that for an $m$-dimensional convolutional filter with linear activation acting on a $d$-dimensional input, the sample complexity of achieving population prediction error of $\epsilon$ is $\widetilde{O}(m/\epsilon^2)$, whereas the sample complexity for its FNN counterpart is lower bounded by $\Omega(d/\epsilon^2)$ samples.
no code implementations • NeurIPS 2018 • Yining Wang, Xi Chen, Yuan Zhou
In this paper we consider the dynamic assortment selection problem under an uncapacitated multinomial-logit (MNL) model.
no code implementations • 31 Oct 2018 • Xi Chen, Yining Wang, Yuan Zhou
To this end, we develop an upper confidence bound (UCB) based policy and establish the regret bound on the order of $\widetilde O(d\sqrt{T})$, where $d$ is the dimension of the feature and $\widetilde O$ suppresses logarithmic dependence.
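The UCB-based policy described in this snippet can be illustrated with a generic linear-bandit sketch. This is a textbook LinUCB-style update, not the paper's exact policy; the arms, noise level, and bonus weight `alpha` below are illustrative assumptions:

```python
import numpy as np

def linucb_choose(A, b, arms, alpha=1.0):
    """Pick the arm maximizing the UCB: x^T theta_hat + alpha * sqrt(x^T A^{-1} x)."""
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b  # ridge estimate of the unknown parameter
    scores = [x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x) for x in arms]
    return int(np.argmax(scores))

def linucb_update(A, b, x, reward):
    """Rank-one update of the design matrix and response vector."""
    return A + np.outer(x, x), b + reward * x

# Toy run: dimension d = 2, unknown parameter theta, two fixed arms.
rng = np.random.default_rng(0)
d = 2
theta = np.array([1.0, 0.0])
arms = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
A, b = np.eye(d), np.zeros(d)  # ridge prior: A starts at the identity
for _ in range(200):
    i = linucb_choose(A, b, arms)
    reward = arms[i] @ theta + 0.1 * rng.standard_normal()
    A, b = linucb_update(A, b, arms[i], reward)
print(linucb_choose(A, b, arms, alpha=0.0))  # greedy choice after learning
```

The exploration bonus shrinks in directions the design matrix has covered, which is the mechanism behind $\sqrt{T}$-type regret bounds for such policies.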
no code implementations • 25 Oct 2018 • Yining Wang, Erva Ulu, Aarti Singh, Levent Burak Kara
Our approach uses a computationally tractable experimental-design method to select the sample force locations based on geometry alone, without inspecting the stress response, which requires computationally expensive finite-element analysis.
no code implementations • EMNLP 2018 • Yining Wang, Jiajun Zhang, FeiFei Zhai, Jingfang Xu, Cheng-qing Zong
However, previous studies show that one-to-many translation based on this framework cannot perform on par with the individually trained models.
no code implementations • 27 Jun 2018 • Xi Chen, Chao Shi, Yining Wang, Yuan Zhou
One key challenge is that utilities of products are unknown to the seller and need to be learned.
no code implementations • 26 May 2018 • Simon S. Du, Yining Wang, Sivaraman Balakrishnan, Pradeep Ravikumar, Aarti Singh
We first show that a simple local binning-median step can effectively remove the adversarial noise, and that this median estimator is minimax optimal up to absolute constants over the H\"{o}lder function class with smoothness parameters smaller than or equal to 1.
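The local binning-median idea can be sketched in one dimension as follows. This is a minimal illustration with an artificial outlier corruption; the bin count, noise level, and corruption pattern are assumptions, not the paper's exact setup:

```python
import numpy as np

def binned_median(x, y, num_bins=20):
    """Nonparametric regression: median of the responses within each bin.

    The median is robust: corrupting fewer than half of the points in a bin
    leaves that bin's estimate close to the clean value.
    """
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    estimates = np.empty(num_bins)
    for i in range(num_bins):
        mask = (x >= edges[i]) & (x < edges[i + 1])
        estimates[i] = np.median(y[mask])
    return centers, estimates

rng = np.random.default_rng(1)
n = 4000
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(n)
# Adversary corrupts 20% of the responses with large outliers.
bad = rng.choice(n, size=n // 5, replace=False)
y[bad] += 50.0
centers, est = binned_median(x, y)
print(float(np.max(np.abs(est - np.sin(2 * np.pi * centers)))))
```

Despite 20% of the responses being wildly corrupted, the per-bin medians stay close to the clean function, whereas per-bin means would be shifted by roughly 10 in this example.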
no code implementations • 25 May 2018 • Yang Zhao, Yining Wang, Jiajun Zhang, Cheng-qing Zong
Neural Machine Translation (NMT) has recently drawn much attention due to its promising translation performance.
no code implementations • NeurIPS 2018 • Simon S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh
It is widely believed that the practical success of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) owes to the fact that CNNs and RNNs use a more compact parametric representation than their Fully-Connected Neural Network (FNN) counterparts, and consequently require fewer training examples to accurately estimate their parameters.
no code implementations • NeurIPS 2018 • Xi Chen, Yining Wang, Yuan Zhou
We further establish the matching lower bound result to show the optimality of our policy.
no code implementations • NeurIPS 2018 • Yining Wang, Sivaraman Balakrishnan, Aarti Singh
In this setup, an algorithm is allowed to adaptively query the underlying function at different locations and receives noisy evaluations of function values at the queried points (i.e., the algorithm has access to zeroth-order information).
no code implementations • 26 Feb 2018 • Yining Wang, Yi Wu, Simon S. Du
Local polynomial regression (Fan and Gijbels 1996) is an important class of methods for nonparametric density estimation and regression problems.
no code implementations • 21 Feb 2018 • Cynthia Rudin, Yining Wang
Learning-to-rank techniques have proven to be extremely useful for prioritization problems, where we rank items in order of their estimated probabilities, and dedicate our limited resources to the top-ranked items.
no code implementations • 14 Nov 2017 • Zeyuan Allen-Zhu, Yuanzhi Li, Aarti Singh, Yining Wang
The experimental design problem concerns the selection of k points from a potentially large design pool of p-dimensional vectors, so as to maximize the statistical efficiency of regression on the selected k design points.
1 code implementation • 13 Nov 2017 • Yining Wang, Long Zhou, Jiajun Zhang, Cheng-qing Zong
Our experiments show that the subword model performs best for Chinese-to-English translation when the vocabulary is not too large, while the hybrid word-character model is most suitable for English-to-Chinese translation.
no code implementations • IJCNLP 2017 • Yining Wang, Yang Zhao, Jiajun Zhang, Cheng-qing Zong, Zhengshan Xue
While neural machine translation (NMT) has become the new paradigm, parameter optimization requires large-scale parallel data, which is scarce in many domains and language pairs.
no code implementations • 30 Oct 2017 • Yining Wang
In this paper we study the frequentist convergence rate of the Latent Dirichlet Allocation (Blei et al., 2003) topic model.
no code implementations • 29 Oct 2017 • Yining Wang, Simon Du, Sivaraman Balakrishnan, Aarti Singh
We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries.
no code implementations • 18 Sep 2017 • Xi Chen, Yining Wang
In this short note we consider a dynamic assortment planning problem under the capacitated multinomial logit (MNL) bandit model.
no code implementations • 9 Aug 2017 • Xi Chen, Yining Wang, Yu-Xiang Wang
We consider a non-stationary sequential stochastic optimization problem, in which the underlying cost functions change over time under a variation budget constraint.
no code implementations • ICML 2017 • Zeyuan Allen-Zhu, Yuanzhi Li, Aarti Singh, Yining Wang
We consider computationally tractable methods for the experimental design problem, where k out of n design points of dimension p are selected so that certain optimality criteria are approximately satisfied.
2 code implementations • ICML 2017 • Chong Wang, Yining Wang, Po-Sen Huang, Abdel-rahman Mohamed, Dengyong Zhou, Li Deng
The probability of a segmented sequence is calculated as the product of the probabilities of all its segments, where each segment is modeled using existing tools such as recurrent neural networks.
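The product-of-segments scoring can be sketched directly. This is a toy illustration in which each segment is scored by a stand-in lookup table rather than a recurrent neural network; the segmentation and the probability values are assumptions:

```python
import math

def sequence_log_prob(segments, segment_log_prob):
    """Log-probability of a segmented sequence: the sum of per-segment
    log-probabilities, i.e., the probability is the product of the
    probabilities of all its segments."""
    return sum(segment_log_prob(seg) for seg in segments)

# Stand-in segment model: a fixed table instead of an RNN.
table = {"ab": 0.5, "c": 0.4, "abc": 0.1}
log_p = lambda seg: math.log(table[seg])

# Two competing segmentations of the same underlying sequence "abc".
p1 = math.exp(sequence_log_prob(["ab", "c"], log_p))  # 0.5 * 0.4 = 0.2
p2 = math.exp(sequence_log_prob(["abc"], log_p))      # 0.1
print(p1 > p2)
```

Working in log space, as here, is the standard way to keep the product of many small segment probabilities numerically stable.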
no code implementations • NeurIPS 2017 • Simon S. Du, Yining Wang, Aarti Singh
This observation leads to many interesting results on general high-rank matrix estimation problems, which we briefly summarize below ($A$ is an $n\times n$ high-rank PSD matrix and $A_k$ is the best rank-$k$ approximation of $A$): (1) High-rank matrix completion: By observing $\Omega(\frac{n\max\{\epsilon^{-4}, k^2\}\mu_0^2\|A\|_F^2\log n}{\sigma_{k+1}(A)^2})$ elements of $A$ where $\sigma_{k+1}\left(A\right)$ is the $\left(k+1\right)$-th singular value of $A$ and $\mu_0$ is the incoherence, the truncated SVD on a zero-filled matrix satisfies $\|\widehat{A}_k-A\|_F \leq (1+O(\epsilon))\|A-A_k\|_F$ with high probability.
no code implementations • 9 Feb 2017 • Yining Wang, Jialei Wang, Sivaraman Balakrishnan, Aarti Singh
We consider the problems of estimation and of constructing component-wise confidence intervals in a sparse high-dimensional linear regression model when some covariates of the design matrix are missing completely at random.
no code implementations • 24 Oct 2016 • Yining Wang, Yu-Xiang Wang, Aarti Singh
Subspace clustering is the problem of partitioning unlabeled data points into a number of clusters so that data points within one cluster lie approximately on a low-dimensional linear subspace.
no code implementations • NeurIPS 2016 • Bo Li, Yining Wang, Aarti Singh, Yevgeniy Vorobeychik
Recommendation and collaborative filtering systems are important in modern information and e-commerce applications.
no code implementations • NeurIPS 2016 • Yining Wang, Animashree Anandkumar
In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition.
no code implementations • 23 Feb 2016 • Maria Florina Balcan, Simon S. Du, Yining Wang, Adams Wei Yu
We consider the noisy power method algorithm, which has wide applications in machine learning and statistics, especially those related to principal component analysis (PCA) under resource (communication, memory or privacy) constraints.
no code implementations • 19 Feb 2016 • Yong Ren, Yining Wang, Jun Zhu
Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees.
no code implementations • 9 Jan 2016 • Yining Wang, Adams Wei Yu, Aarti Singh
We derive computationally tractable methods to select a small subset of experiment settings from a large pool of given design points.
no code implementations • NeurIPS 2015 • Yining Wang, Yu-Xiang Wang, Aarti Singh
Subspace clustering is an unsupervised learning problem that aims at grouping data points into multiple ``clusters'' so that data points in a single cluster lie approximately on a low-dimensional linear subspace.
no code implementations • NeurIPS 2015 • Yining Wang, Hsiao-Yu Tung, Alexander Smola, Animashree Anandkumar
Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares.
no code implementations • 17 May 2015 • Yining Wang, Aarti Singh
We consider the problem of matrix column subset selection, which selects a subset of columns from an input matrix such that the input can be well approximated by the span of the selected columns.
no code implementations • 4 Apr 2015 • Yining Wang, Yu-Xiang Wang, Aarti Singh
A line of recent work (4, 19, 24, 20) provided strong theoretical guarantees for sparse subspace clustering (4), the state-of-the-art algorithm for subspace clustering, on both noiseless and noisy data sets.
no code implementations • NeurIPS 2014 • Yining Wang, Jun Zhu
Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document.
no code implementations • 20 Jun 2014 • Yining Wang, Aarti Singh
We present a simple noise-robust margin-based active learning algorithm to find homogeneous (passing through the origin) linear separators, and analyze its error convergence when labels are corrupted by noise.
no code implementations • 24 Apr 2013 • Yining Wang, Li-Wei Wang, Yuanzhi Li, Di He, Tie-Yan Liu, Wei Chen
We show that NDCG with logarithmic discount has consistent distinguishability, although it converges to the same limit for all ranking functions.