no code implementations • 5 Feb 2025 • Ben Liu, Jihai Zhang, Fangquan Lin, Cheng Yang, Min Peng, Wotao Yin
However, existing methods face two limitations: 1) they typically assume that all answers to the questions are contained in KGs, neglecting the incompleteness issue of KGs, and 2) they treat the KG as a static repository and overlook the implicit logical reasoning structures inherent in KGs.
no code implementations • 31 Dec 2024 • HanQin Cai, Chandra Kundu, Jialin Liu, Wotao Yin
This paper proposes a novel scalable and learnable non-convex approach, coined Learned Robust Matrix Completion (LRMC), for large-scale RMC problems.
1 code implementation • 16 Aug 2024 • Xue Wang, Tian Zhou, Jianqing Zhu, Jialin Liu, Kun Yuan, Tao Yao, Wotao Yin, Rong Jin, HanQin Cai
Attention-based models have achieved many remarkable breakthroughs in numerous applications.
no code implementations • 9 Jul 2024 • Jihai Zhang, Wei Wang, Siyan Guo, Li Wang, Fangquan Lin, Cheng Yang, Wotao Yin
Optimization problems seek to find the best solution to an objective under a set of constraints, and have been widely investigated in real-world applications.
no code implementations • 9 Jun 2024 • Ziang Chen, Xiaohan Chen, Jialin Liu, Xinshang Wang, Wotao Yin
In this work, we investigate the expressive or representative power of GNNs, a crucial aspect of neural network theory, specifically in the context of QP tasks, with both continuous and mixed-integer settings.
1 code implementation • 4 Jun 2024 • Zhonglin Xie, Wotao Yin, Zaiwen Wen
Then, we formulate a novel learning to optimize (L2O) problem aimed at minimizing the stopping time subject to the convergence and stability condition.
no code implementations • 24 May 2024 • Xiaohan Chen, Jialin Liu, Wotao Yin
Learning to Optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques.
1 code implementation • 23 May 2024 • Huajie Qian, Donghao Ying, Henry Lam, Wotao Yin
Ensemble learning is a popular technique to improve the accuracy of machine learning models.
1 code implementation • 18 Feb 2024 • Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen
In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has become standard.
no code implementations • 11 Feb 2024 • Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin
In the literature, the message-passing GNN (MP-GNN), as the simplest GNN structure, is frequently used as a fast approximation of SB, and we find that not every MILP's SB can be represented by an MP-GNN.
1 code implementation • 20 Oct 2023 • Haoyu Wang, Jialin Liu, Xiaohan Chen, Xinshang Wang, Pan Li, Wotao Yin
Mixed-integer linear programming (MILP) stands as a notable NP-hard problem pivotal to numerous crucial industrial applications.
no code implementations • 20 Aug 2023 • Ming Jin, Bilgehan Sel, Fnu Hardeep, Wotao Yin
This paper outlines a natural conversational approach to solving personalized energy-related problems using large language models (LLMs).
1 code implementation • 1 Jun 2023 • Lisang Ding, Kexin Jin, Bicheng Ying, Kun Yuan, Wotao Yin
Their communication, governed by the communication topology and gossip weight matrices, facilitates the exchange of model updates.
1 code implementation • 29 May 2023 • Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin, HanQin Cai
Learning to Optimize (L2O), a technique that utilizes machine learning to learn an optimization algorithm automatically from data, has gained increasing attention in recent years.
no code implementations • 12 May 2023 • Yutong He, Xinmeng Huang, Yiming Chen, Wotao Yin, Kun Yuan
In this paper, we investigate the performance limit of distributed stochastic optimization algorithms employing communication compression.
no code implementations • NeurIPS 2020 • Yanli Liu, Kaiqing Zhang, Tamer Başar, Wotao Yin
In this paper, we revisit and improve the convergence of policy gradient (PG), natural PG (NPG) methods, and their variance-reduced variants, under general smooth policy parametrizations.
1 code implementation • 14 Nov 2022 • Quan Xiao, Han Shen, Wotao Yin, Tianyi Chen
By leveraging the special structure of the equality constraints problem, the paper first presents an alternating implicit projected SGD approach and establishes the $\tilde{\cal O}(\epsilon^{-2})$ sample complexity that matches the state-of-the-art complexity of ALSET \citep{chen2021closing} for unconstrained bilevel problems.
1 code implementation • 19 Oct 2022 • Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
While mixed-integer linear programming (MILP) is NP-hard in general, practical MILP solving has seen a roughly 100-fold speedup in the past twenty years.
1 code implementation • 14 Oct 2022 • Zhuoqing Song, Weijian Li, Kexin Jin, Lei Shi, Ming Yan, Wotao Yin, Kun Yuan
In the proposed family, EquiStatic has a degree of $\Theta(\ln(n))$, where $n$ is the network size, and a series of time-dependent one-peer topologies, EquiDyn, has a constant degree of 1.
1 code implementation • 25 Sep 2022 • Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
In particular, the graph neural network (GNN) is considered a suitable ML model for optimization problems whose variables and constraints are permutation-invariant, for example, the linear program (LP).
no code implementations • 8 Jun 2022 • Xinmeng Huang, Yiming Chen, Wotao Yin, Kun Yuan
We establish a convergence lower bound for algorithms using either unbiased or contractive compressors, in both the unidirectional and bidirectional settings.
3 code implementations • 18 May 2022 • Tian Zhou, Ziqing Ma, Xue Wang, Qingsong Wen, Liang Sun, Tao Yao, Wotao Yin, Rong Jin
Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series because they effectively utilize historical information.
Ranked #3 on Time Series Forecasting on ETTh2 (96) Univariate
no code implementations • 7 Dec 2021 • Zhishuai Guo, Yi Xu, Wotao Yin, Rong Jin, Tianbao Yang
Although rigorous convergence analyses exist for Adam, they impose specific requirements on the update of the adaptive step size that are not generic enough to cover many other variants of Adam.
no code implementations • NeurIPS 2021 • Tianyi Chen, Yuejiao Sun, Wotao Yin
By leveraging the hidden smoothness of the problem, this paper presents a tighter analysis of ALSET for stochastic nested problems.
no code implementations • NeurIPS 2021 • Xinmeng Huang, Kun Yuan, Xianghui Mao, Wotao Yin
In this paper, we improve the convergence analysis and rates of variance reduction under without-replacement sampling orders for composite finite-sum minimization. Our results are twofold.
2 code implementations • 8 Nov 2021 • Bicheng Ying, Kun Yuan, Hanbin Hu, Yiming Chen, Wotao Yin
On mainstream DNN training tasks, BlueFog reaches a much higher throughput and achieves an overall $1.2\times$--$1.8\times$ speedup over Horovod, a state-of-the-art distributed deep learning package based on Ring-Allreduce.
1 code implementation • NeurIPS 2021 • Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin
Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unrolling an iterative algorithm and training it like a neural network.
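For illustration, here is a minimal NumPy sketch of the unrolling idea: ISTA's iterations are stacked as a fixed number of "layers". The weights W1, W2 and the threshold below are set to their classical ISTA values, so this is plain ISTA written in layer form; in LISTA these quantities would instead be trainable parameters learned from data.

```python
import numpy as np

def soft_threshold(v, theta):
    """Elementwise shrinkage: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_ista(A, y, num_layers=16, lam=0.1):
    """Forward pass of ISTA unrolled into a fixed number of 'layers'.
    In LISTA, W1, W2 and the per-layer thresholds would be trainable;
    here they take their classical ISTA values, so this is only a sketch."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    W1 = A.T / L                            # input weight
    W2 = np.eye(A.shape[1]) - A.T @ A / L   # recurrent weight
    theta = lam / L                         # threshold
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

A = np.random.randn(50, 100)
x_true = np.zeros(100); x_true[:5] = 1.0
y = A @ x_true + 0.01 * np.random.randn(50)
print(unrolled_ista(A, y)[:8])
```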
2 code implementations • NeurIPS 2021 • Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, Wotao Yin
Experimental results on a variety of tasks and models demonstrate that decentralized (momentum) SGD over exponential graphs promises both fast and high-quality training.
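As background, a common construction of an exponential graph connects node $i$ to nodes $i+1, i+2, i+4, \dots \pmod n$, giving each node roughly $\log_2 n$ neighbors. The sketch below only enumerates those neighbors; the exact topology and mixing weights used in the paper may differ.

```python
import numpy as np

def exponential_neighbors(i, n):
    """Out-neighbors of node i in a static exponential graph: hops of
    1, 2, 4, ... (mod n). A common construction, assumed here for
    illustration; the paper's topology and weights may differ."""
    return sorted({(i + 2 ** k) % n for k in range(int(np.log2(n)) + 1)} - {i})

n = 8
for i in range(n):
    print(i, exponential_neighbors(i, n))
```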
1 code implementation • NeurIPS 2021 • HanQin Cai, Jialin Liu, Wotao Yin
Robust principal component analysis (RPCA) is a critical tool in modern machine learning, which detects outliers in the task of low-rank matrix reconstruction.
no code implementations • 29 Sep 2021 • Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Yingya Zhang, Pan Pan, Wotao Yin
Decentralized adaptive gradient methods, in which each node averages only with its neighbors, are critical to save communication and wall-clock training time in deep learning tasks.
1 code implementation • 27 Sep 2021 • Bumsu Kim, HanQin Cai, Daniel Mckenzie, Wotao Yin
Zeroth-order methods have been gaining popularity due to the demands of large-scale machine learning applications, and the paper focuses on the selection of the step size $\alpha_k$ in these methods.
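For context, a zeroth-order method replaces the gradient with an estimate built purely from function evaluations. The sketch below uses the standard multi-point forward-difference estimator together with a fixed step size; the step-size selection rule studied in the paper is not reproduced.

```python
import numpy as np

def zo_gradient(f, x, num_dirs=20, mu=1e-4, rng=None):
    """Forward-difference zeroth-order gradient estimate averaged over
    random Gaussian directions. Only the standard estimator is shown;
    the paper's adaptive choice of the step size alpha_k is not."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_dirs

f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x)   # fixed step size, for the sketch only
print(np.round(x, 2))               # approaches the minimizer (1, ..., 1)
```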
no code implementations • 25 Jun 2021 • Tianyi Chen, Yuejiao Sun, Wotao Yin
By leveraging the hidden smoothness of the problem, this paper presents a tighter analysis of ALSET for stochastic nested problems.
1 code implementation • 2 Jun 2021 • Daniel Mckenzie, Howard Heaton, Qiuwei Li, Samy Wu Fung, Stanley Osher, Wotao Yin
Systems of competing agents can often be modeled as games.
no code implementations • 19 May 2021 • Yiming Chen, Kun Yuan, Yingya Zhang, Pan Pan, Yinghui Xu, Wotao Yin
Communication overhead hinders the scalability of large-scale distributed training.
no code implementations • 30 Apr 2021 • Zhishuai Guo, Yi Xu, Wotao Yin, Rong Jin, Tianbao Yang
First, we show that an increasing or large enough momentum parameter for the first-order moment used in practice is sufficient to ensure the convergence of adaptive algorithms whose adaptive scaling factors of the step size are bounded.
1 code implementation • 29 Apr 2021 • Howard Heaton, Samy Wu Fung, Aviv Gibali, Wotao Yin
This is accomplished using feasibility-based fixed point networks (F-FPNs).
no code implementations • 25 Apr 2021 • Xinmeng Huang, Kun Yuan, Xianghui Mao, Wotao Yin
In the highly data-heterogeneous scenario, Prox-DFinito with optimal cyclic sampling can attain a sample-size-independent convergence rate, which, to our knowledge, is the first result that matches uniform-i.i.d.-sampling variance reduction.
1 code implementation • ICCV 2021 • Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, Wotao Yin
Experimental results on a variety of computer vision tasks and models demonstrate that DecentLaM promises both efficient and high-quality training.
2 code implementations • 23 Mar 2021 • Samy Wu Fung, Howard Heaton, Qiuwei Li, Daniel Mckenzie, Stanley Osher, Wotao Yin
Unlike traditional networks, implicit networks solve a fixed point equation to compute inferences.
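A toy illustration of what "solving a fixed point equation to compute inferences" means, using a hand-made contractive map in place of a trained layer; how such a network would be trained through the fixed point is beyond this sketch.

```python
import numpy as np

def fixed_point_inference(W, b, x0, tol=1e-8, max_iter=500):
    """Inference of a toy implicit layer: iterate z <- tanh(W z + b) until
    the fixed point is (approximately) reached. W is scaled to be a
    contraction so the iteration converges."""
    z = x0
    for _ in range(max_iter):
        z_new = np.tanh(W @ z + b)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z_new

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 10))
W *= 0.5 / np.linalg.norm(W, 2)     # enforce spectral norm 0.5 < 1
b = rng.standard_normal(10)
print(fixed_point_inference(W, b, np.zeros(10))[:4])
```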
1 code implementation • 23 Mar 2021 • Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin
It automates the design of an optimization method based on its performance on a set of training problems.
1 code implementation • 22 Mar 2021 • Fei Feng, Wotao Yin, Alekh Agarwal, Lin F. Yang
Policy optimization methods remain a powerful workhorse in empirical Reinforcement Learning (RL), with a focus on neural policies that can easily reason over complex and continuous state and/or action spaces.
1 code implementation • 21 Feb 2021 • HanQin Cai, Yuchen Lou, Daniel Mckenzie, Wotao Yin
We consider the zeroth-order optimization problem in the huge-scale setting, where the dimension of the problem is so large that performing even basic vector operations on the decision variables is infeasible.
no code implementations • 9 Feb 2021 • Tianyi Chen, Yuejiao Sun, Quan Xiao, Wotao Yin
This paper develops a new optimization method, termed the Single-Timescale stochAstic BiLevEl optimization (STABLE) method, for a class of stochastic bilevel problems.
no code implementations • 21 Jan 2021 • Jinshan Zeng, Wotao Yin, Ding-Xuan Zhou
We modify ALM to use a Moreau envelope of the augmented Lagrangian and establish its convergence under conditions that are weaker than those in the literature.
Optimization and Control
no code implementations • ICLR 2021 • Jiayi Shen, Xiaohan Chen, Howard Heaton, Tianlong Chen, Jialin Liu, Wotao Yin, Zhangyang Wang
We first present Twin L2O, the first dedicated minimax L2O framework consisting of two LSTMs for updating min and max variables, respectively.
1 code implementation • 31 Dec 2020 • Tianyi Chen, Ziye Guo, Yuejiao Sun, Wotao Yin
This paper proposes an adaptive stochastic gradient descent method for distributed machine learning, which can be viewed as the communication-adaptive counterpart of the celebrated Adam method - justifying its name CADA.
1 code implementation • 22 Dec 2020 • Xinwei Zhang, Wotao Yin, Mingyi Hong, Tianyi Chen
To the best of our knowledge, this is the first formulation and algorithm developed for the hybrid FL.
1 code implementation • 13 Dec 2020 • Qi Qi, Yi Xu, Rong Jin, Wotao Yin, Tianbao Yang
In this paper, we present a simple yet effective provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Ranked #3 on Image Classification on iWildCam2020-WILDS
1 code implementation • 6 Oct 2020 • HanQin Cai, Daniel Mckenzie, Wotao Yin, Zhenliang Zhang
By treating the gradient as an unknown signal to be recovered, we show how one can use tools from one-bit compressed sensing to construct a robust and reliable estimator of the normalized gradient.
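To make the idea concrete, the sketch below recovers the gradient direction from one-bit comparisons $\mathrm{sign}(f(x+\delta z)-f(x))$ by simple sign-weighted averaging. This is only a crude surrogate: the paper instead uses one-bit compressed-sensing recovery, which can exploit structure such as gradient sparsity.

```python
import numpy as np

def normalized_grad_estimate(f, x, num_queries=100, delta=1e-3, rng=None):
    """Toy estimator of the gradient direction from one-bit comparisons
    sign(f(x + delta*z) - f(x)). Sign-averaging surrogate only; the paper
    solves a one-bit compressed-sensing problem instead."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_queries):
        z = rng.standard_normal(x.shape)
        g += np.sign(f(x + delta * z) - f(x)) * z
    return g / np.linalg.norm(g)

f = lambda x: np.sum(x ** 2)
x = np.ones(10)
true_dir = x / np.linalg.norm(x)            # true normalized gradient direction
print(normalized_grad_estimate(f, x) @ true_dir)   # close to 1 means well aligned
```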
no code implementations • 25 Aug 2020 • Tianyi Chen, Yuejiao Sun, Wotao Yin
In particular, we apply Adam to SCSC, and the exhibited rate of convergence matches that of the original Adam on non-compositional stochastic optimization.
2 code implementations • 5 Aug 2020 • Howard Heaton, Samy Wu Fung, Alex Tong Lin, Stanley Osher, Wotao Yin
To bridge this gap, we present a new algorithm that takes samples from the manifold of true data as input and outputs an approximation of the projection operator onto this manifold.
1 code implementation • NeurIPS 2020 • Yanli Liu, Yuan Gao, Wotao Yin
Furthermore, the role of dynamic parameters has not been addressed.
Optimization and Control
no code implementations • 12 Jul 2020 • Tianyi Chen, Xiao Jin, Yuejiao Sun, Wotao Yin
Horizontal federated learning (FL) handles multi-client data that share the same set of features, while vertical FL trains a better predictor that combines all the features from different clients.
1 code implementation • 22 May 2020 • Xinwei Zhang, Mingyi Hong, Sairaj Dhople, Wotao Yin, Yang Liu
Aiming at designing FL algorithms that are provably fast and require as few assumptions as possible, we propose a new algorithm design strategy from the primal-dual optimization perspective.
1 code implementation • 29 Mar 2020 • HanQin Cai, Daniel Mckenzie, Wotao Yin, Zhenliang Zhang
We consider the problem of minimizing a high-dimensional objective function, which may include a regularization term, using (possibly noisy) evaluations of the function.
1 code implementation • NeurIPS 2020 • Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin F. Yang
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems [tang2017exploration, bellemare2016unifying], we investigate when this paradigm is provably efficient.
no code implementations • 4 Mar 2020 • Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin
Our numerical examples show convergence of Safe-L2O algorithms, even when the provided data is not from the distribution of training data.
1 code implementation • 26 Feb 2020 • Tianyi Chen, Yuejiao Sun, Wotao Yin
The new algorithms adaptively choose between fresh and stale stochastic gradients and have convergence rates comparable to the original SGD.
no code implementations • 6 Dec 2019 • Fei Feng, Wotao Yin, Lin F. Yang
In particular, we provide an algorithm that uses $\widetilde{O}\left(\frac{N}{(1-\gamma)^3\varepsilon^2}\right)$ samples in a generative model to learn an $\varepsilon$-optimal policy, where $\gamma$ is the discount factor and $N$ is the number of near-optimal actions in the approximate model.
no code implementations • 24 Oct 2019 • Lei Guan, Wotao Yin, Dongsheng Li, Xicheng Lu
It allows the overlapping of the pipelines of multiple micro-batches, including those belonging to different mini-batches.
no code implementations • 25 Sep 2019 • Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin
Inferences by each network form solution estimates, and networks are trained to optimize these estimates for a particular distribution of data.
no code implementations • 25 Sep 2019 • Ernest K. Ryu, Kun Yuan, Wotao Yin
Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood.
no code implementations • 26 May 2019 • Ernest K. Ryu, Kun Yuan, Wotao Yin
Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood.
1 code implementation • 14 May 2019 • Ernest K. Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, Wotao Yin
Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms.
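A minimal sketch of plug-and-play ADMM for a least-squares data term, with a Gaussian filter standing in for BM3D or a learned denoiser; conditions under which such iterations converge are not addressed by this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(A, y, denoise, num_iters=50, rho=1.0):
    """Plug-and-play ADMM for 0.5*||Ax - y||^2 with an implicit prior
    defined by a denoiser, which replaces the usual proximal step."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    lhs = A.T @ A + rho * np.eye(n)      # x-update system, formed once
    for _ in range(num_iters):
        x = np.linalg.solve(lhs, A.T @ y + rho * (z - u))  # data-fidelity prox
        z = denoise(x + u)                                 # denoiser as prior "prox"
        u = u + x - z                                      # dual update
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 100))
x_true = gaussian_filter(rng.standard_normal(100), sigma=3)   # smooth 1-D signal
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = pnp_admm(A, y, lambda v: gaussian_filter(v, sigma=1))  # toy denoiser
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```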
no code implementations • ICLR 2019 • Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin
In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning.
no code implementations • ICLR 2019 • Robert Hannah, Fei Feng, Wotao Yin
In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD).
1 code implementation • 3 Dec 2018 • Yibo Zeng, Fei Feng, Wotao Yin
In this paper, we propose AsyncQVI, an asynchronous-parallel Q-value iteration for discounted Markov decision processes whose transition and reward can only be sampled through a generative model.
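As a simplified reference point, the sketch below runs synchronous Q-value iteration in which the expectation over next states is replaced by samples from a generative model; the asynchronous-parallel updates that define AsyncQVI are not reproduced.

```python
import numpy as np

def sampled_q_value_iteration(P, R, gamma=0.9, num_sweeps=200,
                              samples_per_pair=20, rng=None):
    """Synchronous Q-value iteration with the next-state expectation replaced
    by samples from a generative model. P[s, a] is a next-state distribution,
    R[s, a] a reward; the asynchronous-parallel scheme is not shown."""
    rng = np.random.default_rng() if rng is None else rng
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(num_sweeps):
        V = Q.max(axis=1)
        for s in range(S):
            for a in range(A):
                nxt = rng.choice(S, size=samples_per_pair, p=P[s, a])
                Q[s, a] = R[s, a] + gamma * V[nxt].mean()
    return Q

rng = np.random.default_rng(0)
S, A = 5, 2
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.random((S, A))
print(sampled_q_value_iteration(P, R).max(axis=1))   # estimated optimal values
```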
no code implementations • 22 Nov 2018 • Tao Sun, Yuejiao Sun, Yangyang Xu, Wotao Yin
In some settings, random and cyclic selections are either infeasible or very expensive.
1 code implementation • 21 Nov 2018 • Yanli Liu, Yunbei Xu, Wotao Yin
They reduce a difficult problem to simple subproblems, so they are easy to implement and have many applications.
Optimization and Control
no code implementations • NeurIPS 2018 • Tao Sun, Yuejiao Sun, Wotao Yin
This paper studies Markov chain gradient descent, a variant of stochastic gradient descent where the random samples are taken on the trajectory of a Markov chain.
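A minimal sketch of the setting, assuming a toy finite-sum objective and a lazy random walk over the component indices: the sampled index follows a Markov chain rather than being drawn i.i.d. A constant step size is used here for simplicity, so the iterate only hovers near the minimizer.

```python
import numpy as np

def markov_chain_gradient_descent(grads, x0, P, steps=5000, lr=0.01, rng=None):
    """SGD in which the sampled component index follows a Markov chain with
    transition matrix P instead of being drawn i.i.d.; a toy instance only."""
    rng = np.random.default_rng() if rng is None else rng
    x, i = x0.copy(), 0
    for _ in range(steps):
        x -= lr * grads[i](x)
        i = rng.choice(len(grads), p=P[i])   # next sample from the chain
    return x

# f(x) = mean_i 0.5*(x - c_i)^2, minimized at mean(c); the chain is a lazy
# random walk over the component indices.
c = np.array([0.0, 1.0, 2.0, 3.0])
grads = [lambda x, ci=ci: x - ci for ci in c]
n = len(c)
P = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=1)
                              + np.roll(np.eye(n), -1, axis=1))
print(markov_chain_gradient_descent(grads, np.zeros(1), P))   # near 1.5
```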
3 code implementations • NeurIPS 2018 • Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin
In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery.
1 code implementation • NeurIPS 2018 • Tianyi Chen, Georgios B. Giannakis, Tao Sun, Wotao Yin
This paper presents a new class of gradient methods for distributed machine learning that adaptively skip the gradient calculations to learn with reduced communication and computation.
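A deliberately simplified sketch of the idea: each worker uploads a fresh gradient only when it differs sufficiently from its last uploaded one, and the server reuses the stale copy otherwise. The actual skipping condition and its adaptive threshold in the paper are more refined than the fixed threshold assumed here.

```python
import numpy as np

def lazy_gradient_round(x, worker_grads, stale_grads, threshold):
    """One communication round: worker m uploads a fresh gradient only if it
    differs enough from its last uploaded (stale) one; otherwise the server
    reuses the stale copy. A simplified fixed-threshold rule, not the paper's."""
    uploads = 0
    for m, grad_fn in enumerate(worker_grads):
        g_new = grad_fn(x)
        if np.linalg.norm(g_new - stale_grads[m]) > threshold:
            stale_grads[m] = g_new          # communicate the fresh gradient
            uploads += 1
    return np.mean(stale_grads, axis=0), uploads

rng = np.random.default_rng(0)
targets = rng.standard_normal((4, 3))                     # 4 workers, quadratic losses
worker_grads = [lambda x, t=t: x - t for t in targets]
x = np.zeros(3)
stale = [np.zeros(3) for _ in targets]
for _ in range(100):
    g, uploads = lazy_gradient_round(x, worker_grads, stale, threshold=0.05)
    x -= 0.5 * g
print(np.round(x - targets.mean(axis=0), 3))   # close to the minimizer, within the tolerance
```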
no code implementations • 14 Mar 2018 • Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin
Performance of distributed optimization and learning systems is bottlenecked by "straggler" nodes and slow communication links, which significantly delay computation.
1 code implementation • 21 Jan 2018 • Weisheng Dong, Peiyao Wang, Wotao Yin, Guangming Shi, Fangfang Wu, Xiaotong Lu
Then, the iterative process is unfolded into a deep neural network, which is composed of multiple denoiser modules interleaved with back-projection (BP) modules that ensure observation consistency.
no code implementations • 22 Nov 2017 • Yifan Chen, Yuejiao Sun, Wotao Yin
If no sufficient decrease is found, the current point is called an approximate $R$-local minimizer.
no code implementations • NeurIPS 2017 • Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin
Slow running or straggler tasks can significantly reduce computation speed in distributed computation.
no code implementations • 31 Aug 2017 • Jialin Liu, Cristina Garcia-Cardona, Brendt Wohlberg, Wotao Yin
Convolutional sparse representations are a form of sparse representation with a structured, translation invariant dictionary.
no code implementations • 29 Jun 2017 • Jialin Liu, Cristina Garcia-Cardona, Brendt Wohlberg, Wotao Yin
While a number of different algorithms have recently been proposed for convolutional dictionary learning, this remains an expensive problem.
no code implementations • 13 Dec 2016 • Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin
Recent years have witnessed the surge of asynchronous parallel (async-parallel) iterative algorithms due to problems involving very large-scale data and a large number of decision variables.
no code implementations • 8 Nov 2016 • Yat Tin Chow, Tianyu Wu, Wotao Yin
To this problem, we apply the coordinate-update algorithms, which update only one or a few components of $x$ at each step.
Optimization and Control • Computation • 90C06, 90C25, 65K05
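As a concrete instance of a coordinate-update algorithm, the sketch below runs cyclic exact coordinate minimization on a positive-definite quadratic, updating one component of $x$ per step; the coordinate-friendly operator machinery developed in the paper, which makes such updates cheap for much richer problems, is not reproduced.

```python
import numpy as np

def coordinate_descent(A, b, num_epochs=100):
    """Cyclic coordinate minimization for f(x) = 0.5 x^T A x - b^T x with A
    positive definite: each step updates a single coordinate exactly."""
    n = b.size
    x = np.zeros(n)
    for _ in range(num_epochs):
        for i in range(n):
            # exact minimization over x_i, holding the other coordinates fixed
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)         # well-conditioned positive definite matrix
b = rng.standard_normal(6)
print(np.linalg.norm(coordinate_descent(A, b) - np.linalg.solve(A, b)))  # ~0
```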
no code implementations • 30 Sep 2016 • Hao-Jun Michael Shi, Shenyinying Tu, Yangyang Xu, Wotao Yin
This monograph presents a class of algorithms called coordinate descent algorithms for mathematicians, statisticians, and engineers outside the field of optimization.
no code implementations • 15 Sep 2016 • Robert Hannah, Wotao Yin
Existing analysis of ARock assumes the delays to be bounded, and uses this bound to set a step size that is important to both convergence and efficiency.
no code implementations • 5 Jan 2016 • Zhimin Peng, Tianyu Wu, Yangyang Xu, Ming Yan, Wotao Yin
To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate-friendly operators that perform low-cost coordinate updates.
no code implementations • 8 Jun 2015 • Zhimin Peng, Prudhvi Gurram, Heesung Kwon, Wotao Yin
In this paper, a novel framework of sparse kernel learning for Support Vector Data Description (SVDD) based anomaly detection is presented.
1 code implementation • 8 Jun 2015 • Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin
The agents share $x$ through either global memory or communication.
no code implementations • 16 Aug 2014 • Yangyang Xu, Wotao Yin
With very few exceptions, this issue has limited the application of image-patch methods to local tasks such as denoising, inpainting, cartoon-texture decomposition, super-resolution, and image deblurring, for which one can process a few patches at a time.
no code implementations • 12 Aug 2014 • Yangyang Xu, Wotao Yin
Its convergence for both the convex and nonconvex cases is established in different senses.
1 code implementation • 30 Jun 2014 • Stanley Osher, Feng Ruan, Jiechao Xiong, Yuan YAO, Wotao Yin
In this paper, we recover sparse signals from their noisy linear measurements by solving nonlinear differential inclusions, an approach based on the notion of inverse scale space (ISS) developed in applied mathematics.
no code implementations • 24 Apr 2014 • Wei Shi, Qing Ling, Gang Wu, Wotao Yin
In this paper, we develop a decentralized algorithm for the consensus optimization problem $$\min\limits_{x\in\mathbb{R}^p}~\bar{f}(x)=\frac{1}{n}\sum\limits_{i=1}^n f_i(x),$$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective.
Optimization and Control
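For orientation, the sketch below runs the plain decentralized gradient baseline for this consensus problem on a hypothetical four-agent ring: each agent averages with its neighbors through a mixing matrix $W$ and then takes a local gradient step. This is only the classical baseline, not the algorithm developed in the paper.

```python
import numpy as np

def decentralized_gradient_step(X, grads, W, alpha):
    """One round of plain decentralized gradient descent: mix with neighbors
    via the doubly stochastic weights W, then take local gradient steps."""
    mixed = W @ X                                    # neighbor averaging
    G = np.stack([grads[i](X[i]) for i in range(X.shape[0])])
    return mixed - alpha * G

# Consensus toy problem: f_i(x) = 0.5*(x - c_i)^2 on a ring of 4 agents.
c = np.array([0.0, 1.0, 2.0, 3.0])
grads = [lambda x, ci=ci: x - ci for ci in c]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
X = np.zeros((4, 1))                                 # one local copy per agent
for _ in range(300):
    X = decentralized_gradient_step(X, grads, W, alpha=0.05)
print(X.ravel())   # all agents hover near the global minimizer 1.5
```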
no code implementations • 30 Jan 2014 • Jianing V. Shi, Wotao Yin, Aswin C. Sankaranarayanan, Richard G. Baraniuk
We apply this framework to accelerate the acquisition process of dynamic MRI and show it achieves the best reconstruction accuracy with the least computational time compared with existing algorithms in the literature.
1 code implementation • 4 Dec 2013 • Yangyang Xu, Ruru Hao, Wotao Yin, Zhixun Su
Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods.
Numerical Analysis • Computation
no code implementations • 6 Mar 2011 • Yangyang Xu, Wotao Yin, Zaiwen Wen, Yin Zhang
By taking advantage of both nonnegativity and low-rankness, one can generally obtain better results than by using just one of the two properties.
Information Theory • Numerical Analysis