Search Results for author: Majid Jahani

Found 9 papers, 2 papers with code

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample

1 code implementation • 28 Jan 2019 • Albert S. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč

We present two sampled quasi-Newton methods (sampled LBFGS and sampled LSR1) for solving empirical risk minimization problems that arise in machine learning.

Tasks: Benchmarking, BIG-bench Machine Learning, +3 more
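
A rough sketch of the sampling idea is given below: instead of reusing curvature pairs from past iterations, fresh (s, y) pairs are built around the current iterate and fed to the standard L-BFGS two-loop recursion. The function name and the `hess_vec` callback are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sampled_lbfgs_direction(grad, hess_vec, x, m=10, radius=1e-2, rng=None):
    """Hedged sketch of a sampled L-BFGS step direction.

    Fresh curvature pairs (s, y) are sampled around the current iterate x
    (rather than taken from past iterations) and plugged into the standard
    two-loop recursion.  `hess_vec(x, v)` is an assumed callback returning
    the Hessian-vector product at x.
    """
    rng = np.random.default_rng() if rng is None else rng
    S, Y = [], []
    for _ in range(m):
        s = radius * rng.standard_normal(x.size)  # sampled displacement
        y = hess_vec(x, s)                        # curvature along s
        if s @ y > 1e-10:                         # keep well-posed pairs only
            S.append(s)
            Y.append(y)

    # Standard L-BFGS two-loop recursion using the sampled pairs.
    q = grad.copy()
    stack = []
    for s, y in zip(reversed(S), reversed(Y)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((a, rho, s, y))
    if S:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])    # initial Hessian scaling
    for a, rho, s, y in reversed(stack):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q  # (approximate) quasi-Newton descent direction
```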

Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes

1 code implementation • 17 Dec 2012 • Peter Richtárik, Majid Jahani, Selin Damla Ahipaşaoğlu, Martin Takáč

Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations.
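
For a flavor of the alternating maximization approach, the sketch below computes a single sparse component by alternating between an auxiliary direction y = Ax/||Ax|| and a hard-thresholded, renormalized update of x. It is one illustrative formulation under assumed notation, not the paper's full framework of eight formulations or its parallel codes.

```python
import numpy as np

def sparse_pca_single_component(A, s, iters=100, rng=None):
    """Hedged sketch: one sparse principal component via alternating maximization.

    Approximately solves max_x ||A x||_2 subject to ||x||_2 = 1 and
    ||x||_0 <= s by alternating between the auxiliary direction
    y = A x / ||A x|| and a hard-thresholded, renormalized update of x.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        y /= np.linalg.norm(y)
        g = A.T @ y
        keep = np.argsort(np.abs(g))[-s:]   # indices of the s largest loadings
        sparse_g = np.zeros_like(g)
        sparse_g[keep] = g[keep]            # zero out all other loadings
        x = sparse_g / np.linalg.norm(sparse_g)
    return x
```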

Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences

no code implementations • 10 Oct 2017 • Majid Jahani, Naga Venkata C. Gudapati, Chenxin Ma, Rachael Tappenden, Martin Takáč

In this work we introduce the concept of an Underestimate Sequence (UES), which is motivated by Nesterov's estimate sequence.
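
The idea of pairing iterates with computable lower bounds can be illustrated for a mu-strongly convex function, where each gradient evaluation yields a global underestimate of the optimal value; the sketch below tracks the best such bound as an optimality certificate. This is only meant to convey the spirit of an underestimate-based certificate and is not the UES method from the paper.

```python
import numpy as np

def gradient_descent_with_certificate(f, grad, x0, mu, L, iters=200, tol=1e-8):
    """Hedged sketch of an underestimate-based optimality certificate.

    For a mu-strongly convex f, a gradient evaluation at y gives the global
    lower bound f(y) - ||grad(y)||^2 / (2 mu) on the optimal value.  Tracking
    the best such bound yields a computable gap f(x_k) - lower_bound.
    """
    x = np.asarray(x0, dtype=float)
    lower = -np.inf
    gap = np.inf
    for _ in range(iters):
        g = grad(x)
        lower = max(lower, f(x) - (g @ g) / (2.0 * mu))  # underestimate of f*
        gap = f(x) - lower                               # optimality certificate
        if gap <= tol:
            break
        x = x - g / L                                    # plain gradient step
    return x, gap
```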

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy

no code implementations • 26 Oct 2018 • Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč

In this paper, we propose a Distributed Accumulated Newton Conjugate gradiEnt (DANCE) method in which the sample size is gradually increased to quickly obtain a solution whose empirical loss is within a satisfactory level of statistical accuracy.
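
A minimal, non-distributed sketch of the accumulating-sample idea: start from a small subsample, take an approximate Newton-CG step on it, then grow the sample geometrically until the full data set is used. The `loss_grad` and `hess_vec` callbacks are assumed helpers, not part of DANCE itself.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def accumulating_sample_newton_cg(loss_grad, hess_vec, X, y, w0, stages=5):
    """Hedged sketch of a growing-sample Newton-CG scheme (not distributed).

    Starts from a small subsample, computes an approximate Newton step with
    conjugate gradient, then doubles the sample size until the full data set
    is used.  `loss_grad(w, X, y)` and `hess_vec(w, X, y, v)` are assumed
    user-supplied callbacks for the subsampled gradient and Hessian-vector
    product.
    """
    n = X.shape[0]
    w = np.asarray(w0, dtype=float)
    size = max(1, n // (2 ** stages))
    while True:
        Xs, ys = X[:size], y[:size]
        g = loss_grad(w, Xs, ys)
        H = LinearOperator((w.size, w.size),
                           matvec=lambda v: hess_vec(w, Xs, ys, v))
        d, _ = cg(H, -g, maxiter=50)   # approximate Newton direction
        w = w + d
        if size == n:
            break
        size = min(n, 2 * size)        # accumulate more samples
    return w
```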

Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1

no code implementations • 30 May 2019 • Majid Jahani, MohammadReza Nazari, Sergey Rusakov, Albert S. Berahas, Martin Takáč

In this paper, we present a scalable distributed implementation of the Sampled Limited-memory Symmetric Rank-1 (S-LSR1) algorithm.
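
One ingredient that makes a distributed SR1 method communication-efficient is the compact limited-memory representation, in which only small m-by-m matrices built from the curvature pairs need to be aggregated. The sketch below applies that compact representation to a vector; it is an illustration under assumed notation, not the paper's S-LSR1 code.

```python
import numpy as np

def lsr1_matvec(S, Y, gamma, v):
    """Hedged sketch of a limited-memory SR1 Hessian-vector product.

    Uses the compact representation B = gamma I + Psi M^{-1} Psi^T with
    Psi = Y - gamma S and M = D + L + L^T - gamma S^T S, where S^T Y =
    L + D + U (L strictly lower triangular, D diagonal).  S and Y are
    n-by-m matrices of curvature pairs; only the small m-by-m blocks
    S^T Y and S^T S would need to be aggregated across workers.
    """
    STY = S.T @ Y
    D = np.diag(np.diag(STY))
    L = np.tril(STY, k=-1)
    M = D + L + L.T - gamma * (S.T @ S)
    Psi = Y - gamma * S
    return gamma * v + Psi @ np.linalg.solve(M, Psi.T @ v)
```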

Don't Forget Your Teacher: A Corrective Reinforcement Learning Framework

no code implementations • 30 May 2019 • Mohammadreza Nazari, Majid Jahani, Lawrence V. Snyder, Martin Takáč

Therefore, we propose a student-teacher RL mechanism in which the RL agent (the "student") learns to maximize its reward, subject to a constraint that bounds the difference between the RL policy and the "teacher" policy.

Tasks: Reinforcement Learning (RL), +1 more
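
The constrained objective can be illustrated with a simple penalty formulation: the student maximizes a policy-gradient reward term while a divergence penalty keeps its policy close to the teacher's. The sketch below is a hypothetical NumPy version with an assumed KL penalty, not the paper's actual framework.

```python
import numpy as np

def corrective_policy_loss(student_logits, teacher_probs, actions,
                           advantages, penalty=1.0):
    """Hedged sketch of a student-teacher (corrective) RL objective.

    The student maximizes the usual policy-gradient surrogate while a KL
    penalty keeps its policy close to the teacher's; the constraint in the
    abstract is handled here as a simple penalty term.
    """
    # Softmax over the student's action logits.
    z = student_logits - student_logits.max(axis=1, keepdims=True)
    student_probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    # Policy-gradient surrogate: log pi(a|s) weighted by the advantage.
    log_pi = np.log(student_probs[np.arange(len(actions)), actions] + 1e-12)
    reward_term = np.mean(log_pi * advantages)

    # Divergence from the teacher policy (the "corrective" constraint).
    kl = np.sum(teacher_probs * np.log((teacher_probs + 1e-12) /
                                       (student_probs + 1e-12)), axis=1)
    return -(reward_term - penalty * np.mean(kl))  # loss to minimize
```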

DynNet: Physics-based neural architecture design for linear and nonlinear structural response modeling and prediction

no code implementations • 3 Jul 2020 • Soheil Sadeghi Eshkevari, Martin Takáč, Shamim N. Pakzad, Majid Jahani

Data-driven models for predicting dynamic responses of linear and nonlinear systems are of great importance due to their wide application from probabilistic analysis to inverse problems such as system identification and damage diagnosis.
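
As a loose illustration of embedding structural dynamics into a predictive model, the sketch below advances the equation of motion M a + C v + K x = f by one explicit step, the kind of physics a physics-guided recurrent cell could encode; the actual DynNet architecture and its learned parameterization are not reproduced here.

```python
import numpy as np

def structural_step(x, v, f, M, C, K, dt):
    """Hedged sketch of a physics-based recurrence for structural response.

    Advances the linear equation of motion M*a + C*v + K*x = f by one
    semi-implicit Euler step; x and v are displacement and velocity vectors,
    f is the external force at the current time step.
    """
    a = np.linalg.solve(M, f - C @ v - K @ x)  # acceleration from equation of motion
    v_next = v + dt * a                        # update velocity first
    x_next = x + dt * v_next                   # then displacement
    return x_next, v_next
```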
