no code implementations • 4 Feb 2024 • Arsalan SharifNassab, Saber Salehkaleybar, Richard Sutton
This paper addresses the challenge of optimizing meta-parameters (i.e., hyperparameters) in machine learning algorithms, a critical factor influencing training efficiency and model performance.
1 code implementation • 11 Dec 2023 • Yuqin Yang, Saber Salehkaleybar, Negar Kiyavash
We provide a candidate intervention target set which is a superset of the true intervention targets.
no code implementations • 15 Nov 2023 • Sadegh Khorasani, Saber Salehkaleybar, Negar Kiyavash, Niao He, Matthias Grossglauser
Policy gradient (PG) is widely used in reinforcement learning due to its scalability and good performance.
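For context, the basic policy-gradient estimator (vanilla REINFORCE) that this line of work builds on can be sketched as follows. This is a generic illustration with a toy state-independent softmax policy, not the variance-reduced method proposed in the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_log_pi(theta, state, action):
    # Gradient of the log of a softmax policy (state-independent, for illustration).
    onehot = np.zeros_like(theta)
    onehot[action] = 1.0
    return onehot - softmax(theta)

def reinforce_gradient(theta, trajectories):
    """Vanilla REINFORCE estimator: mean over trajectories of
    (total return) * (sum of grad-log-probabilities of taken actions)."""
    grads = []
    for states, actions, rewards in trajectories:
        ret = sum(rewards)
        g = sum(grad_log_pi(theta, s, a) for s, a in zip(states, actions))
        grads.append(ret * g)
    return np.mean(grads, axis=0)

# Toy usage: a single trajectory in which action 0 earned reward 1.
theta = np.zeros(2)
g = reinforce_gradient(theta, [([0], [0], [1.0])])
# g[0] > 0: the update pushes probability mass toward the rewarded action.
```

The estimator's high variance is exactly what motivates the variance-reduction techniques studied in papers like this one.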
no code implementations • 27 May 2022 • Ramin Safaeian, Saber Salehkaleybar, Mahmoud Tabandeh
In particular, we show that these functions have some desirable properties, enabling us to speed up the process of applying Meek rules.
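To illustrate what "applying Meek rules" means, here is a minimal sketch of Meek's first orientation rule (if $a \to b$, $b - c$ is undirected, and $a$ and $c$ are non-adjacent, orient $b \to c$ to avoid creating a new v-structure). The graph representation is an assumption for illustration; the paper's speed-up functions are not reproduced here:

```python
def meek_rule_1(directed, undirected):
    """Apply Meek's rule 1 once: for each directed edge a -> b, if b - c is
    undirected and a, c are non-adjacent, orient b - c as b -> c.
    directed: set of (u, v) tuples; undirected: set of frozensets {u, v}."""
    adjacent = {frozenset(e) for e in directed} | set(undirected)
    oriented = set()
    for (a, b) in directed:
        for e in undirected:
            if b in e:
                (c,) = e - {b}
                if c != a and frozenset((a, c)) not in adjacent:
                    oriented.add((b, c))
    return oriented

# Chain a -> b - c with a, c non-adjacent: rule 1 orients b -> c.
print(meek_rule_1({("a", "b")}, {frozenset(("b", "c"))}))  # {('b', 'c')}
```

In a full Meek closure, this rule (together with rules 2 to 4) is applied repeatedly until no further edge can be oriented, which is the iteration the paper aims to accelerate.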
no code implementations • 25 May 2022 • Saeed Masiha, Saber Salehkaleybar, Niao He, Negar Kiyavash, Patrick Thiran
We prove that the total sample complexity of SCRN in achieving $\epsilon$-global optimum is $\mathcal{O}(\epsilon^{-7/(2\alpha)+1})$ for $1\le\alpha< 3/2$ and $\tilde{\mathcal{O}}(\epsilon^{-2/\alpha})$ for $3/2\le\alpha\le 2$.
1 code implementation • 20 May 2022 • Ehsan Mokhtarian, Saber Salehkaleybar, AmirEmad Ghassami, Negar Kiyavash
We study experiment design for unique identification of the causal graph of a simple SCM, where the graph may contain cycles.
no code implementations • 17 May 2022 • Saber Salehkaleybar, Sadegh Khorasani, Negar Kiyavash, Niao He, Patrick Thiran
The SHARP algorithm is parameter-free, achieving an $\epsilon$-approximate first-order stationary point with $O(\epsilon^{-3})$ trajectories, while using a batch size of $O(1)$ at each iteration.
1 code implementation • 7 Dec 2021 • Mohammad Reza Samsami, Mohammadhossein Bahari, Saber Salehkaleybar, Alexandre Alahi
CIM explicitly discovers the causal model and utilizes it to train the policy.
no code implementations • 19 Aug 2021 • Arsalan SharifNassab, Saber Salehkaleybar, S. Jamaloddin Golestani
We then prove that this lower bound is order optimal in $m$ and $n$ by presenting a distributed learning algorithm, called Multi-Resolution Estimator for Non-Convex loss function (MRE-NC), whose expected loss matches the lower bound for large $mn$ up to polylogarithmic factors.
no code implementations • 16 Sep 2020 • Sepehr Dehdashtian, Matin Hashemi, Saber Salehkaleybar
We consider the problem of recovering channel code parameters over a candidate set by merely analyzing the received encoded signals.
no code implementations • 7 Sep 2020 • Amir Amirinezhad, Saber Salehkaleybar, Matin Hashemi
We study the problem of experiment design to learn causal structures from interventional data.
1 code implementation • IEEE conference 2020 • Samira Malek, Saber Salehkaleybar, Arash Amini
In this paper, we introduce a new network architecture by increasing the number of variable-node layers, while keeping the check-node layers unchanged.
1 code implementation • ICML 2020 • Ali AhmadiTeshnizi, Saber Salehkaleybar, Negar Kiyavash
We utilize the proposed method for computing MEC sizes and experiment design in active and passive learning settings.
no code implementations • ICLR 2020 • Arsalan Sharifnassab, Saber Salehkaleybar, S. Jamaloddin Golestani
We show that there exist poor local minima with positive curvature for some training sets of size $n\geq m+2d-2$.
1 code implementation • NeurIPS 2019 • Arsalan Sharifnassab, Saber Salehkaleybar, S. Jamaloddin Golestani
We propose an algorithm called Multi-Resolution Estimator (MRE) whose expected error is no larger than $\tilde{O}\big(m^{-{1}/{\max(d, 2)}} n^{-1/2}\big)$, where $d$ is the dimension of the parameter space.
1 code implementation • Signal Processing, Elsevier 2020 • Benyamin Ghojogh, Saber Salehkaleybar
For the second algorithm, we show that it returns the correct output with high probability.
Distributed Voting
no code implementations • 12 Oct 2019 • AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash
For this case, we propose an efficient exact algorithm for the worst-case gain setup, as well as an approximate algorithm for the average gain setup.
no code implementations • 10 Sep 2019 • M. Reza Heydari, Saber Salehkaleybar, Kun Zhang
We propose two nonlinear regression methods, named Adversarial Orthogonal Regression (AdOR) for additive noise models and Adversarial Orthogonal Structural Equation Model (AdOSE) for the general case of structural equation models.
no code implementations • 11 Aug 2019 • Saber Salehkaleybar, AmirEmad Ghassami, Negar Kiyavash, Kun Zhang
It can be shown that causal effects among observed variables cannot be identified uniquely even under the assumptions of faithfulness and non-Gaussianity of exogenous noises.
1 code implementation • 12 May 2019 • Saber Salehkaleybar, Arsalan Sharif-Nassab, S. Jamaloddin Golestani
We investigate the impact of the communication constraint $B$ on the expected error and derive a tight lower bound on the error achievable by any algorithm.
2 code implementations • 20 Dec 2018 • Behrooz Zarebavani, Foad Jafarinejad, Matin Hashemi, Saber Salehkaleybar
The main goal in many fields in the empirical sciences is to discover causal relationships among a set of variables from observational data.
no code implementations • 5 Feb 2018 • AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash, Kun Zhang
In this paper, we propose a new technique for counting the number of DAGs in a Markov equivalence class.
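The counting problem can be made concrete with a brute-force baseline: given a CPDAG (directed plus undirected edges), enumerate all orientations of the undirected edges and keep those that are acyclic and preserve the CPDAG's v-structures. This naive enumeration is only a sketch of the problem being solved; the paper's technique counts without exhaustive enumeration:

```python
from itertools import product, combinations

def v_structures(edges):
    """Colliders a -> b <- c with a and c non-adjacent."""
    adj = {frozenset(e) for e in edges}
    parents = {}
    for u, v in edges:
        parents.setdefault(v, set()).add(u)
    vs = set()
    for b, ps in parents.items():
        for a, c in combinations(sorted(ps), 2):
            if frozenset((a, c)) not in adj:
                vs.add((a, b, c))
    return vs

def is_acyclic(nodes, edges):
    # Kahn's algorithm: repeatedly remove nodes with no incoming edge.
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    stack = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while stack:
        n = stack.pop()
        seen += 1
        for u, v in edges:
            if u == n:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
    return seen == len(nodes)

def count_mec_dags(nodes, directed, undirected):
    """Count consistent extensions of a CPDAG: orientations of the
    undirected edges that are acyclic and add no new v-structure."""
    target_vs = v_structures(directed)
    undirected = list(undirected)
    count = 0
    for bits in product([0, 1], repeat=len(undirected)):
        edges = set(directed)
        for bit, (u, v) in zip(bits, undirected):
            edges.add((u, v) if bit else (v, u))
        if is_acyclic(nodes, edges) and v_structures(edges) == target_vs:
            count += 1
    return count

# Chain a - b - c (no v-structure): its MEC contains 3 DAGs.
print(count_mec_dags(["a", "b", "c"], set(), [("a", "b"), ("b", "c")]))  # 3
```

The exponential cost of this enumeration (in the number of undirected edges) is precisely why dedicated counting techniques matter.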
no code implementations • ICML 2018 • AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash, Elias Bareinboim
We study the problem of causal structure learning when the experimenter is limited to perform at most $k$ non-adaptive experiments of size $1$.
no code implementations • NeurIPS 2017 • AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash, Kun Zhang
We study causal inference in a multi-environment setting, in which the functional relations for producing the variables from their direct causes remain the same across environments, while the distribution of exogenous noises may vary.
no code implementations • 26 Mar 2017 • Saber Salehkaleybar, S. Jamaloddin Golestani
In this paper, we propose a novel token-based approach to compute a wide class of target functions, which we refer to as the "Token-based function Computation with Memory" (TCM) algorithm.
no code implementations • 26 Mar 2017 • Saber Salehkaleybar, Arsalan Sharif-Nassab, S. Jamaloddin Golestani
Considering a network with $n$ nodes, where each node initially votes for one (or more) choices out of $K$ possible choices, we present a Distributed Multi-choice Voting/Ranking (DMVR) algorithm to determine either the choice with maximum vote (the voting problem) or to rank all the choices in terms of their acquired votes (the ranking problem).
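The target functions of the voting and ranking problems can be sketched centrally as follows; this is only the function the distributed DMVR protocol is designed to reproduce, not the token-based protocol itself (the alphabetical tie-break is an assumption for illustration):

```python
from collections import Counter

def plurality_and_ranking(votes):
    """Tally multi-choice ballots: return the plurality winner(s) and the
    full ranking of choices by vote count (ties broken by choice label)."""
    tally = Counter()
    for ballot in votes:          # each node's ballot may name several choices
        tally.update(ballot)
    ranking = sorted(tally, key=lambda c: (-tally[c], c))
    top = tally[ranking[0]]
    winners = sorted(c for c in tally if tally[c] == top)
    return winners, ranking

# n = 5 nodes, K = 3 choices; the third node votes for two choices.
votes = [["A"], ["B"], ["A", "C"], ["C"], ["C"]]
print(plurality_and_ranking(votes))  # (['C'], ['C', 'A', 'B'])
```

The distributed challenge addressed by DMVR is computing these outputs with local message passing rather than a central tally.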
no code implementations • 27 Feb 2017 • Saber Salehkaleybar, Jalal Etesami, Negar Kiyavash, Kun Zhang
We show that the support of the transition matrix among the observed processes and the lengths of all latent paths between any two observed processes can be identified successfully under some conditions on the VAR model.
no code implementations • 27 Feb 2017 • AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash
We study the problem of causal structure learning over a set of random variables when the experimenter is allowed to perform at most $M$ experiments in a non-adaptive manner.
no code implementations • 23 Jan 2017 • Saber Salehkaleybar, Jalal Etesami, Negar Kiyavash
We propose an approach for learning the causal structure in stochastic dynamical systems with a $1$-step functional dependency in the presence of latent variables.