no code implementations • 20 Jun 2024 • Weiqin Chen, Mark S. Squillante, Chai Wah Wu, Santiago Paternain
We devise a control-theoretic reinforcement learning approach to support direct learning of the optimal policy.
no code implementations • 3 Nov 2023 • Sanjeeb Dash, Soumyadip Ghosh, Joao Goncalves, Mark S. Squillante
Model explainability is crucial for human users to interpret how a proposed classifier assigns labels to data based on feature values.
no code implementations • 8 Jun 2023 • Peizhong Ju, Sen Lin, Mark S. Squillante, Yingbin Liang, Ness B. Shroff
For example, when the total number of features in the source task's learning model is fixed, we show that it is more advantageous to allocate a greater number of redundant features to the task-specific part rather than the common part.
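The allocation claim above can be made concrete with a small simulation. Below is a minimal sketch, assuming a synthetic linear model whose fixed feature budget is split between a common block shared across tasks and a task-specific block; all dimensions and names are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative split (not from the paper): 20 total features, with more
# of the budget allocated to the task-specific block than the common one.
n_common, n_specific, n_samples = 5, 15, 100

# Ground-truth parameters for the source task.
theta_common = rng.normal(size=n_common)      # shared across tasks
theta_specific = rng.normal(size=n_specific)  # source-task only

# Source-task data, with features split into the two blocks.
X_common = rng.normal(size=(n_samples, n_common))
X_specific = rng.normal(size=(n_samples, n_specific))
y = (X_common @ theta_common + X_specific @ theta_specific
     + 0.1 * rng.normal(size=n_samples))

# Fit the source model by least squares over the concatenated blocks.
X = np.hstack([X_common, X_specific])
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Transfer: only the common block's estimate carries over to the target task.
err = np.linalg.norm(theta_hat[:n_common] - theta_common)
print("error on transferred common part:", err)
```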
no code implementations • 19 Sep 2022 • Ismail Yunus Akhalwaya, Shashanka Ubaru, Kenneth L. Clarkson, Mark S. Squillante, Vishnu Jejjala, Yang-Hui He, Kugendran Naidoo, Vasileios Kalantzis, Lior Horesh
In this study, we present NISQ-TDA, a fully implemented end-to-end quantum machine learning algorithm that requires only short circuit depth, is applicable to high-dimensional classical data, and offers a provable asymptotic speedup for certain classes of problems.
no code implementations • 23 Feb 2022 • Xuhui Zhang, Jose Blanchet, Soumyadip Ghosh, Mark S. Squillante
In contrast, our study first illustrates the benefits of incorporating a natural geometric structure within a linear regression model, which corresponds to the generalized eigenvalue problem formed by the Gram matrices of both domains.
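To illustrate the structure being referenced, the following sketch forms the Gram matrices of source- and target-domain design matrices and solves the resulting generalized eigenvalue problem; the data and matrix names are placeholders, not the authors' construction.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Placeholder design matrices for the two domains (n samples x d features).
X_src = rng.normal(size=(200, 10))
X_tgt = rng.normal(size=(150, 10))

# Gram matrices of the two domains.
G_src = X_src.T @ X_src / X_src.shape[0]
G_tgt = X_tgt.T @ X_tgt / X_tgt.shape[0]

# Generalized eigenvalue problem G_src v = lambda * G_tgt v; scipy's eigh
# solves the symmetric-definite pair directly.
eigvals, eigvecs = eigh(G_src, G_tgt)
print("generalized eigenvalues:", eigvals)
```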
no code implementations • 5 Aug 2021 • Shashanka Ubaru, Ismail Yunus Akhalwaya, Mark S. Squillante, Kenneth L. Clarkson, Lior Horesh
In this paper, we completely overhaul the QTDA algorithm to achieve an improved exponential speedup and depth complexity of $O(n\log(1/(\delta\epsilon)))$.
no code implementations • 28 May 2019 • Yingdong Lu, Mark S. Squillante, Chai Wah Wu
We consider a new form of reinforcement learning (RL) based on opportunities to directly learn the optimal control policy, together with a general Markov decision process (MDP) framework devised to support these opportunities.
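For context, a standard tabular baseline against which such direct policy learning is typically compared is value iteration on a small MDP; the sketch below uses made-up transition and reward values and is not the authors' direct-learning approach.

```python
import numpy as np

# Toy MDP (made-up values): 3 states, 2 actions.
# P[a, s, n] is the probability of moving from state s to state n under
# action a; R[s, a] is the expected one-step reward.
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],  # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]],  # action 1
])
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
gamma = 0.95

# Value iteration: V <- max_a [ R(s,a) + gamma * sum_n P(n|s,a) V(n) ].
V = np.zeros(3)
for _ in range(1000):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("optimal values:", V, "greedy policy:", Q.argmax(axis=1))
```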
no code implementations • 18 Dec 2018 • Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, Mark S. Squillante, Ivan Oseledets, Luca Daniel
With deep neural networks providing state-of-the-art models for numerous machine learning tasks, quantifying the robustness of these models has become an important area of research.
no code implementations • 21 May 2018 • Yingdong Lu, Mark S. Squillante, Chai Wah Wu
We consider a new family of operators for reinforcement learning with the goal of alleviating the negative effects of, and becoming more robust to, approximation or estimation errors.
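One well-known way to soften the hard max in the Bellman backup, conveying the flavor of such operator families (though not necessarily the specific operators proposed in this paper), is a log-sum-exp (mellowmax-style) operator; the sketch below swaps it into a tabular Q-learning update on made-up values.

```python
import numpy as np

def mellowmax(q, omega=5.0):
    """Numerically stable log-sum-exp softening of the max over actions.
    Recovers the hard max as omega -> infinity."""
    m = np.max(q)
    return m + np.log(np.mean(np.exp(omega * (q - m)))) / omega

# Toy Q-learning step (illustrative values): 3 states, 2 actions.
Q = np.zeros((3, 2))
alpha, gamma = 0.1, 0.95

s, a, r, s_next = 0, 1, 1.0, 2             # a single made-up transition
target = r + gamma * mellowmax(Q[s_next])  # softened backup, not hard max
Q[s, a] += alpha * (target - Q[s, a])
print(Q)
```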