Search Results for author: Fangda Gu

Found 7 papers, 4 papers with code

Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations

no code implementations • 27 Feb 2024 • Jiaqi Zhai, Lucy Liao, Xing Liu, Yueming Wang, Rui Li, Xuan Cao, Leon Gao, Zhaojie Gong, Fangda Gu, Michael He, Yinghai Lu, Yu Shi

Large-scale recommendation systems are characterized by their reliance on high-cardinality, heterogeneous features and the need to handle tens of billions of user actions on a daily basis.

Recommendation Systems

Synthesis of Stabilizing Recurrent Equilibrium Network Controllers

1 code implementation • 31 Mar 2022 • Neelay Junnarkar, He Yin, Fangda Gu, Murat Arcak, Peter Seiler

We propose a parameterization of a nonlinear dynamic controller based on the recurrent equilibrium network, a generalization of the recurrent neural network.

Policy Gradient Methods
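
For context, a minimal sketch of the recurrent equilibrium network (REN) form the abstract refers to: an LTI state-space model in feedback with a static nonlinearity through an implicit (equilibrium) layer. Everything below is illustrative; the matrices are random placeholders, the direct feedthrough term is omitted, and the paper's stability-enforcing parameter constraints are not implemented.

```python
import numpy as np

# Illustrative REN step (not the paper's constrained parameterization):
#   x_{t+1} = A x_t + B1 w_t + B2 u_t
#   w_t     = tanh(C1 x_t + D11 w_t + D12 u_t)   <- implicit in w_t
#   y_t     = C2 x_t + D21 w_t
rng = np.random.default_rng(0)
nx, nw, nu, ny = 4, 8, 2, 2          # state / equilibrium / input / output dims

A   = 0.5 * rng.standard_normal((nx, nx))
B1  = rng.standard_normal((nx, nw));  B2  = rng.standard_normal((nx, nu))
C1  = rng.standard_normal((nw, nx));  D12 = rng.standard_normal((nw, nu))
D11 = 0.1 * rng.standard_normal((nw, nw))  # kept small so the fixed point converges
C2  = rng.standard_normal((ny, nx));  D21 = rng.standard_normal((ny, nw))

def ren_step(x, u, iters=50):
    """Solve the equilibrium equation by fixed-point iteration, then update."""
    w = np.zeros(nw)
    for _ in range(iters):
        w = np.tanh(C1 @ x + D11 @ w + D12 @ u)
    y = C2 @ x + D21 @ w
    x_next = A @ x + B1 @ w + B2 @ u
    return x_next, y

x, u = np.zeros(nx), rng.standard_normal(nu)
x, y = ren_step(x, u)
```

Setting D11 = 0 collapses the equilibrium layer into an ordinary RNN cell, which is the sense in which the REN generalizes the recurrent neural network.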

Recurrent Neural Network Controllers Synthesis with Stability Guarantees for Partially Observed Systems

1 code implementation • 8 Sep 2021 • Fangda Gu, He Yin, Laurent El Ghaoui, Murat Arcak, Peter Seiler, Ming Jin

Neural network controllers have become popular in control tasks thanks to their flexibility and expressivity.

LEMMA

Implicit Graph Neural Networks

1 code implementation • NeurIPS 2020 • Fangda Gu, Heng Chang, Wenwu Zhu, Somayeh Sojoudi, Laurent El Ghaoui

Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data.

Graph Learning
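
The "implicit" in the title refers to defining node representations as the solution of a fixed-point equation rather than the output of a fixed stack of layers. A rough sketch under that reading, with random placeholder weights and without the well-posedness conditions on W that the paper establishes:

```python
import numpy as np

# Illustrative implicit-GNN fixed point: X = relu(W X A + B U), solved by iteration.
rng = np.random.default_rng(1)
n, d, p = 5, 8, 3                                    # nodes, hidden dim, feature dim

adj = (rng.random((n, n)) < 0.4).astype(float)       # random graph
A = adj / np.maximum(adj.sum(axis=0, keepdims=True), 1)  # column-normalized adjacency
U = rng.standard_normal((p, n))                      # node features
W = 0.2 * rng.standard_normal((d, d))                # small norm, so iteration converges
B = rng.standard_normal((d, p))

X = np.zeros((d, n))
for _ in range(100):                                 # fixed-point iteration
    X_new = np.maximum(W @ X @ A + B @ U, 0.0)
    if np.linalg.norm(X_new - X) < 1e-6:
        break
    X = X_new
X = X_new                                            # converged node representations
```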

Implicit Deep Learning

no code implementations • 17 Aug 2019 • Laurent El Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai

Implicit deep learning prediction rules generalize the recursive rules of feedforward neural networks.
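
Concretely, the prediction rule pairs a linear output map with an equilibrium equation for the hidden state. A sketch of the standard form (notation assumed, not quoted from the paper):

```latex
\hat{y}(u) = C x + D u, \qquad x = \phi(A x + B u)
```

Here u is the input, x the hidden state, and \phi an elementwise activation. When A is strictly block upper triangular the equation can be solved by forward substitution, recovering an ordinary feedforward network; general A yields genuinely implicit models.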

Fenchel Lifted Networks: A Lagrange Relaxation of Neural Network Training

1 code implementation • 20 Nov 2018 • Fangda Gu, Armin Askari, Laurent El Ghaoui

In this paper, we introduce a new class of lifted models, Fenchel lifted networks, that enjoy the same benefits as previous lifted models, without suffering a degradation in performance over classical networks.
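
For readers unfamiliar with lifted models: they treat the per-layer activations as optimization variables and relax the layer equations into penalties. A schematic of the lifted training objective, in illustrative notation rather than the paper's:

```latex
\min_{\{W_\ell\},\, \{X_\ell\}} \;
  \mathcal{L}(Y, X_L) \;+\; \sum_{\ell} \lambda_\ell\,
  B_\ell\!\left(X_{\ell+1},\, W_\ell X_\ell\right)
```

Each penalty B_\ell vanishes exactly when X_{\ell+1} = \phi(W_\ell X_\ell). In Fenchel lifted networks the penalties are built from Fenchel conjugates of the activation, making the objective biconvex in the weights and activations and hence amenable to block coordinate descent.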

Context-Aware Policy Reuse

no code implementations • 11 Jun 2018 • Siyuan Li, Fangda Gu, Guangxiang Zhu, Chongjie Zhang

Transfer learning can greatly speed up reinforcement learning for a new task by leveraging policies of relevant tasks.

Transfer Learning
