no code implementations • ICML 2020 • Maya Gupta, Erez Louidor, Oleksandr Mangylov, Nobu Morioka, Taman Narayan, Sen Zhao
We propose new multi-input shape constraints across four intuitive categories: complements, diminishers, dominance, and unimodality constraints.
1 code implementation • 26 Jul 2023 • Sen Zhao, Wei Wei, Xian-Ling Mao, Shuai Zhu, Minghui Yang, Zujie Wen, Dangyang Chen, Feida Zhu
Specifically, MHCPL dynamically selects useful social information according to the interaction history and builds a dynamic hypergraph with three types of multiplex relations from different views.
1 code implementation • 4 May 2023 • Sen Zhao, Wei Wei, Yifan Liu, Ziyang Wang, Wendi Li, Xian-Ling Mao, Shuai Zhu, Minghui Yang, Zujie Wen
Conversational recommendation systems (CRS) aim to proactively and promptly elicit users' dynamic preferred attributes through conversation for item recommendation.
1 code implementation • 23 Feb 2022 • Sen Zhao, Wei Wei, Ding Zou, Xian-Ling Mao
Specifically, MIDGN disentangles the user's intents from two different perspectives: 1) at the global level, MIDGN disentangles the user's intents coupled with inter-bundle items; 2) at the local level, MIDGN disentangles the user's intents coupled with items within each bundle.
no code implementations • 15 Feb 2022 • Taman Narayan, Heinrich Jiang, Sen Zhao, Sanjiv Kumar
Much effort has been devoted to building larger and more accurate models, but relatively little has gone into understanding which examples actually benefit from the added complexity.
no code implementations • 4 Feb 2022 • Lang Liu, Mahdi Milani Fard, Sen Zhao
We propose Distribution Embedding Networks (DEN) for classification with small data.
no code implementations • 2 Feb 2022 • Sen Zhao, Erez Louidor, Oleksandr Mangylov, Maya Gupta
We consider the problem of estimating a good maximizer of a black-box function given noisy examples.
no code implementations • 10 Dec 2021 • Sen Zhao, Yong Zhang, Shang Wang, Beitong Zhou, Cheng Cheng
Data-driven methods for remaining useful life (RUL) prediction typically learn features from a window of fixed, a priori chosen size over the degradation signal, which may yield less accurate predictions on different datasets because local features vary in scale.
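The fixed-window setup the abstract refers to can be illustrated with a short numpy sketch: a 1-D degradation signal is cut into overlapping windows of one a-priori size, each of which becomes a feature vector. This is only an illustration of the setup being criticized, not code from the paper; the function name and window size are assumptions.

```python
import numpy as np

# Illustrative only (not from the paper): featurize a degradation
# signal with a sliding window of fixed, a priori chosen size -- the
# setup the abstract says can hurt accuracy when local feature scales
# differ across datasets.

def sliding_windows(signal, window):
    """Stack all overlapping fixed-size windows of a 1-D signal as rows."""
    n = len(signal) - window + 1
    return np.stack([signal[i:i + window] for i in range(n)])

signal = np.linspace(1.0, 0.0, 10)      # toy linear degradation trace
W = sliding_windows(signal, window=4)   # window size fixed in advance
print(W.shape)                          # (7, 4): one row per window position
```

Each row of `W` would then feed a downstream RUL regressor; the point of the abstract is that the hard-coded `window=4` may be a poor choice on a dataset with different local dynamics.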
no code implementations • 1 Jan 2021 • Lang Liu, Mahdi Milani Fard, Sen Zhao
We propose Distribution Embedding Network (DEN) for meta-learning, which is designed for applications where both the distribution and the number of features could vary across tasks.
8 code implementations • 10 Dec 2019 • Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
no code implementations • ICLR 2019 • Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta
Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.
no code implementations • 16 May 2017 • Sen Zhao, Daniela Witten, Ali Shojaie
In this paper, we consider a simple and very naïve two-step procedure for this task, in which we (i) fit a lasso model in order to obtain a subset of the variables, and (ii) fit a least squares model on the lasso-selected set.
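The two-step procedure can be sketched in a few lines of numpy: step (i) fits the lasso (here via a plain ISTA proximal-gradient loop), step (ii) refits ordinary least squares on the selected support. This is a minimal sketch under assumed names and an assumed tuning parameter, not the paper's implementation or its theoretical analysis.

```python
import numpy as np

# Minimal sketch of the two-step procedure described above:
# (i) lasso for variable selection, (ii) OLS refit on the selected set.
# Function names and the penalty lam below are illustrative assumptions.

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by ISTA (proximal gradient)."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return b

def lasso_then_ols(X, y, lam):
    # Step (i): fit the lasso and keep the variables with nonzero coefficients.
    support = np.flatnonzero(lasso_ista(X, y, lam) != 0.0)
    # Step (ii): refit by least squares on the lasso-selected set only.
    b = np.zeros(X.shape[1])
    if support.size:
        b[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return b, support

# Toy data: only the first two of ten features matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(200)

b_hat, support = lasso_then_ols(X, y, lam=0.2)
```

The refit in step (ii) removes the shrinkage bias the lasso puts on the selected coefficients, which is the practical appeal of this otherwise naïve pipeline.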