no code implementations • 8 Dec 2023 • Shuai Tang, Zhiwei Steven Wu, Sergul Aydore, Michael Kearns, Aaron Roth
Our proposed MI attack learns quantile regression models that predict (a quantile of) the distribution of reconstruction loss on examples not used in training.
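As a rough illustration of the idea (not the paper's implementation), the quantile-regression step can be sketched as follows: fit a model that predicts a low quantile of the loss on held-out (non-member) examples, then flag an example as a training member if its observed loss falls below that predicted quantile. All data here is synthetic, and scikit-learn's `GradientBoostingRegressor` with `loss="quantile"` merely stands in for whatever quantile regression model the attack actually uses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
d = 5

def loss_of(x, member):
    """Synthetic per-example reconstruction loss; members reconstruct better."""
    base = np.abs(x).sum(axis=1)           # loss depends on the example
    shift = 1.5 if member else 0.0         # members get systematically lower loss
    return base - shift + rng.normal(0, 0.1, size=len(x))

# Fit the 5%-quantile of non-member loss as a per-example threshold,
# using public data known not to be in the training set.
x_pub = rng.normal(size=(2000, d))
q_model = GradientBoostingRegressor(loss="quantile", alpha=0.05)
q_model.fit(x_pub, loss_of(x_pub, member=False))

def attack(x, observed_loss):
    # Flag as member when the observed loss undercuts the predicted quantile.
    return observed_loss < q_model.predict(x)

x_mem = rng.normal(size=(500, d))
x_non = rng.normal(size=(500, d))
tpr = attack(x_mem, loss_of(x_mem, member=True)).mean()   # true positive rate
fpr = attack(x_non, loss_of(x_non, member=False)).mean()  # false positive rate
```

Because the threshold is predicted per example rather than fixed globally, the attack adapts to examples that are intrinsically easy or hard to reconstruct.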
2 code implementations • 6 Mar 2023 • Shuai Tang, Sergul Aydore, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu
We revisit the problem of differentially private squared error linear regression.
no code implementations • 15 Sep 2022 • Giuseppe Vietri, Cedric Archambeau, Sergul Aydore, William Brown, Michael Kearns, Aaron Roth, Ankit Siva, Shuai Tang, Zhiwei Steven Wu
A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches which require numerical features to be first converted into high-cardinality categorical features via a binning strategy.
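For concreteness, the binning strategy that prior approaches rely on can be sketched as below; the bin count (64) is an arbitrary choice for illustration, and the feature is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10, sigma=1, size=1000)  # a numerical feature

# Fixed-width binning: map each value to one of 64 categories spanning
# the observed range, turning the column into a high-cardinality
# categorical feature.
n_bins = 64
edges = np.linspace(income.min(), income.max(), n_bins + 1)
categories = np.clip(np.digitize(income, edges) - 1, 0, n_bins - 1)
```

Each distinct bin then has to be treated as its own category downstream, which is exactly the blow-up that direct handling of numerical features avoids.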
1 code implementation • 25 Feb 2022 • Linhao Luo, Yumeng Li, Buyu Gao, Shuai Tang, Sinan Wang, Jiancheng Li, Tanchao Zhu, Jiancai Liu, Zhao Li, Shirui Pan
We integrate these components into a unified framework and present MAMDR, which can be applied to any model structure to perform multi-domain recommendation.
no code implementations • 9 Feb 2022 • Mahta Mousavi, Eric Lybrand, Shuangquan Feng, Shuai Tang, Rayan Saab, Virginia de Sa
In this work, we propose a novel algorithm called Spectrally Adaptive Common Spatial Patterns (SACSP) that improves CSP by learning a temporal/spectral filter for each spatial filter so that the spatial filters are concentrated on the most relevant temporal frequencies for each user.
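To make the baseline concrete, here is a minimal sketch of standard CSP (the method SACSP extends), on synthetic two-class "EEG" where class 1 carries extra power on one channel; the spectral-adaptation step of SACSP itself is omitted.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_trials, n_ch, n_t = 20, 8, 500
x1 = rng.normal(size=(n_trials, n_ch, n_t))
x1[:, 0] *= 3.0                        # class 1: extra variance on channel 0
x2 = rng.normal(size=(n_trials, n_ch, n_t))

def avg_cov(trials):
    """Average per-trial spatial covariance (n_ch x n_ch)."""
    return np.mean([t @ t.T / n_t for t in trials], axis=0)

c1, c2 = avg_cov(x1), avg_cov(x2)
# CSP spatial filters are generalized eigenvectors of (c1, c1 + c2);
# the largest-eigenvalue filter maximises class-1 variance relative to
# class 2 (and the smallest does the reverse).
evals, w = eigh(c1, c1 + c2)
w_top = w[:, -1]

var1 = np.mean([np.var(w_top @ t) for t in x1])
var2 = np.mean([np.var(w_top @ t) for t in x2])
```

SACSP's contribution, per the abstract, is to pair each such spatial filter with a learned temporal/spectral filter so that the filtering concentrates on the frequencies most discriminative for each user.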
1 code implementation • 2 Mar 2021 • Wesley J. Maddox, Shuai Tang, Pablo Garcia Moreno, Andrew Gordon Wilson, Andreas Damianou
The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings.
no code implementations • 11 Jun 2020 • Shuai Tang, Virginia R. de Sa
The large amount of online data and vast array of computing resources enable current researchers in both industry and academia to employ the power of deep learning with neural networks.
3 code implementations • 25 Mar 2020 • Shuai Tang, Wesley J. Maddox, Charlie Dickens, Tom Diethe, Andreas Damianou
A suitable similarity index for comparing learnt neural networks plays an important role in understanding the behaviour of these highly nonlinear functions, and can provide insights for further theoretical analysis and empirical studies.
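As one widely used example of such an index (a stand-in here, not necessarily the index proposed in the paper), linear CKA compares two networks' activation matrices and is invariant to orthogonal transformations of either representation:

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two (n_examples, n_features) activation matrices."""
    x = x - x.mean(axis=0)  # centre each feature
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2
    return hsic / (np.linalg.norm(x.T @ x, "fro") *
                   np.linalg.norm(y.T @ y, "fro"))

rng = np.random.default_rng(0)
a = rng.normal(size=(100, 20))                   # activations from one network
q, _ = np.linalg.qr(rng.normal(size=(20, 20)))   # a random orthogonal matrix
sim_rotated = linear_cka(a, a @ q)               # rotation: similarity stays 1
sim_random = linear_cka(a, rng.normal(size=(100, 20)))  # unrelated activations
```

The invariance matters because two networks can learn the same representation up to a rotation of feature space, and a good index should not penalise that.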
1 code implementation • 21 Oct 2019 • Mao-Chuang Yeh, Shuai Tang, Anand Bhattad, Chuhang Zou, David Forsyth
Style transfer methods produce a transferred image which is a rendering of a content image in the manner of a style image.
no code implementations • 27 May 2019 • Shuai Tang, Mahta Mousavi, Virginia R. de Sa
Word embeddings learnt from large corpora have been adopted in various applications in natural language processing and serve as general input representations to learning systems.
no code implementations • ICLR 2019 • Shuai Tang, Virginia R. de Sa
Multi-view learning can provide self-supervision when different views of the same data are available.
1 code implementation • 29 Oct 2018 • Shuai Tang, Paul Smolensky, Virginia R. de Sa
Widely used recurrent units, including Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations is still questionable.
no code implementations • ICLR 2019 • Shuai Tang, Virginia R. de Sa
Consensus maximisation learning can provide self-supervision when different views of the same data are available.
no code implementations • ACL 2019 • Shuai Tang, Virginia R. de Sa
The encoder-decoder models for unsupervised sentence representation learning tend to discard the decoder after being trained on a large unlabelled corpus, since only the encoder is needed to map the input sentence into a vector representation.
no code implementations • 18 May 2018 • Shuai Tang, Virginia R. de Sa
Multi-view learning can provide self-supervision when different views of the same data are available.
no code implementations • 31 Mar 2018 • Mao-Chuang Yeh, Shuai Tang, Anand Bhattad, D. A. Forsyth
Style transfer methods produce a transferred image which is a rendering of a content image in the manner of a style image.
no code implementations • 5 Jan 2018 • Mao-Chuang Yeh, Shuai Tang
This paper demonstrates that controlling inter-layer correlations yields visible improvements in style transfer methods.
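A minimal numpy sketch of the distinction, under assumed shapes: standard style losses match within-layer Gram matrices, while inter-layer control additionally matches a cross-layer Gram computed between feature maps of adjacent layers (assumed here to be resized to a common spatial grid).

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 16
f1 = rng.normal(size=(64, h, w)).reshape(64, -1)    # layer l features
f2 = rng.normal(size=(128, h, w)).reshape(128, -1)  # layer l+1 (resized)

# Within-layer Gram: channel-channel correlations inside one layer.
within_gram = f1 @ f1.T / f1.shape[1]   # shape (64, 64)
# Cross-layer Gram: correlations between channels of adjacent layers,
# the extra statistic that inter-layer control constrains.
cross_gram = f1 @ f2.T / f1.shape[1]    # shape (64, 128)
```

Matching `cross_gram` in addition to `within_gram` ties together how style statistics co-occur across layers, rather than treating each layer independently.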
no code implementations • ICLR 2018 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language.
no code implementations • WS 2018 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required.
no code implementations • WS 2017 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
We train our skip-thought neighbor model on a large corpus with continuous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks.
no code implementations • 9 Jun 2017 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
The skip-thought model has been proven to be effective at learning sentence representations and capturing sentence semantics.
no code implementations • 23 Nov 2015 • Patrick W. Gallagher, Shuai Tang, Zhuowen Tu
Top-down information plays a central role in human perception, but has relatively little role in many current state-of-the-art deep networks, such as Convolutional Neural Networks (CNNs).