1 code implementation • 5 Dec 2023 • Shuangquan Feng, Junhua Ma, Virginia R. de Sa
Researchers have proposed using human preference feedback data to fine-tune text-to-image generative models.
no code implementations • NeurIPS Workshop SVRHM 2021 • Vijay Veerabadran, Ritik Raina, Virginia R. de Sa
In this work we introduce DivNormEI, a novel bio-inspired convolutional network that performs divisive normalization, a canonical cortical computation, along with lateral inhibition and excitation that is tailored for integration into modern Artificial Neural Networks (ANNs).
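The canonical divisive-normalization computation referenced here can be sketched as follows. This is a minimal NumPy illustration of the general operation, not the DivNormEI implementation; the choice of normalization pool (all channels at each spatial location) and the exponent/constant values are assumptions for illustration.

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, p=2.0):
    """Divisive normalization across channels: each unit's response is
    divided by pooled activity in its normalization pool.
    `x` has shape (channels, height, width); the pool here is all
    channels at the same spatial location (a simplifying assumption).
    """
    pool = (np.abs(x) ** p).sum(axis=0, keepdims=True)  # pooled activity
    return x / (sigma ** p + pool) ** (1.0 / p)

# Example: normalize a random feature map
x = np.random.rand(8, 4, 4)
y = divisive_normalization(x)
```

With `sigma = 1`, the denominator is at least as large as each unit's own magnitude, so normalized responses are compressed toward a bounded range, which is the gain-control effect divisive normalization is meant to provide.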
no code implementations • 21 Jun 2020 • Vijay Veerabadran, Virginia R. de Sa
Work at the intersection of vision science and deep learning is starting to explore the efficacy of deep convolutional networks (DCNs) and recurrent networks in solving perceptual grouping problems that underlie primate visual recognition and segmentation.
no code implementations • 11 Jun 2020 • Shuai Tang, Virginia R. de Sa
The large amount of online data and vast array of computing resources enable current researchers in both industry and academia to employ the power of deep learning with neural networks.
1 code implementation • 13 Dec 2019 • Xiaojing Xu, Jeannie S. Huang, Virginia R. de Sa
Previous work on automated pain detection from facial expressions has primarily focused on frame-level pain metrics based on specific facial muscle activations, such as Prkachin and Solomon Pain Intensity (PSPI).
Ranked #1 on Pain Intensity Regression on UNBC-McMaster ShoulderPain dataset (MAE (VAS) metric)
no code implementations • 25 Sep 2019 • Vijay Veerabadran, Virginia R. de Sa
The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes.
no code implementations • 27 May 2019 • Shuai Tang, Mahta Mousavi, Virginia R. de Sa
Word embeddings learnt from large corpora have been adopted in various natural language processing applications and serve as general input representations to learning systems.
no code implementations • ICLR 2019 • Shuai Tang, Virginia R. de Sa
Multi-view learning can provide self-supervision when different views of the same data are available.
1 code implementation • 29 Oct 2018 • Shuai Tang, Paul Smolensky, Virginia R. de Sa
Widely used recurrent units, including Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations remains questionable.
no code implementations • ICLR 2019 • Shuai Tang, Virginia R. de Sa
Consensus maximisation learning can provide self-supervision when different views of the same data are available.
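The underlying idea of consensus maximisation as self-supervision can be sketched as follows: representations of two views of the same data are pushed toward agreement. This is a minimal NumPy sketch of the general principle, not the paper's method; the linear encoders, noise model, and cosine-agreement objective are illustrative assumptions.

```python
import numpy as np

def consensus_loss(z1, z2):
    """Negative mean cosine similarity between paired representations
    of two views; minimizing it drives the views toward consensus."""
    z1n = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2n = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -np.mean(np.sum(z1n * z2n, axis=1))

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 10))                 # shared underlying data
view1 = x + 0.1 * rng.normal(size=x.shape)    # two noisy views of it
view2 = x + 0.1 * rng.normal(size=x.shape)
W1 = rng.normal(size=(10, 4))                 # hypothetical linear encoders
W2 = rng.normal(size=(10, 4))
loss = consensus_loss(view1 @ W1, view2 @ W2)
```

Training the encoders to minimize this loss supplies a supervisory signal without labels, since agreement between views is computable from the data alone.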
no code implementations • ACL 2019 • Shuai Tang, Virginia R. de Sa
The encoder-decoder models for unsupervised sentence representation learning tend to discard the decoder after being trained on a large unlabelled corpus, since only the encoder is needed to map the input sentence into a vector representation.
no code implementations • 18 May 2018 • Shuai Tang, Virginia R. de Sa
Multi-view learning can provide self-supervision when different views of the same data are available.
no code implementations • ICLR 2018 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language.
no code implementations • WS 2018 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required.
no code implementations • 9 Jun 2017 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
The skip-thought model has proven effective at learning sentence representations and capturing sentence semantics.
no code implementations • WS 2017 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
We train our skip-thought neighbor model on a large corpus with continuous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks.