Search Results for author: Virginia R. de Sa

Found 16 papers, 3 papers with code

FERGI: Automatic Annotation of User Preferences for Text-to-Image Generation from Spontaneous Facial Expression Reaction

1 code implementation • 5 Dec 2023 • Shuangquan Feng, Junhua Ma, Virginia R. de Sa

First, we can automatically annotate user preferences between image pairs that show substantial differences in these facial action unit (AU) responses, with an accuracy that significantly outperforms state-of-the-art scoring models.

Text-to-Image Generation
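
The excerpt above hinges on one idea: a preference label is assigned only when the two images' facial action unit (AU) reaction scores differ substantially. The toy rule below is a minimal sketch of that thresholding step; the score inputs, the margin value, and the function name are all assumptions for illustration, not the paper's actual annotation pipeline.

```python
def annotate_preference(au_score_a, au_score_b, margin=0.5):
    """Hypothetical annotation rule: if the AU-based reaction scores for two
    generated images differ by more than a margin, label the image with the
    stronger positive reaction as preferred; otherwise abstain.
    (Illustrative only; the score definition and margin are assumptions.)"""
    if abs(au_score_a - au_score_b) < margin:
        return None                              # difference too small to trust
    return "A" if au_score_a > au_score_b else "B"

print(annotate_preference(1.8, 0.4))             # 'A'
print(annotate_preference(0.9, 0.8))             # None
```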

Bio-inspired learnable divisive normalization for ANNs

no code implementations • NeurIPS Workshop SVRHM 2021 • Vijay Veerabadran, Ritik Raina, Virginia R. de Sa

In this work we introduce DivNormEI, a novel bio-inspired convolutional network that performs divisive normalization, a canonical cortical computation, along with lateral inhibition and excitation that is tailored for integration into modern Artificial Neural Networks (ANNs).

Image Classification Object Recognition
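
Divisive normalization rescales each unit's activity by a pooled measure of the activity of its neighbors. The PyTorch sketch below shows that canonical computation in a convolutional setting so the abstract's terminology is concrete; it is only a generic illustration, not the DivNormEI layer, and the kernel size and parameter names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDivisiveNorm(nn.Module):
    """Minimal divisive normalization: divide each activation by a weighted
    pool of rectified activity in a local spatial neighborhood.
    (Illustrative only; not the DivNormEI layer from the paper.)"""

    def __init__(self, channels, kernel_size=5):
        super().__init__()
        # Learnable pooling weights and a stabilizing constant sigma.
        self.pool_weights = nn.Parameter(torch.ones(channels, 1, kernel_size, kernel_size))
        self.sigma = nn.Parameter(torch.ones(1))
        self.channels = channels
        self.padding = kernel_size // 2

    def forward(self, x):
        # Pool rectified activity over a local neighborhood, per channel.
        pooled = F.conv2d(x.relu(), self.pool_weights.abs(),
                          padding=self.padding, groups=self.channels)
        # Divisive normalization: response / (sigma + pooled neighborhood activity).
        return x / (self.sigma.abs() + pooled)

x = torch.randn(1, 16, 32, 32)            # a batch of feature maps
print(SimpleDivisiveNorm(16)(x).shape)    # torch.Size([1, 16, 32, 32])
```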

Learning compact generalizable neural representations supporting perceptual grouping

no code implementations • 21 Jun 2020 • Vijay Veerabadran, Virginia R. de Sa

Work at the intersection of vision science and deep learning is starting to explore the efficacy of deep convolutional networks (DCNs) and recurrent networks in solving perceptual grouping problems that underlie primate visual recognition and segmentation.

Pathfinder Transfer Learning

Deep Transfer Learning with Ridge Regression

no code implementations • 11 Jun 2020 • Shuai Tang, Virginia R. de Sa

The large amount of online data and vast array of computing resources enable current researchers in both industry and academia to employ the power of deep learning with neural networks.

regression Transfer Learning
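
The excerpt above is mostly motivational, but the title names a concrete recipe: fit a closed-form ridge regression on top of features taken from a pretrained deep network. The sketch below shows that generic recipe with made-up shapes and random stand-in features; it is not the paper's exact pipeline.

```python
import numpy as np

def ridge_fit(features, targets, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam * I)^{-1} X^T Y.
    features: (n_samples, d) frozen deep features (assumed precomputed).
    targets:  (n_samples, k) one-hot labels or regression targets."""
    d = features.shape[1]
    gram = features.T @ features + lam * np.eye(d)
    return np.linalg.solve(gram, features.T @ targets)

def ridge_predict(weights, features):
    return features @ weights

# Toy example with random "deep features" standing in for a pretrained encoder.
rng = np.random.default_rng(0)
X_train, Y_train = rng.normal(size=(100, 64)), rng.normal(size=(100, 10))
W = ridge_fit(X_train, Y_train, lam=10.0)
print(ridge_predict(W, rng.normal(size=(5, 64))).shape)   # (5, 10)
```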

Pain Evaluation in Video using Extended Multitask Learning from Multidimensional Measurements

1 code implementation • 13 Dec 2019 • Xiaojing Xu, Jeannie S. Huang, Virginia R. de Sa

Previous work on automated pain detection from facial expressions has primarily focused on frame-level pain metrics based on specific facial muscle activations, such as Prkachin and Solomon Pain Intensity (PSPI).

Pain Intensity Regression
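
For reference, the Prkachin and Solomon Pain Intensity (PSPI) mentioned above is a frame-level score computed directly from facial action unit (AU) intensities. The snippet below implements that standard formula; the dictionary keys are assumed labels for the AU intensity codes.

```python
def pspi(au):
    """Prkachin & Solomon Pain Intensity from FACS action unit intensities.
    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43
    AU4, AU6, AU7, AU9, AU10 are coded 0-5; AU43 (eye closure) is 0 or 1."""
    return (au["AU4"]
            + max(au["AU6"], au["AU7"])
            + max(au["AU9"], au["AU10"])
            + au["AU43"])

# Example frame: brow lowering and orbital tightening, eyes open.
print(pspi({"AU4": 3, "AU6": 2, "AU7": 1, "AU9": 0, "AU10": 0, "AU43": 0}))  # 5
```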

V1Net: A computational model of cortical horizontal connections

no code implementations • 25 Sep 2019 • Vijay Veerabadran, Virginia R. de Sa

The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes.

Boundary Detection Object Recognition

An Empirical Study on Post-processing Methods for Word Embeddings

no code implementations • 27 May 2019 • Shuai Tang, Mahta Mousavi, Virginia R. de Sa

Word embeddings learnt from large corpora have been adopted in various natural language processing applications and have served as general input representations for learning systems.

Retrieval Sentence +1
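
The excerpt does not say which post-processing methods are compared, so the sketch below shows one widely studied example of what "post-processing" means here (mean centering plus removal of the top principal components, in the style of all-but-the-top); it should be read as a generic illustration, not as the specific method evaluated in the paper.

```python
import numpy as np

def all_but_the_top(embeddings, n_components=2):
    """A common post-processing step for word embeddings: subtract the mean
    vector and project out the top principal components.
    (Shown only as an example of embedding post-processing in general.)"""
    centered = embeddings - embeddings.mean(axis=0)
    # Principal directions via SVD of the centered embedding matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = vt[:n_components]                       # (n_components, dim)
    return centered - centered @ top.T @ top

emb = np.random.default_rng(0).normal(size=(5000, 300))
print(all_but_the_top(emb).shape)                 # (5000, 300)
```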

A Simple Recurrent Unit with Reduced Tensor Product Representations

1 code implementation • 29 Oct 2018 • Shuai Tang, Paul Smolensky, Virginia R. de Sa

Widely used recurrent units, including Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations is still questionable.

Natural Language Inference
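
A Tensor Product Representation binds "filler" vectors to "role" vectors with outer products and sums the results; the paper builds a recurrent unit around a reduced form of this idea. The sketch below shows only the basic binding and unbinding operations with made-up dimensions, not the recurrent unit itself.

```python
import numpy as np

def tpr_bind(fillers, roles):
    """Tensor Product Representation: sum of outer products filler_i x role_i.
    fillers: (n, d_f), roles: (n, d_r)  ->  (d_f, d_r) representation."""
    return fillers.T @ roles

def tpr_unbind(tpr, role):
    """Recover the filler bound to `role` (exact when roles are orthonormal)."""
    return tpr @ role

rng = np.random.default_rng(0)
fillers = rng.normal(size=(3, 8))          # three symbol embeddings
roles = np.eye(4)[:3]                      # three orthonormal role vectors
T = tpr_bind(fillers, roles)
print(np.allclose(tpr_unbind(T, roles[1]), fillers[1]))   # True
```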

Improving Sentence Representations with Consensus Maximisation

no code implementations • ICLR 2019 • Shuai Tang, Virginia R. de Sa

Consensus maximisation learning can provide self-supervision when different views of the same data are available.

Self-Supervised Learning Sentence
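
One common way to turn this kind of multi-view agreement into a training signal is to make two encoders' representations of the same sentence more similar to each other than to representations of other sentences in the batch. The sketch below is a generic agreement loss of that form with assumed batch and embedding sizes; it is not the paper's specific objective.

```python
import torch
import torch.nn.functional as F

def agreement_loss(view_a, view_b, temperature=0.1):
    """Generic multi-view agreement objective: representations of the same
    item from two views should be more similar to each other than to
    representations of other items in the batch (not the paper's exact loss)."""
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature              # (batch, batch) similarities
    targets = torch.arange(a.size(0))             # matching pairs on the diagonal
    return F.cross_entropy(logits, targets)

# Two hypothetical encoders producing 128-d sentence vectors for a batch of 32.
za, zb = torch.randn(32, 128), torch.randn(32, 128)
print(agreement_loss(za, zb).item())
```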

Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning

no code implementations • ACL 2019 • Shuai Tang, Virginia R. de Sa

The encoder-decoder models for unsupervised sentence representation learning tend to discard the decoder after being trained on a large unlabelled corpus, since only the encoder is needed to map the input sentence into a vector representation.

Representation Learning Sentence
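
The underlying observation is that if the decoder is constrained to be invertible, it need not be thrown away after training, because its inverse can still be applied at test time. The sketch below only demonstrates that invertibility property for a linear decoder with orthogonal weights; it is not the paper's model, and all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear decoder constrained to be orthogonal is trivially invertible:
# its transpose is its inverse, so the trained decoder can be reused.
random_matrix = rng.normal(size=(128, 128))
decoder, _ = np.linalg.qr(random_matrix)          # orthogonal decoder weights

sentence_vec = rng.normal(size=128)
decoded = decoder @ sentence_vec                  # decoder's forward map
recovered = decoder.T @ decoded                   # inverse = transpose
print(np.allclose(recovered, sentence_vec))       # True
```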

Multi-view Sentence Representation Learning

no code implementations • 18 May 2018 • Shuai Tang, Virginia R. de Sa

Multi-view learning can provide self-supervision when different views of the same data are available.

Multi-view Learning Representation Learning +1

Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning

no code implementations • ICLR 2018 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa

Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language.

Representation Learning Sentence

Trimming and Improving Skip-thought Vectors

no code implementations • 9 Jun 2017 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa

The skip-thought model has been proven to be effective at learning sentence representations and capturing sentence semantics.

Sentence Text Classification +1
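
For context, the skip-thought objective encodes the current sentence and trains decoders to generate its neighboring sentences, which forces the encoder's vector to carry sentence-level meaning. The sketch below is a bare-bones version of that training step with assumed vocabulary and hidden sizes; the trimming and improvements proposed in the paper are not reflected here.

```python
import torch
import torch.nn as nn

class TinySkipThought(nn.Module):
    """Bare-bones skip-thought: encode the current sentence, then predict the
    tokens of the previous and next sentences from that single vector."""

    def __init__(self, vocab_size=1000, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.prev_decoder = nn.GRU(emb, hidden, batch_first=True)
        self.next_decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, cur, prev, nxt):
        _, h = self.encoder(self.embed(cur))               # sentence vector
        loss = 0.0
        for dec, target in ((self.prev_decoder, prev), (self.next_decoder, nxt)):
            states, _ = dec(self.embed(target[:, :-1]), h) # teacher forcing
            logits = self.out(states)
            loss = loss + nn.functional.cross_entropy(
                logits.reshape(-1, logits.size(-1)), target[:, 1:].reshape(-1))
        return loss

tokens = torch.randint(0, 1000, (8, 12))                   # toy batch of token ids
print(TinySkipThought()(tokens, tokens, tokens).item())
```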

Rethinking Skip-thought: A Neighborhood based Approach

no code implementations • WS 2017 • Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa

We train our skip-thought neighbor model on a large corpus with continuous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks.

General Classification
