Search Results for author: Jaime G. Carbonell

Found 11 papers, 3 papers with code

Harnessing Code Switching to Transcend the Linguistic Barrier

no code implementations • 30 Jan 2020 • Ashiqur R. KhudaBukhsh, Shriphani Palakodety, Jaime G. Carbonell

Code mixing (or code switching) is a common phenomenon observed in social-media content generated by a linguistically diverse user base.
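
As a concrete illustration of what code mixing looks like, here is a toy sketch of token-level language tagging on a romanized Hindi-English sentence. The wordlists, tags, and sentence are invented for the example; this is not the paper's pipeline.

```python
# Toy illustration of code mixing: token-level language tagging using two
# tiny hand-made wordlists. Purely hypothetical; not the paper's method.
HINDI = {"yeh", "bahut", "achhi", "hai"}
ENGLISH = {"this", "movie", "is", "very", "good"}

def tag_tokens(sentence):
    """Label each token as Hindi ('hi'), English ('en'), or unknown."""
    tags = []
    for tok in sentence.lower().split():
        lang = "hi" if tok in HINDI else "en" if tok in ENGLISH else "unk"
        tags.append((tok, lang))
    return tags

# A code-mixed sentence switches language mid-utterance:
print(tag_tokens("yeh movie bahut achhi hai"))
# [('yeh', 'hi'), ('movie', 'en'), ('bahut', 'hi'), ('achhi', 'hi'), ('hai', 'hi')]
```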

Voice for the Voiceless: Active Sampling to Detect Comments Supporting the Rohingyas

no code implementations • 8 Oct 2019 • Shriphani Palakodety, Ashiqur R. KhudaBukhsh, Jaime G. Carbonell

The Rohingya refugee crisis is one of the biggest humanitarian crises of modern times, with more than 600,000 Rohingyas rendered homeless according to the United Nations High Commissioner for Refugees.

Active Learning • Hate Speech Detection +1

Hope Speech Detection: A Computational Analysis of the Voice of Peace

no code implementations • 11 Sep 2019 • Shriphani Palakodety, Ashiqur R. KhudaBukhsh, Jaime G. Carbonell

The recent Pulwama terror attack (February 14, 2019, Pulwama, Kashmir) triggered a chain of escalating events between India and Pakistan, adding another episode to their 70-year-old dispute over Kashmir.

Hope Speech Detection • Language Identification +1

A Little Annotation does a Lot of Good: A Study in Bootstrapping Low-resource Named Entity Recognizers

1 code implementation • IJCNLP 2019 • Aditi Chaudhary, Jiateng Xie, Zaid Sheikh, Graham Neubig, Jaime G. Carbonell

Most state-of-the-art models for named entity recognition (NER) rely on the availability of large amounts of labeled data, making them challenging to extend to new, lower-resourced languages.

Active Learning • Cross-Lingual Transfer +4
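
Since this entry pairs active learning with low-resource NER, a minimal sketch of the generic idea may help: rank unlabeled sentences by the tagger's uncertainty and send only the most uncertain ones to annotators. The `tagger` interface and the entropy acquisition function below are assumptions for illustration, not necessarily the exact strategy of Chaudhary et al.

```python
# Generic uncertainty-based active sampling for NER (an illustrative sketch,
# not the paper's method). Assumes `tagger(sentence)` returns a
# (num_tokens, num_labels) array of label probabilities.
import numpy as np

def sentence_uncertainty(token_probs):
    """Mean per-token entropy of the tagger's label distributions."""
    eps = 1e-12
    entropy = -(token_probs * np.log(token_probs + eps)).sum(axis=1)
    return float(entropy.mean())

def select_for_annotation(unlabeled_sentences, tagger, budget=100):
    """Pick the `budget` sentences the tagger is least sure about."""
    scored = sorted(unlabeled_sentences,
                    key=lambda s: sentence_uncertainty(tagger(s)),
                    reverse=True)
    return scored[:budget]
```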

The Nonlinearity Coefficient - Predicting Generalization in Deep Neural Networks

no code implementations • ICLR 2019 • George Philipp, Jaime G. Carbonell

Via an extensive empirical study, we show that the nonlinearity coefficient (NLC) is a powerful predictor of test error and that attaining a right-sized NLC is essential for optimal performance.
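
For readers curious what such a statistic might look like operationally, below is a hedged sketch of a Jacobian-based estimate in the spirit of the NLC, roughly sqrt(E_x[Tr(J(x) Cov_x J(x)^T)] / Tr(Cov_f)). The exact definition and estimator in the paper may differ, and the network `f` and batch `X` are placeholders.

```python
# Hedged sketch: a Jacobian-based nonlinearity statistic in the spirit of
# the NLC. The paper's exact definition/estimator may differ.
import torch

def estimate_nlc(f, X):
    """f: network mapping (n, d_in) -> (n, d_out); X: (n, d_in) data batch."""
    Y = f(X).detach()
    cov_x = torch.cov(X.T)            # (d_in, d_in) input covariance
    cov_f = torch.cov(Y.T)            # (d_out, d_out) output covariance
    num = 0.0
    for i in range(X.shape[0]):
        J = torch.autograd.functional.jacobian(f, X[i:i + 1])
        J = J.reshape(Y.shape[1], X.shape[1])   # per-example (d_out, d_in)
        num += torch.trace(J @ cov_x @ J.T).item()
    num /= X.shape[0]
    return (num / torch.trace(cov_f).item()) ** 0.5
```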

Gradients explode - Deep Networks are shallow - ResNet explained

no code implementations • ICLR 2018 • George Philipp, Dawn Song, Jaime G. Carbonell

Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SELU nonlinearities "solve" the exploding gradient problem, we show that this is not the case: in a range of popular MLP architectures, exploding gradients exist and limit the depth to which networks can be effectively trained, both in theory and in practice.

The exploding gradient problem demystified - definition, prevalence, impact, origin, tradeoffs, and solutions

no code implementations • 15 Dec 2017 • George Philipp, Dawn Song, Jaime G. Carbonell

Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SELU nonlinearities "solve" the exploding gradient problem, we show that this is not the case in general: in a range of popular MLP architectures, exploding gradients exist and limit the depth to which networks can be effectively trained, both in theory and in practice.
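
The claim shared by these two companion papers is easy to probe empirically. Below is a minimal, assumption-laden sketch (the depth, width, and loss are arbitrary toy choices, not the authors' experiments): it prints per-layer gradient norms of a deep SELU MLP so one can see how they scale with depth.

```python
# Minimal probe of gradient scaling with depth in a deep SELU MLP.
# Arbitrary toy setup; not the paper's code.
import torch

torch.manual_seed(0)
depth, width = 50, 100
layers = []
for _ in range(depth):
    layers += [torch.nn.Linear(width, width), torch.nn.SELU()]
net = torch.nn.Sequential(*layers)

x = torch.randn(8, width)
net(x).pow(2).mean().backward()

# Gradient norm of each Linear layer's weights, input side first; the
# norms can differ by orders of magnitude across depth even with SELU.
for i, m in enumerate(net):
    if isinstance(m, torch.nn.Linear):
        print(f"layer {i // 2:02d}  grad norm {m.weight.grad.norm():.3e}")
```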

Nonparametric Neural Networks

no code implementations • 14 Dec 2017 • George Philipp, Jaime G. Carbonell

Automatically determining the optimal size of a neural network for a given task without prior information currently requires an expensive global search and training many networks from scratch.
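
One family of alternatives to that expensive global search is to let training itself size the network, e.g. by penalizing each hidden unit's incoming weights as a group so that useless units collapse toward zero and can be pruned. The sketch below illustrates only that generic idea; the penalty form and pruning threshold are assumptions, not necessarily the paper's objective.

```python
# Illustrative group-sparsity utilities for sizing a layer during training.
# The penalty form and pruning threshold are assumptions for this sketch.
import torch

def fan_in_group_penalty(linear: torch.nn.Linear) -> torch.Tensor:
    """Group-lasso penalty: L2 norm of each unit's incoming weights, summed."""
    return linear.weight.norm(dim=1).sum()

def live_unit_mask(linear: torch.nn.Linear, tol: float = 1e-3) -> torch.Tensor:
    """Units whose incoming weights have not collapsed to (near) zero."""
    return linear.weight.norm(dim=1) > tol

# Usage inside a training step (the weighting 1e-4 is a placeholder):
layer = torch.nn.Linear(64, 256)
penalty = fan_in_group_penalty(layer) * 1e-4   # added to the task loss
print(int(live_unit_mask(layer).sum()), "units alive")
```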
