no code implementations • 21 Dec 2022 • Luke Vilnis, Zach Fisher, Bhargav Kanagal, Patrick Murray, Sumit Sanghai
Large language models have ushered in a golden age of semantic parsing.
no code implementations • 18 Oct 2022 • Luke Vilnis, Yury Zemlyanskiy, Patrick Murray, Alexandre Passos, Sumit Sanghai
Decoding methods for large language models often trade off diversity of outputs against parallelism of computation.
1 code implementation • NeurIPS 2021 • Michael Boratko, Dongxu Zhang, Nicholas Monath, Luke Vilnis, Kenneth Clarkson, Andrew McCallum
While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs.
1 code implementation • NeurIPS 2020 • Shib Sankar Dasgupta, Michael Boratko, Dongxu Zhang, Luke Vilnis, Xiang Lorraine Li, Andrew McCallum
Geometric embeddings have recently received attention for their natural ability to represent transitive asymmetric relations via containment.
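As a rough illustration of the containment idea (a minimal sketch with hard axis-aligned boxes and invented coordinates, not this paper's smoothed/Gumbel model): each concept gets a hyperrectangle, and an asymmetric score can be read off as a ratio of intersection and box volumes.

```python
import numpy as np

def box_volume(lo, hi):
    """Volume of an axis-aligned box given min/max corners; zero if degenerate."""
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def conditional_prob(a, b):
    """P(a | b) = Vol(a ∩ b) / Vol(b) for hard boxes."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    inter_lo = np.maximum(a_lo, b_lo)
    inter_hi = np.minimum(a_hi, b_hi)
    return box_volume(inter_lo, inter_hi) / box_volume(b_lo, b_hi)

# Hypothetical 2-d boxes: "dog" nested inside "animal".
animal = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
dog    = (np.array([0.1, 0.1]), np.array([0.4, 0.4]))
print(conditional_prob(animal, dog))  # 1.0  -> "every dog is an animal"
print(conditional_prob(dog, animal))  # 0.09 -> "few animals are dogs"
```

The asymmetry of the two conditionals is what lets containment model transitive, directed relations.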
1 code implementation • AKBC 2020 • Dhruvesh Patel, Shib Sankar Dasgupta, Michael Boratko, Xiang Li, Luke Vilnis, Andrew McCallum
Box Embeddings [Vilnis et al., 2018, Li et al., 2019] represent concepts with hyperrectangles in $n$-dimensional space and are shown to be capable of modeling tree-like structures efficiently by training on a large subset of the transitive closure of the WordNet hypernym graph.
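To make the "transitive closure" training signal concrete, here is a small sketch with a toy hypernym tree (a stand-in for WordNet, not the actual dataset) that enumerates the ancestor-descendant pairs such a model would be trained to score as positive.

```python
# Toy hypernym tree: child -> parent.
parent_of = {
    "poodle": "dog",
    "beagle": "dog",
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
}

def ancestors(node):
    """All hypernyms reachable by following parent links upward."""
    out = []
    while node in parent_of:
        node = parent_of[node]
        out.append(node)
    return out

# Transitive closure: every (descendant, ancestor) pair is a positive example.
positives = [(c, a) for c in parent_of for a in ancestors(c)]
print(positives)
# e.g. ("poodle", "dog"), ("poodle", "mammal"), ("poodle", "animal"), ...
```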
no code implementations • ICLR 2019 • Xiang Li, Luke Vilnis, Dongxu Zhang, Michael Boratko, Andrew McCallum
However, the hard edges of the boxes present difficulties for standard gradient-based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile.
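A quick numeric way to see the hard-edge problem (sketch only, with made-up 1-d intervals; the cited paper's actual smoothing differs): when two boxes are disjoint, the hard intersection volume is exactly zero over a whole neighborhood of parameter values, so its gradient is zero and training receives no signal, whereas a softened overlap such as a softplus stays differentiable.

```python
import numpy as np

def hard_overlap(a_lo, a_hi, b_lo, b_hi):
    """Length of the intersection of two 1-d intervals (0 if disjoint)."""
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def soft_overlap(a_lo, a_hi, b_lo, b_hi, temp=1.0):
    """Softplus-smoothed overlap: stays positive, so gradients never vanish."""
    gap = min(a_hi, b_hi) - max(a_lo, b_lo)
    return temp * np.log1p(np.exp(gap / temp))

# Two disjoint intervals: nudging an endpoint leaves the hard overlap at 0.
print(hard_overlap(0.0, 1.00, 2.0, 3.0))   # 0.0
print(hard_overlap(0.0, 1.01, 2.0, 3.0))   # still 0.0 -> zero gradient
print(soft_overlap(0.0, 1.00, 2.0, 3.0))   # ~0.313, small but positive
print(soft_overlap(0.0, 1.01, 2.0, 3.0))   # ~0.317 -> useful gradient
```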
no code implementations • CoNLL 2018 • Dung Thai, Sree Harsha Ramesh, Shikhar Murty, Luke Vilnis, Andrew McCallum
Complex textual information extraction tasks are often posed as sequence labeling or "shallow parsing", where fields are extracted using local labels made consistent through probabilistic inference in a graphical model with constrained transitions.
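For readers unfamiliar with "constrained transitions", a common concrete instance is a BIO tag scheme in which, for example, I-PER may only follow B-PER or I-PER. The generic sketch below (not this paper's exact model) builds a transition mask that Viterbi-style decoding can apply by adding a large negative penalty to forbidden tag pairs.

```python
import numpy as np

tags = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

def allowed(prev, curr):
    """BIO constraint: I-X must follow B-X or I-X of the same type X."""
    if curr.startswith("I-"):
        typ = curr[2:]
        return prev in (f"B-{typ}", f"I-{typ}")
    return True  # O and B-* may follow any tag

# Penalty matrix added to transition scores during decoding.
mask = np.array([[0.0 if allowed(p, c) else -1e9 for c in tags] for p in tags])
print(mask)  # forbidden cells (e.g. O -> I-PER) get -1e9
```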
2 code implementations • ACL 2018 • Shikhar Murty*, Patrick Verga*, Luke Vilnis, Irena Radovanovic, Andrew McCallum
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies.
no code implementations • ACL 2018 • Luke Vilnis, Xiang Li, Shikhar Murty, Andrew McCallum
Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g., entailment graphs).
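For context, the Order Embedding penalty cited here is simple enough to state in a few lines. A sketch under the convention of Vendrov et al. (2016), where the more general concept should have coordinate-wise smaller values than the more specific one, with invented vectors:

```python
import numpy as np

def order_violation(specific, general):
    """E = ||max(0, general - specific)||^2; zero iff general <= specific
    coordinate-wise (reversed product order of Vendrov et al., 2016)."""
    return float(np.sum(np.maximum(0.0, general - specific) ** 2))

# Hypothetical vectors: "dog" (general) sits below "poodle" (specific) in every coordinate.
poodle = np.array([2.0, 3.0, 1.5])
dog    = np.array([1.0, 2.0, 1.0])
print(order_violation(poodle, dog))  # 0.0  -> the hypernym pair satisfies the order
print(order_violation(dog, poodle))  # 2.25 -> the reversed pair is penalized
```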
6 code implementations • ICLR 2018 • Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, Andrew McCallum
Knowledge bases (KBs), both automatically and manually constructed, are often incomplete; many of the missing valid facts can be inferred from the KB by synthesizing existing information.
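As a toy illustration of "synthesizing existing information" (a generic multi-hop example, not this paper's method or dataset), a missing fact can often be read off a path of stored facts:

```python
# Tiny KB of (head, relation) -> tail facts; the grandfather fact is absent.
kb = {
    ("ann", "hasFather"): "bob",
    ("bob", "hasFather"): "carl",
}

def infer_grandfather(person):
    """Compose two hasFather hops to infer an unseen hasGrandfather fact."""
    father = kb.get((person, "hasFather"))
    if father is None:
        return None
    return kb.get((father, "hasFather"))

print(infer_grandfather("ann"))  # "carl", even though the fact is not stored
```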
no code implementations • 15 Nov 2017 • Shikhar Murty, Patrick Verga, Luke Vilnis, Andrew McCallum
We consider the challenging problem of entity typing over an extremely fine-grained set of types, wherein a single mention or entity can have many simultaneous and often hierarchically structured types.
no code implementations • NAACL 2018 • Haw-Shiuan Chang, ZiYun Wang, Luke Vilnis, Andrew McCallum
Modeling hypernymy, such as "poodle is-a dog", is an important generalization aid to many NLP tasks, including entailment, coreference, relation extraction, and question answering.
no code implementations • 2 Aug 2017 • Dung Thai, Shikhar Murty, Trapit Bansal, Luke Vilnis, David Belanger, Andrew McCallum
In textual information extraction and other sequence labeling tasks it is now common to use recurrent neural networks (such as LSTMs) to form rich embedded representations of long-term input co-occurrence patterns.
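A minimal sketch of the kind of recurrent encoder this entry alludes to (generic PyTorch code, not this paper's architecture): a bidirectional LSTM that turns token ids into contextual vectors scored per token.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Token embeddings -> BiLSTM context vectors -> per-token label scores."""
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):          # (batch, seq_len) int64 ids
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                 # (batch, seq_len, num_tags)

scores = BiLSTMTagger(vocab_size=1000, num_tags=5)(torch.randint(0, 1000, (2, 7)))
print(scores.shape)  # torch.Size([2, 7, 5])
```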
no code implementations • 1 Aug 2017 • Xiang Li, Luke Vilnis, Andrew McCallum
Recent work in learning ontologies (hierarchical and partially-ordered structures) has leveraged the intrinsic geometry of spaces of learned representations to make predictions that automatically obey complex structural constraints.
4 code implementations • 21 Nov 2015 • Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens
The success of deep feedforward and recurrent networks is partially attributed to architectural innovations such as convolutional and long short-term memory networks.
14 code implementations • CoNLL 2016 • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, Samy Bengio
The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation.
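To make "one word at a time" concrete, a decoding loop looks like the sketch below; `next_token_logits` is a hypothetical stand-in for a trained RNNLM's output layer, and each step conditions only on the prefix generated so far, with no separate global sentence representation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<eos>", "the", "cat", "sat", "on", "mat"]

def next_token_logits(prefix):
    """Hypothetical stand-in for an RNNLM step (a real model conditions on prefix)."""
    return rng.normal(size=len(vocab))

def sample_sentence(max_len=10):
    prefix = []
    for _ in range(max_len):
        logits = next_token_logits(prefix)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        token = rng.choice(vocab, p=probs)   # generate one word at a time
        if token == "<eos>":
            break
        prefix.append(str(token))
    return " ".join(prefix)

print(sample_sentence())
```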
no code implementations • IJCNLP 2015 • Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum
We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components.
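The entry is terse, so as a hedged guess at the flavor of speedup it describes (not necessarily the paper's exact algorithm): if a linear classifier's score is a dot product accumulated feature group by feature group, inference can stop early once the running margin is already decisive. The margin criterion and grouping below are invented for illustration.

```python
import numpy as np

def early_stop_score(weight_groups, feature_groups, margin=2.0):
    """Accumulate a dot product group by group; stop once |partial score|
    clears a confidence margin (hypothetical stopping rule)."""
    score, used = 0.0, 0
    for w, f in zip(weight_groups, feature_groups):
        score += float(np.dot(w, f))
        used += 1
        if abs(score) >= margin:       # confident enough; skip remaining groups
            break
    return score, used

w_groups = [np.array([1.5, -0.5]), np.array([0.2, 0.1]), np.array([0.05])]
f_groups = [np.array([2.0, 1.0]),  np.array([1.0, 1.0]), np.array([1.0])]
print(early_stop_score(w_groups, f_groups))  # (2.5, 1): stops after the first group
```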
no code implementations • 4 Mar 2015 • Luke Vilnis, David Belanger, Daniel Sheldon, Andrew McCallum
Many inference problems in structured prediction are naturally solved by augmenting a tractable dependency structure with complex, non-local auxiliary objectives.
1 code implementation • 20 Dec 2014 • Luke Vilnis, Andrew McCallum
Current work in lexical distributed representations maps each word to a point vector in low-dimensional space.
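This entry's paper replaces each point vector with a Gaussian density, so that an asymmetric divergence can score entailment-like relations. A minimal sketch of KL divergence between two diagonal Gaussians, with invented parameters:

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ): an asymmetric score."""
    d = len(mu0)
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - d
                  + np.sum(np.log(var1) - np.log(var0)))

# Invented parameters: a broad "animal" density and a narrow "poodle" density.
mu_animal, var_animal = np.zeros(3), np.full(3, 1.0)
mu_poodle, var_poodle = np.array([0.2, -0.1, 0.3]), np.full(3, 0.1)

# Asymmetry: the narrow, specific density sits inside the broad, general one
# more cheaply than the reverse.
print(kl_diag_gaussians(mu_poodle, var_poodle, mu_animal, var_animal))  # ~2.2
print(kl_diag_gaussians(mu_animal, var_animal, mu_poodle, var_poodle))  # ~10.7
```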