no code implementations • 30 Nov 2020 • Pedro Domingos

Deep learning's successes are often attributed to its ability to automatically discover new representations of the data, rather than relying on handcrafted features as other learning methods do.

1 code implementation • 28 Sep 2020 • William Agnew, Christopher Xie, Aaron Walsman, Octavian Murad, Caelen Wang, Pedro Domingos, Siddhartha Srinivasa

By using these priors over the physical properties of objects, our system improves not only reconstruction quality as measured by standard visual metrics, but also the performance of model-based control on a variety of robotic manipulation tasks in challenging, cluttered environments.

no code implementations • 3 Mar 2020 • William Agnew, Pedro Domingos

Current deep reinforcement learning (RL) approaches incorporate minimal prior knowledge about the environment, limiting computational and sample efficiency.

no code implementations • 10 Nov 2017 • Tarek R. Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kuehnberger, Luis C. Lamb, Daniel Lowd, Priscila Machado Vieira Lima, Leo de Penning, Gadi Pinkas, Hoifung Poon, Gerson Zaverucha

Recent studies in cognitive science, artificial intelligence, and psychology have produced a number of cognitive models of reasoning, learning, and language that are underpinned by computation.

1 code implementation • ICLR 2018 • Abram L. Friesen, Pedro Domingos

Based on this, we develop a recursive mini-batch algorithm for learning deep hard-threshold networks that includes the popular but poorly justified straight-through estimator as a special case.
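The straight-through estimator mentioned above can be sketched in a few lines of NumPy. This is a generic illustration of the idea (forward pass applies the hard threshold; backward pass pretends it was the identity), not the paper's recursive mini-batch algorithm:

```python
import numpy as np

def hard_threshold(x):
    # Forward pass: a non-differentiable step activation.
    return (x > 0).astype(float)

def ste_backward(grad_output, x):
    # Straight-through estimator: propagate the incoming gradient
    # unchanged, as if the threshold were the identity function.
    return grad_output

x = np.array([-1.5, 0.2, 3.0])
y = hard_threshold(x)                  # forward: [0., 1., 1.]
g = ste_backward(np.ones_like(x), x)   # backward: [1., 1., 1.]
```

The paper's contribution is precisely to justify when this heuristic gradient substitution is sound; the sketch only shows the mechanics being justified.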

no code implementations • 11 Nov 2016 • Abram L. Friesen, Pedro Domingos

We illustrate the power and generality of this approach by applying it to a new type of structured prediction problem: learning a nonconvex function that can be globally optimized in polynomial time.

no code implementations • 8 Nov 2016 • Abram L. Friesen, Pedro Domingos

Similarly to DPLL-style SAT solvers and recursive conditioning in probabilistic inference, our algorithm, RDIS, recursively sets variables so as to simplify and decompose the objective function into approximately independent sub-functions, until the remaining functions are simple enough to be optimized by standard techniques like gradient descent.
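The recursive set-and-decompose strategy can be illustrated with a toy discrete analogue (RDIS itself operates on continuous objectives and uses gradient descent at the leaves; here brute-force enumeration over a small domain stands in for the leaf optimizer, and the variable-choice heuristic is a placeholder):

```python
def decompose(terms, free_vars):
    """Group terms into connected components of the variable-interaction
    graph restricted to the still-free variables."""
    groups = []  # list of (scope_set, term_list)
    for scope, f in terms:
        s = set(scope) & free_vars
        touching = [g for g in groups if g[0] & s]
        for g in touching:
            groups.remove(g)
        merged_scope = s.union(*(g[0] for g in touching))
        merged_terms = [t for g in touching for t in g[1]] + [(scope, f)]
        groups.append((merged_scope, merged_terms))
    return groups

def rdis_min(terms, free_vars, assignment, domain):
    """Recursively set variables so the objective decomposes into
    independent sub-functions, then minimize each separately."""
    if not free_vars:
        return sum(f(assignment) for _, f in terms)
    v = min(free_vars)  # placeholder variable-selection heuristic
    best = float("inf")
    for val in domain:
        assignment[v] = val
        total = sum(rdis_min(group, scope, assignment, domain)
                    for scope, group in decompose(terms, free_vars - {v}))
        best = min(best, total)
    del assignment[v]
    return best

# Example: minimize (a + b - 1)^2 + b*c over {0, 1}^3.
terms = [({"a", "b"}, lambda s: (s["a"] + s["b"] - 1) ** 2),
         ({"b", "c"}, lambda s: s["b"] * s["c"])]
best = rdis_min(terms, {"a", "b", "c"}, {}, [0, 1])
```

Fixing `b` disconnects the two terms, so each remaining sub-function is minimized independently — the same structural payoff the abstract describes for DPLL-style conditioning.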

no code implementations • 22 Jan 2016 • Robert Peharz, Robert Gens, Franz Pernkopf, Pedro Domingos

We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights and give an interpretation of augmented SPNs as Bayesian networks.

no code implementations • 7 Jul 2015 • Aniruddh Nath, Pedro Domingos

While most previous statistical debugging methods generalize over many executions of a single program, TFLMs are trained on a corpus of previously seen buggy programs, and learn to identify recurring patterns of bugs.

no code implementations • 2 May 2014 • Mathias Niepert, Pedro Domingos

A sequence of random variables is exchangeable if its joint distribution is invariant under variable permutations.
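The definition above can be checked by brute force for a small discrete joint distribution. The sketch below is only an illustration of the definition, not the paper's tractability machinery; the mixture-of-coins example is the classic de Finetti-style exchangeable distribution:

```python
import itertools

def is_exchangeable(joint, n):
    """True iff the joint over n binary variables is invariant
    under every permutation of the variable positions."""
    for perm in itertools.permutations(range(n)):
        for x in itertools.product([0, 1], repeat=n):
            permuted = tuple(x[perm[i]] for i in range(n))
            if abs(joint[x] - joint[permuted]) > 1e-12:
                return False
    return True

def coin_mixture(x):
    # Equal mixture of two i.i.d. biased coins: exchangeable,
    # since each component depends only on the count of ones.
    def iid(p):
        prob = 1.0
        for xi in x:
            prob *= p if xi else (1 - p)
        return prob
    return 0.5 * iid(0.2) + 0.5 * iid(0.9)

joint = {x: coin_mixture(x) for x in itertools.product([0, 1], repeat=3)}
exch = is_exchangeable(joint, 3)
# An order-sensitive joint fails: P(0,1) != P(1,0).
ordered = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.1, (1, 1): 0.1}
not_exch = is_exchangeable(ordered, 2)
```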

no code implementations • 26 Sep 2013 • Vibhav Gogate, Pedro Domingos

In this paper, we present structured message passing (SMP), a unifying framework for approximate inference algorithms that take advantage of structured representations such as algebraic decision diagrams and sparse hash tables.
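One of the structured representations the abstract names, the sparse hash table, can be illustrated with a minimal sum-out operation on a sparse factor. This is a generic sketch of the data-structure idea (absent entries are implicitly zero, so zeros are never stored or iterated), not the SMP framework itself:

```python
from collections import defaultdict

def sum_out(factor, var):
    """Sum a variable out of a sparse factor.
    A factor maps frozensets of (variable, value) pairs to weights;
    missing assignments are implicitly zero, preserving sparsity."""
    out = defaultdict(float)
    for assignment, weight in factor.items():
        reduced = frozenset((v, x) for v, x in assignment if v != var)
        out[reduced] += weight
    return dict(out)

# A sparse pairwise factor over A, B: only 3 of 4 entries are nonzero.
phi = {
    frozenset({("A", 0), ("B", 0)}): 0.5,
    frozenset({("A", 0), ("B", 1)}): 0.2,
    frozenset({("A", 1), ("B", 1)}): 0.3,
}
message = sum_out(phi, "A")  # message to B over its two values
```

The cost is proportional to the number of nonzero entries rather than the full table size, which is the advantage structured representations buy in message passing.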

1 code implementation • 14 Feb 2012 • Hoifung Poon, Pedro Domingos

Experiments show that inference and learning with SPNs can be both faster and more accurate than with standard deep networks.
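Why inference in an SPN is fast can be seen from a toy evaluator: computing the probability of a complete assignment is a single bottom-up pass over sum and product nodes. This is a minimal illustration, not the paper's implementation:

```python
class Leaf:
    """Univariate distribution over one discrete variable."""
    def __init__(self, var, probs):
        self.var, self.probs = var, probs
    def eval(self, x):
        return self.probs[x[self.var]]

class Product:
    """Product node: children must have disjoint scopes."""
    def __init__(self, children):
        self.children = children
    def eval(self, x):
        out = 1.0
        for c in self.children:
            out *= c.eval(x)
        return out

class Sum:
    """Sum node: a weighted mixture of children over the same scope."""
    def __init__(self, weighted):
        self.weighted = weighted  # list of (weight, child)
    def eval(self, x):
        return sum(w * c.eval(x) for w, c in self.weighted)

# A two-component mixture over binary variables X0 and X1.
spn = Sum([
    (0.3, Product([Leaf(0, {0: 0.9, 1: 0.1}), Leaf(1, {0: 0.2, 1: 0.8})])),
    (0.7, Product([Leaf(0, {0: 0.4, 1: 0.6}), Leaf(1, {0: 0.5, 1: 0.5})])),
])
p = spn.eval({0: 0, 1: 1})
total = sum(spn.eval({0: a, 1: b}) for a in (0, 1) for b in (0, 1))
```

With normalized weights and leaves, the network is a valid distribution (`total` sums to 1), and each query costs one pass linear in the network size — the source of the speed claim in the abstract.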

no code implementations • NeurIPS 2010 • Vibhav Gogate, William Webb, Pedro Domingos

We present an algorithm for learning high-treewidth Markov networks where inference is still tractable.

no code implementations • NeurIPS 2010 • Daniel Lowd, Pedro Domingos

Arithmetic circuits (ACs) exploit context-specific independence and determinism to allow exact inference even in networks with high treewidth.
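The exact-inference claim can be illustrated with the network polynomial that an AC compactly represents: evaluating it with evidence indicators set appropriately yields exact marginals in one pass. The sketch below writes the polynomial out explicitly for a two-node network P(A)P(B|A); a real AC would share sub-expressions instead of enumerating terms:

```python
def network_poly(lam_a, lam_b, theta_a, theta_b_given_a):
    # f = sum over a, b of lambda_a * lambda_b * P(a) * P(b|a)
    return sum(
        lam_a[a] * lam_b[b] * theta_a[a] * theta_b_given_a[a][b]
        for a in (0, 1) for b in (0, 1)
    )

theta_a = {0: 0.6, 1: 0.4}
theta_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}

# All indicators set to 1: sums out everything, giving the partition 1.0.
partition = network_poly({0: 1, 1: 1}, {0: 1, 1: 1}, theta_a, theta_b)
# Evidence B=1: zero out the other indicator of B, marginalize A.
p_b1 = network_poly({0: 1, 1: 1}, {0: 0, 1: 1}, theta_a, theta_b)
```

Setting an indicator to 1 marginalizes that value in, and setting it to 0 rules it out; determinism and context-specific independence let the circuit form of this polynomial stay small even when treewidth is high.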
