no code implementations • 28 Jun 2024 • Xinghua Lou, Meet Dave, Shrinu Kushagra, Miguel Lazaro-Gredilla, Kevin Murphy
The transformer baseline is based on the MTR model, which predicts multiple future trajectories conditioned on the past trajectories and static road layout features.
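Purely for illustration, the sketch below shows what a multi-modal, map-conditioned trajectory prediction head of this kind might look like. The module names, dimensions, number of modes, and prediction horizon are my assumptions for a minimal example, not the MTR implementation or the paper's code.

```python
# Minimal sketch of an MTR-style multi-modal trajectory prediction baseline.
# All names, dimensions, and the number of modes are illustrative assumptions.
import torch
import torch.nn as nn


class TrajectoryBaseline(nn.Module):
    def __init__(self, d_model=128, num_modes=6, horizon=80):
        super().__init__()
        self.num_modes = num_modes
        self.horizon = horizon
        # Separate encoders for past agent trajectories and static map points.
        self.agent_encoder = nn.Linear(2, d_model)   # (x, y) per past step
        self.map_encoder = nn.Linear(2, d_model)     # (x, y) per map point
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Heads: future (x, y) waypoints for each mode, plus mode confidences.
        self.traj_head = nn.Linear(d_model, num_modes * horizon * 2)
        self.conf_head = nn.Linear(d_model, num_modes)

    def forward(self, past_xy, map_xy):
        # past_xy: (B, T_past, 2); map_xy: (B, M, 2)
        tokens = torch.cat([self.agent_encoder(past_xy),
                            self.map_encoder(map_xy)], dim=1)
        ctx = self.encoder(tokens).mean(dim=1)            # (B, d_model)
        trajs = self.traj_head(ctx).view(-1, self.num_modes, self.horizon, 2)
        confs = self.conf_head(ctx).softmax(dim=-1)       # mode probabilities
        return trajs, confs


model = TrajectoryBaseline()
trajs, confs = model(torch.randn(4, 11, 2), torch.randn(4, 256, 2))
print(trajs.shape, confs.shape)  # torch.Size([4, 6, 80, 2]) torch.Size([4, 6])
```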
1 code implementation • 24 Jan 2023 • Ken Kansky, Skanda Vaidyanath, Scott Swingle, Xinghua Lou, Miguel Lazaro-Gredilla, Dileep George
We provide a benchmark of more than 200 PushWorld puzzles in PDDL and in an OpenAI Gym environment.
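As an illustration of how a Gym-style benchmark is typically consumed, here is a minimal random-rollout sketch. The package import, the `PushWorld-v0` environment ID, and the classic (pre-0.26) Gym step API are placeholder assumptions; consult the PushWorld repository for the actual registration names and interface.

```python
# Hypothetical usage sketch for a PushWorld puzzle exposed as a Gym environment.
# The import and environment ID below are assumptions, not confirmed names.
import gym
# import pushworld  # hypothetical: would register the PushWorld environments

env = gym.make("PushWorld-v0")  # hypothetical environment ID
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy for demonstration
    obs, reward, done, info = env.step(action)  # classic 4-tuple Gym API assumed
env.close()
```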
2 code implementations • ICML 2017 • Ken Kansky, Tom Silver, David A. Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, Dileep George
The recent adaptation of deep neural network-based methods to reinforcement learning and planning domains has yielded remarkable progress on individual tasks.
no code implementations • NeurIPS 2016 • Xinghua Lou, Ken Kansky, Wolfgang Lehrach, CC Laan, Bhaskara Marthi, D. Scott Phoenix, Dileep George
We demonstrate that a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, with orders of magnitude fewer training images than competing discriminative methods require.
no code implementations • 17 Sep 2013 • Christian Widmer, Philipp Drewe, Xinghua Lou, Shefali Umrania, Stephanie Heinrich, Gunnar Rätsch
Analysis of microscopy images can provide insight into many biological processes.
no code implementations • NeurIPS 2011 • Xinghua Lou, Fred A. Hamprecht
We study the problem of learning to track large numbers of homogeneous objects, such as cells in cell culture studies and developmental biology.
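The core subproblem in tracking many near-identical objects is associating detections across consecutive frames. The sketch below uses a hand-set squared-distance cost and an off-the-shelf assignment solver purely for illustration; the paper's contribution is learning such association costs with structured learning, which this example does not attempt.

```python
# Minimal sketch of frame-to-frame tracking by assignment for many similar
# objects (e.g. cells). The cost is plain squared distance; learning the cost
# function, as in the paper, is out of scope for this illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(prev_centroids, curr_centroids):
    """Match detections in consecutive frames by minimizing total squared distance."""
    # Pairwise squared distances between previous and current detections.
    diff = prev_centroids[:, None, :] - curr_centroids[None, :, :]
    cost = (diff ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return list(zip(rows.tolist(), cols.tolist()))


prev_frame = np.array([[10.0, 12.0], [40.0, 41.0], [75.0, 20.0]])
curr_frame = np.array([[41.0, 43.0], [11.0, 13.0], [74.0, 22.0]])
print(associate(prev_frame, curr_frame))  # [(0, 1), (1, 0), (2, 2)]
```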