no code implementations • 26 Jul 2024 • Boyi Li, Ligeng Zhu, Ran Tian, Shuhan Tan, Yuxiao Chen, Yao Lu, Yin Cui, Sushant Veer, Max Ehrlich, Jonah Philion, Xinshuo Weng, Fuzhao Xue, Andrew Tao, Ming-Yu Liu, Sanja Fidler, Boris Ivanovic, Trevor Darrell, Jitendra Malik, Song Han, Marco Pavone
Finally, we establish a benchmark for video captioning and introduce a leaderboard, aiming to accelerate advancements in video understanding, captioning, and data alignment.
no code implementations • 1 Jul 2024 • Ran Tian, Boyi Li, Xinshuo Weng, Yuxiao Chen, Edward Schmerling, Yue Wang, Boris Ivanovic, Marco Pavone
The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design.
no code implementations • 1 Jul 2024 • Yixiao Wang, Yifei Zhang, Mingxiao Huo, Ran Tian, Xiang Zhang, Yichen Xie, Chenfeng Xu, Pengliang Ji, Wei Zhan, Mingyu Ding, Masayoshi Tomizuka
The increasing complexity of tasks in robotics demands efficient strategies for multitask and continual learning.
no code implementations • 24 Jun 2024 • Yuxin Chen, Chen Tang, Chenran Li, Ran Tian, Peter Stone, Masayoshi Tomizuka, Wei Zhan
Instead of inferring the complete human behavior characteristics, MEReQ infers a residual reward function that captures the discrepancy between the human expert's and the prior policy's underlying reward functions.
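As a rough illustration of this residual-reward idea, a minimal sketch (assuming linear rewards and hypothetical names; not the authors' implementation) might look like:

```python
import numpy as np

def total_reward(phi, w_prior, w_residual):
    """Total reward = prior policy's reward + learned residual,
    both assumed (for illustration) to be linear in the features phi."""
    return np.dot(w_prior, phi) + np.dot(w_residual, phi)

def residual_reward_update(w_residual, phi_expert, phi_policy, lr=0.01):
    """One schematic feature-matching update on the residual weights only:
    nudge the residual reward toward explaining the gap between the expert's
    and the current policy's feature expectations.
    (Illustrative only; not MEReQ's exact update rule.)"""
    return w_residual + lr * (phi_expert - phi_policy)
```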
no code implementations • 11 Oct 2023 • Ran Tian, Chenfeng Xu, Masayoshi Tomizuka, Jitendra Malik, Andrea Bajcsy
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
no code implementations • 11 Oct 2023 • Yuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka, Wei Zhan
We observe that, generally, a more diverse set of co-play agents during training enhances the generalization performance of the ego agent; however, this improvement varies across distinct scenarios and environments.
no code implementations • 2 Jan 2023 • Ran Tian, Masayoshi Tomizuka, Anca Dragan, Andrea Bajcsy
Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change.
1 code implementation • 21 Oct 2022 • Ran Tian, Ankur P. Parikh
We present Amos, a stochastic gradient-based optimizer designed for training deep neural networks.
no code implementations • 23 May 2022 • Tao Lei, Ran Tian, Jasmijn Bastings, Ankur P. Parikh
In this work, we explore whether modeling recurrence into the Transformer architecture can both be beneficial and efficient, by building an extremely simple recurrent module into the Transformer.
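As a generic sketch of what "building a simple recurrent module into the Transformer" could look like (not the paper's exact module), one might interleave a lightweight GRU with standard self-attention:

```python
import torch
import torch.nn as nn

class RecurrentTransformerBlock(nn.Module):
    """Illustrative block: a small GRU run over the sequence, followed by a
    standard self-attention layer, each wrapped in a residual connection.
    This is a generic sketch of adding simple recurrence to a Transformer,
    not the specific module proposed in the paper."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        rnn_out, _ = self.rnn(self.norm1(x))   # cheap sequential mixing
        x = x + rnn_out                        # residual around the recurrence
        h = self.norm2(x)
        attn_out, _ = self.attn(h, h, h)       # standard self-attention
        return x + attn_out                    # residual around attention
```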
no code implementations • 30 Aug 2021 • Ran Tian, Joshua Maynez, Ankur P. Parikh
The highly popular Transformer architecture, based on self-attention, is the foundation of large pretrained models such as BERT, which have become an enduring paradigm in NLP.
no code implementations • 7 Mar 2021 • Ran Tian, Masayoshi Tomizuka, Liting Sun
In this work, we advocate that humans are boundedly rational and have different intelligence levels when reasoning about others' decision-making processes, and that such an inherent, latent characteristic should be accounted for in reward learning algorithms.
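One common way to encode bounded rationality in reward learning is a Boltzmann (noisy-rational) choice model; the sketch below uses a single rationality coefficient as a simplified stand-in for the latent intelligence levels discussed above:

```python
import numpy as np

def boltzmann_action_likelihood(q_values, beta):
    """Probability of each action under a boundedly rational (Boltzmann) model.
    beta is a rationality coefficient: large beta approaches a perfect utility
    maximizer, beta -> 0 a uniformly random agent. A scalar beta is a
    simplification of the latent intelligence levels discussed above."""
    z = beta * np.asarray(q_values, dtype=float)
    z -= z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()
```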
1 code implementation • EMNLP 2020 • Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang
Named Entity Recognition (NER) is one of the first stages in deep language understanding yet current NER models heavily rely on human-annotated data.
no code implementations • 3 Sep 2020 • Ran Tian, Liting Sun, Masayoshi Tomizuka
Classical game-theoretic approaches for multi-agent systems, in both the forward policy design problem and the inverse reward learning problem, often make strong rationality assumptions: agents perfectly maximize expected utilities under uncertainty.
no code implementations • 19 Oct 2019 • Ran Tian, Shashi Narayan, Thibault Sellam, Ankur P. Parikh
We address the issue of hallucination in data-to-text generation, i.e., reducing the generation of text that is unsupported by the source.
no code implementations • 16 Oct 2019 • Ran Tian, Nan Li, Ilya Kolmanovsky, Yildiray Yildiz, Anouck Girard
For the foreseeable future, autonomous vehicles (AVs) will operate in traffic together with human-driven vehicles.
Robotics • Systems and Control
no code implementations • 27 Sep 2019 • Ran Tian, Nan Li, Ilya Kolmanovsky, Anouck Girard
It is a long-standing goal of artificial intelligence (AI) to be superior to human beings in decision making.
no code implementations • 1 Oct 2018 • Ran Tian, Sisi Li, Nan Li, Ilya Kolmanovsky, Anouck Girard, Yildiray Yildiz
In this paper, we propose a decision-making algorithm for autonomous vehicle control at a roundabout intersection.
no code implementations • 27 Sep 2018 • Ran Tian, Yash Agrawal, Kento Watanabe, Hiroya Takamura
Word embeddings are known to boost the performance of many NLP tasks such as text classification; they can also be enhanced with document-level labels to capture nuanced meanings such as sentiment and topic.
1 code implementation • ACL 2018 • Ryo Takahashi, Ran Tian, Kentaro Inui
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base.
no code implementations • IJCNLP 2017 • Ran Tian, Koji Mineshima, Pascual Martínez-Gómez
Only a limited part of the content in this tutorial is drawn from the previous one.
1 code implementation • ACL 2016 • Ran Tian, Naoaki Okazaki, Kentaro Inui
This paper connects a vector-based composition model to a formal semantics, the Dependency-based Compositional Semantics (DCS).
no code implementations • LREC 2016 • Corentin Dumont, Ran Tian, Kentaro Inui
We chose the popular game 'Minecraft' and created a QA corpus together with a knowledge database related to the game, as well as the ontology of a meaning representation used to structure this database.
no code implementations • 26 Nov 2015 • Ran Tian, Naoaki Okazaki, Kentaro Inui
Additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitchell and Lapata, 2010) is a widely used method for computing meanings of phrases, which takes the average of vector representations of the constituent words.
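Concretely, additive composition just averages the constituent word vectors; a minimal sketch (with made-up 300-dimensional embeddings):

```python
import numpy as np

def additive_composition(word_vectors):
    """Phrase meaning as the average of its constituent word vectors
    (the additive composition method referenced above)."""
    return np.mean(np.stack(word_vectors), axis=0)

# e.g., compose a two-word phrase from hypothetical embeddings
v_red, v_car = np.random.rand(300), np.random.rand(300)
v_phrase = additive_composition([v_red, v_car])
```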