no code implementations • 4 Mar 2025 • Yue Meng, Nathalie Majcherczyk, Wenliang Liu, Scott Kiesel, Chuchu Fan, Federico Pecora
Multi-agent coordination is crucial for reliable multi-robot navigation in shared spaces such as automated warehouses.
1 code implementation • 4 Mar 2025 • Yue Meng, Chuchu Fan
We first calibrate the STL on the real-world data, then generate diverse synthetic data using trajectory optimization, and finally learn the rectified diffusion policy on the augmented dataset.
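A minimal sketch of the calibrate-then-augment idea, not the authors' pipeline: it uses a toy 1-D speed signal, the simple STL specification G (speed <= c), and gradient ascent on a softmin robustness surrogate for the trajectory-optimization stage; the diffusion-policy training is only indicated. All names and numbers are illustrative.

```python
# Toy illustration of "calibrate STL, then optimize synthetic trajectories".
import numpy as np

rng = np.random.default_rng(0)
real_speeds = rng.uniform(0.0, 8.0, size=(100, 20))  # stand-in for real-world trajectories

# 1) Calibrate the STL parameter: choose the threshold c so that 95% of the
#    real trajectories satisfy G (speed <= c).
per_traj_max = real_speeds.max(axis=1)
c = np.quantile(per_traj_max, 0.95)

def robustness(speeds, c):
    """Robustness of G (speed <= c): worst-case margin over the horizon."""
    return np.min(c - speeds, axis=-1)

def robustness_ascent_dir(speeds, c, tau=0.5):
    """Ascent direction for a smooth softmin surrogate of the robustness."""
    w = np.exp(-(c - speeds) / tau)
    w /= w.sum(axis=-1, keepdims=True)
    return -w  # lower the speeds, weighted toward the worst-case time step

# 2) Trajectory optimization: push random seed trajectories toward positive
#    robustness, then keep only the satisfying ones as synthetic data.
synthetic = rng.uniform(0.0, 10.0, size=(200, 20))
for _ in range(200):
    synthetic += 0.5 * robustness_ascent_dir(synthetic, c)
synthetic = synthetic[robustness(synthetic, c) > 0]

# 3) The augmented dataset (real + synthetic) would then be used to train the
#    policy; the diffusion-policy training itself is omitted here.
augmented = np.concatenate([real_speeds, synthetic], axis=0)
print(f"calibrated c = {c:.2f}, augmented dataset size = {len(augmented)}")
```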
no code implementations • 10 Sep 2023 • Yue Meng, Chuchu Fan
We conduct experiments on six tasks, where our method with the backup policy outperforms classical methods (MPC, STL solvers) as well as model-free and model-based RL methods in STL satisfaction rate, especially on tasks with complex STL specifications, while being 10X-100X faster than the classical methods.
no code implementations • 18 Mar 2023 • Yue Meng, Chuchu Fan
For each system mode, we first learn an NN Lyapunov function and an NN controller to ensure the states within the region of attraction (RoA) can be stabilized.
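A minimal sketch of jointly training a neural Lyapunov function and a neural controller for a single mode; the pendulum-like dynamics, network architectures, sampling region, and decrease rate alpha are illustrative assumptions, not the paper's choices.

```python
# Train V (positive definite by construction) and a controller so that
# V decreases along the closed-loop dynamics on sampled states.
import torch
import torch.nn as nn

torch.manual_seed(0)

def dynamics(x, u):
    """Illustrative single-mode dynamics: pendulum with torque input u."""
    theta, omega = x[:, 0:1], x[:, 1:2]
    return torch.cat([omega, torch.sin(theta) - 0.1 * omega + u], dim=1)

phi = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 16))
ctrl = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def V(x):
    # Zero at the origin, strictly positive elsewhere.
    return ((phi(x) - phi(torch.zeros(1, 2))) ** 2).sum(dim=1, keepdim=True) \
           + 0.1 * (x ** 2).sum(dim=1, keepdim=True)

opt = torch.optim.Adam(list(phi.parameters()) + list(ctrl.parameters()), lr=1e-3)
alpha = 0.1  # desired exponential decrease rate

for step in range(2000):
    x = (torch.rand(256, 2) - 0.5) * 2.0          # sample states in the candidate RoA
    x.requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vdot = (grad_v * dynamics(x, ctrl(x))).sum(dim=1, keepdim=True)
    loss = torch.relu(vdot + alpha * v).mean()    # penalize violated decrease condition
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final Lyapunov-violation loss:", loss.item())
```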
no code implementations • 7 Mar 2023 • Yue Meng, Sai Vemprala, Rogerio Bonatti, Chuchu Fan, Ashish Kapoor
In this work, we propose Control Barrier Transformer (ConBaT), an approach that learns safe behaviors from demonstrations in a self-supervised fashion.
no code implementations • 16 Sep 2022 • Yue Meng, Zeng Qiu, Md Tawhid Bin Waez, Chuchu Fan
Recent work provides a data-driven approach to compute the density distribution of autonomous systems' forward reachable states online.
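For intuition only, a generic Monte Carlo baseline (not the learned, online estimator described in the work): sample initial states, roll out an assumed Van der Pol system, and fit a kernel density estimate over the reachable states at the horizon.

```python
# Estimate the density of forward reachable states by rollouts plus a KDE.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def step(x, dt=0.05):
    """Van der Pol oscillator used as a stand-in autonomous system."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.stack([x1 + dt * x2,
                     x2 + dt * ((1 - x1 ** 2) * x2 - x1)], axis=1)

# Sample initial states from the (assumed) initial distribution and roll forward.
x = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(5000, 2))
for _ in range(100):            # 5 seconds at dt = 0.05
    x = step(x)

# Fit a KDE over the final-time reachable states; it can be queried at any point.
density = gaussian_kde(x.T)
print("estimated density at the origin:", density([[0.0], [0.0]])[0])
```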
no code implementations • 8 Nov 2021 • Ying Zhang, Yanbo Chen, Jianhui Wang, Yue Meng, Tianqiao Zhao
At present, transmission and distribution system states are largely unobservable to each other, and state estimation is conducted separately in the two systems owing to differences in network structures and analytical models.
no code implementations • 14 Sep 2021 • Yue Meng, Dawei Sun, Zeng Qiu, Md Tawhid Bin Waez, Chuchu Fan
State density distribution, in contrast to worst-case reachability, can be leveraged in safety-related problems to better quantify the likelihood of potentially hazardous situations.
1 code implementation • 14 Sep 2021 • Yue Meng, Zengyi Qin, Chuchu Fan
Reactive and safe agent modeling is important for today's traffic simulator designs and safe planning applications.
no code implementations • 8 Mar 2021 • Qiaojun Feng, Yue Meng, Mo Shan, Nikolay Atanasov
We show that the errors between projections of the mesh model and the observed keypoints and masks can be differentiated in order to obtain accurate instance-specific object shapes.
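A toy sketch of the underlying idea, differentiating a keypoint reprojection error with respect to shape coefficients; the linear deformation basis, camera intrinsics, and observations are synthetic stand-ins, and the mask term used in the paper is omitted.

```python
# Recover instance-specific shape coefficients by gradient descent on a
# differentiable 2-D keypoint reprojection error.
import torch

torch.manual_seed(0)
K = torch.tensor([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])                      # pinhole intrinsics
mean_shape = torch.randn(12, 3)                          # 12 3-D keypoints
basis = torch.randn(4, 12, 3) * 0.1                      # 4 deformation directions

def project(points_3d):
    cam = points_3d + torch.tensor([0.0, 0.0, 5.0])      # shift in front of the camera
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]                      # perspective division

# Synthetic "observed" keypoints generated from hidden ground-truth coefficients.
true_coeffs = torch.tensor([0.5, -0.3, 0.2, 0.1])
observed_2d = project(mean_shape + (true_coeffs[:, None, None] * basis).sum(0))

coeffs = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([coeffs], lr=0.05)
for _ in range(500):
    shape = mean_shape + (coeffs[:, None, None] * basis).sum(0)
    loss = ((project(shape) - observed_2d) ** 2).mean()  # differentiable reprojection error
    opt.zero_grad()
    loss.backward()
    opt.step()

print("recovered coefficients:", coeffs.detach())
```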
no code implementations • ICLR 2021 • Bowen Pan, Rameswar Panda, Camilo Fosco, Chung-Ching Lin, Alex Andonian, Yue Meng, Kate Saenko, Aude Oliva, Rogerio Feris
An inherent property of real-world videos is the high correlation of information across frames, which can translate into redundancy in the temporal or spatial feature maps of the models, or both.
no code implementations • ICLR 2021 • Yue Meng, Rameswar Panda, Chung-Ching Lin, Prasanna Sattigeri, Leonid Karlinsky, Kate Saenko, Aude Oliva, Rogerio Feris
Temporal modelling is the key to efficient video action recognition.
no code implementations • 22 Dec 2020 • Yue Meng, Zhigui Lin, Michael Pedersen
In order to understand how the combination of domain evolution and impulsive harvesting affects the dynamics of a population, we propose a diffusive logistic population model with impulsive harvesting on a periodically evolving domain; a generic form of such a model is sketched below.
Analysis of PDEs
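A generic sketch of the kind of model this entry describes, assuming proportional impulsive harvesting at instants nT; the paper's precise evolving-domain formulation (which introduces additional terms after mapping to a fixed reference domain) is not reproduced here.

```latex
% Diffusive logistic growth between harvesting instants, with a proportional
% impulsive harvest of fraction h at times t = nT.
\[
\begin{cases}
  \partial_t u = d\,\Delta u + u\,(a - b\,u), & t \in (nT, (n+1)T],\ x \in \Omega(t),\\[4pt]
  u(nT^{+}, x) = (1 - h)\, u(nT, x), & x \in \Omega(nT), \quad 0 < h < 1.
\end{cases}
\]
```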
1 code implementation • ECCV 2020 • Yue Meng, Chung-Ching Lin, Rameswar Panda, Prasanna Sattigeri, Leonid Karlinsky, Aude Oliva, Kate Saenko, Rogerio Feris
Specifically, given a video frame, a policy network is used to decide what input resolution should be used for processing by the action recognition model, with the goal of improving both accuracy and efficiency.
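A simplified sketch of such a per-frame resolution policy (not the paper's code): a lightweight network inspects a cheap low-resolution glimpse of each frame and makes a differentiable discrete choice via Gumbel-softmax; the candidate resolutions, module sizes, and tiny stand-in backbone are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

resolutions = [84, 112, 168, 224]

class ResolutionPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, len(resolutions)))

    def forward(self, frame):
        # Decide from a cheap 56x56 glimpse of the frame.
        glimpse = F.interpolate(frame, size=56, mode='bilinear', align_corners=False)
        logits = self.features(glimpse)
        return F.gumbel_softmax(logits, tau=1.0, hard=True)   # one-hot choice per frame

policy = ResolutionPolicy()
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

frame = torch.rand(2, 3, 224, 224)                            # a small batch of frames
choice = policy(frame)                                        # one-hot, (batch, num_res)
selected = [resolutions[i] for i in choice.argmax(dim=1).tolist()]
for i, r in enumerate(selected):
    resized = F.interpolate(frame[i:i + 1], size=r, mode='bilinear', align_corners=False)
    logits = backbone(resized)                                # cheap stand-in recognizer
print("selected per-frame resolutions:", selected)
```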
no code implementations • 20 Sep 2019 • Chengxi Li, Yue Meng, Stanley H. Chan, Yi-Ting Chen
First, we decompose egocentric interactions into ego-thing and ego-stuff interactions, modeled by two graph convolutional networks (GCNs).
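A minimal sketch of the two-branch idea, with toy graph convolution layers over assumed ego-thing and ego-stuff graphs whose ego-node features are fused; the node features, adjacencies, and dimensions are placeholders, not the paper's design.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)

    def forward(self, feats, adj):
        # Row-normalized message passing: aggregate neighbors, then transform.
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        return torch.relu(self.linear(adj @ feats))

thing_gcn = GCNLayer(256, 128)   # ego node + detected objects ("things")
stuff_gcn = GCNLayer(256, 128)   # ego node + stuff regions
head = nn.Linear(256, 4)         # e.g., predict an ego behavior class

thing_feats = torch.rand(6, 256)        # node 0 is the ego vehicle
stuff_feats = torch.rand(9, 256)
thing_adj = torch.ones(6, 6)            # fully connected toy graphs
stuff_adj = torch.ones(9, 9)

ego_thing = thing_gcn(thing_feats, thing_adj)[0]   # updated ego-node feature
ego_stuff = stuff_gcn(stuff_feats, stuff_adj)[0]
out = head(torch.cat([ego_thing, ego_stuff]))
print("prediction logits:", out)
```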
2 code implementations • CVPR 2019 • Yue Meng, Yongxi Lu, Aman Raj, Samuel Sunarjo, Rui Guo, Tara Javidi, Gaurav Bansal, Dinesh Bharadia
SIGNet is shown to improve upon state-of-the-art unsupervised learning for depth prediction by 30% (in squared relative error).
Ranked #71 on Monocular Depth Estimation on KITTI Eigen split