Search Results for author: Ming Liang

Found 30 papers, 10 papers with code

DoTAT: A Domain-oriented Text Annotation Tool

1 code implementation ACL 2022 Yupian Lin, Tong Ruan, Ming Liang, Tingting Cai, Wen Du, Yi Wang

Secondly, the tool provides annotation of events, nested events and nested entities, which are frequently required in domain-related text structuring tasks.

text annotation

MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning

1 code implementation 4 Nov 2023 Bingchang Liu, Chaoyu Chen, Cong Liao, Zi Gong, Huan Wang, Zhichao Lei, Ming Liang, Dajun Chen, Min Shen, Hailian Zhou, Hang Yu, Jianguo Li

Code LLMs have emerged as a specialized research field, with remarkable studies dedicated to enhancing models' coding capabilities through fine-tuning on pre-trained models.

Multi-Task Learning

Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving

no code implementations 17 Jan 2021 James Tu, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, Eilyan Bitar, Ersin Yumer, Raquel Urtasun

Yet, there have been limited studies on the adversarial robustness of multi-modal models that fuse LiDAR features with image features.

Adversarial Robustness Denoising +1

Auto4D: Learning to Label 4D Objects from Sequential Point Clouds

no code implementations 17 Jan 2021 Bin Yang, Min Bai, Ming Liang, Wenyuan Zeng, Raquel Urtasun

The key idea is to decompose the 4D object label into two parts: the object size in 3D that's fixed through time for rigid objects, and the motion path describing the evolution of the object's pose through time.

3D Object Detection Object

PLUMENet: Efficient 3D Object Detection from Stereo Images

1 code implementation 17 Jan 2021 Yan Wang, Bin Yang, Rui Hu, Ming Liang, Raquel Urtasun

In this paper we propose a model that unifies these two tasks and performs them in the same metric space.

3D Object Detection From Stereo Images Depth Estimation +2

HDNET: Exploiting HD Maps for 3D Object Detection

no code implementations 21 Dec 2020 Bin Yang, Ming Liang, Raquel Urtasun

In this paper we show that High-Definition (HD) maps provide strong priors that can boost the performance and robustness of modern 3D object detectors.

3D Object Detection Object +1

Deep Continuous Fusion for Multi-Sensor 3D Object Detection

no code implementations ECCV 2018 Ming Liang, Bin Yang, Shenlong Wang, Raquel Urtasun

In this paper, we propose a novel 3D object detector that can exploit both LiDAR and cameras to perform very accurate localization.

3D Object Detection Object +1

Recovering and Simulating Pedestrians in the Wild

no code implementations 16 Nov 2020 Ze Yang, Siva Manivasagam, Ming Liang, Bin Yang, Wei-Chiu Ma, Raquel Urtasun

We then incorporate the reconstructed pedestrian assets bank in a realistic LiDAR simulation system by performing motion retargeting, and show that the simulated LiDAR data can be used to significantly reduce the amount of annotated real-world data required for visual perception tasks.

Data Augmentation motion retargeting

Perceive, Attend, and Drive: Learning Spatial Attention for Safe Self-Driving

no code implementations 2 Nov 2020 Bob Wei, Mengye Ren, Wenyuan Zeng, Ming Liang, Bin Yang, Raquel Urtasun

In this paper, we propose an end-to-end self-driving network featuring a sparse attention module that learns to automatically attend to important regions of the input.

Motion Planning

V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction

3 code implementations ECCV 2020 Tsun-Hsuan Wang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, James Tu, Raquel Urtasun

In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.

3D Object Detection Motion Forecasting

RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects

no code implementations ECCV 2020 Bin Yang, Runsheng Guo, Ming Liang, Sergio Casas, Raquel Urtasun

We tackle the problem of exploiting Radar for perception in the context of self-driving as Radar provides complementary information to other sensors such as LiDAR or cameras in the form of Doppler velocity.

Object Detection

Learning Lane Graph Representations for Motion Forecasting

1 code implementation ECCV 2020 Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, Raquel Urtasun

We propose a motion forecasting model that exploits a novel structured map representation as well as actor-map interactions.

Motion Forecasting Trajectory Prediction

FeederGAN: Synthetic Feeder Generation via Deep Graph Adversarial Nets

no code implementations 3 Apr 2020 Ming Liang, Yao Meng, Jiyu Wang, David Lubkeman, Ning Lu

This paper presents a novel, automated, generative adversarial network (GAN)-based synthetic feeder generation mechanism, abbreviated as FeederGAN.

Attribute

Physically Realizable Adversarial Examples for LiDAR Object Detection

no code implementations CVPR 2020 James Tu, Mengye Ren, Siva Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, Raquel Urtasun

Modern autonomous driving systems rely heavily on deep learning models to process point cloud sensory data; meanwhile, deep models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations.

Adversarial Defense Autonomous Driving +4

Identifying Unknown Instances for Autonomous Driving

no code implementations 24 Oct 2019 Kelvin Wong, Shenlong Wang, Mengye Ren, Ming Liang, Raquel Urtasun

In the past few years, we have seen great progress in perception algorithms, particularly through the use of deep learning.

Autonomous Driving Instance Segmentation +1

Adversarial Attacks and Defences Competition

1 code implementation 31 Mar 2018 Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.

BIG-bench Machine Learning

Convolutional Neural Networks with Intra-Layer Recurrent Connections for Scene Labeling

no code implementations NeurIPS 2015 Ming Liang, Xiaolin Hu, Bo Zhang

We adopt a deep recurrent convolutional neural network (RCNN) for this task, which was originally proposed for object recognition.

Object Recognition Scene Labeling

Curvature Wavefront Sensing for the Large Synoptic Survey Telescope

1 code implementation 16 Jun 2015 Bo Xin, Chuck Claver, Ming Liang, Srinivasan Chandrasekharan, George Angeli, Ian Shipsey

The Large Synoptic Survey Telescope (LSST) will use an active optics system (AOS) to maintain alignment and surface figure on its three large mirrors.

Instrumentation and Methods for Astrophysics

Recurrent Convolutional Neural Network for Object Recognition

no code implementations CVPR 2015 Ming Liang, Xiaolin Hu

Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer.

Object Object Recognition

LSST: from Science Drivers to Reference Design and Anticipated Data Products

1 code implementation 15 May 2008 Željko Ivezić, Steven M. Kahn, J. Anthony Tyson, Bob Abel, Emily Acosta, Robyn Allsman, David Alonso, Yusra AlSayyad, Scott F. Anderson, John Andrew, James Roger P. Angel, George Z. Angeli, Reza Ansari, Pierre Antilogus, Constanza Araujo, Robert Armstrong, Kirk T. Arndt, Pierre Astier, Éric Aubourg, Nicole Auza, Tim S. Axelrod, Deborah J. Bard, Jeff D. Barr, Aurelian Barrau, James G. Bartlett, Amanda E. Bauer, Brian J. Bauman, Sylvain Baumont, Andrew C. Becker, Jacek Becla, Cristina Beldica, Steve Bellavia, Federica B. Bianco, Rahul Biswas, Guillaume Blanc, Jonathan Blazek, Roger D. Blandford, Josh S. Bloom, Joanne Bogart, Tim W. Bond, Anders W. Borgland, Kirk Borne, James F. Bosch, Dominique Boutigny, Craig A. Brackett, Andrew Bradshaw, William Nielsen Brandt, Michael E. Brown, James S. Bullock, Patricia Burchat, David L. Burke, Gianpietro Cagnoli, Daniel Calabrese, Shawn Callahan, Alice L. Callen, Srinivasan Chandrasekharan, Glenaver Charles-Emerson, Steve Chesley, Elliott C. Cheu, Hsin-Fang Chiang, James Chiang, Carol Chirino, Derek Chow, David R. Ciardi, Charles F. Claver, Johann Cohen-Tanugi, Joseph J. Cockrum, Rebecca Coles, Andrew J. Connolly, Kem H. Cook, Asantha Cooray, Kevin R. Covey, Chris Cribbs, Wei Cui, Roc Cutri, Philip N. Daly, Scott F. Daniel, Felipe Daruich, Guillaume Daubard, Greg Daues, William Dawson, Francisco Delgado, Alfred Dellapenna, Robert de Peyster, Miguel de Val-Borro, Seth W. Digel, Peter Doherty, Richard Dubois, Gregory P. Dubois-Felsmann, Josef Durech, Frossie Economou, Michael Eracleous, Henry Ferguson, Enrique Figueroa, Merlin Fisher-Levine, Warren Focke, Michael D. Foss, James Frank, Michael D. Freemon, Emmanuel Gangler, Eric Gawiser, John C. Geary, Perry Gee, Marla Geha, Charles J. B. Gessner, Robert R. Gibson, D. Kirk Gilmore, Thomas Glanzman, William Glick, Tatiana Goldina, Daniel A. Goldstein, Iain Goodenow, Melissa L. Graham, William J. Gressler, Philippe Gris, Leanne P.
Guy, Augustin Guyonnet, Gunther Haller, Ron Harris, Patrick A. Hascall, Justine Haupt, Fabio Hernandez, Sven Herrmann, Edward Hileman, Joshua Hoblitt, John A. Hodgson, Craig Hogan, Dajun Huang, Michael E. Huffer, Patrick Ingraham, Walter R. Innes, Suzanne H. Jacoby, Bhuvnesh Jain, Fabrice Jammes, James Jee, Tim Jenness, Garrett Jernigan, Darko Jevremović, Kenneth Johns, Anthony S. Johnson, Margaret W. G. Johnson, R. Lynne Jones, Claire Juramy-Gilles, Mario Jurić, Jason S. Kalirai, Nitya J. Kallivayalil, Bryce Kalmbach, Jeffrey P. Kantor, Pierre Karst, Mansi M. Kasliwal, Heather Kelly, Richard Kessler, Veronica Kinnison, David Kirkby, Lloyd Knox, Ivan V. Kotov, Victor L. Krabbendam, K. Simon Krughoff, Petr Kubánek, John Kuczewski, Shri Kulkarni, John Ku, Nadine R. Kurita, Craig S. Lage, Ron Lambert, Travis Lange, J. Brian Langton, Laurent Le Guillou, Deborah Levine, Ming Liang, Kian-Tat Lim, Chris J. Lintott, Kevin E. Long, Margaux Lopez, Paul J. Lotz, Robert H. Lupton, Nate B. Lust, Lauren A. MacArthur, Ashish Mahabal, Rachel Mandelbaum, Darren S. Marsh, Philip J. Marshall, Stuart Marshall, Morgan May, Robert McKercher, Michelle McQueen, Joshua Meyers, Myriam Migliore, Michelle Miller, David J. Mills, Connor Miraval, Joachim Moeyens, David G. Monet, Marc Moniez, Serge Monkewitz, Christopher Montgomery, Fritz Mueller, Gary P. Muller, Freddy Muñoz Arancibia, Douglas R. Neill, Scott P. Newbry, Jean-Yves Nief, Andrei Nomerotski, Martin Nordby, Paul O'Connor, John Oliver, Scot S. Olivier, Knut Olsen, William O'Mullane, Sandra Ortiz, Shawn Osier, Russell E. Owen, Reynald Pain, Paul E. Palecek, John K. Parejko, James B. Parsons, Nathan M. Pease, J. Matt Peterson, John R. Peterson, Donald L. Petravick, M. E. Libby Petrick, Cathy E. Petry, Francesco Pierfederici, Stephen Pietrowicz, Rob Pike, Philip A. Pinto, Raymond Plante, Stephen Plate, Paul A. Price, Michael Prouza, Veljko Radeka, Jayadev Rajagopal, Andrew P. Rasmussen, Nicolas Regnault, Kevin A. Reil, David J. 
Reiss, Michael A. Reuter, Stephen T. Ridgway, Vincent J. Riot, Steve Ritz, Sean Robinson, William Roby, Aaron Roodman, Wayne Rosing, Cecille Roucelle, Matthew R. Rumore, Stefano Russo, Abhijit Saha, Benoit Sassolas, Terry L. Schalk, Pim Schellart, Rafe H. Schindler, Samuel Schmidt, Donald P. Schneider, Michael D. Schneider, William Schoening, German Schumacher, Megan E. Schwamb, Jacques Sebag, Brian Selvy, Glenn H. Sembroski, Lynn G. Seppala, Andrew Serio, Eduardo Serrano, Richard A. Shaw, Ian Shipsey, Jonathan Sick, Nicole Silvestri, Colin T. Slater, J. Allyn Smith, R. Chris Smith, Shahram Sobhani, Christine Soldahl, Lisa Storrie-Lombardi, Edward Stover, Michael A. Strauss, Rachel A. Street, Christopher W. Stubbs, Ian S. Sullivan, Donald Sweeney, John D. Swinbank, Alexander Szalay, Peter Takacs, Stephen A. Tether, Jon J. Thaler, John Gregg Thayer, Sandrine Thomas, Vaikunth Thukral, Jeffrey Tice, David E. Trilling, Max Turri, Richard Van Berg, Daniel Vanden Berk, Kurt Vetter, Francoise Virieux, Tomislav Vucina, William Wahl, Lucianne Walkowicz, Brian Walsh, Christopher W. Walter, Daniel L. Wang, Shin-Yawn Wang, Michael Warner, Oliver Wiecha, Beth Willman, Scott E. Winters, David Wittman, Sidney C. Wolff, W. Michael Wood-Vasey, Xiuqin Wu, Bo Xin, Peter Yoachim, Hu Zhan, for the LSST Collaboration

About 90% of the observing time will be devoted to a deep-wide-fast survey mode, which will uniformly observe an 18,000 deg$^2$ region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to $r\sim27.5$.
