Search Results for author: Meng Cao

Found 74 papers, 31 papers with code

PhysGame: Uncovering Physical Commonsense Violations in Gameplay Videos

1 code implementation • 2 Dec 2024 • Meng Cao, Haoran Tang, Haoze Zhao, Hangyu Guo, Jiaheng Liu, Ge Zhang, Ruyang Liu, Qiang Sun, Ian Reid, Xiaodan Liang

In this paper, we propose PhysGame as a pioneering benchmark to evaluate physical commonsense violations in gameplay videos.

Question Answering • Video Understanding

Continual LLaVA: Continual Instruction Tuning in Large Vision-Language Models

1 code implementation • 4 Nov 2024 • Meng Cao, Yuyang Liu, Yingfei Liu, Tiancai Wang, Jiahua Dong, Henghui Ding, Xiangyu Zhang, Ian Reid, Xiaodan Liang

In terms of methodology, we propose Continual LLaVA, a rehearsal-free method tailored for continual instruction tuning in LVLMs.

How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization?

1 code implementation • 23 Oct 2024 • Jiahua Dong, Wenqi Liang, Hongliu Li, Duzhen Zhang, Meng Cao, Henghui Ding, Salman Khan, Fahad Shahbaz Khan

Moreover, they heavily suffer from catastrophic forgetting and concept neglect on old personalized concepts when continually learning a series of new concepts.

Noise Estimation

ING-VP: MLLMs cannot Play Easy Vision-based Games Yet

1 code implementation • 9 Oct 2024 • Haoran Zhang, Hangyu Guo, Shuyue Guo, Meng Cao, Wenhao Huang, Jiaheng Liu, Ge Zhang

To bridge this gap, we present ING-VP, the first INteractive Game-based Vision Planning benchmark, specifically designed to evaluate the spatial imagination and multi-step reasoning abilities of MLLMs.

Spatial Reasoning

TIS-DPO: Token-level Importance Sampling for Direct Preference Optimization With Estimated Weights

no code implementations • 6 Oct 2024 • Aiwei Liu, Haoping Bai, Zhiyun Lu, Yanchao Sun, Xiang Kong, Simon Wang, Jiulong Shan, Albin Madappally Jose, Xiaojiang Liu, Lijie Wen, Philip S. Yu, Meng Cao

In this work, we propose that the optimal data for DPO has equal expected rewards for each token in winning and losing responses, as there is no difference in token importance.
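The token-level weighting idea behind TIS-DPO can be illustrated with a toy sketch: per-token importance weights rescale each token's log-probability ratio inside an otherwise standard DPO loss. All numbers, weights, and the β value below are made up for illustration; the paper's actual weights are estimated with separate contrastive LLMs, which is not reproduced here.

```python
import numpy as np

def weighted_dpo_loss(logr_w, logr_l, w_win, w_lose, beta=0.1):
    """Toy token-weighted DPO loss.

    logr_w / logr_l: per-token log-ratios log(pi_theta / pi_ref) for the
    winning and losing responses; w_win / w_lose: per-token importance
    weights (uniform weights recover vanilla sequence-level DPO).
    """
    margin = beta * (np.dot(w_win, logr_w) - np.dot(w_lose, logr_l))
    # -log sigmoid(margin), written stably as log(1 + exp(-margin))
    return np.logaddexp(0.0, -margin)

logr_w = np.array([0.2, 0.5, -0.1])   # hypothetical winning-response tokens
logr_l = np.array([-0.3, 0.1, -0.4])  # hypothetical losing-response tokens

uniform = weighted_dpo_loss(logr_w, logr_l, np.ones(3), np.ones(3))
# Up-weighting tokens that already favour the winner lowers the loss.
weighted = weighted_dpo_loss(logr_w, logr_l,
                             np.array([0.5, 2.0, 0.5]),
                             np.array([2.0, 0.5, 2.0]))
print(uniform, weighted)
```

With equal weights the two calls coincide with vanilla DPO; the sketch only shows where token importance enters the objective, not how the weights are estimated.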

Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models

no code implementations • 3 Oct 2024 • Zhengfeng Lai, Vasileios Saveris, Chen Chen, Hong-You Chen, Haotian Zhang, BoWen Zhang, Juan Lao Tebar, Wenze Hu, Zhe Gan, Peter Grasch, Meng Cao, Yinfei Yang

Our findings reveal that a hybrid approach that keeps both synthetic captions and AltTexts can outperform the use of synthetic captions alone, improving both alignment and performance, with each model demonstrating preferences for particular caption formats.

Contrastive Localized Language-Image Pre-Training

no code implementations • 3 Oct 2024 • Hong-You Chen, Zhengfeng Lai, Haotian Zhang, Xinze Wang, Marcin Eichner, Keen You, Meng Cao, BoWen Zhang, Yinfei Yang, Zhe Gan

Contrastive Language-Image Pre-training (CLIP) has been a celebrated method for training vision encoders to generate image/text representations facilitating various applications.

Data Augmentation for Sparse Multidimensional Learning Performance Data Using Generative AI

no code implementations • 24 Sep 2024 • Liang Zhang, Jionghao Lin, John Sabatini, Conrad Borchers, Daniel Weitekamp, Meng Cao, John Hollander, Xiangen Hu, Arthur C. Graesser

Second, a tensor factorization method is used to impute missing values in sparse tensors of collected learner data, thereby grounding the imputation on knowledge tracing tasks that predict missing performance values based on real observations.

ARC • Data Augmentation • +4
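The imputation step described above — fitting a low-rank factorization to the observed entries of a sparse performance tensor and reading the missing values off the reconstruction — can be sketched minimally. The "learner × item × attempt" framing, dimensions, rank, and learning rate are all illustrative assumptions; the paper's full pipeline also involves generative-AI components not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
rank = 3

# Toy low-rank "learner x item x attempt" performance tensor, ~50% observed.
A, B, C = (rng.random((n, rank)) for n in (8, 6, 4))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
observed = rng.random(T.shape) < 0.5  # True = we saw this performance value

def observed_loss(U, V, W):
    R = observed * (np.einsum('ir,jr,kr->ijk', U, V, W) - T)
    return (R ** 2).sum()

# Fit CP-style factors to the observed entries only, by plain gradient descent.
U, V, W = (0.5 * rng.random((n, rank)) for n in (8, 6, 4))
loss_init = observed_loss(U, V, W)
lr = 0.005
for _ in range(3000):
    R = observed * (np.einsum('ir,jr,kr->ijk', U, V, W) - T)
    U -= lr * np.einsum('ijk,jr,kr->ir', R, V, W)
    V -= lr * np.einsum('ijk,ir,kr->jr', R, U, W)
    W -= lr * np.einsum('ijk,ir,jr->kr', R, U, V)
loss_final = observed_loss(U, V, W)

# Missing cells are imputed from the reconstructed low-rank tensor.
T_hat = np.einsum('ir,jr,kr->ijk', U, V, W)
print(loss_init, loss_final)
```

The point of the sketch is only the mechanics: the loss is computed on observed entries alone, and the factorization generalizes those observations to the unobserved cells.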

MUSE: Mamba is Efficient Multi-scale Learner for Text-video Retrieval

1 code implementation • 20 Aug 2024 • Haoran Tang, Meng Cao, Jinfa Huang, Ruyang Liu, Peng Jin, Ge Li, Xiaodan Liang

Text-Video Retrieval (TVR) aims to align and associate relevant video content with corresponding natural language queries.

Mamba • Natural Language Queries • +2

Apple Intelligence Foundation Language Models

no code implementations • 29 Jul 2024 • Tom Gunter, ZiRui Wang, Chong Wang, Ruoming Pang, Aonan Zhang, BoWen Zhang, Chen Chen, Chung-Cheng Chiu, David Qiu, Deepak Gopinath, Dian Ang Yap, Dong Yin, Feng Nan, Floris Weers, Guoli Yin, Haoshuo Huang, Jianyu Wang, Jiarui Lu, John Peebles, Ke Ye, Mark Lee, Nan Du, Qibin Chen, Quentin Keunebroek, Sam Wiseman, Syd Evans, Tao Lei, Vivek Rathod, Xiang Kong, Xianzhi Du, Yanghao Li, Yongqiang Wang, Yuan Gao, Zaid Ahmed, Zhaoyang Xu, Zhiyun Lu, Al Rashid, Albin Madappally Jose, Alec Doane, Alfredo Bencomo, Allison Vanderby, Andrew Hansen, Ankur Jain, Anupama Mann Anupama, Areeba Kamal, Bugu Wu, Carolina Brum, Charlie Maalouf, Chinguun Erdenebileg, Chris Dulhanty, Dominik Moritz, Doug Kang, Eduardo Jimenez, Evan Ladd, Fangping Shi, Felix Bai, Frank Chu, Fred Hohman, Hadas Kotek, Hannah Gillis Coleman, Jane Li, Jeffrey Bigham, Jeffery Cao, Jeff Lai, Jessica Cheung, Jiulong Shan, Joe Zhou, John Li, Jun Qin, Karanjeet Singh, Karla Vega, Kelvin Zou, Laura Heckman, Lauren Gardiner, Margit Bowler, Maria Cordell, Meng Cao, Nicole Hay, Nilesh Shahdadpuri, Otto Godwin, Pranay Dighe, Pushyami Rachapudi, Ramsey Tantawi, Roman Frigg, Sam Davarnia, Sanskruti Shah, Saptarshi Guha, Sasha Sirovica, Shen Ma, Shuang Ma, Simon Wang, Sulgi Kim, Suma Jayaram, Vaishaal Shankar, Varsha Paidi, Vivek Kumar, Xin Wang, Xin Zheng, Walker Cheng, Yael Shrager, Yang Ye, Yasu Tanaka, Yihao Guo, Yunsong Meng, Zhao Tang Luo, Zhi Ouyang, Alp Aygar, Alvin Wan, Andrew Walkingshaw, Andy Narayanan, Antonie Lin, Arsalan Farooq, Brent Ramerth, Colorado Reed, Chris Bartels, Chris Chaney, David Riazati, Eric Liang Yang, Erin Feldman, Gabriel Hochstrasser, Guillaume Seguin, Irina Belousova, Joris Pelemans, Karen Yang, Keivan Alizadeh Vahid, Liangliang Cao, Mahyar Najibi, Marco Zuliani, Max Horton, Minsik Cho, Nikhil Bhendawade, Patrick Dong, Piotr Maj, Pulkit Agrawal, Qi Shan, Qichen Fu, Regan Poston, Sam Xu, Shuangning Liu, Sushma Rao, Tashweena Heeramun, Thomas Merth, Uday Rayala, Victor Cui, Vivek Rangarajan Sridhar, Wencong Zhang, Wenqi Zhang, Wentao Wu, Xingyu Zhou, Xinwen Liu, Yang Zhao, Yin Xia, Zhile Ren, Zhongzheng Ren

We present foundation language models developed to power Apple Intelligence features, including a ~3 billion parameter model designed to run efficiently on devices and a large server-based language model designed for Private Cloud Compute.

Language Modelling

SLRL: Structured Latent Representation Learning for Multi-view Clustering

no code implementations • 11 Jul 2024 • Zhangci Xiong, Meng Cao

Subsequently, to exploit the structural information among samples, a k-nearest neighbor graph is constructed from this common latent representation.

Clustering • Graph Learning • +1
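The graph-construction step quoted above is standard: build a k-nearest-neighbour graph over the rows of a learned latent representation. A minimal sketch (the feature dimensions, k, and random data are arbitrary assumptions, not the paper's setup):

```python
import numpy as np

def knn_graph(Z, k=3):
    """Symmetric k-nearest-neighbour adjacency matrix from row-wise
    latent representations Z of shape (n_samples, dim)."""
    # Pairwise squared Euclidean distances via the expansion trick.
    sq = (Z ** 2).sum(axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    np.fill_diagonal(D, np.inf)            # exclude self as a neighbour
    A = np.zeros_like(D)
    idx = np.argsort(D, axis=1)[:, :k]     # indices of the k closest rows
    rows = np.repeat(np.arange(len(Z)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)              # symmetrize the directed kNN edges

Z = np.random.default_rng(1).random((10, 4))   # stand-in for a latent matrix
A = knn_graph(Z, k=3)
print(A.shape)
```

Symmetrizing with `np.maximum(A, A.T)` keeps an edge whenever either endpoint counts the other among its k nearest neighbours, so every node ends up with degree at least k.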

Integrating Attentional Factors and Spacing in Logistic Knowledge Tracing Models to Explore the Impact of Training Sequences on Category Learning

no code implementations • 22 Jun 2024 • Meng Cao, Philip I. Pavlik Jr., Wei Chu, Liang Zhang

Although a recent study underscores the joint influence of memory and attentional factors on sequencing effects, there remains a scarcity of effective computational models integrating both attentional and memory considerations to comprehensively understand the effect of training sequences on students' performance.

Blocking • Knowledge Tracing

Cross-Modal Conditioned Reconstruction for Language-guided Medical Image Segmentation

2 code implementations • 3 Apr 2024 • Xiaoshuang Huang, Hongxiang Li, Meng Cao, Long Chen, Chenyu You, Dong An

Recent developments underscore the potential of textual information in enhancing learning models for a deeper understanding of medical visual semantics.

Image Segmentation • Medical Image Segmentation • +2

Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations

1 code implementation • 27 Mar 2024 • Lei Yu, Meng Cao, Jackie Chi Kit Cheung, Yue Dong

State-of-the-art language models (LMs) sometimes generate non-factual hallucinations that misalign with world knowledge.

Attribute • Hallucination • +3

depyf: Open the Opaque Box of PyTorch Compiler for Machine Learning Researchers

1 code implementation • 14 Mar 2024 • Kaichao You, Runsheng Bai, Meng Cao, Jianmin Wang, Ion Stoica, Mingsheng Long

PyTorch 2.x introduces a compiler designed to accelerate deep learning programs.

Predicting Learning Performance with Large Language Models: A Study in Adult Literacy

no code implementations • 4 Mar 2024 • Liang Zhang, Jionghao Lin, Conrad Borchers, John Sabatini, John Hollander, Meng Cao, Xiangen Hu

This research is motivated by the potential of LLMs to predict learning performance based on their inherent reasoning and computational capabilities.

Knowledge Tracing • Reading Comprehension

Direct Large Language Model Alignment Through Self-Rewarding Contrastive Prompt Distillation

1 code implementation • 19 Feb 2024 • Aiwei Liu, Haoping Bai, Zhiyun Lu, Xiang Kong, Simon Wang, Jiulong Shan, Meng Cao, Lijie Wen

In this paper, we propose a method to evaluate the response preference by using the output probabilities of response pairs under contrastive prompt pairs, which could achieve better performance on LLaMA2-7B and LLaMA2-13B compared to RLAIF.

Language Modelling • Large Language Model
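One plausible reading of the scoring rule described above — comparing output probabilities of candidate responses under a contrastive pair of system prompts — can be sketched as follows. The toy log-probabilities and the exact form of the score are illustrative assumptions; the paper's actual formulation may differ.

```python
import numpy as np

def contrastive_preference(logp_pos, logp_neg):
    """Score each candidate response by how much more probable it is
    under a 'good behaviour' prompt than under a contrasting 'bad
    behaviour' prompt; the higher-scoring response is preferred.

    logp_pos / logp_neg: sequence log-probabilities of each candidate
    response under the positive and negative prompts."""
    scores = np.asarray(logp_pos) - np.asarray(logp_neg)
    return int(np.argmax(scores)), scores

# Toy numbers: response 0 gains a lot of probability under the positive
# prompt, response 1 is nearly equally likely under both.
best, scores = contrastive_preference([-12.0, -15.0], [-18.0, -15.5])
print(best, scores)  # best == 0: response 0 is preferred
```

The preference labels produced this way could then feed a DPO-style objective, which is the "self-rewarding" part of the title: the model judges response pairs with its own probabilities rather than an external reward model.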

Recommendation Fairness in Social Networks Over Time

no code implementations • 5 Feb 2024 • Meng Cao, Hussain Hussain, Sandipan Sikdar, Denis Helic, Markus Strohmaier, Roman Kern

We further study how interventions on network properties influence fairness by examining counterfactual scenarios with alternative evolution outcomes and differing network properties.

counterfactual • Fairness • +1

Beyond Sparse Rewards: Enhancing Reinforcement Learning with Language Model Critique in Text Generation

no code implementations • 14 Jan 2024 • Meng Cao, Lei Shu, Lei Yu, Yun Zhu, Nevan Wichers, Yinxiao Liu, Lei Meng

We investigate this approach under two different settings: one where the policy model is smaller and is paired with a more powerful critic model, and another where a single language model fulfills both roles.

Language Modelling • reinforcement-learning • +2

Real-Time Exposure Correction via Collaborative Transformations and Adaptive Sampling

1 code implementation • CVPR 2024 • Ziwen Li, Feng Zhang, Meng Cao, Jinpu Zhang, Yuanjie Shao, Yuehuan Wang, Nong Sang

Specifically, the global transformation adjusts the overall appearance using image-adaptive 3D LUTs to provide decent global contrast and sharp details, while the pixel transformation compensates for local context.

Exposure Correction • Image Enhancement

Responsible AI Considerations in Text Summarization Research: A Review of Current Practices

no code implementations • 18 Nov 2023 • Yu Lu Liu, Meng Cao, Su Lin Blodgett, Jackie Chi Kit Cheung, Alexandra Olteanu, Adam Trischler

We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.

Text Summarization

Exploring Recommendation Capabilities of GPT-4V(ision): A Preliminary Case Study

no code implementations • 7 Nov 2023 • Peilin Zhou, Meng Cao, You-Liang Huang, Qichen Ye, Peiyan Zhang, Junling Liu, Yueqi Xie, Yining Hua, Jaeboum Kim

Large Multimodal Models (LMMs) have demonstrated impressive performance across various vision and language tasks, yet their potential applications in recommendation tasks with visual assistance remain unexplored.

General Knowledge • Reading Comprehension

Successor Features for Efficient Multisubject Controlled Text Generation

no code implementations • 3 Nov 2023 • Meng Cao, Mehdi Fatemi, Jackie Chi Kit Cheung, Samira Shabanian

While large language models (LLMs) have achieved impressive performance in generating fluent and realistic text, controlling the generated text so that it exhibits properties such as safety, factuality, and non-toxicity remains challenging.

Computational Efficiency • Language Modelling • +1

Video Referring Expression Comprehension via Transformer with Content-conditioned Query

no code implementations • 25 Oct 2023 • Ji Jiang, Meng Cao, Tengtao Song, Long Chen, Yi Wang, Yuexian Zou

Video Referring Expression Comprehension (REC) aims to localize a target object in videos based on the queried natural language.

cross-modal alignment • Referring Expression • +2

G2L: Semantically Aligned and Uniform Video Grounding via Geodesic and Game Theory

no code implementations • ICCV 2023 • Hongxiang Li, Meng Cao, Xuxin Cheng, Yaowei Li, Zhihong Zhu, Yuexian Zou

Due to two annoying issues in video grounding: (1) the co-existence of some visual entities in both ground truth and other moments, i.e., semantic overlapping; (2) only a few moments in the video are annotated, i.e., the sparse annotation dilemma, vanilla contrastive learning is unable to model the correlations between temporally distant moments and learns inconsistent video representations.

Contrastive Learning • Video Grounding

Improving Retrieval-Augmented Large Language Models via Data Importance Learning

1 code implementation • 6 Jul 2023 • Xiaozhong Lyu, Stefan Grafberger, Samantha Biegel, Shaopeng Wei, Meng Cao, Sebastian Schelter, Ce Zhang

There are exponentially many terms in the multilinear extension, and one key contribution of this paper is a polynomial time algorithm that computes exactly, given a retrieval-augmented model with an additive utility function and a validation set, the data importance of data points in the retrieval corpus using the multilinear extension of the model's utility function.

Imputation • Question Answering • +1

Improving Reference-based Distinctive Image Captioning with Contrastive Rewards

no code implementations • 25 Jun 2023 • Yangjun Mao, Jun Xiao, Dong Zhang, Meng Cao, Jian Shao, Yueting Zhuang, Long Chen

A recent DIC method proposes to generate distinctive captions by comparing the target image with a set of semantically similar reference images, i.e., reference-based DIC (Ref-DIC).

Benchmarking • Contrastive Learning • +1

Systematic Rectification of Language Models via Dead-end Analysis

1 code implementation • 27 Feb 2023 • Meng Cao, Mehdi Fatemi, Jackie Chi Kit Cheung, Samira Shabanian

Other methods rely on rule-based or prompt-based token elimination, which are limited as they dismiss future tokens and the overall meaning of the complete discourse.

Reinforcement Learning (RL)

RGI: robust GAN-inversion for mask-free image inpainting and unsupervised pixel-wise anomaly detection

no code implementations • 24 Feb 2023 • Shancong Mou, Xiaoyi Gu, Meng Cao, Haoping Bai, Ping Huang, Jiulong Shan, Jianjun Shi

In this paper, we propose a Robust GAN-inversion (RGI) method with a provable robustness guarantee to achieve image restoration under unknown gross corruptions, where a small fraction of pixels are completely corrupted.

Anomaly Detection • Image Inpainting • +1

Learning with Rejection for Abstractive Text Summarization

1 code implementation • 16 Feb 2023 • Meng Cao, Yue Dong, Jingyi He, Jackie Chi Kit Cheung

State-of-the-art abstractive summarization systems frequently hallucinate content that is not supported by the source document, mainly due to noise in the training dataset.

Abstractive Text Summarization

Exploiting Auxiliary Caption for Video Grounding

no code implementations • 15 Jan 2023 • Hongxiang Li, Meng Cao, Xuxin Cheng, Zhihong Zhu, Yaowei Li, Yuexian Zou

Video grounding aims to locate a moment of interest matching the given query sentence from an untrimmed video.

Contrastive Learning • Dense Video Captioning • +2

Iterative Proposal Refinement for Weakly-Supervised Video Grounding

no code implementations • CVPR 2023 • Meng Cao, Fangyun Wei, Can Xu, Xiubo Geng, Long Chen, Can Zhang, Yuexian Zou, Tao Shen, Daxin Jiang

Weakly-Supervised Video Grounding (WSVG) aims to localize events of interest in untrimmed videos with only video-level annotations.

Sentence • Video Grounding

Video Referring Expression Comprehension via Transformer with Content-aware Query

no code implementations • 6 Oct 2022 • Ji Jiang, Meng Cao, Tengtao Song, Yuexian Zou

To this end, we introduce two new datasets (i.e., VID-Entity and VidSTG-Entity) by augmenting the VIDSentence and VidSTG datasets with the explicitly referred words in the whole sentence, respectively.

cross-modal alignment • Referring Expression • +2

Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

1 code implementation • 13 Sep 2022 • Hussain Hussain, Meng Cao, Sandipan Sikdar, Denis Helic, Elisabeth Lex, Markus Strohmaier, Roman Kern

We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.

Fairness • Node Classification

LocVTP: Video-Text Pre-training for Temporal Localization

1 code implementation • 21 Jul 2022 • Meng Cao, Tianyu Yang, Junwu Weng, Can Zhang, Jue Wang, Yuexian Zou

To further enhance the temporal reasoning ability of the learned feature, we propose a context projection head and a temporal aware contrastive loss to perceive the contextual relationships.

Retrieval • Temporal Localization • +1

Correspondence Matters for Video Referring Expression Comprehension

1 code implementation • 21 Jul 2022 • Meng Cao, Ji Jiang, Long Chen, Yuexian Zou

Extensive experiments demonstrate that our DCNet achieves state-of-the-art performance on both video and image REC benchmarks.

Contrastive Learning • Referring Expression • +3

A Survey on Neural Abstractive Summarization Methods and Factual Consistency of Summarization

no code implementations • 20 Apr 2022 • Meng Cao

Automatic summarization is the process of shortening a set of textual data computationally, to create a subset (a summary) that represents the most important pieces of information in the original text.

Abstractive Text Summarization

Jacobian Norm for Unsupervised Source-Free Domain Adaptation

no code implementations • 7 Apr 2022 • Weikai Li, Meng Cao, Songcan Chen

Unsupervised Source (data) Free domain adaptation (USFDA) aims to transfer knowledge from a well-trained source model to a related but unlabeled target domain.

Source-Free Domain Adaptation

PAEDID: Patch Autoencoder Based Deep Image Decomposition For Pixel-level Defective Region Segmentation

no code implementations • 28 Mar 2022 • Shancong Mou, Meng Cao, Haoping Bai, Ping Huang, Jianjun Shi, Jiulong Shan

To combine the best of both worlds, we present an unsupervised patch autoencoder based deep image decomposition (PAEDID) method for defective region segmentation.

Anomaly Detection

Unsupervised Pre-training for Temporal Action Localization Tasks

1 code implementation • CVPR 2022 • Can Zhang, Tianyu Yang, Junwu Weng, Meng Cao, Jue Wang, Yuexian Zou

These pre-trained models can be sub-optimal for temporal localization tasks due to the inherent discrepancy between video-level classification and clip-level localization.

Contrastive Learning • Representation Learning • +4

Synthetic Defect Generation for Display Front-of-Screen Quality Inspection: A Survey

no code implementations • 3 Mar 2022 • Shancong Mou, Meng Cao, Zhendong Hong, Ping Huang, Jiulong Shan, Jianjun Shi

Display front-of-screen (FOS) quality inspection is essential for the mass production of displays in the manufacturing process.

Synthetic Data Generation

Information Gain Propagation: a new way to Graph Active Learning with Soft Labels

1 code implementation • ICLR 2022 • Wentao Zhang, Yexin Wang, Zhenbang You, Meng Cao, Ping Huang, Jiulong Shan, Zhi Yang, Bin Cui

Graph Neural Networks (GNNs) have achieved great success in various tasks, but their performance highly relies on a large number of labeled nodes, which typically requires considerable human effort.

Active Learning

Self-supervised Semi-supervised Learning for Data Labeling and Quality Evaluation

no code implementations • 22 Nov 2021 • Haoping Bai, Meng Cao, Ping Huang, Jiulong Shan

On the active learning task, our method achieves 97.0% Top-1 accuracy on CIFAR10 with 0.1% annotated data, and 83.9% Top-1 accuracy on CIFAR100 with 10% annotated data.

Active Learning • Representation Learning

RIM: Reliable Influence-based Active Learning on Graphs

1 code implementation • NeurIPS 2021 • Wentao Zhang, Yexin Wang, Zhenbang You, Meng Cao, Ping Huang, Jiulong Shan, Zhi Yang, Bin Cui

Message passing is the core of most graph models such as Graph Convolutional Network (GCN) and Label Propagation (LP), which usually require a large number of clean labeled data to smooth out the neighborhood over the graph.

Active Learning

On Pursuit of Designing Multi-modal Transformer for Video Grounding

no code implementations • EMNLP 2021 • Meng Cao, Long Chen, Mike Zheng Shou, Can Zhang, Yuexian Zou

Almost all existing video grounding methods fall into two frameworks: 1) Top-down model: It predefines a set of segment candidates and then conducts segment classification and regression.

Decoder • Sentence • +1

Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization

1 code implementation • ACL 2022 • Meng Cao, Yue Dong, Jackie Chi Kit Cheung

State-of-the-art abstractive summarization systems often generate hallucinations; i.e., content that is not directly inferable from the source text.

Abstractive Text Summarization • Reinforcement Learning (RL) • +1

UniFaceGAN: A Unified Framework for Temporally Consistent Facial Video Editing

no code implementations • 12 Aug 2021 • Meng Cao, HaoZhi Huang, Hao Wang, Xuan Wang, Li Shen, Sheng Wang, Linchao Bao, Zhifeng Li, Jiebo Luo

Compared with the state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.

3D Reconstruction • Face Reenactment • +3

All You Need is a Second Look: Towards Arbitrary-Shaped Text Detection

no code implementations • 24 Jun 2021 • Meng Cao, Can Zhang, Dongming Yang, Yuexian Zou

Compared to the traditional single-stage segmentation network, our NASK conducts the detection in a coarse-to-fine manner with the first stage segmentation spotting the rectangle text proposals and the second one retrieving compact representations.

Instance Segmentation • Segmentation • +2

BatchQuant: Quantized-for-all Architecture Search with Robust Quantizer

no code implementations • NeurIPS 2021 • Haoping Bai, Meng Cao, Ping Huang, Jiulong Shan

While single-shot quantized neural architecture search enjoys flexibility in both model architecture and quantization policy, the combined search space comes with many challenges, including instability when training the weight-sharing supernet and difficulty in navigating the exponentially growing search space.

Hardware Aware Neural Architecture Search • Model Optimization • +2

Video Frame Interpolation via Structure-Motion based Iterative Fusion

no code implementations • 11 May 2021 • Xi Li, Meng Cao, Yingying Tang, Scott Johnston, Zhendong Hong, Huimin Ma, Jiulong Shan

Inspired by the observation that audiences have different visual preferences on foreground and background objects, we for the first time propose to use saliency masks in the evaluation processes of the task of video frame interpolation.

Optical Flow Estimation • Video Frame Interpolation

RR-Net: Injecting Interactive Semantics in Human-Object Interaction Detection

no code implementations • 30 Apr 2021 • Dongming Yang, Yuexian Zou, Can Zhang, Meng Cao, Jie Chen

Upon the frame, an Interaction Intensifier Module and a Correlation Parsing Module are carefully designed, where: a) interactive semantics from humans can be exploited and passed to objects to intensify interactions, b) interactive correlations among humans, objects and interactions are integrated to promote predictions.

Human-Object Interaction Detection • Relation

CoLA: Weakly-Supervised Temporal Action Localization with Snippet Contrastive Learning

1 code implementation • CVPR 2021 • Can Zhang, Meng Cao, Dongming Yang, Jie Chen, Yuexian Zou

In this paper, we argue that learning by comparing helps identify these hard snippets and we propose to utilize snippet Contrastive learning to Localize Actions, CoLA for short.

CoLA • Contrastive Learning • +3

Quantum error-correcting codes from matrix-product codes related to quasi-orthogonal and quasi-unitary matrices

no code implementations • 31 Dec 2020 • Meng Cao

The construction of matrix-product codes with certain self-orthogonality over finite fields is an effective way to obtain good $q$-ary quantum codes of large length.

Information Theory • Quantum Physics

Factual Error Correction for Abstractive Summarization Models

1 code implementation • EMNLP 2020 • Meng Cao, Yue Dong, Jiapeng Wu, Jackie Chi Kit Cheung

Experimental results show that our model is able to correct factual errors in summaries generated by other neural summarization models and outperforms previous models on factual consistency evaluation on the CNN/DailyMail dataset.

Abstractive Text Summarization

TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion

1 code implementation • EMNLP 2020 • Jiapeng Wu, Meng Cao, Jackie Chi Kit Cheung, William L. Hamilton

Our analysis also reveals important sources of variability both within and across TKG datasets, and we introduce several simple but strong baselines that outperform the prior state of the art in certain settings.

Imputation • Temporal Knowledge Graph Completion

An artificial intelligence system for predicting the deterioration of COVID-19 patients in the emergency department

1 code implementation • 4 Aug 2020 • Farah E. Shamout, Yiqiu Shen, Nan Wu, Aakash Kaku, Jungkyu Park, Taro Makino, Stanisław Jastrzębski, Duo Wang, Ben Zhang, Siddhant Dogra, Meng Cao, Narges Razavian, David Kudlowitz, Lea Azour, William Moore, Yvonne W. Lui, Yindalon Aphinyanaphongs, Carlos Fernandez-Granda, Krzysztof J. Geras

In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, which produced accurate predictions in real-time.

COVID-19 Diagnosis • Decision Making • +1

Task-agnostic Temporally Consistent Facial Video Editing

no code implementations • 3 Jul 2020 • Meng Cao, Hao-Zhi Huang, Hao Wang, Xuan Wang, Li Shen, Sheng Wang, Linchao Bao, Zhifeng Li, Jiebo Luo

Compared with the state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.

3D Reconstruction • Video Editing

All you need is a second look: Towards Tighter Arbitrary shape text detection

no code implementations • 26 Apr 2020 • Meng Cao, Yuexian Zou

Specifically, NASK consists of a Text Instance Segmentation network, namely TIS (1st stage), a Text RoI Pooling module, and a Fiducial pOint eXpression module termed FOX (2nd stage).

Instance Segmentation • Scene Text Detection • +3

Unsupervised Domain Adaptation Through Transferring both the Source-Knowledge and Target-Relatedness Simultaneously

no code implementations • 18 Mar 2020 • Qing Tian, Yanan Zhu, Chuang Ma, Meng Cao

Unsupervised domain adaptation (UDA) is an emerging research topic in the field of machine learning and pattern recognition, which aims to help the learning of unlabeled target domain by transferring knowledge from the source domain.

BIG-bench Machine Learning • Unsupervised Domain Adaptation
