no code implementations • ICML 2020 • Xianggen Liu, Jian Peng, Qiang Liu, Sen Song
Deep generative modeling has achieved many successes for continuous data generation, such as producing realistic images and controlling their properties (e.g., styles).
no code implementations • ICML 2020 • Yihao Feng, Tongzheng Ren, Ziyang Tang, Qiang Liu
In this work, we investigate the statistical properties of the kernel loss, which allows us to find a feasible set that contains the true value function with high probability.
no code implementations • Findings (NAACL) 2022 • Chengyue Gong, Xiaocong Du, Dhruv Choudhary, Bhargav Bhushanam, Qiang Liu, Arun Kejariwal
On the definition side, we reduce the bias in transfer loss by focusing on the items to which information from high-frequency items can be efficiently transferred.
no code implementations • ICML 2020 • Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu
Theoretically, we show that the small networks pruned using our method achieve provably lower loss than small networks trained from scratch with the same size.
no code implementations • 2 Sep 2024 • Dingshuo Chen, Zhixun Li, Yuyan Ni, Guibin Zhang, Ding Wang, Qiang Liu, Shu Wu, Jeffrey Xu Yu, Liang Wang
Therefore, we propose a Molecular data Pruning framework for enhanced Generalization (MolPeg), which focuses on the source-free data pruning scenario, where data pruning is applied with pretrained models.
1 code implementation • 23 Aug 2024 • Kaizhao Liang, Bo Liu, Lizhang Chen, Qiang Liu
Recently, a wide range of memory-efficient LLM training algorithms have gained substantial popularity.
no code implementations • 22 Aug 2024 • Mengqi Zhang, Bowen Fang, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen, Liang Wang
Building on the validated hypothesis, we propose a novel knowledge editing method that incorporates a Knowledge Erasure mechanism for Large language model Editing (KELE).
no code implementations • 20 Aug 2024 • Qiang Liu, Mengyu Chu, Nils Thuerey
To improve learning the challenging multi-objective task posed by PINNs, we propose the ConFIG method, which provides conflict-free updates by ensuring a positive dot product between the final update and each loss-specific gradient.
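The conflict-free criterion above (a positive dot product between the final update and every loss-specific gradient) can be made concrete with a small sketch. This is not the authors' ConFIG algorithm, just one least-squares way of satisfying the criterion, with all names and values illustrative:

```python
import numpy as np

# Illustrative sketch (not the authors' ConFIG algorithm): one simple way to
# get an update d with a positive dot product against every loss-specific
# gradient g_i is to solve G d = 1 in the least-squares sense, so that
# g_i . d ~= 1 > 0 whenever the system is solvable.
def conflict_free_update(grads):
    """grads: (m, n) array of m loss-specific gradients."""
    G = np.asarray(grads, dtype=float)
    d, *_ = np.linalg.lstsq(G, np.ones(G.shape[0]), rcond=None)
    return d

grads = np.array([[1.0, 0.0],    # two conflicting gradients
                  [-0.5, 1.0]])  # (their mutual dot product is negative)
d = conflict_free_update(grads)
dots = grads @ d                 # both entries come out positive
```

Even though the two gradients conflict with each other, the resulting update makes positive progress on both losses.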
no code implementations • 20 Aug 2024 • YuTing Liu, Jinghao Zhang, Yizhou Dang, Yuliang Liang, Qiang Liu, Guibing Guo, Jianzhe Zhao, Xingwei Wang
We then merge the collaborative weights into LLM's weights, enabling LLM to perceive the collaborative signals and generate personalized recommendations without fine-tuning or extra collaborative tokens in prompts.
1 code implementation • 17 Aug 2024 • Housen Wang, Yuxing Chen, Sirong Cao, Xiaoli Wang, Qiang Liu
We propose a unified framework for delay differential equations (DDEs) based on deep neural networks (DNNs), the neural delay differential equations (NDDEs), aimed at solving the forward and inverse problems of DDEs.
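For reference, the forward problem targeted here can be illustrated with a classical numerical solve of a simple DDE; the equation and constant history below are chosen purely for illustration and are independent of the neural approach:

```python
import numpy as np

# Toy forward solve of a delay differential equation: x'(t) = -x(t - tau)
# with constant history x = 1 on [-tau, 0], integrated with Euler steps.
# The delayed state is read n_hist steps back in the trajectory buffer.
tau, dt, T = 1.0, 0.01, 5.0
n_hist = int(tau / dt)
x = [1.0] * (n_hist + 1)                   # history covering [-tau, 0]
for _ in range(int(T / dt)):
    x.append(x[-1] - dt * x[-1 - n_hist])  # x(t+dt) = x(t) - dt * x(t - tau)
traj = np.array(x[n_hist:])                # numerical solution on [0, T]
```

On [0, 1] the exact solution is x(t) = 1 - t, which the Euler iterates match; past t = 1 the solution oscillates while decaying.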
no code implementations • 8 Aug 2024 • Xin Sun, Qiang Liu, Shu Wu, Zilei Wang, Liang Wang
This paper addresses the challenge of out-of-distribution (OOD) generalization in graph machine learning, a field rapidly advancing yet grappling with the discrepancy between source and target data distributions.
no code implementations • 26 Jul 2024 • Jinghao Zhang, Guofan Liu, Qiang Liu, Shu Wu, Liang Wang
To address these issues, we propose a Counterfactual Knowledge Distillation method that could solve the imbalance problem and make the best use of all modalities.
1 code implementation • 19 Jul 2024 • Bo Liu, Rui Wang, Lemeng Wu, Yihao Feng, Peter Stone, Qiang Liu
In this work, we explore SSM design through the lens of online learning, conceptualizing SSMs as meta-modules for specific online learning problems.
1 code implementation • 17 Jul 2024 • Yuanzhi Zhu, Xingchao Liu, Qiang Liu
The rectified flow framework trains one-step generative models using two operations, reflow and distillation.
no code implementations • 17 Jul 2024 • Haisong Gong, Huanhuan Ma, Qiang Liu, Shu Wu, Liang Wang
These keywords serve as a guide to extract and summarize critical information into abstracted evidence.
1 code implementation • 16 Jul 2024 • Zhixun Li, Yushun Dong, Qiang Liu, Jeffrey Xu Yu
We claim that the imbalance across different demographic groups is a significant source of unfairness, resulting in imbalanced contributions from each group to the parameters updating.
1 code implementation • 3 Jul 2024 • Yihan Hu, Siqi Chai, Zhening Yang, Jingyu Qian, Kun Li, Wenxin Shao, Haichao Zhang, Wei Xu, Qiang Liu
We conclude that the proposed generative model may serve as a foundation for a variety of motion planning tasks, including data generation, simulation, planning, and online training.
no code implementations • 1 Jul 2024 • Ruidong Wu, Ruihan Guo, Rui Wang, Shitong Luo, Yue Xu, Jiahan Li, Jianzhu Ma, Qiang Liu, Yunan Luo, Jian Peng
Despite the striking success of general protein folding models such as AlphaFold2 (AF2; Jumper et al., 2021), the accurate computational modeling of antibody-antigen complexes remains a challenging task.
no code implementations • 14 Jun 2024 • Son Nguyen, Lizhang Chen, Bo Liu, Qiang Liu
In this study, we introduce a novel adaptive optimizer, H-Fac, which incorporates a factorized approach to momentum and scaling parameters.
no code implementations • 7 Jun 2024 • Huanhuan Ma, Jinghao Zhang, Qiang Liu, Shu Wu, Liang Wang
By employing latent variables for phrase-level predictions, the final prediction of the image-caption pair can be aggregated using logical rules.
no code implementations • 18 May 2024 • Qiang Liu, Yuhan Jiao, Shuxin Guo
On April 22, 2020, the CME Group switched to Bachelier pricing for a group of oil futures options.
no code implementations • 17 May 2024 • Shuxin Guo, Qiang Liu
Arguably, it is incorrect to use yearly returns directly for compounding, given the reported annualized return of above 60% for Medallion over the 31 years up to 2018.
no code implementations • 17 May 2024 • Shuxin Guo, Qiang Liu
We study the data-generating processes for factors expressed in return differences, which the literature on time-series asset pricing seems to have overlooked.
1 code implementation • 13 May 2024 • Hanshu Yan, Xingchao Liu, Jiachun Pan, Jun Hao Liew, Qiang Liu, Jiashi Feng
We present Piecewise Rectified Flow (PeRFlow), a flow-based method for accelerating diffusion models.
no code implementations • 24 Apr 2024 • Xiang Tao, Qiang Liu, Shu Wu, Liang Wang
The model learns semantic evolvement information of events by capturing local semantic changes and global semantic evolvement through specific graph autoencoder and reconstruction strategies.
no code implementations • 30 Mar 2024 • Bo Liu, Lemeng Wu, Lizhang Chen, Kaizhao Liang, Jiaxu Zhu, Chen Liang, Raghuraman Krishnamoorthi, Qiang Liu
The Lion optimizer has been a promising competitor with the AdamW for training large AI models, with advantages on memory, computation, and sample efficiency.
no code implementations • 26 Mar 2024 • Xiang Tao, Mingqing Zhang, Qiang Liu, Shu Wu, Liang Wang
This method models the propagation of news in the form of a propagation graph and builds a test-time adaptation framework over it, enhancing the model's adaptability and robustness when facing OOD problems.
1 code implementation • 21 Mar 2024 • Yangchun Zhang, Qiang Liu, Weiming Li, Yirui Zhou
Criticism 3 concerns an unsatisfactory proof from the perspective of potential equilibrium.
no code implementations • 13 Mar 2024 • YuTing Liu, Yizhou Dang, Yuliang Liang, Qiang Liu, Guibing Guo, Jianzhe Zhao, Xingwei Wang
Recently, sign-aware graph recommendation has drawn much attention, as it learns users' negative preferences, besides positive ones, from both positive and negative interactions (i.e., links in a graph) with items.
1 code implementation • 12 Mar 2024 • Han Huang, Haitian Zhong, Tao Yu, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
Compared to this, editing Large Vision-Language Models (LVLMs) faces extra challenges from diverse data modalities and complicated model components, and data for LVLMs editing are limited.
no code implementations • 7 Mar 2024 • Yi Xiao, Xiangxin Zhou, Qiang Liu, Liang Wang
In this paper, we present the first systematic survey on multimodal frameworks for molecules research.
1 code implementation • 7 Mar 2024 • Yuliang Liu, Biao Yang, Qiang Liu, Zhang Li, Zhiyin Ma, Shuo Zhang, Xiang Bai
We present TextMonkey, a large multimodal model (LMM) tailored for text-centric tasks.
1 code implementation • 7 Mar 2024 • 01.AI, Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, Zonghong Dai
The Yi model family is based on 6B and 34B pretrained language models, which we extend to chat models, 200K long-context models, depth-upscaled models, and vision-language models.
Ranked #1 on Chatbot on AlpacaEval
no code implementations • 29 Feb 2024 • Jiajun Zhang, Zhixun Li, Qiang Liu, Shu Wu, Liang Wang
With the rapid development of social media, the wide dissemination of fake news on social media is increasingly threatening both individuals and society.
1 code implementation • 26 Feb 2024 • Jiaqi Guan, Xiangxin Zhou, Yuwei Yang, Yu Bao, Jian Peng, Jianzhu Ma, Qiang Liu, Liang Wang, Quanquan Gu
Designing 3D ligands within a target binding site is a fundamental task in drug discovery.
no code implementations • 22 Feb 2024 • Yuwei Xia, Ding Wang, Qiang Liu, Liang Wang, Shu Wu, XiaoYu Zhang
Temporal Knowledge Graph (TKG) forecasting aims to predict future facts based on given histories.
no code implementations • 21 Feb 2024 • Mengqi Zhang, Xiaotian Ye, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen
Large language models (LLMs) are pivotal in advancing natural language processing (NLP) tasks, yet their efficacy is hampered by inaccuracies and outdated knowledge.
1 code implementation • 20 Feb 2024 • Haisong Gong, Weizhi Xu, Shu Wu, Qiang Liu, Liang Wang
To address this, we propose a novel word-level Heterogeneous-graph-based model for Fact Checking over unstructured and structured information, namely HeterFC.
1 code implementation • 20 Feb 2024 • Haisong Gong, Qiang Liu, Shu Wu, Liang Wang
In this work, we propose the Text-Guided Molecule Generation with Diffusion Language Model (TGM-DLM), a novel approach that leverages diffusion models to address the limitations of autoregressive methods.
Ranked #6 on Text-based de novo Molecule Generation on ChEBI-20
1 code implementation • 18 Feb 2024 • Junfei Wu, Qiang Liu, Ding Wang, Jinghao Zhang, Shu Wu, Liang Wang, Tieniu Tan
In this work, we adopt the intuition that the LVLM tends to respond logically consistently for existent objects but inconsistently for hallucinated objects.
no code implementations • 18 Feb 2024 • Jinghao Zhang, YuTing Liu, Qiang Liu, Shu Wu, Guibing Guo, Liang Wang
Recently, the powerful large language models (LLMs) have been instrumental in propelling the progress of recommender systems (RS).
1 code implementation • 11 Feb 2024 • Xiang Tao, Qiang Liu, Shu Wu, Liang Wang
Based on our theoretical analysis, we further identify the limitations of the GraphMAE from the perspectives of alignment and uniformity, which have been considered as two key properties of high-quality representations in GCL.
1 code implementation • 6 Feb 2024 • Xixi Hu, Bo Liu, Xingchao Liu, Qiang Liu
To address this challenge, we propose AdaFlow, an imitation learning framework based on flow-based generative modeling.
no code implementations • 6 Feb 2024 • Qiang Liu, Xiang Tao, Junfei Wu, Shu Wu, Liang Wang
In this work, we investigate to use Large Language Models (LLMs) for rumor detection on social media.
no code implementations • 22 Jan 2024 • Tianlun Hu, Qi Liao, Qiang Liu, Antonio Massaro, Georg Carle
Based on the proposed framework, we design a new neural-assisted algorithm to allocate radio resources to slices to maximize the network utility under inter-slice resource constraints.
no code implementations • 31 Dec 2023 • Peihao Wang, Zhiwen Fan, Dejia Xu, Dilin Wang, Sreyas Mohan, Forrest Iandola, Rakesh Ranjan, Yilei Li, Qiang Liu, Zhangyang Wang, Vikas Chandra
In this paper, we reveal that the gradient estimation in score distillation inherently suffers from high variance.
no code implementations • CVPR 2024 • Peihao Wang, Dejia Xu, Zhiwen Fan, Dilin Wang, Sreyas Mohan, Forrest Iandola, Rakesh Ranjan, Yilei Li, Qiang Liu, Zhangyang Wang, Vikas Chandra
In this paper, we reveal that the existing score distillation-based text-to-3D generation frameworks degenerate to maximal likelihood seeking on each view independently and thus suffer from the mode collapse problem, manifesting as the Janus artifact in practice.
1 code implementation • 8 Dec 2023 • Qiang Liu, Nils Thuerey
Leveraging neural networks as surrogate models for turbulence simulation is a topic of growing interest.
no code implementations • 28 Nov 2023 • Junyan Qiu, Haitao Wang, Zhaolin Hong, Yiping Yang, Qiang Liu, Xingxing Wang
The successful integration of large language models (LLMs) into recommendation systems has proven to be a major breakthrough in recent studies, paving the way for more generic and transferable recommendations.
1 code implementation • CVPR 2024 • Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, Xiang Bai
Additionally, experiments on 18 datasets further demonstrate that Monkey surpasses existing LMMs in many tasks like Image Captioning and various Visual Question Answering formats.
Ranked #13 on MMR total on MRR-Benchmark (using extra training data)
no code implementations • 10 Nov 2023 • YuTing Liu, Enneng Yang, Yizhou Dang, Guibing Guo, Qiang Liu, Yuliang Liang, Linying Jiang, Xingwei Wang
In this paper, we revisit the value of ID embeddings for multimodal recommendation and conduct a thorough study regarding its semantics, which we recognize as subtle features of \emph{content} and \emph{structure}.
1 code implementation • 16 Oct 2023 • Kirill Neklyudov, Rob Brekelmans, Alexander Tong, Lazar Atanackovic, Qiang Liu, Alireza Makhzani
The dynamical formulation of the optimal transport can be extended through various choices of the underlying geometry (kinetic energy), and the regularization of density paths (potential energy).
1 code implementation • 15 Oct 2023 • Huanhuan Ma, Weizhi Xu, Yifan Wei, Liuji Chen, Qiang Liu, Shu Wu, Liang Wang
Each instance is accompanied by a veracity label and an explanation that outlines the reasoning path supporting the veracity classification.
no code implementations • 9 Oct 2023 • Lizhang Chen, Bo Liu, Kaizhao Liang, Qiang Liu
As might be expected from the results of a random search program, Lion incorporates elements from several existing algorithms, including signed momentum, decoupled weight decay, and Polyak and Nesterov momentum, but does not fit into any existing category of theoretically grounded optimizers.
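The combination of signed momentum and decoupled weight decay can be sketched in a few lines of numpy; this follows the published description of the Lion update, with hyperparameter values as illustrative defaults rather than tuned settings:

```python
import numpy as np

# A compact numpy sketch of the Lion update rule: take the sign of an
# interpolated momentum for the step, apply decoupled weight decay, then
# update the momentum buffer. Hyperparameters are illustrative defaults.
def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    update = np.sign(beta1 * m + (1 - beta1) * g)  # sign of interpolated momentum
    w = w - lr * (update + wd * w)                 # decoupled weight decay
    m = beta2 * m + (1 - beta2) * g                # momentum buffer update
    return w, m

w, m = lion_step(np.zeros(3), np.array([1.0, -2.0, 0.5]), np.zeros(3), lr=0.1)
```

Because only the sign of the interpolated momentum is used, every parameter moves by the same magnitude lr per step, which is the source of Lion's memory and computation savings over AdamW.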
1 code implementation • NeurIPS 2023 • Zhixun Li, Xin Sun, Yifan Luo, Yanqiao Zhu, Dingshuo Chen, Yingtao Luo, Xiangxin Zhou, Qiang Liu, Shu Wu, Liang Wang, Jeffrey Xu Yu
To fill this gap, we systematically analyze the performance of GSL in different scenarios and develop a comprehensive Graph Structure Learning Benchmark (GSLB) curated from 20 diverse graph datasets and 16 distinct GSL algorithms.
2 code implementations • NeurIPS 2023 • Dingshuo Chen, Yanqiao Zhu, Jieyu Zhang, Yuanqi Du, Zhixun Li, Qiang Liu, Shu Wu, Liang Wang
Molecular Representation Learning (MRL) has emerged as a powerful tool for drug and materials discovery in a variety of tasks such as virtual screening and inverse design.
no code implementations • 14 Sep 2023 • Xiangzhu Meng, Wei Wei, Qiang Liu, Shu Wu, Liang Wang
Motivated by related medical findings on functional connectivities, TiBGL proposes template-induced brain graph learning to extract template brain graphs for all groups.
no code implementations • 14 Sep 2023 • Xiangzhu Meng, Qiang Liu, Shu Wu, Liang Wang
In recent years, functional magnetic resonance imaging (fMRI) has been widely utilized to diagnose neurological disease, by exploiting the region of interest (RoI) nodes as well as their connectivities in human brain.
2 code implementations • 12 Sep 2023 • Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, Qiang Liu
Leveraging our new pipeline, we create, to the best of our knowledge, the first one-step diffusion-based text-to-image generator with SD-level image quality, achieving an FID (Frechet Inception Distance) of $23.3$ on MS COCO 2017-5k, surpassing the previous state-of-the-art technique, progressive distillation, by a significant margin ($37.2$ $\rightarrow$ $23.3$ in FID).
no code implementations • 26 Jun 2023 • Yihan Hu, Kun Li, Pingyuan Liang, Jingyu Qian, Zhening Yang, Haichao Zhang, Wenxin Shao, Zhuangzhuang Ding, Wei Xu, Qiang Liu
This paper presents our 2nd place solution for the NuPlan Challenge 2023.
no code implementations • 25 Jun 2023 • Jinghao Zhang, Qiang Liu, Shu Wu, Liang Wang
Even worse, the strong statistical correlation might mislead models to learn the spurious preference towards inconsequential modalities.
no code implementations • 20 Jun 2023 • Tianlun Hu, Qi Liao, Qiang Liu, Georg Carle
Network slicing enables operators to efficiently support diverse applications on a common physical infrastructure.
1 code implementation • NeurIPS 2023 • Bo Liu, Yihao Feng, Peter Stone, Qiang Liu
One of the grand enduring goals of AI is to create generalist agents that can learn multiple different tasks from diverse data via multitask learning (MTL).
no code implementations • 23 May 2023 • Kriti Aggarwal, Aditi Khandelwal, Kumar Tanmay, Owais Mohammed Khan, Qiang Liu, Monojit Choudhury, Hardik Hansrajbhai Chauhan, Subhojit Som, Vishrav Chaudhary, Saurabh Tiwary
Visual document understanding is a complex task that involves analyzing both the text and the visual elements in document images.
Ranked #1 on Visual Question Answering (VQA) on DeepForm
1 code implementation • 11 May 2023 • Xingang Peng, Jiaqi Guan, Qiang Liu, Jianzhu Ma
Deep generative models have recently achieved superior performance in 3D molecule generation.
no code implementations • 25 Apr 2023 • Qiang Liu, Junfei Wu, Shu Wu, Liang Wang
Then, DAL reversely optimizes news-aspect and evidence-aspect debiasing discriminators to mitigate the impact of news and evidence content biases.
1 code implementation • 22 Apr 2023 • Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, Peter Stone
LLM+P takes in a natural language description of a planning problem, then returns a correct (or optimal) plan for solving that problem in natural language.
no code implementations • 12 Apr 2023 • Qiang Liu, Zhaocheng Liu, Zhenxi Zhu, Shu Wu, Liang Wang
However, none of the existing multi-interest recommendation models consider the Out-Of-Distribution (OOD) generalization problem, in which the interest distribution may change.
no code implementations • 23 Mar 2023 • WenBo Hu, Xin Sun, Qiang Liu, Le Wu, Liang Wang
To address this, we evaluate the quality of propensity scores from the perspective of uncertainty calibration, proposing the use of expected calibration error (ECE) as a measure of propensity-score quality.
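Expected calibration error is a standard quantity; a minimal binned implementation looks like the sketch below, where the bin count and equal-width binning are conventional choices rather than the paper's exact protocol:

```python
import numpy as np

# A minimal binned implementation of expected calibration error (ECE):
# the weighted average of |accuracy - mean confidence| over equal-width
# confidence bins. Bin count and binning scheme are conventional choices.
def expected_calibration_error(conf, correct, n_bins=10):
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# A perfectly calibrated toy example: 75% confidence, 75% accuracy -> ECE 0.
ece = expected_calibration_error([0.75] * 4, [1, 1, 1, 0])
```

Under this measure, a propensity model whose predicted scores match observed frequencies bin-by-bin attains an ECE of zero.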
no code implementations • ICCV 2023 • Mao Ye, Gregory P. Meyer, Yuning Chai, Qiang Liu
Although halting a token is a non-differentiable operation, our method allows for differentiable end-to-end learning by leveraging an equivalent differentiable forward-pass.
1 code implementation • NeurIPS 2023 • Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Barun Patra, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei
A big convergence of language, multimodal perception, action, and world modeling is a key step toward artificial general intelligence.
1 code implementation • 20 Feb 2023 • Liang Yao, Jiazhen Peng, Shenggong Ji, Qiang Liu, Hongyun Cai, Feng He, Xu Cheng
Friend recall is an important way to improve Daily Active Users (DAU) in online games.
no code implementations • 2 Feb 2023 • Yuwei Xia, Mengqi Zhang, Qiang Liu, Shu Wu, Xiao-Yu Zhang
Most existing works focus on exploring evolutionary information in history to obtain effective temporal embeddings for entities and relations, but they ignore the variation in evolution patterns of facts, which makes them struggle to adapt to future data with different evolution patterns.
no code implementations • 17 Jan 2023 • Haoxin Wang, Ziran Wang, Dawei Chen, Qiang Liu, Hongyu Ke, Kyungtae Han
A Metaverse is a perpetual, immersive, and shared digital universe that is linked to but beyond the physical reality, and this emerging technology is attracting enormous attention from different industries.
no code implementations • 9 Jan 2023 • Tianlun Hu, Qi Liao, Qiang Liu, Georg Carle
In this paper, we propose a novel transfer learning (TL) aided multi-agent deep reinforcement learning (MADRL) approach with inter-agent similarity analysis for inter-cell inter-slice resource partitioning.
no code implementations • CVPR 2023 • Xingchao Liu, Lemeng Wu, Shujian Zhang, Chengyue Gong, Wei Ping, Qiang Liu
To further accelerate the computation of the back-propagation, we propose to use a non-uniform discretization to approximate the ODE trajectory, where we measure how straight the trajectory is and gather the straight parts into one discretization step.
no code implementations • CVPR 2023 • Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei
A big convergence of language, vision, and multimodal pretraining is emerging.
1 code implementation • CVPR 2023 • Linshan Wu, Zhun Zhong, Leyuan Fang, Xingxin He, Qiang Liu, Jiayi Ma, Hao Chen
Our AGMM can effectively endow reliable supervision for unlabeled pixels based on the distributions of labeled and unlabeled pixels.
1 code implementation • CVPR 2023 • Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li
Oriented at this, we revisit the key components within perception and prediction, and prioritize the tasks such that all these tasks contribute to planning.
no code implementations • 12 Dec 2022 • Lemeng Wu, Dilin Wang, Meng Li, Yunyang Xiong, Raghuraman Krishnamoorthi, Qiang Liu, Vikas Chandra
Fusing 3D LiDAR features with 2D camera features is a promising technique for enhancing the accuracy of 3D detection, thanks to their complementary physical properties.
1 code implementation • CVPR 2023 • Lemeng Wu, Dilin Wang, Chengyue Gong, Xingchao Liu, Yunyang Xiong, Rakesh Ranjan, Raghuraman Krishnamoorthi, Vikas Chandra, Qiang Liu
We perform evaluations on multiple 3D tasks and find that our PSF performs comparably to the standard diffusion model, outperforming other efficient 3D point cloud generation methods.
no code implementations • 16 Nov 2022 • Qiang Liu
Combinatorial optimizations are usually complex and inefficient, which limits their applications in large-scale networks with billions of links.
2 code implementations • 7 Nov 2022 • Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Ganzorig Gankhuyag, Jingang Huh, Myeong Kyun Kim, Kihwan Yoon, Hyeon-Cheol Moon, Seungho Lee, Yoonsik Choe, Jinwoo Jeong, Sungjei Kim, Maciej Smyl, Tomasz Latkowski, Pawel Kubik, Michal Sokolski, Yujie Ma, Jiahao Chao, Zhou Zhou, Hongfan Gao, Zhengfeng Yang, Zhenbing Zeng, Zhengyang Zhuge, Chenghua Li, Dan Zhu, Mengdi Sun, Ran Duan, Yan Gao, Lingshun Kong, Long Sun, Xiang Li, Xingdong Zhang, Jiawei Zhang, Yaqi Wu, Jinshan Pan, Gaocheng Yu, Jin Zhang, Feng Zhang, Zhe Ma, Hongbin Wang, Hojin Cho, Steve Kim, Huaen Li, Yanbo Ma, Ziwei Luo, Youwei Li, Lei Yu, Zhihong Wen, Qi Wu, Haoqiang Fan, Shuaicheng Liu, Lize Zhang, Zhikai Zong, Jeremy Kwon, Junxi Zhang, Mengyuan Li, Nianxiang Fu, Guanchen Ding, Han Zhu, Zhenzhong Chen, Gen Li, Yuanfan Zhang, Lei Sun, Dafeng Zhang, Neo Yang, Fitz Liu, Jerry Zhao, Mustafa Ayazoglu, Bahri Batuhan Bilecen, Shota Hirose, Kasidis Arunruangsirilert, Luo Ao, Ho Chun Leung, Andrew Wei, Jie Liu, Qiang Liu, Dahai Yu, Ao Li, Lei Luo, Ce Zhu, Seongmin Hong, Dongwon Park, Joonhee Lee, Byeong Hyun Lee, Seunggyu Lee, Se Young Chun, Ruiyuan He, Xuhao Jiang, Haihang Ruan, Xinjian Zhang, Jing Liu, Garas Gendy, Nabil Sabor, Jingchao Hou, Guanghui He
While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints.
1 code implementation • 30 Oct 2022 • Qiang Liu, Nakjung Choi, Tao Han
First, we design a learning-based simulator to reduce the sim-to-real discrepancy, which is accomplished by a new parameter searching method based on Bayesian optimization.
2 code implementations • 24 Oct 2022 • Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, Justin Solomon
These energies rely on mollifier functions -- smooth approximations of the Dirac delta originated from PDE theory.
no code implementations • 22 Oct 2022 • Zhixun Li, Dingshuo Chen, Qiang Liu, Shu Wu
In this paper, we argue that the performance degradation is mainly attributed to the inconsistency between topology and attribute.
1 code implementation • 12 Oct 2022 • Ruqi Zhang, Qiang Liu, Xin T. Tong
Sampling methods, as important inference and learning techniques, are typically designed for unconstrained domains.
1 code implementation • 11 Oct 2022 • Junfei Wu, Weizhi Xu, Qiang Liu, Shu Wu, Liang Wang
Comprehensive experiments have demonstrated the superiority of GETRAL over the state-of-the-arts and validated the efficacy of semantic mining with graph structure and contrastive learning.
no code implementations • 6 Oct 2022 • Yan Zheng, Lemeng Wu, Xingchao Liu, Zhen Chen, Qiang Liu, QiXing Huang
We first propose a diffusion-based generative model to tackle this problem by generating voxelized shapes with close-to-reality outlines and structures.
2 code implementations • 29 Sep 2022 • Qiang Liu
We present a flow-based approach to the optimal transport (OT) problem between two continuous distributions $\pi_0,\pi_1$ on $\mathbb{R}^d$, of minimizing a transport cost $\mathbb{E}[c(X_1-X_0)]$ in the set of couplings $(X_0, X_1)$ whose marginal distributions on $X_0, X_1$ equal $\pi_0,\pi_1$, respectively, where $c$ is a cost function.
1 code implementation • 29 Sep 2022 • Yanqiao Zhu, Dingshuo Chen, Yuanqi Du, Yingze Wang, Qiang Liu, Shu Wu
Molecular pretraining, which learns molecular representations over massive unlabeled data, has become a prominent paradigm to solve a variety of tasks in computational chemistry and drug discovery.
no code implementations • 19 Sep 2022 • Mao Ye, Bo Liu, Stephen Wright, Peter Stone, Qiang Liu
Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning.
6 code implementations • 7 Sep 2022 • Xingchao Liu, Chengyue Gong, Qiang Liu
The idea of rectified flow is to learn the ODE to follow the straight paths connecting the points drawn from $\pi_0$ and $\pi_1$ as much as possible.
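The straight-path construction can be made concrete with a toy one-dimensional example; the coupling and the constant-velocity fit below are illustrative choices, not the paper's training setup:

```python
import numpy as np

# Toy 1-D illustration of the straight-path construction: a point on the
# linear interpolation X_t = t*X1 + (1-t)*X0 moves with velocity X1 - X0,
# and the drift model is regressed onto that target. Here the optimal
# constant drift is fitted in closed form (the mean of the targets).
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, size=1000)   # draws from pi_0
x1 = x0 + 3.0                          # a simple coupling, pi_1 shifted by 3
t = rng.uniform(0.0, 1.0, size=1000)
xt = t * x1 + (1 - t) * x0             # locations on the straight paths
target = x1 - x0                       # straight-line velocity target
v_hat = target.mean()                  # least-squares fit of a constant drift
```

For this shifted coupling every path has velocity exactly 3, so the fitted constant drift transports $\pi_0$ onto $\pi_1$ in unit time.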
1 code implementation • 3 Sep 2022 • Yingtao Luo, Zhaocheng Liu, Qiang Liu
The unstable correlation between procedures and diagnoses that exists in the training distribution can cause spurious correlations between historical EHR and future diagnoses.
no code implementations • 2 Sep 2022 • Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu
One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes the mismatch between the training and testing data distribution and hence domain generalization error.
no code implementations • 2 Sep 2022 • Mao Ye, Lemeng Wu, Qiang Liu
We propose a family of First Hitting Diffusion Models (FHDM), deep generative models that generate data with a diffusion process that terminates at a random first hitting time.
no code implementations • 2 Sep 2022 • Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, Qiang Liu
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development.
no code implementations • 31 Aug 2022 • Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions in terms of conceptual understanding, theoretical analysis, algorithm improvement and extensions to discrete, structured, non-Euclidean domains.
1 code implementation • Conference 2022 • Fenyu Hu, Zeyu Cui, Shu Wu, Qiang Liu, Jinlin Wu, Liang Wang, Tieniu Tan
Graph Neural Networks (GNNs) are powerful for learning representations of graph-structured data, fusing both attributive and topological information.
2 code implementations • 22 Aug 2022 • Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei
A big convergence of language, vision, and multimodal pretraining is emerging.
Ranked #1 on Visual Reasoning on NLVR2 Test
2 code implementations • 17 Aug 2022 • Bo Liu, Yihao Feng, Qiang Liu, Peter Stone
Furthermore, we introduce the metric residual network (MRN) that deliberately decomposes the action-value function Q(s, a, g) into the negated summation of a metric plus a residual asymmetric component.
no code implementations • 14 Jul 2022 • Zhaocheng Liu, Yingtao Luo, Di Zeng, Qiang Liu, Daqing Chang, Dongying Kong, Zhi Chen
Modeling users' dynamic preferences from historical behaviors lies at the core of modern recommender systems.
1 code implementation • 6 Jul 2022 • Yuanzhi Duan, Yue Zhou, Peng He, Qiang Liu, Shukai Duan, Xiaofang Hu
In this paper, we propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates the feature shift by converging the information of both features and filters.
1 code implementation • 27 Jun 2022 • Xing Han, Ziyang Tang, Joydeep Ghosh, Qiang Liu
The modified score inherits the spirit of split conformal methods, which is simple and efficient and can scale to high dimensional settings.
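For context, the plain split conformal recipe that the modified score builds on can be sketched as follows; the function name and the simple absolute-residual score are illustrative, not the paper's modified score:

```python
import numpy as np

# Plain split conformal prediction for regression with absolute-residual
# scores: compute a finite-sample-corrected quantile of calibration
# residuals, then widen the point prediction by that amount.
def split_conformal_interval(cal_resid, y_pred, alpha=0.1):
    """cal_resid: |y - y_hat| on a held-out calibration set."""
    cal_resid = np.asarray(cal_resid, float)
    n = len(cal_resid)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    q = np.quantile(cal_resid, q_level)
    return y_pred - q, y_pred + q

lo, hi = split_conformal_interval(np.ones(10), 5.0, alpha=0.1)  # width 2*q
```

The appeal noted above is visible here: calibration requires only a quantile of held-out scores, so the cost is a sort, regardless of dimension.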
no code implementations • 21 Jun 2022 • Yihan Hu, Wenxin Shao, Bo Jiang, Jiajie Chen, Siqi Chai, Zhening Yang, Jingyu Qian, Helong Zhou, Qiang Liu
In this report, we introduce our solution to the Occupancy and Flow Prediction challenge in the Waymo Open Dataset Challenges at CVPR 2022, which ranks 1st on the leaderboard.
1 code implementation • 20 Jun 2022 • Ruqi Zhang, Xingchao Liu, Qiang Liu
We propose discrete Langevin proposal (DLP), a simple and scalable gradient-based proposal for sampling complex high-dimensional discrete distributions.
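For binary variables, a factorized gradient-informed proposal in this spirit can be sketched as below; the precise weighting of the gradient term and the stepsize penalty here are illustrative, not necessarily the paper's exact formula:

```python
import numpy as np

# A factorized, gradient-informed flip proposal for binary variables in the
# spirit of a discrete Langevin proposal: each coordinate is resampled from
# a categorical whose logits combine a first-order estimate of the density
# change with a quadratic stepsize penalty (stepsize alpha).
def gradient_flip_proposal(x, grad, alpha=0.5, rng=None):
    """x in {0,1}^n; grad approximates the log-density gradient at x."""
    rng = np.random.default_rng(0) if rng is None else rng
    diff = np.array([0.0, 1.0])[None, :] - x[:, None]    # candidate moves per site
    logits = 0.5 * grad[:, None] * diff - diff**2 / (2 * alpha)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                    # per-coordinate categorical
    return (rng.random(len(x)) > p[:, 0]).astype(float)

# With a strongly positive gradient, most coordinates are proposed to flip to 1.
x_new = gradient_flip_proposal(np.zeros(1000), 10.0 * np.ones(1000))
```

Because all coordinates are proposed in parallel from one gradient evaluation, such proposals scale to high-dimensional discrete distributions far better than single-site samplers.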
no code implementations • 4 Jun 2022 • Ruiqing Yan, Fan Zhang, Mengyuan Huang, Wu Liu, Dongyu Hu, Jinfeng Li, Qiang Liu, Jinrong Jiang, Qianjin Guo, Linghan Zheng
Detection of object anomalies is crucial in industrial processes, and unsupervised anomaly detection and localization are particularly important due to the difficulty of obtaining large numbers of defective samples and the unpredictable types of anomalies in real life.
no code implementations • 1 Jun 2022 • Qiang Liu, Yingtao Luo, Shu Wu, Zhen Zhang, Xiangnan Yue, Hong Jin, Liang Wang
Accordingly, we propose, for the first time, to model the biased credit scoring data with Multi-Task Learning (MTL).
no code implementations • 31 May 2022 • Qiang Liu, Zhi Liu
Jumps and market microstructure noise are stylized features of high-frequency financial data.
1 code implementation • 24 Mar 2022 • Bo Liu, Qiang Liu, Peter Stone
As intelligent agents become autonomous over longer periods of time, they may eventually become lifelong counterparts to specific people.
no code implementations • 14 Mar 2022 • Renjie Zhou, Qiang Hu, Jian Wan, Jilin Zhang, Qiang Liu, Tianxiang Hu, Jianjun Li
The model first trains on sentence pairs in the text, calculates the similarity between sentence pairs, and fine-tunes the BERT model used for the named entity recognition task according to that similarity, so as to alleviate word ambiguity.
no code implementations • 13 Mar 2022 • Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, Shu Wu
In this paper, we conduct a comprehensive review of the existing literature on deep graph generation, from a variety of emerging methods to its wide application areas.
no code implementations • 27 Feb 2022 • Junzheng Wu, Ruigang Fu, Qiang Liu, Weiping Ni, Kenan Cheng, Biao Li, Yuli Sun
To address this limitation, a dual neighborhood hypergraph neural network is proposed in this article, which combines the multiscale superpixel segmentation and hypergraph convolution to model and exploit the complex relationships.
no code implementations • 16 Feb 2022 • Chengyue Gong, Lemeng Wu, Qiang Liu
Although traditional optimization methods focus on finding a single optimal solution, most objective functions in modern machine learning problems, especially those in deep learning, often have multiple or infinite numbers of optima.
no code implementations • 20 Jan 2022 • Qiang Liu, Yuru Zhang, Haoxin Wang
High definition (HD) maps need to be updated frequently to capture road changes, but updates are constrained by the limited number of specialized collection vehicles.
1 code implementation • 18 Jan 2022 • Weizhi Xu, Junfei Wu, Qiang Liu, Shu Wu, Liang Wang
In this paper, we focus on evidence-based fake news detection, where several pieces of evidence are utilized to probe the veracity of news (i.e., a claim).
no code implementations • 1 Jan 2022 • Ziyang Tang, Yihao Feng, Qiang Liu
The benefit of learning the operator is that we can incorporate any new reward function as input and attain its corresponding value function in a zero-shot manner.
1 code implementation • 30 Dec 2021 • Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, Jie Tang
Heterogeneous graph neural networks (HGNNs) have been blossoming in recent years, but the unique data processing and evaluation setups used by each work obstruct a full understanding of their advancements.
no code implementations • 16 Dec 2021 • Yihan Hu, Zhuangzhuang Ding, Runzhou Ge, Wenxin Shao, Li Huang, Kun Li, Qiang Liu
From this observation, we have devised a single-stage anchor-free network that can fulfill these requirements.
1 code implementation • 10 Dec 2021 • Yuanzhi Duan, Xiaofang Hu, Yue Zhou, Qiang Liu, Shukai Duan
In this paper, by exploring the similarities between feature maps, we propose a novel filter pruning method, Central Filter (CF), which suggests that a filter is approximately equal to a set of other filters after appropriate adjustments.
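As a rough illustration of similarity-based filter pruning (not the paper's exact Central Filter rule), one can rank filters by their mean cosine similarity to the others and treat the most similar ones as redundant:

```python
import numpy as np

def redundancy_scores(filters):
    """Score each filter by its mean cosine similarity to all other
    filters; highly similar filters are redundant pruning candidates
    (an illustrative criterion only)."""
    flat = filters.reshape(len(filters), -1)
    normed = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, 0.0)       # ignore self-similarity
    return sim.mean(axis=1)

# Filters 0 and 1 point in the same direction; filter 2 does not.
filters = np.stack([
    np.ones((3, 3)),
    np.ones((3, 3)) * 2.0,
    np.eye(3),
])
scores = redundancy_scores(filters)
```

Filters with the highest scores would be pruned first, on the premise that their information is recoverable from the remaining filters after adjustment.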
1 code implementation • 2 Dec 2021 • Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu
We approach text-to-image generation by combining the power of the pretrained CLIP representation with an off-the-shelf image generator (a GAN), optimizing in the latent space of the GAN to find images that achieve the maximum CLIP score with the given input text.
Ranked #47 on Text-to-Image Generation on MS COCO
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO).
no code implementations • NeurIPS 2021 • Chengyue Gong, Xingchao Liu, Qiang Liu
In this work, we consider constrained optimization as a more principled approach for trading off two losses, with a special emphasis on lexicographic optimization, a degenerate limit of constrained optimization which optimizes a secondary loss inside the optimal set of the main loss.
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
In this work, we propose a family of constrained sampling algorithms which generalize Langevin Dynamics (LD) and Stein Variational Gradient Descent (SVGD) to incorporate a moment constraint specified by a general nonlinear function.
no code implementations • NeurIPS 2021 • Chengyue Gong, Mao Ye, Qiang Liu
We propose a general method to construct a centroid approximation for the distribution of the maximum points of a random function (a.k.a.
2 code implementations • 3 Nov 2021 • Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Furu Wei
We present a unified Vision-Language pretrained Model (VLMo) that jointly learns a dual encoder and a fusion encoder with a modular Transformer network.
Ranked #2 on Image Retrieval on PhotoChat
no code implementations • 2 Nov 2021 • Qiang Liu, Nakjung Choi, Tao Han
As online learning converges, OnSlicing reduces usage by 12.5% without any violations, as compared to the state-of-the-art online DRL solution.
1 code implementation • 1 Nov 2021 • Jinghao Zhang, Yanqiao Zhu, Qiang Liu, Mengqi Zhang, Shu Wu, Liang Wang
Although having access to multiple modalities might allow us to capture rich information, we argue that the simple coarse-grained fusion by linear combination or concatenation in previous work is insufficient to fully understand content information and item relationships. To this end, we propose a latent structure MIning with ContRastive mOdality fusion method (MICRO for brevity).
4 code implementations • NeurIPS 2021 • Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu
The goal of multi-task learning is to enable more efficient learning than single task learning by sharing model structures for a diverse set of tasks.
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the set of (often infinite number of) models that cannot be strictly improved.
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
In this work, we propose an efficient method to explicitly \emph{optimize} a small set of high quality ``centroid'' points to better approximate the ideal bootstrap distribution.
no code implementations • IEEE Internet of Things Journal 2021 • Meixia Fu, Songlin Sun, Qilian Liang, Xiaoyun Tong, Qiang Liu
Index Terms: Channel-spatial attention block (CSAB), exciting-inhibition network (EINet), Internet of Things (IoT), person re-identification (re-ID), soft batch dropblock.
Ranked #60 on Person Re-Identification on Market-1501
1 code implementation • 14 Oct 2021 • Qilong Yan, Yufeng Zhang, Qiang Liu, Shu Wu, Liang Wang
User profiling has long been an important problem that investigates user interests in many real applications.
no code implementations • 8 Oct 2021 • Shuo Yang, Le Hou, Xiaodan Song, Qiang Liu, Denny Zhou
Our approach exploits the special structure of BERT that contains a stack of repeated modules (i.e., transformer encoders).
no code implementations • ICLR 2022 • Jiaqi Guan, Wesley Wei Qian, Qiang Liu, Wei-Ying Ma, Jianzhu Ma, Jian Peng
Assuming different forms of the underlying potential energy function, we can not only reinterpret and unify many of the existing models but also derive new variants of SE(3)-equivariant neural networks in a principled manner.
1 code implementation • ICLR 2022 • Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Qiang Liu, Vikas Chandra
In this work, we observe that the poor performance is due to a gradient conflict issue: the gradients of different sub-networks conflict with that of the supernet more severely in ViTs than CNNs, which leads to early saturation in training and inferior convergence.
Ranked #7 on Neural Architecture Search on ImageNet
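The gradient conflict the authors describe can be detected with a simple dot-product test; one common remedy (a PCGrad-style projection, shown here purely for illustration and not necessarily what this work does) removes the conflicting component:

```python
import numpy as np

def project_conflicting(g_main, g_other):
    """If g_other conflicts with g_main (negative dot product),
    project out the conflicting component so the remaining update
    is orthogonal to g_main (PCGrad-style remedy; illustrative)."""
    dot = g_main @ g_other
    if dot < 0:
        g_other = g_other - dot / (g_main @ g_main) * g_main
    return g_other

g_supernet = np.array([1.0, 0.0])
g_subnet = np.array([-1.0, 1.0])   # conflicts: dot product is -1
g_fixed = project_conflicting(g_supernet, g_subnet)
```

After projection, the sub-network update no longer pushes against the supernet direction, which is the kind of interference the gradient-conflict diagnosis points at.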
no code implementations • 29 Sep 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the set of (often infinite number of) models that cannot be strictly improved.
2 code implementations • 2 Sep 2021 • Yanqiao Zhu, Yichen Xu, Qiang Liu, Shu Wu
We envision this work to provide useful empirical evidence of effective GCL algorithms and offer several insights for future research.
no code implementations • 31 Aug 2021 • Yanqiao Zhu, Yichen Xu, Hejie Cui, Carl Yang, Qiang Liu, Shu Wu
Recently, heterogeneous Graph Neural Networks (GNNs) have become a de facto model for analyzing HGs, while most of them rely on a relatively large amount of labeled data.
no code implementations • 16 Aug 2021 • Mengqi Zhang, Yanqiao Zhu, Qiang Liu, Shu Wu, Liang Wang
In our work, different views can be obtained based on the various relations among nodes.
no code implementations • 15 Aug 2021 • Qiang Liu, Yanqiao Zhu, Zhaocheng Liu, Yufeng Zhang, Shu Wu
To train high-performing models with minimal annotation cost, active learning is proposed to select and label the most informative samples, yet it remains challenging to measure the informativeness of samples used in DNNs.
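As a baseline notion of informativeness (simpler than what the paper studies), predictive-entropy sampling selects the unlabeled samples the model is least certain about; the toy predictions below are ours:

```python
import numpy as np

def entropy_select(probs, k):
    """Pick the k most informative unlabeled samples by predictive
    entropy, a standard uncertainty measure in active learning."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(-entropy)[:k]

# Toy softmax predictions for 4 samples over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # confident -> low entropy, skip
    [0.34, 0.33, 0.33],   # maximally uncertain -> label first
    [0.70, 0.20, 0.10],
    [0.50, 0.49, 0.01],
])
picked = entropy_select(probs, 2)
```

The selected indices are then sent to annotators, and the model is retrained on the enlarged labeled pool.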
no code implementations • 29 Jul 2021 • Runzhou Ge, Zhuangzhuang Ding, Yihan Hu, Wenxin Shao, Li Huang, Kun Li, Qiang Liu
Extended from our last year's award-winning model AFDet, we have made a handful of modifications to the base model to improve accuracy while greatly reducing latency.
no code implementations • 28 Jun 2021 • Rui Sun, Peng Jia, Yongyang Sun, Zhimin Yang, Qiang Liu, Hongyan Wei
Time domain astronomy has emerged as a vibrant research field in recent years, focusing on celestial objects that exhibit variable magnitudes or positions.
no code implementations • CVPR 2021 • Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst case loss over the augmented data.
2 code implementations • 9 Jun 2021 • Yuntian Chen, Yingtao Luo, Qiang Liu, Hao Xu, Dongxiao Zhang
Partial differential equations (PDEs) are concise and understandable representations of domain knowledge, which are essential for deepening our understanding of physical processes and predicting future responses.
1 code implementation • 2 Jun 2021 • Yingtao Luo, Qiang Liu, Yuntian Chen, WenBo Hu, Tian Tian, Jun Zhu
Especially, the discovery of PDEs with highly nonlinear coefficients from low-quality data remains largely under-addressed.
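Sparse-regression methods in this area (in the spirit of STRidge-type algorithms; the toy data and parameters below are ours, not the paper's) fit the time derivative as a sparse combination of candidate library terms:

```python
import numpy as np

def stridge(Theta, ut, lam=1e-5, tol=0.1, iters=10):
    """Sequential-threshold ridge regression: repeatedly solve a
    ridge problem and zero out small coefficients, yielding a
    sparse PDE from a library of candidate terms."""
    d = Theta.shape[1]
    w = np.linalg.lstsq(Theta.T @ Theta + lam * np.eye(d),
                        Theta.T @ ut, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(w) < tol
        w[small] = 0.0
        big = ~small
        if big.any():  # refit only the surviving terms
            w[big] = np.linalg.lstsq(
                Theta[:, big].T @ Theta[:, big] + lam * np.eye(int(big.sum())),
                Theta[:, big].T @ ut, rcond=None)[0]
    return w

# Synthetic data: u_t = 0.5 * u_xx with a library [u, u_x, u_xx].
rng = np.random.default_rng(0)
u, u_x, u_xx = rng.normal(size=(3, 200))
Theta = np.stack([u, u_x, u_xx], axis=1)
ut = 0.5 * u_xx + 0.001 * rng.normal(size=200)
w = stridge(Theta, ut)
```

With clean data the recovered coefficient vector is sparse, isolating the u_xx term; the difficulty the paper targets is exactly when the data are noisy and the true coefficients are highly nonlinear.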
1 code implementation • 18 May 2021 • Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Animashree Anandkumar
Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players.
1 code implementation • 26 Apr 2021 • Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
To alleviate this problem, in this work, we introduce novel loss functions in vision transformer training to explicitly encourage diversity across patch representations for more discriminative feature extraction.
Ranked #20 on Semantic Segmentation on Cityscapes val
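One simple diversity loss of the kind described, penalizing similar patch representations via mean pairwise cosine similarity (an illustrative stand-in for the paper's actual losses):

```python
import numpy as np

def patch_diversity_loss(patches):
    """Mean pairwise cosine similarity across patch representations:
    minimizing this pushes patches toward distinct directions
    (one simple diversity penalty; illustrative only)."""
    normed = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = len(patches)
    off_diag = sim.sum() - np.trace(sim)   # exclude self-similarity
    return off_diag / (n * (n - 1))

identical = np.ones((4, 8))    # collapsed patches: maximal similarity
orthogonal = np.eye(4, 8)      # fully diverse patches
loss_same = patch_diversity_loss(identical)
loss_orth = patch_diversity_loss(orthogonal)
```

Adding such a penalty to the training objective discourages the patch-representation collapse that hurts discriminative feature extraction.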
1 code implementation • 19 Apr 2021 • Jinghao Zhang, Yanqiao Zhu, Qiang Liu, Shu Wu, Shuhui Wang, Liang Wang
To be specific, in the proposed LATTICE model, we devise a novel modality-aware structure learning layer, which learns item-item structures for each modality and aggregates multiple modalities to obtain latent item graphs.