1 code implementation • 7 Feb 2025 • Zhouliang Yu, Yuhuan Yuan, Tim Z. Xiao, Fuxiang Frank Xia, Jie Fu, Ge Zhang, Ge Lin, Weiyang Liu
Solving complex planning problems requires Large Language Models (LLMs) to explicitly model the state transition to avoid rule violations, comply with constraints, and ensure optimality, a task hindered by the inherent ambiguity of natural language.
1 code implementation • 16 Dec 2024 • Guoxuan Chen, Han Shi, Jiawei Li, Yihang Gao, Xiaozhe Ren, Yimeng Chen, Xin Jiang, Zhenguo Li, Weiyang Liu, Chao Huang
This observation suggests that information of the segments between these separator tokens can be effectively condensed into the separator tokens themselves without significant information loss.
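A minimal sketch of this idea (an illustrative assumption, not the paper's released code): at inference, each query token attends only to a few initial tokens, a recent local window, and the separator tokens that condense earlier segments.

```python
# Sketch: build a causal attention mask that keeps initial tokens, a local
# window, and separator tokens, so segment information flows through separators.
import torch

def separator_attention_mask(token_ids, separator_ids, n_init=4, window=64):
    """Boolean mask of shape (seq, seq); True = attention allowed (causal)."""
    seq = token_ids.shape[0]
    is_sep = torch.isin(token_ids, torch.tensor(separator_ids))
    q = torch.arange(seq).unsqueeze(1)                # query positions
    k = torch.arange(seq).unsqueeze(0)                # key positions
    causal = k <= q
    keep_init = k < n_init                            # always keep the first tokens
    keep_local = (q - k) < window                     # recent local window
    keep_sep = is_sep.unsqueeze(0).expand(seq, seq)   # keys that are separators
    return causal & (keep_init | keep_local | keep_sep)

# Example: token ids 11 and 13 (placeholders) act as separators.
ids = torch.randint(0, 1000, (128,))
mask = separator_attention_mask(ids, separator_ids=[11, 13])
```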
no code implementations • 10 Dec 2024 • Zhen Liu, Tim Z. Xiao, Weiyang Liu, Yoshua Bengio, Dinghuai Zhang
While large diffusion models are commonly trained by collecting datasets for target downstream tasks, it is often desirable to align and finetune pretrained diffusion models with reward functions that are either designed by experts or learned from small-scale datasets.
no code implementations • 2 Dec 2024 • Muchao Ye, Weiyang Liu, Pan He
The rapid advancement of vision-language models (VLMs) has established a new paradigm in video anomaly detection (VAD): leveraging VLMs to simultaneously detect anomalies and provide comprehensible explanations for the decisions.
no code implementations • 16 Sep 2024 • Shengchao Liu, Divin Yan, Weitao Du, Weiyang Liu, Zhuoxinran Li, Hongyu Guo, Christian Borgs, Jennifer Chayes, Anima Anandkumar
Artificial intelligence models have shown great potential in structure-based drug design, generating ligands with high binding affinities.
no code implementations • 15 Aug 2024 • Zeju Qiu, Weiyang Liu, Haiwen Feng, Zhen Liu, Tim Z. Xiao, Katherine M. Collins, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf
While LLMs exhibit impressive skills in general program synthesis and analysis, symbolic graphics programs offer a new layer of evaluation: they allow us to test an LLM's ability to answer semantic questions, at different levels of granularity, about the corresponding images or 3D geometries without a vision encoder.
no code implementations • 6 Jun 2024 • Ilia Sucholutsky, Katherine M. Collins, Maya Malaviya, Nori Jacoby, Weiyang Liu, Theodore R. Sumers, Michalis Korakakis, Umang Bhatt, Mark Ho, Joshua B. Tenenbaum, Brad Love, Zachary A. Pardos, Adrian Weller, Thomas L. Griffiths
A good teacher should not only be knowledgeable but should also be able to communicate in a way that the student understands -- to share the student's representation of the world.
no code implementations • 6 Jun 2024 • Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu
Motivated by the progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML).
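A rough sketch of how such a framework could look, assuming a hypothetical llm(prompt) -> str helper that wraps any chat model; here the "model parameters" are natural-language instructions and an optimizer LLM revises them from observed errors (an illustration, not the paper's code).

```python
# Sketch: the learner's parameters are text; prediction and parameter updates
# are both performed by calling an LLM on verbalized prompts.
def predict(theta_text: str, x: str, llm) -> str:
    return llm(f"Instructions (model parameters):\n{theta_text}\n\nInput: {x}\nOutput:")

def verbal_update(theta_text: str, batch, llm) -> str:
    observations = "\n".join(
        f"input={x} predicted={predict(theta_text, x, llm)} target={y}"
        for x, y in batch
    )
    return llm(
        "You are the optimizer. Revise the instructions below so that the "
        f"predictions match the targets.\n\nInstructions:\n{theta_text}\n\n"
        f"Observations:\n{observations}\n\nRevised instructions:"
    )
```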
no code implementations • 7 May 2024 • Jing Lin, Yao Feng, Weiyang Liu, Michael J. Black
The novel features of ChatHuman include leveraging academic publications to guide the application of 3D human-related tools, employing a retrieval-augmented generation model to generate in-context-learning examples for handling new tools, and discriminating and integrating tool results to enhance 3D human understanding.
no code implementations • 23 Apr 2024 • Peter Kulits, Haiwen Feng, Weiyang Liu, Victoria Abrevaya, Michael J. Black
Inverse graphics -- the task of inverting an image into physical variables that, when rendered, enable reproduction of the observed scene -- is a fundamental challenge in computer vision and graphics.
no code implementations • 7 Apr 2024 • Andi Zhang, Tim Z. Xiao, Weiyang Liu, Robert Bamler, Damon Wischik
We revisit the likelihood ratio between a pretrained large language model (LLM) and its finetuned variant as a criterion for out-of-distribution (OOD) detection.
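A sketch of this criterion under common assumptions (HuggingFace causal LMs, placeholder model names): score a text by the gap in average token log-likelihood between the pretrained model and its finetuned variant.

```python
# Sketch: likelihood-ratio OOD score between a base LLM and its finetuned variant.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_log_likelihood(model, tok, text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[:, :-1]                      # predict next tokens
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_logp.mean().item()

tok = AutoTokenizer.from_pretrained("base-model")               # placeholder names
base = AutoModelForCausalLM.from_pretrained("base-model")
tuned = AutoModelForCausalLM.from_pretrained("finetuned-model")

def ood_score(text):
    # Texts the finetuned model explains much better than the base model look
    # in-distribution for the finetuning data; a higher score suggests OOD.
    return avg_log_likelihood(base, tok, text) - avg_log_likelihood(tuned, tok, text)
```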
1 code implementation • 14 Mar 2024 • Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan
This paper answers this question in the context of tackling hard reasoning tasks (e.g., level 4-5 MATH problems) via learning from human annotations on easier tasks (e.g., level 1-3 MATH problems), which we term easy-to-hard generalization.
1 code implementation • 31 Dec 2023 • Tim Z. Xiao, Weiyang Liu, Robert Bamler
Bayesian neural networks (BNNs) are a principled approach to modeling predictive uncertainties in deep learning, which are important in safety-critical applications.
no code implementations • CVPR 2024 • Gege Gao, Weiyang Liu, Anpei Chen, Andreas Geiger, Bernhard Schölkopf
As pretrained text-to-image diffusion models become increasingly powerful, recent efforts have been made to distill knowledge from these pretrained models to optimize a text-guided 3D model.
1 code implementation • NeurIPS 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
In MINT, the teacher aims to instruct multiple learners, with each learner focusing on learning a scalar-valued target model.
1 code implementation • 10 Nov 2023 • Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf
We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT).
no code implementations • 23 Oct 2023 • Zhen Liu, Yao Feng, Yuliang Xiu, Weiyang Liu, Liam Paull, Michael J. Black, Bernhard Schölkopf
Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling.
2 code implementations • ICCV 2023 • Yandong Wen, Weiyang Liu, Yao Feng, Bhiksha Raj, Rita Singh, Adrian Weller, Michael J. Black, Bernhard Schölkopf
In this paper, we focus on a general yet important learning problem, pairwise similarity learning (PSL).
1 code implementation • 21 Sep 2023 • Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%.
Ranked #58 on Arithmetic Reasoning on GSM8K (using extra training data)
no code implementations • 12 Sep 2023 • Yao Feng, Weiyang Liu, Timo Bolkart, Jinlong Yang, Marc Pollefeys, Michael J. Black
Towards this end, both explicit and implicit 3D representations are heavily studied for a holistic modeling and capture of the whole human (e.g., body, clothing, face and hair), but neither representation is an optimal choice in terms of representation efficacy, since different parts of the human avatar have different modeling desiderata.
1 code implementation • NeurIPS 2023 • Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf
To tackle this challenge, we introduce a principled finetuning method -- Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks.
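A minimal sketch of the core idea, assuming a Cayley-parameterized full orthogonal matrix applied to a linear layer's output neurons; the released OFT/BOFT implementations use block-diagonal or butterfly-structured factors for efficiency.

```python
# Sketch: keep the pretrained weight frozen and learn an orthogonal matrix that
# rotates its neurons. With skew = 0, the layer reproduces the pretrained one.
import torch
import torch.nn as nn

class OFTLinear(nn.Module):
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        out_dim = pretrained.out_features
        self.register_buffer("weight", pretrained.weight.detach().clone())   # frozen
        bias = pretrained.bias.detach().clone() if pretrained.bias is not None else None
        self.register_buffer("bias", bias)
        self.skew = nn.Parameter(torch.zeros(out_dim, out_dim))              # trainable

    def orthogonal(self):
        s = self.skew - self.skew.t()                         # skew-symmetric
        eye = torch.eye(s.shape[0], device=s.device)
        return torch.linalg.solve(eye + s, eye - s)           # Cayley transform

    def forward(self, x):
        w = self.orthogonal() @ self.weight                   # rotate frozen neurons
        return nn.functional.linear(x, w, self.bias)
```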
1 code implementation • 5 Jun 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
In this paper, we consider the problem of Iterative Machine Teaching (IMT), where the teacher provides examples to the learner iteratively such that the learner can achieve fast convergence to a target model.
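For illustration, a minimal sketch of pool-based iterative teaching for a linear least-squares learner, where an omniscient teacher greedily picks the example whose gradient step moves the learner closest to the target model (a simplification of the settings studied in the paper).

```python
# Sketch: greedy omniscient teacher for a linear regression learner.
import numpy as np

def teach(pool_X, pool_y, w_star, lr=0.1, rounds=50):
    w = np.zeros_like(w_star)
    for _ in range(rounds):
        # Candidate learner states after one SGD step on each pool example.
        grads = (pool_X @ w - pool_y)[:, None] * pool_X        # squared-loss gradients
        candidates = w[None, :] - lr * grads
        best = np.argmin(np.linalg.norm(candidates - w_star[None, :], axis=1))
        w = candidates[best]                                   # teach the best example
    return w
```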
1 code implementation • 14 Mar 2023 • Zhen Liu, Yao Feng, Michael J. Black, Derek Nowrouzezahrai, Liam Paull, Weiyang Liu
We consider the task of generating realistic 3D shapes, which is useful for a variety of applications such as automatic scene generation and physical simulation.
4 code implementations • 11 Mar 2023 • Weiyang Liu, Longhui Yu, Adrian Weller, Bernhard Schölkopf
We then use hyperspherical uniformity (which characterizes the degree of uniformity on the unit hypersphere) as a unified framework to quantify these two objectives.
no code implementations • ICCV 2023 • Yangyi Huang, Hongwei Yi, Weiyang Liu, Haofan Wang, Boxi Wu, Wenxiao Wang, Binbin Lin, Debing Zhang, Deng Cai
Most of these methods fail to achieve realistic reconstruction when only a single image is available.
1 code implementation • 2 Nov 2022 • Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley Love, Adrian Weller
We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration.
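For reference, a standard mixup sketch (not the study's code): synthetic examples are convex combinations of input pairs and their labels, with the mixing weight drawn from a Beta distribution.

```python
# Sketch: mixup augmentation on a batch of inputs and one-hot labels.
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # mixing coefficient
    perm = rng.permutation(len(x))               # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```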
1 code implementation • 31 Oct 2022 • Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf
We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a finite pool of samples), which greatly limits the teacher's capability.
1 code implementation • 11 Oct 2022 • Longhui Yu, Tianyang Hu, Lanqing Hong, Zhen Liu, Adrian Weller, Weiyang Liu
It has been observed that neural networks perform poorly when the data or tasks are presented sequentially.
no code implementations • 20 Jul 2022 • Weiyang Liu, Zhen Liu, Liam Paull, Adrian Weller, Bernhard Schölkopf
This paper considers the problem of unsupervised 3D object reconstruction from in-the-wild single-view images.
no code implementations • 30 Jun 2022 • Xiaofeng Cao, Weiyang Liu, Ivor W. Tsang
Finally, we demonstrate the empirical performance of MHEAL in a wide range of applications on data-efficient learning, including deep clustering, distribution matching, version space sampling and deep active learning.
no code implementations • NeurIPS 2021 • Zhaozhuo Xu, Beidi Chen, Chaojian Li, Weiyang Liu, Le Song, Yingyan Lin, Anshumali Shrivastava
However, as one of the most influential and practical MT paradigms, iterative machine teaching (IMT) is impractical on IoT devices due to its inefficient and unscalable algorithms.
1 code implementation • CVPR 2022 • Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, Eric P. Xing
To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
1 code implementation • 18 Nov 2021 • Xiang Bai, Hanchen Wang, Liya Ma, Yongchao Xu, Jiefeng Gan, Ziwei Fan, Fan Yang, Ke Ma, Jiehua Yang, Song Bai, Chang Shu, Xinyu Zou, Renhao Huang, Changzheng Zhang, Xiaowu Liu, Dandan Tu, Chuou Xu, Wenqing Zhang, Xi Wang, Anguo Chen, Yu Zeng, Dehua Yang, Ming-Wei Wang, Nagaraj Holalkere, Neil J. Halin, Ihab R. Kamel, Jia Wu, Xuehua Peng, Xiang Wang, Jianbo Shao, Pattanasak Mongkolwat, Jianjun Zhang, Weiyang Liu, Michael Roberts, Zhongzhao Teng, Lucian Beer, Lorena Escudero Sanchez, Evis Sala, Daniel Rubin, Adrian Weller, Joan Lasenby, Chuangsheng Zheng, Jianming Wang, Zhen Li, Carola-Bibiane Schönlieb, Tian Xia
Artificial intelligence (AI) provides a promising means of streamlining COVID-19 diagnoses.
no code implementations • 27 Oct 2021 • Xinyuan Cao, Weiyang Liu, Santosh S. Vempala
We prove that for any desired accuracy on all tasks, the dimension of the representation remains close to that of the underlying representation.
no code implementations • NeurIPS 2021 • Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller
In this paper, we consider the problem of iterative machine teaching, where a teacher provides examples sequentially based on the current iterative learner.
no code implementations • ICCV 2021 • Yandong Wen, Weiyang Liu, Bhiksha Raj, Rita Singh
We present a conditional estimation (CEST) framework to learn 3D facial parameters from 2D single-view images by self-supervised training from videos.
Ranked #16 on 3D Face Reconstruction on REALY
1 code implementation • ICLR 2022 • Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, Jian Tang
However, the lack of 3D information in real-world scenarios has significantly impeded the learning of geometric graph representation.
1 code implementation • 12 Sep 2021 • Weiyang Liu, Yandong Wen, Bhiksha Raj, Rita Singh, Adrian Weller
As one of the earliest works in hyperspherical face recognition, SphereFace explicitly proposed to learn face embeddings with large inter-class angular margin.
no code implementations • ICLR 2022 • Yandong Wen, Weiyang Liu, Adrian Weller, Bhiksha Raj, Rita Singh
In this paper, we start by identifying the discrepancy between training and evaluation in the existing multi-class classification framework and then discuss the potential limitations caused by the "competitive" nature of softmax normalization.
1 code implementation • 2 Mar 2021 • Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller
Due to their over-parameterized nature, neural networks are a powerful tool for nonlinear function approximation.
no code implementations • 23 Oct 2020 • Hanlin Zhang, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing
Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs.
1 code implementation • CVPR 2021 • Weiyang Liu, Rongmei Lin, Zhen Liu, James M. Rehg, Liam Paull, Li Xiong, Le Song, Adrian Weller
The inductive bias of a neural network is largely determined by the architecture and the training algorithm.
no code implementations • ICML 2020 • Beidi Chen, Weiyang Liu, Zhiding Yu, Jan Kautz, Anshumali Shrivastava, Animesh Garg, Anima Anandkumar
We also find that AVH has a statistically significant correlation with human visual hardness.
1 code implementation • NeurIPS 2019 • Weiyang Liu, Zhen Liu, James M. Rehg, Le Song
By generalizing inner product with a bilinear matrix, we propose the neural similarity which serves as a learnable parametric similarity measure for CNNs.
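A minimal sketch of the idea on a fully connected layer, assuming a single learnable matrix M shared across output units (a simplification of the paper's parameterization): the inner product w^T x is generalized to the bilinear form x^T M w.

```python
# Sketch: a linear layer whose similarity measure is a learnable bilinear form.
import torch
import torch.nn as nn

class BilinearSimilarityLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.M = nn.Parameter(torch.eye(in_dim))   # identity recovers the inner product

    def forward(self, x):                          # x: (batch, in_dim)
        return (x @ self.M) @ self.weight.t()      # x^T M w for every output unit
```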
1 code implementation • CVPR 2020 • Rongmei Lin, Weiyang Liu, Zhen Liu, Chen Feng, Zhiding Yu, James M. Rehg, Li Xiong, Le Song
Inspired by the Thomson problem in physics where the distribution of multiple propelling electrons on a unit sphere can be modeled via minimizing some potential energy, hyperspherical energy minimization has demonstrated its potential in regularizing neural networks and improving their generalization power.
1 code implementation • NeurIPS 2019 • Albert Shaw, Wei Wei, Weiyang Liu, Le Song, Bo Dai
Neural Architecture Search (NAS) has been quite successful in constructing state-of-the-art models on a variety of tasks.
1 code implementation • NeurIPS 2018 • Bo Dai, Hanjun Dai, Niao He, Weiyang Liu, Zhen Liu, Jianshu Chen, Lin Xiao, Le Song
This flexible function class couples the variational distribution with the original parameters in the graphical models, allowing end-to-end learning of the graphical models by back-propagation through the variational distribution.
3 code implementations • ECCV 2018 • Zhiding Yu, Weiyang Liu, Yang Zou, Chen Feng, Srikumar Ramalingam, B. V. K. Vijaya Kumar, Jan Kautz
Edge detection is among the most fundamental vision problems for its role in perceptual grouping and its wide applications.
no code implementations • ICLR 2019 • Yandong Wen, Mahmoud Al Ismail, Weiyang Liu, Bhiksha Raj, Rita Singh
We propose a novel framework, called Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces.
4 code implementations • NeurIPS 2018 • Weiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, Le Song
In light of this intuition, we reduce the redundancy regularization problem to generic energy minimization, and propose a minimum hyperspherical energy (MHE) objective as generic regularization for neural networks.
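A minimal sketch of an MHE-style regularizer, assuming the s=1 Riesz potential on normalized neuron weights; the paper studies several potential functions and variants.

```python
# Sketch: penalize the hyperspherical energy of a layer's neurons so that their
# normalized weight vectors spread out uniformly on the unit hypersphere.
import torch

def mhe_regularizer(weight, eps=1e-6):
    """weight: (num_neurons, dim); returns a scalar energy to add to the loss."""
    w = torch.nn.functional.normalize(weight, dim=1)     # project onto the hypersphere
    dist = torch.cdist(w, w)                             # pairwise Euclidean distances
    n = w.shape[0]
    off_diag = ~torch.eye(n, dtype=torch.bool, device=w.device)
    return (1.0 / (dist[off_diag] + eps)).sum() / (n * (n - 1))
```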
1 code implementation • CVPR 2018 • Weiyang Liu, Zhen Liu, Zhiding Yu, Bo Dai, Rongmei Lin, Yisen Wang, James M. Rehg, Le Song
Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations.
1 code implementation • CVPR 2018 • Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia
We refer to this more complex scenario as the open-set noisy label problem and show that making accurate predictions under it is nontrivial.
10 code implementations • 17 Jan 2018 • Feng Wang, Weiyang Liu, Haijun Liu, Jian Cheng
In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works.
Ranked #2 on Face Identification on Trillion Pairs Dataset
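A minimal sketch of an additive-margin softmax loss under common conventions (L2-normalized features and class weights, margin m subtracted from the target cosine, scale s):

```python
# Sketch: additive-margin softmax cross-entropy on normalized features.
import torch
import torch.nn.functional as F

def additive_margin_loss(features, weight, labels, s=30.0, m=0.35):
    """features: (B, d), weight: (C, d), labels: (B,)"""
    cos = F.normalize(features, dim=1) @ F.normalize(weight, dim=1).t()   # (B, C)
    onehot = F.one_hot(labels, num_classes=weight.shape[0]).to(cos.dtype)
    logits = s * (cos - m * onehot)          # margin applied only to the target class
    return F.cross_entropy(logits, labels)
```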
no code implementations • NeurIPS 2017 • Weiyang Liu, Yan-Ming Zhang, Xingguo Li, Zhiding Yu, Bo Dai, Tuo Zhao, Le Song
In light of such challenges, we propose hyperspherical convolution (SphereConv), a novel learning framework that gives angular representations on hyperspheres.
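A minimal sketch of the idea on a fully connected layer (the cosine variant, g(theta) = cos(theta)); the paper applies the same angular operator patch-wise in convolution and studies other choices of g.

```python
# Sketch: replace the inner product <w, x> with the cosine of the angle
# between the weight vector and the input, i.e., an angular representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphereLinear(nn.Module):
    def __init__(self, in_dim, out_dim, eps=1e-8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.eps = eps

    def forward(self, x):                                    # x: (batch, in_dim)
        w = F.normalize(self.weight, dim=1)
        x_hat = x / (x.norm(dim=1, keepdim=True) + self.eps)
        return x_hat @ w.t()                                 # cos(theta) in [-1, 1]
```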
no code implementations • ICML 2018 • Weiyang Liu, Bo Dai, Xingguo Li, Zhen Liu, James M. Rehg, Le Song
We propose an active teacher model that can actively query the learner (i.e., make the learner take exams) for estimating the learner's status and provably guide the learner to achieve faster convergence.
2 code implementations • ICML 2017 • Weiyang Liu, Bo Dai, Ahmad Humayun, Charlene Tay, Chen Yu, Linda B. Smith, James M. Rehg, Le Song
Different from traditional machine teaching which views the learners as batch algorithms, we study a new paradigm where the learner uses an iterative algorithm and a teacher can feed examples sequentially and intelligently based on the current performance of the learner.
21 code implementations • CVPR 2017 • Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, Le Song
This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space.
Ranked #1 on Face Verification on CK+
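A minimal sketch of the open-set evaluation this targets: test identities are unseen during training, so verification reduces to thresholding the cosine similarity (equivalently, the angle) between two learned embeddings.

```python
# Sketch: open-set face verification by cosine-similarity thresholding.
import numpy as np

def verify(embedding_a, embedding_b, threshold=0.5):
    a = embedding_a / np.linalg.norm(embedding_a)
    b = embedding_b / np.linalg.norm(embedding_b)
    return float(a @ b) > threshold          # same identity if the angle is small
```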
2 code implementations • 7 Dec 2016 • Weiyang Liu, Yandong Wen, Zhiding Yu, Meng Yang
Cross-entropy loss together with softmax is arguably one of the most commonly used supervision components in convolutional neural networks (CNNs).
no code implementations • 15 Nov 2015 • Weiyang Liu, Rongmei Lin, Meng Yang
We propose a robust elastic net (REN) model for high-dimensional sparse regression and give its performance guarantees (both the statistical error bound and the optimization bound).
no code implementations • 14 Nov 2015 • Weiyang Liu, Zhiding Yu, Yandong Wen, Rongmei Lin, Meng Yang
Sparse coding with dictionary learning (DL) has shown excellent classification performance.
no code implementations • 25 Jul 2015 • Yandong Wen, Weiyang Liu, Meng Yang, Zhifeng Li
Practical face recognition has been studied in the past decades, but still remains an open challenge.
no code implementations • 2 Feb 2015 • Yandong Wen, Weiyang Liu, Meng Yang, Yuli Fu, Youjun Xiang, Rui Hu
We propose the structured occlusion coding (SOC) to address occlusion problems.
no code implementations • 17 Oct 2014 • Weiyang Liu, Zhiding Yu, Lijia Lu, Yandong Wen, Hui Li, Yuexian Zou
The LCD similarity measure can be kernelized under KCRC, which theoretically links CRC and LCD under the kernel method.