no code implementations • 18 Feb 2025 • Yutong Wang, Pengliang Ji, Chaoqun Yang, Kaixin Li, Ming Hu, Jiaoyang Li, Guillaume Sartoretti
The LLM-as-a-Judge paradigm shows promise for evaluating generative content but lacks reliability in reasoning-intensive scenarios, such as programming.
no code implementations • 14 Jan 2025 • Lin Liu, Yutong Wang, Jiahao Chen, Jianfang Li, Tangli Xue, Longlong Li, Jianqiang Ren, Liefeng Bo
This report introduces Make-A-Character 2, an advanced system for generating high-quality 3D characters from single portrait photographs, ideal for game development and digital human applications.
1 code implementation • 4 Jan 2025 • Yonglin Tian, Fei Lin, Yiduo Li, Tengchao Zhang, Qiyao Zhang, Xuan Fu, Jun Huang, Xingyuan Dai, Yutong Wang, Chunwei Tian, Bai Li, Yisheng Lv, Levente Kovács, Fei-Yue Wang
Low-altitude mobility, exemplified by unmanned aerial vehicles (UAVs), has introduced transformative advancements across various domains, like transportation, logistics, and agriculture.
1 code implementation • 25 Nov 2024 • Yutong Wang, Jiajie Teng, Jiajiong Cao, Yuming Li, Chenguang Ma, Hongteng Xu, Dixin Luo
In Stage I, we learn the model with a regularizer that mitigates the codebook collapse problem.
no code implementations • 2 Nov 2024 • Hrithik Ravi, Clayton Scott, Daniel Soudry, Yutong Wang
Implicit bias describes the phenomenon where optimization-based training algorithms, without explicit regularization, show a preference for simple estimators even when more complex estimators have equal objective values.
no code implementations • 28 Oct 2024 • He Jiang, Yutong Wang, Rishi Veerapaneni, Tanishq Duhan, Guillaume Sartoretti, Jiaoyang Li
Lifelong Multi-Agent Path Finding (LMAPF) is a variant of MAPF where agents are continually assigned new goals, necessitating frequent re-planning to accommodate these dynamic changes.
1 code implementation • 16 Oct 2024 • Genta Indra Winata, Frederikus Hudi, Patrick Amadeus Irawan, David Anugraha, Rifki Afina Putri, Yutong Wang, Adam Nohejl, Ubaidillah Ariq Prathama, Nedjma Ousidhoum, Afifa Amriani, Anar Rzayev, Anirban Das, Ashmari Pramodya, Aulia Adila, Bryan Wilie, Candy Olivia Mawalim, Ching Lam Cheng, Daud Abolade, Emmanuele Chersoni, Enrico Santus, Fariz Ikhwantri, Garry Kuwanto, Hanyang Zhao, Haryo Akbarianto Wibowo, Holy Lovenia, Jan Christian Blaise Cruz, Jan Wira Gotama Putra, Junho Myung, Lucky Susanto, Maria Angelica Riera Machin, Marina Zhukova, Michael Anugraha, Muhammad Farid Adilazuarda, Natasha Santosa, Peerat Limkonchotiwat, Raj Dabre, Rio Alexander Audino, Samuel Cahyawijaya, Shi-Xiong Zhang, Stephanie Yulia Salim, Yi Zhou, Yinxuan Gui, David Ifeoluwa Adelani, En-Shiun Annie Lee, Shogo Okada, Ayu Purwarianti, Alham Fikri Aji, Taro Watanabe, Derry Tanti Wijaya, Alice Oh, Chong-Wah Ngo
This benchmark includes a visual question answering (VQA) dataset with text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark to date.
1 code implementation • 10 Oct 2024 • Yutong Wang, Jiali Zeng, Xuebo Liu, Derek F. Wong, Fandong Meng, Jie Zhou, Min Zhang
Large language models (LLMs) have achieved reasonable quality improvements in machine translation (MT).
no code implementations • 9 Oct 2024 • Guoxiong Gao, Yutong Wang, Jiedong Jiang, Qi Gao, Zihan Qin, Tianyi Xu, Bin Dong
A significant challenge in training LLMs for these formal languages is the lack of parallel datasets that align natural language with formal language proofs.
no code implementations • 2 Oct 2024 • Lingfeng Zhang, Yuening Wang, Hongjian Gu, Atia Hamidizadeh, Zhanguang Zhang, Yuecheng Liu, Yutong Wang, David Gamaliel Arcos Bravo, Junyi Dong, Shunbo Zhou, Tongtong Cao, Xingyue Quan, Yuzheng Zhuang, Yingxue Zhang, Jianye Hao
To further explore this area, we introduce a new embodied task planning benchmark, ET-Plan-Bench, which specifically targets embodied task planning using LLMs.
no code implementations • 21 Aug 2024 • Yonglin Tian, Songlin Bai, Zhiyao Luo, Yutong Wang, Yisheng Lv, Fei-Yue Wang
Occupancy prediction has attracted intensive attention and demonstrated clear advantages in the development of autonomous driving systems.
1 code implementation • 15 Aug 2024 • Yutong Wang, Chaoyang Jiang, Xieyuanli Chen
It determines the pose of a camera sensor by robustly associating the object detections in the current frame with 3D objects in a lightweight object-level map.
1 code implementation • 12 Jun 2024 • Yutong Wang, Jiali Zeng, Xuebo Liu, Fandong Meng, Jie zhou, Min Zhang
The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods.
no code implementations • 16 May 2024 • Jing Yang, Xiao Wang, Yutong Wang, Jiawei Wang, Fei-Yue Wang
To achieve more accurate TKG reasoning, we propose an attention masking-based contrastive event network (AMCEN) with local-global temporal patterns for the two-stage prediction of future events.
1 code implementation • 19 Mar 2024 • Jiyi Chen, Pengyu Li, Yutong Wang, Pei-Cheng Ku, Qing Qu
This work proposes a deep learning (DL)-based framework, namely Sim2Real, for spectral signal reconstruction in reconstructive spectroscopy, focusing on efficient data sampling and fast inference time.
1 code implementation • 12 Mar 2024 • Yutong Wang, Rishi Sonthalia, Wei Hu
Under a random matrix theoretic assumption on the data distribution and an eigendecay assumption on the data covariance matrix $\boldsymbol{\Sigma}$, we demonstrate that any near-interpolator exhibits rapid norm growth: for $\tau$ fixed, $\boldsymbol{\beta}$ has squared $\ell_2$-norm $\mathbb{E}[\|{\boldsymbol{\beta}}\|_{2}^{2}] = \Omega(n^{\alpha})$ where $n$ is the number of samples and $\alpha > 1$ is the exponent of the eigendecay, i.e., $\lambda_i(\boldsymbol{\Sigma}) \sim i^{-\alpha}$.
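The norm-growth claim above can be illustrated numerically. The following sketch is a toy experiment under assumed parameters ($\alpha = 1.5$, $d = 2000$, pure-noise labels), not the paper's own setup: it draws data with the stated eigendecay covariance, fits the minimum-norm interpolator, and checks that its squared $\ell_2$-norm grows with $n$.

```python
import numpy as np

def min_norm_interpolator_norm(n, d=2000, alpha=1.5, seed=0):
    """Squared l2-norm of the minimum-norm interpolator of noise labels.

    Rows of X are drawn from N(0, Sigma) with lambda_i(Sigma) = i^{-alpha},
    matching the eigendecay assumption; labels are pure noise.
    """
    rng = np.random.default_rng(seed)
    lam = np.arange(1, d + 1, dtype=float) ** (-alpha)  # eigendecay spectrum
    X = rng.standard_normal((n, d)) * np.sqrt(lam)      # rows ~ N(0, Sigma)
    y = rng.standard_normal(n)                          # noise labels
    # For d > n, lstsq returns the minimum-norm interpolating solution.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta @ beta)

small = min_norm_interpolator_norm(n=50)
large = min_norm_interpolator_norm(n=400)
print(small, large)  # the squared norm blows up as n grows
```

With the predicted $\Omega(n^\alpha)$ scaling, moving from $n = 50$ to $n = 400$ should inflate the squared norm by roughly $8^{1.5} \approx 23\times$ in expectation.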
1 code implementation • 21 Feb 2024 • Yutong Wang, Chaoyang Jiang, Xieyuanli Chen
Meanwhile, local bundle adjustment is performed utilizing the objects and points-based covisibility graphs in our visual object mapping process.
no code implementations • 24 Dec 2023 • Jianqiang Ren, Chao He, Lin Liu, Jiahao Chen, Yutong Wang, Yafei Song, Jianfang Li, Tangli Xue, Siqi Hu, Tao Chen, Kunkun Zheng, Jianjing Xiang, Liefeng Bo
There is a growing demand for customized and expressive 3D characters with the emergence of AI agents and the Metaverse, but creating 3D characters using traditional computer graphics tools is a complex and time-consuming task.
no code implementations • 29 Nov 2023 • Yutong Wang, Clayton Scott
The notion of margin loss has been central to the development and analysis of algorithms for binary classification.
1 code implementation • 24 Oct 2023 • Pengyu Li, Xiao Li, Yutong Wang, Qing Qu
We study deep neural networks for the multi-label classification (MLab) task through the lens of neural collapse (NC).
no code implementations • 10 Oct 2023 • Ren-Jian Wang, Ke Xue, Yutong Wang, Peng Yang, Haobo Fu, Qiang Fu, Chao Qian
DivHF learns a behavior descriptor consistent with human preference by querying human feedback.
no code implementations • 4 Oct 2023 • Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, Wei Hu
Second, they can undergo a period of classical, harmful overfitting -- achieving a perfect fit to training data with near-random performance on test data -- before transitioning ("grokking") to near-optimal generalization later in training.
1 code implementation • 3 Aug 2023 • Minhao Zou, Zhongxue Gan, Yutong Wang, Junheng Zhang, Dongyan Sui, Chun Guan, Siyang Leng
In this work, a universal feature encoder for both graph and hypergraph representation learning is designed, called UniG-Encoder.
Ranked #7 on Node Classification on Cornell
no code implementations • 14 Feb 2023 • Yutong Wang, Clayton D. Scott
Gamma-Phi losses constitute a family of multiclass classification loss functions that generalize the logistic and other common losses, and have found application in the boosting literature.
1 code implementation • 9 Aug 2022 • Ke Xue, Yutong Wang, Cong Guan, Lei Yuan, Haobo Fu, Qiang Fu, Chao Qian, Yang Yu
Generating agents that can achieve zero-shot coordination (ZSC) with unseen partners is a new challenge in cooperative multi-agent reinforcement learning (MARL).
1 code implementation • 1 Jun 2022 • Yutong Wang, Renze Lou, Kai Zhang, MaoYan Chen, Yujiu Yang
To address these problems, in this work, we propose a novel learning framework named MORE (Metric learning-based Open Relation Extraction).
no code implementations • 19 May 2022 • Yutong Wang, Clayton D. Scott
Recent research in the theory of overparametrized learning has sought to establish generalization guarantees in the interpolating regime.
no code implementations • 7 Apr 2022 • Yutong Wang, Mehul Damani, Pamela Wang, Yuhong Cao, Guillaume Sartoretti
This review aims to provide an analysis of the state-of-the-art in distributed MARL for multi-robot cooperation.
1 code implementation • 4 Mar 2022 • Jianxin Zhang, Yutong Wang, Clayton Scott
Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels.
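The LLP setting described above can be sketched in a few lines. The loss choice below (squared error between each bag's mean predicted probability and its observed proportion) is a hypothetical illustration of the weak-supervision signal, not the paper's method; all names and parameters are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bags: instance labels exist, but only per-bag proportions are kept.
w_true = np.array([2.0, -1.0])
bags, proportions = [], []
for _ in range(20):
    X = rng.standard_normal((30, 2))
    y = (X @ w_true + 0.1 * rng.standard_normal(30) > 0).astype(float)
    bags.append(X)
    proportions.append(y.mean())  # the only supervision the learner sees

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bag_loss(w):
    # Squared error between each bag's mean predicted positive probability
    # and its observed label proportion.
    return sum((sigmoid(X @ w).mean() - p) ** 2
               for X, p in zip(bags, proportions)) / len(bags)

def bag_grad(w):
    g = np.zeros_like(w)
    for X, p in zip(bags, proportions):
        s = sigmoid(X @ w)
        g += 2 * (s.mean() - p) * ((s * (1 - s))[:, None] * X).mean(axis=0)
    return g / len(bags)

# Plain gradient descent on the proportion-matching loss.
w = np.zeros(2)
for _ in range(500):
    w -= 5.0 * bag_grad(w)

print(bag_loss(np.zeros(2)), bag_loss(w))  # loss drops after training
```

Despite never seeing an instance-level label, the model learns a decision direction from proportion supervision alone.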
1 code implementation • 28 Jan 2022 • Yutong Wang, Guillaume Sartoretti
Our comparison results show that FCMNet outperforms state-of-the-art communication-based reinforcement learning methods in all StarCraft II micromanagement tasks, and value decomposition methods in certain tasks.
1 code implementation • ICLR 2022 • Yutong Wang, Clayton D. Scott
Indeed, existing applications of VC theory to large networks obtain upper bounds on VC dimension that are proportional to the number of weights, and for a large class of networks, these upper bounds are known to be tight.
no code implementations • ICLR 2022 • Yutong Wang, Ke Xue, Chao Qian
However, due to the inefficient selection mechanisms, these methods cannot fully guarantee both high quality and diversity.
no code implementations • 17 May 2021 • Andrey Ignatov, Andres Romero, Heewon Kim, Radu Timofte, Chiu Man Ho, Zibo Meng, Kyoung Mu Lee, Yuxiang Chen, Yutong Wang, Zeyu Long, Chenhao Wang, Yifei Chen, Boshen Xu, Shuhang Gu, Lixin Duan, Wen Li, Wang Bofei, Zhang Diankai, Zheng Chengjian, Liu Shaoli, Gao Si, Zhang Xiaofeng, Lu Kaidi, Xu Tianyu, Zheng Hui, Xinbo Gao, Xiumei Wang, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan
Video super-resolution has recently become one of the most important mobile-related problems due to the rise of video communication and streaming services.
1 code implementation • 10 Feb 2021 • Yutong Wang, Clayton D. Scott
Recent empirical evidence suggests that the Weston-Watkins support vector machine is among the best performing multiclass extensions of the binary SVM.
no code implementations • 23 Jan 2021 • Ziqi Tang, Yutong Wang, Jiebo Luo
Next, we perform exploratory data analysis on the collected data.
no code implementations • NeurIPS 2020 • Yutong Wang, Clayton D. Scott
A recent empirical comparison of nine such formulations [Doğan et al. 2016] recommends the variant proposed by Weston and Watkins (WW), despite the fact that the WW-hinge loss is not calibrated with respect to the 0-1 loss.
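For reference, the WW-hinge loss discussed here has the standard form (notation ours): $\ell_{\mathrm{WW}}(f(x), y) = \sum_{j \neq y} \max(0,\, 1 + f_j(x) - f_y(x))$, where $f_j(x)$ is the score assigned to class $j$. The calibration question concerns whether minimizing this surrogate also minimizes the 0-1 loss $\mathbf{1}[\arg\max_j f_j(x) \neq y]$.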
no code implementations • 1 Jul 2019 • Yutong Wang, Jiyuan Zheng, Qijiong Liu, Zhou Zhao, Jun Xiao, Yueting Zhuang
More specifically, we devise a discriminator, Relation Guider, to capture the relations between the whole passage and the associated answer; the Multi-Interaction mechanism is then deployed to transfer this knowledge dynamically to our question generation system.