no code implementations • COLING 2022 • BoWen Zhang, Xu Huang, Zhichao Huang, Hu Huang, Baoquan Zhang, Xianghua Fu, Liwen Jing
SILTN is interpretable because it is a neurosymbolic formalism and a computational model that supports learning and reasoning about data with a differentiable first-order logic (FOL) language.
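As a rough illustration of what a differentiable FOL means in practice (a generic sketch, not SILTN's actual semantics): predicates return soft truth values in [0, 1] and the connectives and quantifiers become smooth functions, so a rule's satisfaction score can be optimized by gradient descent.

```python
import numpy as np

# Generic differentiable-logic toy, not SILTN's exact semantics: predicates yield soft
# truth values, so a rule such as "forall x: P(x) -> Q(x)" gets a differentiable score.
x = np.linspace(-2, 2, 50)
P = 1 / (1 + np.exp(-3 * x))            # soft truth of P(x)
Q = 1 / (1 + np.exp(-3 * (x - 0.2)))    # soft truth of Q(x)

implies = 1.0 - P + P * Q               # Reichenbach implication, one common fuzzy choice
forall = implies.mean()                 # quantifier aggregated as a mean over the domain
print("satisfaction of 'forall x: P(x) -> Q(x)':", round(float(forall), 3))
```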
1 code implementation • 9 Mar 2025 • AgiBot-World-Contributors, Qingwen Bu, Jisong Cai, Li Chen, Xiuqi Cui, Yan Ding, Siyuan Feng, Shenyuan Gao, Xindong He, Xu Huang, Shu Jiang, Yuxin Jiang, Cheng Jing, Hongyang Li, Jialu Li, Chiming Liu, Yi Liu, Yuxiang Lu, Jianlan Luo, Ping Luo, Yao Mu, Yuehan Niu, Yixuan Pan, Jiangmiao Pang, Yu Qiao, Guanghui Ren, Cheng Ruan, Jiaqi Shan, Yongjian Shen, Chengshi Shi, Mingkang Shi, Modi shi, Chonghao Sima, Jianheng Song, Huijie Wang, Wenhao Wang, Dafeng Wei, Chengen Xie, Guo Xu, Junchi Yan, Cunbiao Yang, Lei Yang, Shukai Yang, Maoqing Yao, Jia Zeng, Chi Zhang, Qinglin Zhang, Bin Zhao, Chengyue Zhao, Jiaqi Zhao, Jianchao Zhu
Introducing AgiBot World, a large-scale platform comprising over 1 million trajectories across 217 tasks in five deployment scenarios, we achieve an order-of-magnitude increase in data scale compared to existing datasets.
no code implementations • 6 Mar 2025 • Runhan Chen, Meijin Lin, Jianshu Chen, Liangjie Lin, Jiazheng Wang, XiaoQing Li, Jianhua Wang, Xu Huang, Ling Qian, Shaoxing Liu, Yuan Long, Di Guo, Xiaobo Qu, Haiwei Han
were analyzed for within- and between-session reliability using the coefficient of variation (CV) and the intraclass correlation coefficient (ICC), and for cross-machine reproducibility using the correlation coefficient.
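For readers unfamiliar with these reliability metrics, the sketch below computes a per-subject CV and a one-way ICC on made-up measurements; it is illustrative only, not the study's analysis pipeline.

```python
import numpy as np

# Hypothetical repeated measurements: rows = subjects, columns = repeated sessions.
measurements = np.array([
    [1.02, 0.98, 1.05],
    [0.87, 0.91, 0.89],
    [1.10, 1.08, 1.12],
    [0.95, 0.97, 0.93],
])

# Coefficient of variation per subject: SD across sessions divided by the mean, in percent.
cv = measurements.std(axis=1, ddof=1) / measurements.mean(axis=1) * 100
print("within-subject CV (%):", cv.round(2))

# One-way random-effects ICC(1,1): between-subject variance relative to total variance.
n_subjects, n_sessions = measurements.shape
subject_means = measurements.mean(axis=1)
grand_mean = measurements.mean()
ms_between = n_sessions * np.sum((subject_means - grand_mean) ** 2) / (n_subjects - 1)
ms_within = np.sum((measurements - subject_means[:, None]) ** 2) / (n_subjects * (n_sessions - 1))
icc = (ms_between - ms_within) / (ms_between + (n_sessions - 1) * ms_within)
print("ICC(1,1):", round(float(icc), 3))
```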
no code implementations • 6 Mar 2025 • Meijin Lin, Lin Guo, Dicheng Chen, Jianshu Chen, Zhangren Tu, Xu Huang, Jianhua Wang, Ji Qi, Yuan Long, Zhiguo Huang, Di Guo, Xiaobo Qu, Haiwei Han
Conclusion: The quantification results of QNet and LCModel showed high or good consistency for tNAA, tCho, and Ins, and QNet generally produced more reasonable quantification than LCModel.
1 code implementation • 11 Feb 2025 • Xu Huang, Wenhao Zhu, Hanxu Hu, Conghui He, Lei LI, ShuJian Huang, Fei Yuan
Previous multilingual benchmarks focus primarily on simple understanding tasks, but for large language models (LLMs), we emphasize proficiency in instruction following, reasoning, long-context understanding, code generation, and so on.
no code implementations • 5 Feb 2025 • Cheng He, Xu Huang, Gangwei Jiang, Zhaoyi Li, Defu Lian, Hong Xie, Enhong Chen, Xijie Liang, Zengrong Zheng
Universal knowledge representation is a central problem for multivariate time series (MTS) foundation models, yet it remains open.
no code implementations • 22 Jan 2025 • Chen Chen, Xinlong Hao, Weiwen Liu, Xu Huang, Xingshan Zeng, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Yuefeng Huang, Wulong Liu, Xinzhi Wang, Defu Lian, Baoqun Yin, Yasheng Wang, Wu Liu
Normal evaluates function calls in basic scenarios; Special evaluates function calls in scenarios with vague or incomplete instructions; Agent introduces multi-agent interactions to simulate function calling evaluation in real-world multi-turn interactions.
1 code implementation • 15 Jan 2025 • Jin Chen, Jin Zhang, Xu Huang, Yi Yang, Defu Lian, Enhong Chen
The softmax function is a cornerstone of multi-class classification, integral to a wide range of machine learning applications, from large-scale retrieval and ranking models to advanced large language models.
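For reference, a numerically stable softmax over class logits is sketched below; the cost of the normalizing sum grows with the number of classes, which is what motivates approximations in large-scale retrieval, ranking, and LLM settings. This is a generic sketch, not the paper's proposed method.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax: subtract the max logit before exponentiating."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Example: scores for five classes (e.g., candidate items or vocabulary tokens).
logits = np.array([2.0, 1.0, 0.1, -1.2, 0.5])
probs = softmax(logits)
print(probs.round(4), "sum =", probs.sum())  # probabilities sum to 1
```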
no code implementations • 15 Jan 2025 • Yirong Zeng, Xiao Ding, Yuxian Wang, Weiwen Liu, Wu Ning, Yutai Hou, Xu Huang, Bing Qin, Ting Liu
Augmenting large language models (LLMs) with external tools is a promising approach to enhance their capabilities.
2 code implementations • 22 Nov 2024 • Tengjie Zheng, Lin Cheng, Shengping Gong, Xu Huang
Learning dynamical models from data is not only fundamental but also holds great promise for advancing principle discovery, time-series prediction, and controller design.
no code implementations • 2 Sep 2024 • Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, Zezhong Wang, Yuxian Wang, Wu Ning, Yutai Hou, Bin Wang, Chuhan Wu, Xinzhi Wang, Yong liu, Yasheng Wang, Duyu Tang, Dandan Tu, Lifeng Shang, Xin Jiang, Ruiming Tang, Defu Lian, Qun Liu, Enhong Chen
Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability.
no code implementations • 20 Aug 2024 • Yihang Wang, Xu Huang, Bowen Tian, Yueyang Su, Lei Yu, Huaming Liao, Yixing Fan, Jiafeng Guo, Xueqi Cheng
Generative LLMs have achieved remarkable success in various industrial applications, owing to their promising in-context learning capabilities.
no code implementations • 27 May 2024 • Ying He, Mingyang Niu, Jingyu Hua, Yunlong Mao, Xu Huang, Chen Li, Sheng Zhong
In this paper, we first propose an embedding extension attack manipulating embeddings to undermine existing defense strategies, which rely on constraining the correlation between the embeddings uploaded by participants and the labels.
1 code implementation • 17 May 2024 • Xingmei Wang, Weiwen Liu, Xiaolong Chen, Qi Liu, Xu Huang, Yichao Wang, Xiangyang Li, Yasheng Wang, Zhenhua Dong, Defu Lian, Ruiming Tang
This model-agnostic framework can be equipped with plug-and-play textual features, with item-level alignment enhancing the utilization of external information while maintaining training and inference efficiency.
no code implementations • 10 May 2024 • Qiyan Luo, Jidan Zhang, Yuzhen Xie, Xu Huang, Ting Han
Feature matching determines the orientation accuracy of High Spatial Resolution (HSR) optical satellite stereo pairs, which in turn impacts significant applications such as 3D reconstruction and change detection.
no code implementations • 11 Apr 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
Concretely, WESE decouples the exploration and exploitation processes, employing a cost-effective weak agent to perform exploration tasks and gather global knowledge.
1 code implementation • 23 Mar 2024 • Daijun Ding, Li Dong, Zhichao Huang, Guangning Xu, Xu Huang, Bo Liu, Liwen Jing, BoWen Zhang
To address these issues, we propose an encoder-decoder data augmentation (EDDA) framework.
1 code implementation • 11 Mar 2024 • Jianxun Lian, Yuxuan Lei, Xu Huang, Jing Yao, Wei Xu, Xing Xie
This paper introduces RecAI, a practical toolkit designed to augment or even revolutionize recommender systems with the advanced capabilities of Large Language Models (LLMs).
no code implementations • 5 Feb 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
As Large Language Models (LLMs) have shown significant intelligence, leveraging them as planning modules for autonomous agents has attracted increasing attention.
no code implementations • 18 Jan 2024 • Yichao Du, Zhirui Zhang, Linan Yue, Xu Huang, Yuqing Zhang, Tong Xu, Linli Xu, Enhong Chen
To protect privacy and meet legal regulations, federated learning (FL) has gained significant attention for training speech-to-text (S2T) systems, including automatic speech recognition (ASR) and speech translation (ST).
Automatic Speech Recognition (ASR)
1 code implementation • 12 Jan 2024 • Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, ShuJian Huang
This study investigates how Large Language Models (LLMs) leverage source and reference data in machine translation evaluation task, aiming to better understand the mechanisms behind their remarkable performance in this task.
no code implementations • 3 Jan 2024 • Daijun Ding, Rong Chen, Liwen Jing, BoWen Zhang, Xu Huang, Li Dong, Xiaowen Zhao, Ge Song
In this paper, we propose a Multi-Perspective Prompt-Tuning (MPPT) model for CTSD that uses the analysis perspective as a bridge to transfer knowledge.
1 code implementation • 18 Nov 2023 • Yuxuan Lei, Jianxun Lian, Jing Yao, Xu Huang, Defu Lian, Xing Xie
Behavior alignment operates in the language space, representing user preferences and item information as text to mimic the target model's behavior; intention alignment works in the latent space of the recommendation model, using user and item representations to understand the model's behavior; hybrid alignment combines both language and latent spaces.
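A hypothetical toy of the latent-space (intention) alignment idea: a small projector maps the recommender's user and item embeddings so that an LLM-side head can reproduce the recommender's scores. The dimensions, `projector`, and distillation-style loss below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

llm_dim, rec_dim = 768, 64                  # hypothetical LLM and recommender sizes
projector = nn.Linear(rec_dim, llm_dim)     # maps recommender embeddings into the LLM space

user_emb = torch.randn(8, rec_dim)          # stand-ins for the target model's embeddings
item_emb = torch.randn(8, rec_dim)
rec_scores = (user_emb * item_emb).sum(-1)  # the behavior the LLM side should mimic

pred_scores = (projector(user_emb) * projector(item_emb)).sum(-1) / llm_dim ** 0.5
loss = nn.functional.mse_loss(pred_scores, rec_scores)
loss.backward()                             # gradients flow into the projector
print("toy intention-alignment loss:", float(loss))
```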
no code implementations • 20 Oct 2023 • Xu Huang, Jianxun Lian, Hao Wang, Defu Lian, Xing Xie
Recommendation systems effectively guide users in locating their desired information within extensive content repositories.
1 code implementation • 17 Oct 2023 • Xu Huang, Zhirui Zhang, Ruize Gao, Yichao Du, Lemao Liu, Gouping Huang, Shuming Shi, Jiajun Chen, ShuJian Huang
We present IMTLab, an open-source end-to-end interactive machine translation (IMT) system platform that enables researchers to quickly build IMT systems with state-of-the-art models, perform an end-to-end evaluation, and diagnose the weakness of systems.
1 code implementation • 31 Aug 2023 • Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system.
no code implementations • 31 Jul 2023 • Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, Enhong Chen
The advent of large language models marks a revolutionary breakthrough in artificial intelligence.
1 code implementation • 28 Jun 2022 • Xu Huang, Defu Lian, Jin Chen, Zheng Liu, Xing Xie, Enhong Chen
Deep recommender systems (DRS) are intensively applied in modern web services.
no code implementations • 21 Feb 2022 • Yongjun Zhang, Siyuan Zou, Xinyi Liu, Xu Huang, Yi Wan, Yongxiang Yao
Next, we propose a riverbed enhancement function to optimize the cost volume of the LiDAR projection points and their homogeneous pixels to improve the matching robustness.
1 code implementation • 13 Sep 2021 • Jin Chen, Defu Lian, Binbin Jin, Xu Huang, Kai Zheng, Enhong Chen
Variational AutoEncoder (VAE) has been extended as a representative nonlinear method for collaborative filtering.
no code implementations • 1 Jul 2021 • Changlin Xiao, Rongjun Qin, Xiao Xie, Xu Huang
Individual tree detection and crown delineation (ITDD) are critical in forest inventory management, and remote sensing-based forest surveys are largely carried out through satellite images.
no code implementations • 1 Jul 2021 • Xiao Ling, Xu Huang, Rongjun Qin
Bundle adjustment (BA) is a technique for refining the sensor orientations of satellite images, and its accuracy is correlated with the quality of feature matching results.
no code implementations • 27 Jun 2021 • Rongjun Qin, Xu Huang
Nowadays, image-based methods built on recently developed dense image matching algorithms and geo-referencing paradigms are becoming the dominant approaches, owing to their high flexibility, availability, and low cost.
1 code implementation • 23 Nov 2020 • Yuewen Zhu, Chunran Zheng, Chongjian Yuan, Xu Huang, Xiaoping Hong
In this paper, we propose CamVox, which adapts Livox lidars to visual SLAM (ORB-SLAM2) by exploiting the lidars' unique features.
Robotics
no code implementations • 22 May 2019 • Xu Huang, Rong-Jun Qin
Given enough multi-view corresponding image points (also called tie points) and ground control points (GCPs), bundle adjustment for high-resolution satellite images refines the orientations, most often the Rational Polynomial Coefficients (RPCs), of each satellite image in a unified geodetic framework, which is critical in many photogrammetry and computer vision applications.
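A common simplification of this adjustment is image-space RPC bias compensation. The sketch below fits an affine correction to synthetic projected-versus-observed image coordinates; it only illustrates the least-squares step, not the full RPC refinement or the geodetic framework.

```python
import numpy as np

rng = np.random.default_rng(0)
projected = rng.uniform(0, 10000, size=(20, 2))              # raw RPC projections (col, row)
true_affine = np.array([[1.0001, 0.00002, 3.5],
                        [-0.00001, 0.9999, -2.1]])           # unknown bias to recover
observed = projected @ true_affine[:, :2].T + true_affine[:, 2]
observed += rng.normal(scale=0.2, size=observed.shape)       # measurement noise

# Least squares: [col, row, 1] @ params approximates the observed coordinates, both axes at once.
A = np.hstack([projected, np.ones((len(projected), 1))])
params, *_ = np.linalg.lstsq(A, observed, rcond=None)
residuals = A @ params - observed
print("estimated affine correction:\n", params.T.round(5))
print("RMS residual (pixels):", np.sqrt((residuals ** 2).mean()).round(3))
```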
no code implementations • 22 May 2019 • Xiaohu Lu, Rong-Jun Qin, Xu Huang
Nowadays, dense stereo matching has become one of the dominant tools for 3D reconstruction of urban regions, owing to its low cost and high flexibility in generating dense 3D points.
no code implementations • 22 May 2019 • Bihe Chen, Rongjun Qin, Xu Huang, Shuang Song, Xiaohu Lu
Stereo dense image matching can be categorized into low-level feature-based matching and deep feature-based matching according to the matching cost metric.
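As a toy example of a low-level, hand-crafted matching cost (deep-feature variants replace the pixel intensities below with learned feature maps), this sketch scores candidate disparities by mean absolute difference on a synthetic rectified pair:

```python
import numpy as np

rng = np.random.default_rng(1)
left = rng.random((64, 64))
right = np.roll(left, -3, axis=1)   # synthetic rectified pair with a 3-pixel disparity

# Mean absolute intensity difference for each candidate disparity; the minimum wins.
costs = [np.abs(left - np.roll(right, d, axis=1)).mean() for d in range(8)]
print("estimated disparity:", int(np.argmin(costs)))  # expect 3 on this toy pair
```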