no code implementations • CCL 2021 • Hao Wang, Junhui Li, ZhengXian Gong
“In Chinese and other languages where pronoun dropping is customary, pronouns that can be inferred from context are usually omitted. Although neural machine translation models, represented by the Transformer, have achieved great success, this omission phenomenon still poses a serious challenge to them. Building on the Transformer, this paper proposes a translation model that integrates zero-pronoun recognition, and introduces document-level context to enrich the anaphoric information. Specifically, the model adopts a joint-learning framework: on top of the translation model, it jointly trains a classification task that identifies which constituent of the sentence an omitted pronoun stands for, enabling the model to incorporate zero-anaphora information to assist translation. Experiments on a Chinese-English dialogue dataset verify the effectiveness of the proposed method: compared with the baseline model, translation performance improves by 1.48 BLEU.”
no code implementations • NLP4ConvAI (ACL) 2022 • Tong Zhang, Yong liu, Boyang Li, Peixiang Zhong, Chen Zhang, Hao Wang, Chunyan Miao
Conversational Recommendation Systems recommend items through language based interactions with users. In order to generate naturalistic conversations and effectively utilize knowledge graphs (KGs) containing background information, we propose a novel Bag-of-Entities loss, which encourages the generated utterances to mention concepts related to the item being recommended, such as the genre or director of a movie.
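As a rough illustration of what a Bag-of-Entities-style auxiliary loss could look like (the function name, weighting, and formulation below are a hypothetical sketch, not the paper's implementation): token-level cross-entropy plus a penalty on how little probability mass the model places on KG-related entity tokens.

```python
import numpy as np

def bag_of_entities_loss(logits, target_ids, entity_ids, weight=0.1):
    """Illustrative Bag-of-Entities-style loss (not the paper's code):
    cross-entropy on the gold tokens plus a term that rewards probability
    mass assigned to KG-related entity tokens.

    logits:     (seq_len, vocab) unnormalized scores
    target_ids: (seq_len,) gold token ids
    entity_ids: vocabulary ids of tokens linked to the recommended item
    """
    # softmax over the vocabulary (numerically stabilized)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)

    # standard token-level cross-entropy
    ce = -np.log(probs[np.arange(len(target_ids)), target_ids]).mean()

    # encourage mass on entity tokens: penalize -log of their total mass
    entity_mass = probs[:, entity_ids].sum(axis=-1).clip(1e-9, 1.0)
    boe = -np.log(entity_mass).mean()

    return ce + weight * boe
```

With `weight=0.0` this reduces to plain cross-entropy; increasing `weight` biases generation toward mentioning item-related concepts such as a movie's genre or director.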
1 code implementation • ECCV 2020 • Xiangyu Zhu, Fan Yang, Di Huang, Chang Yu, Hao Wang, Jianzhu Guo, Zhen Lei, Stan Z. Li
However, most of their training data is constructed by 3D Morphable Models, whose spanned space covers only a small part of the full shape space.
no code implementations • 5 Feb 2025 • Jiaqing Zhang, Mingjia Yin, Hao Wang, Yawen Li, Yuyang Ye, Xingyu Lou, Junping Du, Enhong Chen
In the era of data-centric AI, the focus of recommender systems has shifted from model-centric innovations to data-centric approaches.
no code implementations • 5 Feb 2025 • Zhuowei Li, Haizhou Shi, Yunhe Gao, Di Liu, Zhenting Wang, Yuxiao Chen, Ting Liu, Long Zhao, Hao Wang, Dimitris N. Metaxas
Extensive experiments show that VISTA reduces hallucination by about 40% on average on the evaluated open-ended generation tasks, and it consistently outperforms existing methods on four benchmarks across four architectures under three decoding strategies.
no code implementations • 30 Jan 2025 • Shuaiqun Pan, Diederick Vermetten, Manuel López-Ibáñez, Thomas Bäck, Hao Wang
Surrogate models provide efficient alternatives to computationally demanding real-world processes but often require large datasets for effective training.
no code implementations • 28 Jan 2025 • Ashish Bastola, Hao Wang, Abolfazl Razi
Anomaly detection is a critical requirement for ensuring safety in autonomous driving.
no code implementations • 28 Jan 2025 • Mingyuan Li, Tong Jia, Hui Lu, Bowen Ma, Hao Wang, Dongyue Chen
Prohibited item detection based on X-ray images is one of the most effective security inspection methods.
1 code implementation • 28 Jan 2025 • Jianing Li, Ming Lu, Hao Wang, Chenyang Gu, Wenzhao Zheng, Li Du, Shanghang Zhang
To utilize these slice features, we propose SliceOcc, an RGB camera-based model specifically tailored for indoor 3D semantic occupancy prediction.
no code implementations • 27 Jan 2025 • Jiaru Zhang, Zesong Wang, Hao Wang, Tao Song, Huai-an Su, Rui Chen, Yang Hua, Xiangwei Zhou, Ruhui Ma, Miao Pan, Haibing Guan
However, the heterogeneity of devices and the complexity of models challenge the accuracy and generalizability of existing estimation methods.
no code implementations • 24 Jan 2025 • Pranali Roy Chowdhury, Tianxu Wang, Shohel Ahmed, Hao Wang
This work highlights the urgent need for a better understanding of the fatal role of methane in ecosystems for developing strategies to mitigate its effects amid climate change.
no code implementations • 23 Jan 2025 • Shuaiqun Pan, Diederick Vermetten, Manuel López-Ibáñez, Thomas Bäck, Hao Wang
Surrogate models are frequently employed as efficient substitutes for the costly execution of real-world processes.
no code implementations • 22 Jan 2025 • Yunfan Zhang, Zhiwei Xiong, Zhiqi Shen, Guosheng Lin, Hao Wang, Nicolas Vun
Generating high-quality textures for 3D scenes is crucial for applications in interior design, gaming, and augmented/virtual reality (AR/VR).
no code implementations • 22 Jan 2025 • Tianxu Wang, Jiwoon Sim, Hao Wang
The spatial pattern of the population is also closely tied to the distribution of toxins.
1 code implementation • 20 Jan 2025 • Jingwei Yi, Junhao Yin, Ju Xu, Peng Bao, Yongliang Wang, Wei Fan, Hao Wang
Vision-Language Models (VLMs) have demonstrated remarkable capabilities in understanding multimodal inputs and have been widely integrated into Retrieval-Augmented Generation (RAG) based conversational systems.
no code implementations • 17 Jan 2025 • Weibo Gao, Qi Liu, Linan Yue, Fangzhou Yao, Rui Lv, Zheng Zhang, Hao Wang, Zhenya Huang
Personalized learning represents a promising educational strategy within intelligent educational systems, aiming to enhance learners' practice efficiency.
no code implementations • 17 Jan 2025 • Wenfeng Feng, Chuzhan Hao, Yuewei Zhang, Jingyi Song, Hao Wang
Leveraging the autonomous decision-making capabilities of large language models (LLMs) has demonstrated superior performance in reasoning tasks.
no code implementations • 14 Jan 2025 • Reza Miry, Amit K. Chakraborty, Russell Greiner, Mark A. Lewis, Hao Wang, Tianyu Guan, Pouria Ramazi
We employed two simulated datasets to train the model: one representing generated dynamical systems with randomly selected polynomial terms to model new disease behaviors, and another simulating noise-induced disease dynamics to account for noisy measurements.
no code implementations • 12 Jan 2025 • Junlong Ren, Gangjian Zhang, Haifeng Sun, Hao Wang
The data augmentation generates videos with various lengths and target moment locations to diversify temporal distributions.
no code implementations • 12 Jan 2025 • Zheng Zhang, Yihuai Lan, Yangsen Chen, Lei Wang, Xiang Wang, Hao Wang
This control not only ensures that NPCs can adapt to varying difficulty levels during gameplay, but also provides insights into the safety and fairness of LLM agents.
no code implementations • 10 Jan 2025 • Qian Chen, Yafeng Chen, Yanni Chen, Mengzhe Chen, Yingda Chen, Chong Deng, Zhihao Du, Ruize Gao, Changfeng Gao, Zhifu Gao, Yabin Li, Xiang Lv, Jiaqing Liu, Haoneng Luo, Bin Ma, Chongjia Ni, Xian Shi, Jialong Tang, Hui Wang, Hao Wang, Wen Wang, Yuxuan Wang, Yunlan Xu, Fan Yu, Zhijie Yan, Yexin Yang, Baosong Yang, Xian Yang, Guanrou Yang, Tianyu Zhao, Qinglin Zhang, Shiliang Zhang, Nan Zhao, Pei Zhang, Chong Zhang, Jinren Zhou
Previous models for voice interactions are categorized as native and aligned.
no code implementations • 9 Jan 2025 • Peizhuo Lv, Mengjie Sun, Hao Wang, XiaoFeng Wang, Shengzhi Zhang, Yuxuan Chen, Kai Chen, Limin Sun
To address those problems, we propose a novel black-box "knowledge watermark" approach, named RAG-WM, to detect IP infringement of RAGs.
3 code implementations • 7 Jan 2025 • Nvidia, Niket Agarwal, Arslan Ali, Maciej Bala, Yogesh Balaji, Erik Barker, Tiffany Cai, Prithvijit Chattopadhyay, Yongxin Chen, Yin Cui, Yifan Ding, Daniel Dworakowski, Jiaojiao Fan, Michele Fenzi, Francesco Ferroni, Sanja Fidler, Dieter Fox, Songwei Ge, Yunhao Ge, Jinwei Gu, Siddharth Gururani, Ethan He, Jiahui Huang, Jacob Huffman, Pooya Jannaty, Jingyi Jin, Seung Wook Kim, Gergely Klár, Grace Lam, Shiyi Lan, Laura Leal-Taixe, Anqi Li, Zhaoshuo Li, Chen-Hsuan Lin, Tsung-Yi Lin, Huan Ling, Ming-Yu Liu, Xian Liu, Alice Luo, Qianli Ma, Hanzi Mao, Kaichun Mo, Arsalan Mousavian, Seungjun Nah, Sriharsha Niverty, David Page, Despoina Paschalidou, Zeeshan Patel, Lindsey Pavao, Morteza Ramezanali, Fitsum Reda, Xiaowei Ren, Vasanth Rao Naik Sabavat, Ed Schmerling, Stella Shi, Bartosz Stefaniak, Shitao Tang, Lyne Tchapmi, Przemek Tredak, Wei-Cheng Tseng, Jibin Varghese, Hao Wang, Haoxiang Wang, Heng Wang, Ting-Chun Wang, Fangyin Wei, Xinyue Wei, Jay Zhangjie Wu, Jiashu Xu, Wei Yang, Lin Yen-Chen, Xiaohui Zeng, Yu Zeng, Jing Zhang, Qinsheng Zhang, Yuxuan Zhang, Qingqing Zhao, Artur Zolkowski
We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications.
no code implementations • 6 Jan 2025 • Xiwen Chen, Peijie Qiu, Wenhui Zhu, Huayu Li, Hao Wang, Aristeidis Sotiras, Yalin Wang, Abolfazl Razi
Since its introduction, the transformer has shifted the development trajectory away from traditional models (e.g., RNN, MLP) in time series forecasting, which is attributed to its ability to capture global dependencies within temporal tokens.
no code implementations • 6 Jan 2025 • Zihao Wen, Hang Shan, Hao Wang, Yu Cao, Liang He, Wenjing Ren, Chengjie Yin, Qingchuan Chou, Chaochao Lv, Haojie Su, Tao Tang, Qinghua Cai, Leyi Ni, Wen Xiao, Xiaolin Zhang, Kuanyi Li, Te Cao, Ming-Chih Chiu, Vincent H. Resh, Pablo Urrutia-Cordero
However, we know little about how local environmental conditions can influence these biodiversity drivers, and consequently how they indirectly shape the ecological stability of ecosystems.
no code implementations • 2 Jan 2025 • Hao Wang, Zhichao Chen, Licheng Pan, Xiaoyu Jiang, Yichen Song, Qunshan He, Xinggao Liu
Effective process monitoring is increasingly vital in industrial automation for ensuring operational safety, necessitating both high accuracy and efficiency.
no code implementations • 1 Jan 2025 • Hao Wang, Xiwen Chen, Ashish Bastola, Jiayou Qin, Abolfazl Razi
The emergence of generative AI and controllable diffusion has made image-to-image synthesis increasingly practical and efficient.
no code implementations • 1 Jan 2025 • Hao Wang, Cheng Deng, Zhidong Zhao
Recent generative models demonstrate impressive performance in synthesizing photographic images, making them hard for humans to distinguish from pristine ones, especially in the case of realistic-looking synthetic facial images.
no code implementations • 29 Dec 2024 • Tianxu Wang, Kyunghan Choi, Hao Wang
These derivations highlight how different ways of using memory lead to distinct mathematical models.
1 code implementation • 29 Dec 2024 • Hao Zhang, Hao Wang, Xiucai Huang, Wenrui Chen, Zhen Kan
To tackle these challenges, we propose a Temporal-Logic-guided Hybrid policy framework (HyTL) which leverages three-level decision layers to improve the agent's performance.
1 code implementation • 27 Dec 2024 • Jianshuo Dong, Ziyuan Zhang, Qingjie Zhang, Han Qiu, Tianwei Zhang, Hao Wang, Hewu Li, Qi Li, Chao Zhang, Ke Xu
Auto-regressive large language models (LLMs) have yielded impressive performance in many real-world tasks.
no code implementations • 24 Dec 2024 • Haonan Li, Xudong Han, Zenan Zhai, Honglin Mu, Hao Wang, Zhenxuan Zhang, Yilin Geng, Shom Lin, Renxi Wang, Artem Shelmanov, Xiangyu Qi, Yuxia Wang, Donghai Hong, Youliang Yuan, Meng Chen, Haoqin Tu, Fajri Koto, Tatsuki Kuribayashi, Cong Zeng, Rishabh Bhardwaj, Bingchen Zhao, Yawen Duan, Yi Liu, Emad A. Alghamdi, Yaodong Yang, Yinpeng Dong, Soujanya Poria, PengFei Liu, Zhengzhong Liu, Xuguang Ren, Eduard Hovy, Iryna Gurevych, Preslav Nakov, Monojit Choudhury, Timothy Baldwin
To address this gap, we introduce Libra-Leaderboard, a comprehensive framework designed to rank LLMs through a balanced evaluation of performance and safety.
no code implementations • 23 Dec 2024 • Hao Wang, Hao Li, Junda Zhu, Xinyuan Wang, Chengwei Pan, Minlie Huang, Lei Sha
This approach preserves the semantic content of the original prompt while producing harmful content.
no code implementations • 23 Dec 2024 • Yang Xu, Yi Wang, Hao Wang
Understanding training dynamics and feature evolution is crucial for the mechanistic interpretability of large language models (LLMs).
no code implementations • 17 Dec 2024 • Shizhuo Deng, Bowen Han, Jiaqi Chen, Hao Wang, Dongyue Chen, Tong Jia
Noisy labels threaten the robustness of few-shot learning (FSL) due to the inexact features in a new domain.
no code implementations • 17 Dec 2024 • Hao Wang, Boyi Liu, Yufeng Zhang, Jie Chen
Leveraging Qwen2.5-Coder-32B-Instruct, our approach achieves a pass rate of 0.305 on LiveCodeBench-Hard, surpassing the pass@100 performance of GPT4o-0513 (0.245).
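Pass@k figures such as the pass@100 quoted here are conventionally computed with the unbiased estimator of Chen et al. (2021); a minimal sketch, assuming n total samples of which c pass the tests (not necessarily the authors' exact evaluation script):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes.  Computes 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # fewer than k failing samples exist, so some draw must pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n=2 generations of which c=1 passes, pass@1 is 0.5, matching the intuition that a single random draw succeeds half the time.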
no code implementations • 17 Dec 2024 • Aldo Pareja, Nikhil Shivakumar Nayak, Hao Wang, KrishnaTeja Killamsetty, Shivchander Sudalairaj, Wenlong Zhao, Seungwook Han, Abhishek Bhandwaldar, Guangxuan Xu, Kai Xu, Ligong Han, Luke Inglis, Akash Srivastava
The rise of large language models (LLMs) has created a significant disparity: industrial research labs with their computational resources, expert teams, and advanced infrastructures, can effectively fine-tune LLMs, while individual developers and small organizations face barriers due to limited resources.
no code implementations • 16 Dec 2024 • Shijia Zhou, Euijoon Ahn, Hao Wang, Ann Quinton, Narelle Kennedy, Pradeeba Sridar, Ralph Nanan, Jinman Kim
To address these inadequacies, we propose a novel Swoosh Activation Function (SAF) designed to enhance the regularization of heatmaps produced by landmark detection algorithms.
no code implementations • 11 Dec 2024 • Mingkun Lei, Xue Song, Beier Zhu, Hao Wang, Chi Zhang
Recent advancements in text-to-image models have improved the nuance of style transformations, yet significant challenges remain, particularly overfitting to reference styles, limited stylistic control, and misalignment with textual content.
no code implementations • 11 Dec 2024 • Shengheng Liu, Hao Wang, Mengguan Pan, Peng Liu, Yahui Ma, Yongming Huang
In this article, we present an intelligent framework for 5G new radio (NR) indoor positioning under a monostatic configuration.
no code implementations • 8 Dec 2024 • Qing Zhang, Haocheng Lv, Jie Liu, Zhiyun Chen, Jianyong Duan, Hao Wang, Li He, Mingying Xv
With the rise of large-scale language models (LLMs), it is currently popular and effective to convert multimodal information into text descriptions for multimodal multi-hop question answering.
1 code implementation • 7 Dec 2024 • Haizhou Shi, Yibin Wang, Ligong Han, huan zhang, Hao Wang
Estimating the uncertainty of responses of Large Language Models~(LLMs) remains a critical challenge.
no code implementations • 5 Dec 2024 • Lingfeng Ming, Bo Zeng, Chenyang Lyu, Tianqi Shi, Yu Zhao, Xue Yang, Yefeng Liu, Yiyu Wang, Linlong Xu, Yangyang Liu, Xiaohu Zhao, Hao Wang, Heng Liu, Hao Zhou, Huifeng Yin, Zifu Shang, Haijun Li, Longyue Wang, Weihua Luo, Kaifu Zhang
We have collected a substantial amount of multilingual data for several low-resource languages and conducted extensive continual pre-training using the Qwen2 models.
no code implementations • 4 Dec 2024 • Yu Feng, Shunsi Zhang, Jian Shu, HanFeng Zhao, Guoliang Pang, Chi Zhang, Hao Wang
Specifically, we use a single-view model pretrained on a large-scale human dataset to develop a multi-view body representation, aiming to extend the 2D knowledge of the single-view model to a multi-view diffusion model.
no code implementations • 4 Dec 2024 • Gangjian Zhang, Nanjie Yao, Shunsi Zhang, HanFeng Zhao, Guoliang Pang, Jian Shu, Hao Wang
This paper investigates the research task of reconstructing the 3D clothed human body from a monocular image.
no code implementations • 3 Dec 2024 • Xiangyu Jiang, Xiwen Chen, Hao Wang, Abolfazl Razi
This module can efficiently fuse node features and geographic position information through a novel Transpose Cross-attention mechanism.
1 code implementation • 3 Dec 2024 • Hao Wang, Wenhui Zhu, Xuanzhao Dong, Yanxi Chen, Xin Li, Peijie Qiu, Xiwen Chen, Vamsi Krishna Vasa, Yujian Xiong, Oana M. Dumitrascu, Abolfazl Razi, Yalin Wang
In this work, we propose Many-MobileNet, an efficient model fusion strategy for retinal disease classification using lightweight CNN architecture.
no code implementations • 3 Dec 2024 • Changzhi Zhou, Dandan song, Yuhang Tian, Zhijing Wu, Hao Wang, Xinyu Zhang, Jun Yang, ZiYi Yang, Shuhao Zhang
For the fine-tuning-dependent paradigm, we efficiently fine-tune LLMs using instruction-based multi-task learning.
1 code implementation • 2 Dec 2024 • Zhengnan Li, Haoxuan Li, Hao Wang, Jun Fang, Duoyin Li, Yunxiao Qin
In this paper, we investigate the overfitting problem in channel-wise MLPs using Rademacher complexity theory, revealing that extreme values in time series data exacerbate this issue.
1 code implementation • 1 Dec 2024 • Wei Guo, Hao Wang, Luankang Zhang, Jin Yao Chin, Zhongzhou Liu, Kai Cheng, Qiushi Pan, Yi Quan Lee, Wanqi Xue, Tingjia Shen, Kenan Song, Kefan Wang, Wenjia Xie, Yuyang Ye, Huifeng Guo, Yong liu, Defu Lian, Ruiming Tang, Enhong Chen
In this paper, we aim to enhance the understanding of scaling laws by conducting comprehensive evaluations of large recommendation models.
no code implementations • 30 Nov 2024 • Tingjia Shen, Hao Wang, Chuhan Wu, Jin Yao Chin, Wei Guo, Yong liu, Huifeng Guo, Defu Lian, Ruiming Tang, Enhong Chen
In response, we introduce the Performance Law for SR models, which aims to theoretically investigate and model the relationship between model performance and data quality.
1 code implementation • 28 Nov 2024 • Hongda Liu, Yunfan Liu, Min Ren, Hao Wang, Yunlong Wang, Zhenan Sun
In skeleton-based action recognition, a key challenge is distinguishing between actions with similar trajectories of joints due to the lack of image-level details in skeletal representations.
Ranked #2 on Skeleton Based Action Recognition on NTU RGB+D 120
no code implementations • 22 Nov 2024 • Xia Han, Liyuan Lin, Hao Wang, Ruodu Wang
A diversification quotient (DQ) quantifies diversification in stochastic portfolio models based on a family of risk measures.
2 code implementations • 22 Nov 2024 • Xiang Xu, Hao Wang, Wei Guo, Luankang Zhang, Wanshan Yang, Runlong Yu, Yong liu, Defu Lian, Enhong Chen
Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction.
no code implementations • 21 Nov 2024 • Xin Liu, Hao Wang, Shibei Xue, Dezong Zhao
On the LM-O and YCB-V datasets, our method outperforms other RGB-based single-model methods, achieving higher accuracy.
1 code implementation • 21 Nov 2024 • Yu Zhao, Huifeng Yin, Bo Zeng, Hao Wang, Tianqi Shi, Chenyang Lyu, Longyue Wang, Weihua Luo, Kaifu Zhang
OpenAI's o1 has recently sparked a surge of interest in the study of large reasoning models (LRMs).
no code implementations • 20 Nov 2024 • Ashish Bastola, Nishant Luitel, Hao Wang, Danda Pani Paudel, Roshani Poudel, Abolfazl Razi
While deep learning models are powerful tools that revolutionized many areas, they are also vulnerable to noise as they rely heavily on learning patterns and features from the exact details of the clean data.
no code implementations • 19 Nov 2024 • Haoyu Zhao, Hao Wang, Xingyue Zhao, Hongqiu Wang, Zhiyu Wu, Chengjiang Long, Hua Zou
Recent advancements in 3D generation models have opened new possibilities for simulating dynamic 3D object movements and customizing behaviors, yet creating this content remains challenging.
no code implementations • 18 Nov 2024 • Ronghui Han, Duanyu Feng, Hongyu Du, Hao Wang
To investigate this, we conduct a study on the impact of overall loss on existing time series methods with sequence decomposition.
no code implementations • 16 Nov 2024 • Jiajie Fan, Babak Gholami, Thomas Bäck, Hao Wang
Boundary Representation (B-Rep) is the de facto representation of 3D solids in Computer-Aided Design (CAD).
1 code implementation • 15 Nov 2024 • Hao Wang, Minghui Liao, Zhouyi Xie, Wenyu Liu, Xiang Bai
To address this issue, we propose a Ranking MIL (RankMIL) approach to adaptively filter those noisy samples.
no code implementations • 11 Nov 2024 • Esha Saha, Oscar Wang, Amit K. Chakraborty, Pablo Venegas Garcia, Russell Milne, Hao Wang
Bitumen extraction for the production of synthetic crude oil in Canada's Athabasca Oil Sands industry has recently come under spotlight for being a significant source of greenhouse gas emission.
no code implementations • 4 Nov 2024 • Gaochao Song, Chong Cheng, Hao Wang
In this paper we present a novel method for efficient and effective 3D surface reconstruction in open scenes.
no code implementations • 4 Nov 2024 • Guangxuan Xu, Kai Xu, Shivchander Sudalairaj, Hao Wang, Akash Srivastava
In this paper, we introduce Dr. SoW (Density Ratio of Strong over Weak), a cost-effective method that eliminates the reliance on human annotation by leveraging off-the-shelf LLMs for preference data annotation.
1 code implementation • 4 Nov 2024 • Weibo Gao, Qi Liu, Linan Yue, Fangzhou Yao, Hao Wang, Yin Gu, Zheng Zhang
Motivated by the success of collaborative modeling in various domains, such as recommender systems, we aim to investigate how collaborative signals among learners contribute to the diagnosis of human cognitive states (i.e., knowledge proficiency) in the context of intelligent education.
no code implementations • 31 Oct 2024 • Hongying Liu, Hao Wang, Haoran Chu, Yibo Wu
An unsolved issue in widely used methods such as Support Vector Data Description (SVDD) and Small Sphere and Large Margin SVM (SSLM) for anomaly detection is their nonconvexity, which hampers the analysis of optimal solutions in a manner similar to SVMs and limits their applicability in large-scale scenarios.
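For reference, the nonconvexity discussion concerns the standard SVDD primal (Tax & Duin, 2004), which seeks the smallest hypersphere of radius $R$ and center $\mathbf{a}$ enclosing most of the data:

```latex
\min_{R,\,\mathbf{a},\,\boldsymbol{\xi}}\; R^2 + C\sum_{i}\xi_i
\quad\text{s.t.}\quad
\lVert \mathbf{x}_i-\mathbf{a}\rVert^2 \le R^2+\xi_i,\qquad \xi_i\ge 0.
```

Roughly speaking, $R^2$ enters the objective and the constraints with opposite signs, so the problem is not jointly convex in $(R, \mathbf{a})$, and SSLM's additional margin terms compound this; the usual SVM-style duality analysis therefore does not carry over directly.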
no code implementations • 31 Oct 2024 • Wenjia Xie, Hao Wang, Luankang Zhang, Rui Zhou, Defu Lian, Enhong Chen
Sequential recommendation (SR) aims to predict items that users may be interested in based on their historical behavior sequences.
no code implementations • 25 Oct 2024 • Kristjan Greenewald, Yuancheng Yu, Hao Wang, Kai Xu
Training generative models with differential privacy (DP) typically involves injecting noise into gradient updates or adapting the discriminator's training procedure.
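The gradient-noise recipe the abstract alludes to is typified by DP-SGD: clip each per-sample gradient, average, then add calibrated Gaussian noise. A minimal sketch under that standard recipe (parameter names illustrative, not from the paper):

```python
import numpy as np

def sanitize_gradients(per_sample_grads, clip_norm=1.0, noise_mult=1.1,
                       seed=0):
    """DP-SGD-style gradient sanitization (illustrative sketch):
    clip each per-sample gradient to L2 norm `clip_norm`, average,
    then add Gaussian noise with std `noise_mult * clip_norm / batch`.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # scale down any gradient whose norm exceeds the clip threshold
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_sample_grads),
                       size=mean.shape)
    return mean + noise
```

Clipping bounds each example's influence (the sensitivity), which is what lets the added Gaussian noise yield a formal (epsilon, delta) guarantee via the Gaussian mechanism.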
no code implementations • 21 Oct 2024 • Hao Wang, Jiajun Zhong, Yikun Li, Junrong Zhang, Rong Du
In this paper, a dataset of one-dimensional powder diffraction patterns was designed with a new strategy to train Convolutional Neural Networks for predicting space groups.
no code implementations • 20 Oct 2024 • Xinyu Liang, Ziheng Wang, Hao Wang
Our developed ERGAN can capture diverse load patterns across various households, thereby enhancing the realism and diversity of the synthetic data generated.
no code implementations • 19 Oct 2024 • Ping Huang, Yuxin He, Hao Wang, Jingjing Chen, Qin Luo
Accurate short-term forecasting of passenger flow in metro systems under delay conditions is crucial for emergency response and service recovery; it poses significant challenges and is currently under-researched.
no code implementations • 16 Oct 2024 • Zihan You, Hao Wang, Qichao Zhao, Jinxiang Wang
However, current temporal fusion models that use convolutional layers or deformable self-attention hinder the exchange of global information across BEV space and incur higher computational cost.
1 code implementation • 14 Oct 2024 • Hongfu Liu, Hengguan Huang, Hao Wang, Xiangming Gu, Ye Wang
Large language models (LLMs) pose significant risks due to the potential for generating harmful content or users attempting to evade guardrails.
no code implementations • 13 Oct 2024 • Gangtao Han, Chunxiao Song, Song Wang, Hao Wang, Enqing Chen, Guanghui Wang
In this paper, we propose an occluded human pose estimation framework based on limb joint augmentation to enhance the generalization ability of the pose estimation model on the occluded human bodies.
no code implementations • 13 Oct 2024 • Xinyuan Wang, Victor Shea-Jay Huang, Renmiao Chen, Hao Wang, Chengwei Pan, Lei Sha, Minlie Huang
Existing jailbreak strategies mainly focus on maximizing attack success rate (ASR), frequently neglecting other critical factors, including the relevance of the jailbreak response to the query and the level of stealthiness.
no code implementations • 12 Oct 2024 • Kexin Li, Luwei Bai, Xiao Wang, Hao Wang
Anderson acceleration is an effective technique for enhancing the efficiency of fixed-point iterations; however, analyzing its convergence in nonsmooth settings presents significant challenges.
no code implementations • 11 Oct 2024 • Weijia Zhang, Jindong Han, Hao liu, Wei Fan, Hao Wang, Hui Xiong
To this end, we propose Meta-Transfer Learning Empowered Temporal Graph Networks (MetaTransfer) to transfer valuable knowledge from multiple data-rich metropolises to the data-scarce city to improve valuation performance.
1 code implementation • 8 Oct 2024 • Junxiong Tong, Mingjia Yin, Hao Wang, Qiushi Pan, Defu Lian, Enhong Chen
Cross-domain Recommendation systems leverage multi-domain user interactions to improve performance, especially in sparse data or new user scenarios.
1 code implementation • 4 Oct 2024 • Hengyi Wang, Shiwei Tan, Zhiqing Hong, Desheng Zhang, Hao Wang
Foundation Language Models (FLMs) such as BERT and its variants have achieved remarkable success in natural language processing.
1 code implementation • 2 Oct 2024 • Haonan Li, Xudong Han, Hao Wang, Yuxia Wang, Minghan Wang, Rui Xing, Yilin Geng, Zenan Zhai, Preslav Nakov, Timothy Baldwin
We introduce Loki, an open-source tool designed to address the growing problem of misinformation.
no code implementations • 25 Sep 2024 • Catalin-Viorel Dinu, Yash J. Patel, Xavier Bonet-Monroig, Hao Wang
Solving for the maximum of the lower bound, we obtain a simple expression of the optimal re-evaluation number.
no code implementations • 25 Sep 2024 • Lyudong Jin, Ming Tang, JiaYu Pan, Meng Zhang, Hao Wang
In the realm of emerging real-time networked applications such as cyber-physical systems (CPS), the Age of Information (AoI) has emerged as a pivotal metric for evaluating timeliness.
no code implementations • 25 Sep 2024 • Kun Zhou, You Zhang, Shengkui Zhao, Hao Wang, Zexu Pan, Dianwen Ng, Chong Zhang, Chongjia Ni, Yukun Ma, Trung Hieu Nguyen, Jia Qi Yip, Bin Ma
Current emotional text-to-speech (TTS) systems face challenges in mimicking a broad spectrum of human emotions due to the inherent complexity of emotions and limitations in emotional speech datasets and models.
no code implementations • 25 Sep 2024 • Zhuonan Yu, Peijun Qin, Ruibing Sun, Sara Khademi, Zhen Xu, Qinchao Sun, Yanlong Tai, Bing Song, Tianruo Guo, Hao Wang
Conventionally, myelin is considered an insulating layer that enables saltatory conduction and thereby enhances neural signal speed, a view that serves as a foundation of neuroscience.
1 code implementation • 21 Sep 2024 • Yuqing Huang, Rongyang Zhang, Xuesong He, Xuyang Zhi, Hao Wang, Xin Li, Feiyang Xu, Deguang Liu, Huadong Liang, Yi Li, Jian Cui, Zimu Liu, Shijin Wang, Guoping Hu, Guiquan Liu, Qi Liu, Defu Lian, Enhong Chen
To this end, we propose \textbf{\textit{ChemEval}}, which provides a comprehensive assessment of the capabilities of LLMs across a wide range of chemical domain tasks.
no code implementations • 20 Sep 2024 • Yuyan Chen, Hao Wang, Songzhou Yan, Sijia Liu, Yueze Li, Yi Zhao, Yanghua Xiao
The framework includes four distinctive tasks: Key Event Recognition, Mixed Event Recognition, Implicit Emotional Recognition, and Intention Recognition.
no code implementations • 17 Sep 2024 • Xiaobao Song, Hao Wang, Liwei Deng, Yuxin He, Wenming Cao, Chi-Sing Leung
Time position embeddings capture the positional information of time steps, often serving as auxiliary inputs to enhance the predictive capabilities of time series models.
no code implementations • 10 Sep 2024 • Yang Wen, Anyu Lai, Bo Qian, Hao Wang, Wuzhen Shi, Wenming Cao
In this paper, we propose a Task Sequence Generator module that, in conjunction with the Task Intra-patch Block, effectively extracts task-specific features embedded in degraded images.
1 code implementation • 10 Sep 2024 • Hao Wang, Adityaya Dhande, Somil Bansal
In this work, we use the general framework of constrained optimal control, but given the safety state constraint, we convert it into an equivalent control constraint, resulting in a state and time-dependent control-constrained optimal control problem.
no code implementations • 5 Sep 2024 • Yang Wen, Anyu Lai, Bo Qian, Hao Wang, Wuzhen Shi, Wenming Cao
In this paper, we introduce a novel multi-task severe weather removal model that can effectively handle complex weather conditions in an adaptive manner.
no code implementations • 5 Sep 2024 • Hao Wang
Learning to rank is a rare technology compared with other techniques such as deep neural networks.