no code implementations • COLING 2022 • Jingyuan Wen, Yutian Luo, Nanyi Fei, Guoxing Yang, Zhiwu Lu, Hao Jiang, Jie Jiang, Zhao Cao
In few-shot text classification, a feasible paradigm for deploying VL-PTMs is to align the input samples and their category names via the text encoders.
no code implementations • 17 Apr 2024 • Hengyu Zhang, Junwei Pan, Dapeng Liu, Jie Jiang, Xiu Li
These patterns hold substantial potential to enhance CTR prediction performance.
1 code implementation • 21 Mar 2024 • Zhutian Lin, Junwei Pan, Shangyu Zhang, Ximei Wang, Xi Xiao, Shudong Huang, Lei Xiao, Jie Jiang
In this paper, we uncover a new challenge associated with BCE loss in scenarios with sparse positive feedback, such as CTR prediction: gradients vanish for negative samples.
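A minimal sketch (not the paper's code) of the effect described above: for binary cross-entropy with logits, the gradient with respect to the logit z of a negative sample equals sigmoid(z), which shrinks toward zero once the model confidently predicts the sample as negative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_grad_wrt_logit(z, label):
    # Derivative of binary cross-entropy with logits w.r.t. z: sigmoid(z) - label.
    return sigmoid(z) - label

# For a negative sample (label = 0) the gradient is sigmoid(z), so as the
# predicted probability drops, the gradient signal from negatives fades.
for z in (0.0, -2.0, -6.0):
    print(f"z={z:+.1f}  grad={bce_grad_wrt_logit(z, 0.0):.4f}")
```

With sparse positives, most samples sit in this vanishing-gradient regime, which is the scenario the entry highlights.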
no code implementations • 5 Mar 2024 • Zhonghai Wang, Jie Jiang, Yibing Zhan, Bohao Zhou, Yanhong Li, Chong Zhang, Liang Ding, Hua Jin, Jun Peng, Xu Lin, Weifeng Liu
3) We introduce a standardized benchmark for evaluating medical LLMs in Anesthesiology.
2 code implementations • 29 Feb 2024 • Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, Bin Cui
We first classify RAG foundations according to how the retriever augments the generator, distilling the fundamental abstractions of the augmentation methodologies for various retrievers and generators.
no code implementations • 22 Feb 2024 • Junwei Pan, Wei Xue, Ximei Wang, Haibin Yu, Xun Liu, Shijie Quan, Xueming Qiu, Dapeng Liu, Lei Xiao, Jie Jiang
In this paper, we present an industry ad recommendation system, paying attention to the challenges and practices of learning appropriate representations.
no code implementations • 26 Jan 2024 • Lifu Zhang, Ji-An Li, Yang Hu, Jie Jiang, Rongjie Lai, Marcus K. Benna, Jian Shi
The memory consolidation from coupled storage components is revealed by both numerical simulations and experimental observations.
no code implementations • 13 Jan 2024 • Jie Jiang, Yuesheng Xu
We first develop a numerical method for solving the equation with DNNs as the approximate solution, designing a numerical quadrature tailored to computing oscillatory integrals involving DNNs.
no code implementations • 28 Dec 2023 • Jipeng Jin, Zhaoxiang Zhang, Zhiheng Li, Xiaofeng Gao, Xiongwen Yang, Lei Xiao, Jie Jiang
Considering the recency effect in memories, we propose a forgetting model based on the Ebbinghaus Forgetting Curve to cope with negative feedback.
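The paper's exact parameterization is not shown here; the following is a generic sketch of the curve the entry refers to, an exponential retention function R(t) = exp(-t/S), with a hypothetical use for decaying the weight of old negative feedback.

```python
import math

def retention(t_hours, strength):
    # Ebbinghaus forgetting curve: R = exp(-t / S), where S is the
    # memory strength (larger S means slower forgetting).
    return math.exp(-t_hours / strength)

def feedback_weight(t_hours, strength=24.0):
    # Hypothetical application: down-weight negative feedback as it "fades",
    # so stale dislikes influence recommendations less than recent ones.
    return retention(t_hours, strength)
```

At t = 0 the weight is 1.0, and after one "strength" interval it has decayed to about 0.37, matching the curve's usual shape.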
1 code implementation • 29 Nov 2023 • Wenhao Zhong, Jie Jiang
Inspired by the locality and implicit positional encoding of convolutions, a novel convolutional transformer is proposed to capture both local contexts and global structures more effectively for detector-free matching.
no code implementations • 6 Oct 2023 • Xingzhuo Guo, Junwei Pan, Ximei Wang, Baixu Chen, Jie Jiang, Mingsheng Long
Recent advances in deep foundation models have led to a promising trend of developing large recommendation models to leverage vast amounts of available data.
no code implementations • 19 Sep 2023 • Ximei Wang, Junwei Pan, Xingzhuo Guo, Dapeng Liu, Jie Jiang
Multi-domain learning (MDL) aims to train a model with minimal average risk across multiple overlapping but non-identical domains.
1 code implementation • 16 Aug 2023 • Liangcai Su, Junwei Pan, Ximei Wang, Xi Xiao, Shijie Quan, Xihua Chen, Jie Jiang
Surprisingly, negative transfer still occurs in existing MTL methods on samples that receive comparable feedback across tasks.
1 code implementation • 15 Aug 2023 • Haolin Zhou, Junwei Pan, Xinyi Zhou, Xihua Chen, Jie Jiang, Xiaofeng Gao, Guihai Chen
To fill this gap, we propose a Temporal Interest Network (TIN) to simultaneously capture the semantic-temporal correlation between behaviors and the target.
1 code implementation • 12 Jun 2023 • Rong-Cheng Tu, Yatai Ji, Jie Jiang, Weijie Kong, Chengfei Cai, Wenzhe Zhao, Hongfa Wang, Yujiu Yang, Wei Liu
MGSC promotes learning more representative global features, which have a great impact on the performance of downstream tasks, while MLTC reconstructs modal-fusion local tokens, further enhancing accurate comprehension of multimodal data.
1 code implementation • CVPR 2023 • Mengyin Liu, Jie Jiang, Chao Zhu, Xu-Cheng Yin
Firstly, we propose a self-supervised Vision-Language Semantic (VLS) segmentation method, which learns both fully-supervised pedestrian detection and contextual segmentation via self-generated explicit labels of semantic classes by vision-language models.
Ranked #5 on Pedestrian Detection on Caltech
1 code implementation • NeurIPS 2023 • Junguang Jiang, Baixu Chen, Junwei Pan, Ximei Wang, Liu Dapeng, Jie Jiang, Mingsheng Long
Auxiliary-Task Learning (ATL) aims to improve the performance of the target task by leveraging the knowledge obtained from related tasks.
no code implementations • 9 Jan 2023 • Xiangyu Li, Gongning Luo, Kuanquan Wang, Hongyu Wang, Jun Liu, Xinjie Liang, Jie Jiang, Zhenghao Song, Chunyue Zheng, Haokai Chi, Mingwang Xu, Yingte He, Xinghua Ma, Jingwen Guo, Yifan Liu, Chuanpu Li, Zeli Chen, Md Mahfuzur Rahman Siddiquee, Andriy Myronenko, Antoine P. Sanner, Anirban Mukhopadhyay, Ahmed E. Othman, Xingyu Zhao, Weiping Liu, Jinhuang Zhang, Xiangyuan Ma, Qinghui Liu, Bradley J. MacIntosh, Wei Liang, Moona Mazher, Abdul Qayyum, Valeriia Abramova, Xavier Lladó, Shuo Li
It is intended to resolve the above-mentioned problems and promote the development of both intracranial hemorrhage segmentation and anisotropic data processing.
no code implementations • 9 Dec 2022 • Jie Jiang, Zhimin Li, Jiangfeng Xiong, Rongwei Quan, Qinglin Lu, Wei Liu
Therefore, TAVS is distinguished from previous temporal segmentation datasets due to its multi-modal information, holistic view of categories, and hierarchical granularities.
no code implementations • 28 Nov 2022 • Enneng Yang, Junwei Pan, Ximei Wang, Haibin Yu, Li Shen, Xihua Chen, Lei Xiao, Jie Jiang, Guibing Guo
In this paper, we propose to measure the task dominance degree of a parameter by the total updates of each task on this parameter.
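A hedged sketch of the measure described above (the function name and normalization are assumptions, not the paper's definition): accumulate each task's total updates on a shared parameter, then read each task's share as its dominance degree.

```python
def dominance(task_grads):
    # task_grads: {task_name: [per-step gradient magnitudes on one parameter]}
    # Total update each task applied to the parameter, normalized to shares
    # so the values sum to 1 across tasks.
    totals = {t: sum(abs(g) for g in grads) for t, grads in task_grads.items()}
    z = sum(totals.values())
    return {t: v / z for t, v in totals.items()}

# A task whose gradients consistently dominate the parameter's updates
# receives a correspondingly large dominance share.
shares = dominance({"ctr": [0.9, 0.8, 0.7], "cvr": [0.1, 0.2, 0.3]})
```

Here "ctr" accounts for 2.4 of the 3.0 total update magnitude, so its share is 0.8.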
1 code implementation • CVPR 2023 • Yatai Ji, RongCheng Tu, Jie Jiang, Weijie Kong, Chengfei Cai, Wenzhe Zhao, Hongfa Wang, Yujiu Yang, Wei Liu
Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct corresponding information across different modalities.
Ranked #8 on Zero-Shot Video Retrieval on LSMDC
no code implementations • 27 Oct 2022 • Zuowu Zheng, Xiaofeng Gao, Junwei Pan, Qi Luo, Guihai Chen, Dapeng Liu, Jie Jiang
In this paper, we propose a novel model named AutoAttention, which includes all item/user/context side fields as the query, and assigns a learnable weight for each field pair between behavior fields and query fields.
no code implementations • 29 Aug 2022 • Boxi Wu, Jie Jiang, Haidong Ren, Zifan Du, Wenxiao Wang, Zhifeng Li, Deng Cai, Xiaofei He, Binbin Lin, Wei Liu
Various training criteria for these auxiliary outliers are proposed based on heuristic intuitions.
no code implementations • 21 Aug 2022 • Jingyu Lin, Jie Jiang, Yan Yan, Chunchao Guo, Hongfa Wang, Wei Liu, Hanzi Wang
We further propose a parallel design that integrates the convolutional network with a powerful self-attention mechanism to provide complementary clues between the attention path and convolutional path.
no code implementations • 7 Apr 2022 • Jie Jiang, Shaobo Min, Weijie Kong, Dihong Gong, Hongfa Wang, Zhifeng Li, Wei Liu
With multi-level representations for video and text, hierarchical contrastive learning is designed to explore fine-grained cross-modal relationships, i.e., frame-word, clip-phrase, and video-sentence, which enables HCMI to achieve a comprehensive semantic comparison between video and text modalities.
Ranked #1 on Video Retrieval on MSR-VTT-1kA (using extra training data)
no code implementations • 6 Sep 2021 • Xingjian He, Weining Wang, Zhiyong Xu, Hao Wang, Jie Jiang, Jing Liu
Compared with image scene parsing, video scene parsing introduces temporal information, which can effectively improve the consistency and accuracy of prediction.
no code implementations • 9 Feb 2021 • Jie Jiang, Xiaojun Chen
We formulate pure characteristics demand models under uncertainties of probability distributions as distributionally robust mathematical programs with stochastic complementarity constraints (DRMP-SCC).
Optimization and Control 90C15, 90C33, 90C26
no code implementations • 16 Sep 2020 • Ming Zhang, Jie Jiang
Viewing the negative cosmological constant as a dynamical quantity derived from the matter field, we study the weak cosmic censorship conjecture for the higher-dimensional asymptotically AdS Reissner-Nordström black hole.
General Relativity and Quantum Cosmology High Energy Physics - Theory
1 code implementation • TNNLS 2020 • Jun Fu, Jing Liu, Jie Jiang, Yong Li, Yongjun Bao, Hanqing Lu
We conduct extensive experiments to validate the effectiveness of our network and achieve new state-of-the-art segmentation performance on four challenging scene segmentation data sets, i.e., Cityscapes, ADE20K, PASCAL Context, and COCO Stuff.
Ranked #8 on Semantic Segmentation on COCO-Stuff test
2 code implementations • 3 Jul 2020 • Kamal Choudhary, Kevin F. Garrity, Andrew C. E. Reid, Brian DeCost, Adam J. Biacchi, Angela R. Hight Walker, Zachary Trautt, Jason Hattrick-Simpers, A. Gilad Kusne, Andrea Centrone, Albert Davydov, Jie Jiang, Ruth Pachter, Gowoon Cheon, Evan Reed, Ankit Agrawal, Xiaofeng Qian, Vinit Sharma, Houlong Zhuang, Sergei V. Kalinin, Bobby G. Sumpter, Ghanshyam Pilania, Pinar Acar, Subhasish Mandal, Kristjan Haule, David Vanderbilt, Karin Rabe, Francesca Tavazza
The Joint Automated Repository for Various Integrated Simulations (JARVIS) is an integrated infrastructure to accelerate materials discovery and design using density functional theory (DFT), classical force-fields (FF), and machine learning (ML) techniques.
Materials Science Computational Physics
no code implementations • 10 May 2020 • Longteng Guo, Jing Liu, Xinxin Zhu, Xingjian He, Jie Jiang, Hanqing Lu
In this paper, we propose a Non-Autoregressive Image Captioning (NAIC) model with a novel training paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL).
1 code implementation • Findings (ACL) 2021 • Jie Zhou, Shengding Hu, Xin Lv, Cheng Yang, Zhiyuan Liu, Wei Xu, Jie Jiang, Juanzi Li, Maosong Sun
Based on the datasets, we propose novel tasks such as multi-hop knowledge abstraction (MKA), multi-hop knowledge concretization (MKC) and then design a comprehensive benchmark.
1 code implementation • 25 Sep 2019 • Yikai Zhao, Peiqing Chen, Zidong Zhao, Tong Yang, Jie Jiang, Bin Cui, Gong Zhang, Steve Uhlig
First, we introduce RP Trees into similarity measurement tasks, improving accuracy.
1 code implementation • 25 Sep 2019 • Chenxingyu Zhao, Jie Gui, Yixiao Guo, Jie Jiang, Tong Yang, Bin Cui, Gong Zhang
Unlike the densification to fill the empty bins after they undesirably occur, our design goal is to balance the load so as to reduce the empty bins in advance.
no code implementations • 5 Sep 2019 • Jie Jiang, Banglin Deng, Zhaohui Chen
We consider the new version of the gedanken experiments proposed recently by Sorce and Wald to overcharge a static charged dilaton black hole.
High Energy Physics - Theory
no code implementations • 3 Jul 2019 • Jie Jiang, Qiuqiang Kong, Mark Plumbley, Nigel Gilbert
On the basis of energy disaggregation, we then investigate the performance of two deep-learning based frameworks for the task of on/off detection which aims at estimating whether an appliance is in operation or not.
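A hypothetical baseline for the on/off detection task described above (not one of the paper's deep-learning frameworks): mark an appliance "on" at a time step only when its disaggregated power has exceeded a threshold for at least a minimum number of consecutive readings, which suppresses brief spikes.

```python
def on_off(power_watts, threshold=15.0, min_on_steps=3):
    # Simple debounced thresholding on a disaggregated power trace.
    # states[i] is True only when the last min_on_steps readings up to
    # and including i all exceed the threshold.
    states, run = [], 0
    for p in power_watts:
        run = run + 1 if p > threshold else 0
        states.append(run >= min_on_steps)
    return states
```

The threshold and debounce length are illustrative; real appliances need per-device tuning, which motivates the learned detectors the entry evaluates.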
no code implementations • 17 Apr 2019 • Lei Liu, Jie Jiang, Wenjing Jia, Saeed Amirgholipour, Michelle Zeibots, Xiangjian He
Counting people or objects with significantly varying scales and densities has attracted much interest from the research community and yet it remains an open problem.
no code implementations • 15 Mar 2019 • Kamal Choudhary, Marnik Bercx, Jie Jiang, Ruth Pachter, Dirk Lamoen, Francesca Tavazza
Solar energy plays an important role in solving serious environmental problems and meeting high energy demand.
Materials Science
no code implementations • LREC 2014 • Thierry Etchegoyhen, Lindsay Bywood, Mark Fishel, Panayota Georgakopoulou, Jie Jiang, Gerard van Loenhout, Arantza del Pozo, Mirjam Sepesy Maučec, Anja Turner, Martin Volk
This article describes a large-scale evaluation of the use of Statistical Machine Translation for professional subtitling.