no code implementations • Findings (EMNLP) 2021 • Jingwen Xu, Jing Zhang, Xirui Ke, Yuxiao Dong, Hong Chen, Cuiping Li, Yongbin Liu
Its general process is to first encode the implicit relation of an entity pair and then match the relation of a query entity pair with the relations of the reference entity pairs.
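The matching step described above can be sketched as a nearest-neighbor search over relation embeddings. A minimal sketch, assuming relation embeddings are already given; the function names and the cosine-similarity choice are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two relation embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_relation(query_emb, reference_embs):
    """Return the index of the reference entity pair whose implicit
    relation embedding is most similar to the query pair's embedding."""
    scores = [cosine(query_emb, ref) for ref in reference_embs]
    return int(np.argmax(scores)), scores

# Toy example: the query relation aligns with the first reference pair.
query = np.array([1.0, 0.0, 0.5])
refs = [np.array([0.9, 0.1, 0.4]), np.array([-1.0, 0.2, 0.0])]
best, _ = match_relation(query, refs)
```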
no code implementations • Findings (EMNLP) 2021 • Tao Huang, Hong Chen
To improve the privacy guarantee and efficiency, we combine a subsampling method with CGS and propose a novel LDA training algorithm with differential privacy, SUB-LDA.
no code implementations • Findings (EMNLP) 2021 • Yu Feng, Jing Zhang, Gaole He, Wayne Xin Zhao, Lemao Liu, Quan Liu, Cuiping Li, Hong Chen
Knowledge Base Question Answering (KBQA) aims to answer natural language questions posed over knowledge bases (KBs).
no code implementations • ICML 2020 • Hong Chen, Guodong Liu, Heng Huang
Meanwhile, in these feature selection models, the interactions between features are often ignored or just discussed under prior structure information.
1 code implementation • 18 Mar 2024 • Yanling Wang, Jing Zhang, Lingxi Zhang, Lixin Liu, Yuxiao Dong, Cuiping Li, Hong Chen, Hongzhi Yin
Open-world semi-supervised learning (Open-world SSL) for node classification, which classifies unlabeled nodes into seen classes or multiple novel classes, is a practical but under-explored problem in the graph community.
no code implementations • 1 Mar 2024 • Wenjie Wei, Malu Zhang, Jilin Zhang, Ammar Belatreche, Jibin Wu, Zijing Xu, Xuerui Qiu, Hong Chen, Yang Yang, Haizhou Li
Specifically, we introduce two novel event-driven learning methods: the spike-timing-dependent event-driven (STD-ED) and membrane-potential-dependent event-driven (MPD-ED) algorithms.
no code implementations • 28 Feb 2024 • Xi Luo, Shiying Dong, Jinlong Hong, Bingzhao Gao, Hong Chen
This paper presents a neural network optimizer with a soft-argmax operator to achieve an ecological gearshift strategy in real time.
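A soft-argmax replaces the non-differentiable argmax with a softmax-weighted sum of indices, which is what lets a neural optimizer select discrete gears while remaining trainable end to end. A minimal sketch; the temperature `beta` and the gear-score example are hypothetical:

```python
import numpy as np

def soft_argmax(scores, beta=10.0):
    """Differentiable surrogate for argmax: a softmax-weighted sum of
    indices. Larger beta pushes the output toward the hard argmax."""
    w = np.exp(beta * (scores - np.max(scores)))  # numerically stable softmax
    w /= w.sum()
    return float(np.dot(np.arange(len(scores)), w))

# Gear-selection toy example: index 2 has the highest score, so the
# soft-argmax output lands very close to 2.
gear_scores = np.array([0.1, 0.3, 2.0, 0.2])
idx = soft_argmax(gear_scores)
```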
1 code implementation • 26 Feb 2024 • Haoyang Li, Jing Zhang, Hanbing Liu, Ju Fan, Xiaokang Zhang, Jun Zhu, Renjie Wei, Hongyan Pan, Cuiping Li, Hong Chen
To address the limitations, we introduce CodeS, a series of pre-trained language models with parameters ranging from 1B to 15B, specifically designed for the text-to-SQL task.
no code implementations • 20 Feb 2024 • Yuanguo Lin, Fan Lin, Guorong Cai, Hong Chen, Lixin Zou, Pengcheng Wu
In response to the limitations of reinforcement learning and evolutionary algorithms (EAs) in complex problem-solving, Evolutionary Reinforcement Learning (EvoRL) has emerged as a synergistic solution.
no code implementations • 19 Feb 2024 • Hong Chen, Chengtao Lv, Liang Ding, Haotong Qin, Xiabin Zhou, Yifu Ding, Xuebo Liu, Min Zhang, Jinyang Guo, Xianglong Liu, DaCheng Tao
Large language models (LLMs) have significantly advanced the field of natural language processing, while the expensive memory and computation consumption impede their practical deployment.
no code implementations • 7 Feb 2024 • Mingxuan Liu, Jiankai Tang, Haoxiang Li, Jiahao Qi, Siwei Li, Kegang Wang, Yuntao Wang, Hong Chen
Additionally, the power consumption of the transformer block is reduced by a factor of 12.2, while maintaining performance comparable to that of PhysFormer and other ANN-based models.
no code implementations • 28 Dec 2023 • Houlun Chen, Xin Wang, Hong Chen, Zihan Song, Jia Jia, Wenwu Zhu
To tackle these challenges, in this work we propose a Grounding-Prompter method, which is capable of conducting TSG in long videos through prompting LLM with multimodal information.
no code implementations • 21 Dec 2023 • Wei Feng, Xin Wang, Hong Chen, Zeyang Zhang, Zihan Song, Yuwei Zhou, Wenwu Zhu
Recently, researchers have attempted to investigate the capability of LLMs in handling videos and proposed several video LLM models.
no code implementations • 11 Dec 2023 • Kai Zhong, Luming Sun, Tao Ji, Cuiping Li, Hong Chen
They either learn to construct plans from scratch in a bottom-up manner or guide the plan generation behavior of traditional optimizer using hints.
no code implementations • 6 Dec 2023 • Haichao Sha, Ruixuan Liu, Yixuan Liu, Hong Chen
We prove that pre-projection enhances the convergence of DP-SGD by confining the clipping error and bias to a fraction of the top gradient eigenspace, and, in theory, limits cross-client variance to improve convergence under heterogeneous federation.
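As a rough illustration of the pre-projection idea (not the paper's algorithm), per-sample gradients can be projected onto a top-k eigenspace before clipping and noising; the basis construction, clipping norm, and noise scale below are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def project_clip_noise(grads, basis, clip=1.0, sigma=1.0):
    """Sketch of DP-SGD with pre-projection: per-sample gradients are
    projected onto a low-dimensional subspace (columns of `basis`),
    clipped there, averaged, perturbed with Gaussian noise, and lifted
    back to the full parameter space."""
    low = grads @ basis                          # (n, k) projected gradients
    norms = np.linalg.norm(low, axis=1, keepdims=True)
    low = low * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noisy = low.mean(axis=0) + rng.normal(
        0.0, sigma * clip / len(grads), size=low.shape[1])
    return basis @ noisy                         # back to full dimension

# Toy example: 8 samples, 6-dim gradients, a rank-2 subspace.
grads = rng.normal(size=(8, 6))
basis, _ = np.linalg.qr(rng.normal(size=(6, 2)))  # orthonormal columns
update = project_clip_noise(grads, basis)
```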
1 code implementation • 30 Nov 2023 • Bin Huang, Xin Wang, Hong Chen, Zihan Song, Wenwu Zhu
Large language models (LLMs) have shown remarkable text understanding capabilities, which have been extended as Video LLMs to handle video data for comprehending visual details.
no code implementations • 10 Nov 2023 • Siao Tang, Xin Wang, Hong Chen, Chaoyu Guan, Zewen Wu, Yansong Tang, Wenwu Zhu
In this paper, we propose a novel post-training quantization method PCR (Progressive Calibration and Relaxing) for text-to-image diffusion models, which consists of a progressive calibration strategy that considers the accumulated quantization error across timesteps, and an activation relaxing strategy that improves the performance with negligible cost.
no code implementations • 8 Nov 2023 • Siao Tang, Xin Wang, Hong Chen, Chaoyu Guan, Yansong Tang, Wenwu Zhu
When retraining the searched architecture, we adopt a dynamic joint loss to maintain the consistency between supernet training and subnet retraining, which also provides informative objectives for each block and shortens the paths of gradient propagation.
no code implementations • 2 Nov 2023 • Hong Chen, Xin Wang, Guanning Zeng, YiPeng Zhang, Yuwei Zhou, Feilin Han, Wenwu Zhu
The video generator is further customized for the given multiple subjects by the proposed Disen-Mix Finetuning and Human-in-the-Loop Re-finetuning strategy, which can tackle the attribute binding problem of multi-subject generation.
no code implementations • 7 Oct 2023 • Zhixuan Chu, Huaiyu Guo, Xinyuan Zhou, Yijia Wang, Fei Yu, Hong Chen, Wanqing Xu, Xin Lu, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
Large language models (LLMs) show promise for natural language tasks but struggle when applied directly to complex domains like finance.
no code implementations • 29 Sep 2023 • Zhen Liu, Hang Gao, Hao Ma, Shuo Cai, Yunfeng Hu, Ting Qu, Hong Chen, Xun Gong
Autonomous vehicle (AV) evaluation has been the subject of increased interest in recent years both in industry and in academia.
no code implementations • 23 Sep 2023 • Shasha Guo, Jing Zhang, Xirui Ke, Cuiping Li, Hong Chen
The above insights make diversifying question generation an intriguing task, where the first challenge is evaluation metrics for diversity.
no code implementations • 9 Sep 2023 • Yuanguo Lin, Hong Chen, Wei Xia, Fan Lin, Zongyue Wang, Yong Liu
With the increasing complexity and diversity of educational data, Deep Learning techniques have shown significant advantages in addressing the challenges associated with analyzing and modeling this data.
1 code implementation • 5 Sep 2023 • Siwei Li, Mingxuan Liu, Yating Zhang, Shu Chen, Haoxiang Li, Zifei Dou, Hong Chen
Image deblurring is a critical task in the field of image restoration, aiming to eliminate blurring artifacts.
no code implementations • 31 Aug 2023 • Yuxuan Hu, Jing Zhang, Chen Zhao, Cuiping Li, Hong Chen
By projecting the whole transform model into a subspace, we enable matrix operations between the weight matrices in the model and features in a reduced-dimensional space, leading to significant reductions in model parameters and computing resources.
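The subspace trick can be illustrated with plain linear algebra: given an orthonormal basis P of rank r, a d×d weight operation is replaced by an r×r one, with features projected into and out of the subspace. The dimensions below are toy values, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 8                                   # full and reduced dimensions
P, _ = np.linalg.qr(rng.normal(size=(d, r)))   # orthonormal projection basis
W = rng.normal(size=(d, d))                    # a weight matrix of the model

W_sub = P.T @ W @ P                            # (r, r): operate in the subspace

x = rng.normal(size=d)
x_sub = P.T @ x                                # project the feature
y_sub = W_sub @ x_sub                          # r*r multiply instead of d*d
y = P @ y_sub                                  # lift back when needed

# This layer's parameter count shrinks from d*d to r*r.
```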
no code implementations • 23 Aug 2023 • Enze Liu, Zhiyuan Lin, Judith Y. T. Wang, Hong Chen
The use of MD has enabled the modelling of passenger dynamics in response to train delays and station crowdedness, as well as real-time optimisation for rescheduling train services in view of demand changes resulting from passengers' behavioural response to disruption.
1 code implementation • ICCV 2023 • Pan Du, Suyun Zhao, Zisen Sheng, Cuiping Li, Hong Chen
Specifically, WAD captures adaptive weights and high-quality pseudo-labels for target instances by exploring pointwise mutual information (PMI) in the representation space, in order to maximize the role of unlabeled data and filter out unknown categories.
1 code implementation • 20 Aug 2023 • Mingxuan Liu, Jie Gan, Rui Wen, Tao Li, Yongli Chen, Hong Chen
To fill the gap, we propose a Spiking-Diffusion model, which is based on the vector quantized discrete diffusion model.
Ranked #1 on Image Generation on EMNIST-Letters
no code implementations • 6 Jul 2023 • Aparna Ananthasubramaniam, Hong Chen, Jason Yan, Kenan Alkiek, Jiaxin Pei, Agrima Seth, Lavinia Dunagan, MinJe Choi, Benjamin Litterer, David Jurgens
Linguistic style matching (LSM) in conversations can be reflective of several aspects of social influence such as power or persuasion.
no code implementations • 26 Jun 2023 • Lingxi Zhang, Jing Zhang, Yanling Wang, Shulin Cao, Xinmei Huang, Cuiping Li, Hong Chen, Juanzi Li
The generalization problem on KBQA has drawn considerable attention.
no code implementations • 22 May 2023 • Huadai Liu, Rongjie Huang, Xuan Lin, Wenqiang Xu, Maozong Zheng, Hong Chen, Jinzheng He, Zhou Zhao
To mitigate the data scarcity in learning visual acoustic information, we 1) introduce a self-supervised learning framework to enhance both the visual-text encoder and denoiser decoder; 2) leverage a diffusion transformer that is scalable in parameters and capacity to learn visual scene information.
no code implementations • 17 May 2023 • Hao Zheng, Jinbao Wang, XianTong Zhen, Hong Chen, Jingkuan Song, Feng Zheng
Recently, Transformers have emerged as the go-to architecture for both vision and language modeling tasks, but their computational efficiency is limited by the length of the input sequence.
1 code implementation • 5 May 2023 • Hong Chen, YiPeng Zhang, Simin Wu, Xin Wang, Xuguang Duan, Yuwei Zhou, Wenwu Zhu
To tackle the problems, we propose DisenBooth, an identity-preserving disentangled tuning framework for subject-driven text-to-image generation.
no code implementations • 3 May 2023 • Chen Zhu, Liang Du, Hong Chen, Shuang Zhao, Zixun Sun, Xin Wang, Wenwu Zhu
To tackle this problem, we draw inspiration from the Global Workspace Theory in conscious processing, which posits that only a specific subset of product features is pertinent while the rest can be noisy and even detrimental to human click behaviors. Accordingly, we propose a CTR model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction, termed DELTA.
1 code implementation • 2 May 2023 • Yuxin Dong, Tieliang Gong, Hong Chen, Chen Li
However, the current generalization error bounds within this framework are still far from optimal, while substantial improvements on these bounds are quite challenging due to the intractability of high-dimensional information quantities.
no code implementations • WWW 2023 • Shengsheng Qian, Hong Chen, Dizhan Xue, Quan Fang, Changsheng Xu
To tackle these challenges, we propose an Open-World Social Event Classifier (OWSEC) model in this paper.
1 code implementation • ICML 2023 • Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han
In this paper, we show that this assumption renders the above methods ineffective when the ID model is trained with class-imbalanced data. Fortunately, by analyzing the causal relations between ID/OOD classes and features, we identify several common scenarios where the OOD-to-ID probabilities should follow the ID-class-prior distribution, and propose two strategies to modify existing inference-time detection methods: 1) replace the uniform distribution with the ID-class-prior distribution if they explicitly use the uniform distribution; 2) otherwise, reweight their scores according to the similarity between the ID-class-prior distribution and the softmax outputs of the pre-trained model.
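The score-reweighting strategy (strategy 2) can be sketched as follows; the inner-product similarity and the base score here are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reweighted_score(logits, class_prior, base_score):
    """Sketch of reweighting an OOD detector's score by the similarity
    (here, an inner product) between the ID-class-prior distribution
    and the model's softmax output."""
    p = softmax(logits)
    return base_score * float(np.dot(class_prior, p))

# Toy imbalanced prior over 3 ID classes.
prior = np.array([0.7, 0.2, 0.1])
logits_majority = np.array([4.0, 0.0, 0.0])  # confident majority-class sample
logits_minority = np.array([0.0, 0.0, 4.0])  # confident minority-class sample
s_maj = reweighted_score(logits_majority, prior, base_score=1.0)
s_min = reweighted_score(logits_minority, prior, base_score=1.0)
# The majority-class sample keeps a higher ID score under the prior.
```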
no code implementations • 11 Apr 2023 • Yixuan Liu, Suyun Zhao, Li Xiong, YuHan Liu, Hong Chen
In this work, a general framework (APES) is built up to strengthen model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
1 code implementation • ICCV 2023 • Zhendong Wang, Jianmin Bao, Wengang Zhou, Weilun Wang, Hezhen Hu, Hong Chen, Houqiang Li
We find that existing detectors struggle to detect images generated by diffusion models, even if we include generated images from a specific diffusion model in their training data.
no code implementations • 20 Feb 2023 • Jun Chen, Hong Chen, Xue Jiang, Bin Gu, Weifu Li, Tieliang Gong, Feng Zheng
Triplet learning, i.e., learning from triplet data, has attracted much attention in computer vision tasks with an extremely large number of categories, e.g., face recognition and person re-identification.
no code implementations • 20 Feb 2023 • Jiahuan Wang, Jun Chen, Hong Chen, Bin Gu, Weifu Li, Xin Tang
Recently, some mixture algorithms of pointwise and pairwise learning (PPL) have been formulated by employing the hybrid error metric of "pointwise loss + pairwise loss" and have shown empirical effectiveness on feature selection, ranking and recommendation tasks.
1 code implementation • 12 Feb 2023 • Haoyang Li, Jing Zhang, Cuiping Li, Hong Chen
Due to the structural property of SQL queries, the seq2seq model takes the responsibility of parsing both the schema items (i.e., tables and columns) and the skeleton (i.e., SQL keywords).
Ranked #1 on Semantic Parsing on spider
no code implementations • CVPR 2023 • Jinlong Kang, Liyuan Shang, Suyun Zhao, Hong Chen, Cuiping Li, Zeyu Gan
In many real scenarios, data are often divided into a handful of artificial super categories in terms of expert knowledge rather than the representations of images.
no code implementations • ICCV 2023 • Wenjie Wei, Malu Zhang, Hong Qu, Ammar Belatreche, Jian Zhang, Hong Chen
As a temporal encoding scheme for SNNs, Time-To-First-Spike (TTFS) encodes information using the timing of a single spike, which allows spiking neurons to transmit information through sparse spike trains and results in lower power consumption and higher computational efficiency compared to traditional rate-based encoding counterparts.
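The TTFS scheme maps each input intensity to at most one spike, with stronger inputs firing earlier. A minimal encoding sketch, assuming linear latency coding with a hypothetical time window `t_max` (actual TTFS implementations vary):

```python
import numpy as np

def ttfs_encode(x, t_max=100.0):
    """Time-To-First-Spike encoding sketch: each input intensity in
    [0, 1] is mapped to the timing of a single spike, with stronger
    inputs firing earlier. Zero inputs never fire (time = inf)."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, t_max * (1.0 - x), np.inf)

# Stronger pixel -> earlier spike; at most one spike per neuron,
# which is what makes the resulting spike trains sparse.
times = ttfs_encode([1.0, 0.5, 0.0])
```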
no code implementations • 7 Dec 2022 • Feiping Nie, Hong Chen, Rong Wang, Xuelong Li
This paper presents an algorithm to solve the Soft k-Means problem globally.
1 code implementation • 30 Nov 2022 • Yuxin Dong, Tieliang Gong, Shujian Yu, Hong Chen, Chen Li
The matrix-based Rényi's entropy allows us to directly quantify information measures from given data, without explicit estimation of the underlying probability distribution.
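The matrix-based Rényi entropy is evaluated directly from the eigenvalues of a unit-trace Gram matrix, S_α(A) = (1 − α)⁻¹ log₂ Σᵢ λᵢ^α, with no density estimation. A sketch using a Gaussian kernel; the bandwidth `sigma` and kernel choice are placeholder assumptions:

```python
import numpy as np

def renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi entropy sketch: build a Gaussian Gram matrix,
    normalize it to unit trace, and evaluate
    S_alpha(A) = 1/(1-alpha) * log2(sum_i lambda_i^alpha)
    from its eigenvalues."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    A = K / np.trace(K)                          # unit-trace normalization
    lam = np.clip(np.linalg.eigvalsh(A), 0, None)
    return float(np.log2(np.sum(lam ** alpha)) / (1 - alpha))

rng = np.random.default_rng(0)
H = renyi_entropy(rng.normal(size=(20, 3)))
# For n samples and alpha = 2, the value lies in [0, log2(n)].
```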
no code implementations • 21 Nov 2022 • Xin Wang, Hong Chen, Si'ao Tang, Zihao Wu, Wenwu Zhu
Disentangled Representation Learning (DRL) aims to learn a model capable of identifying and disentangling the underlying factors hidden in the observable data in representation form.
no code implementations • 1 Nov 2022 • Mengdie Wang, Liyuan Shang, Suyun Zhao, Yiming Wang, Hong Chen, Cuiping Li, XiZhao Wang
Accordingly, the query results, guided by oracles with distinctive demands, may drive the OCC's clustering results in a desired orientation.
2 code implementations • 16 Oct 2022 • Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, Nanyun Peng
This task requires machines to 1) understand long text inputs and 2) produce a globally consistent image sequence that illustrates the contents of the story.
1 code implementation • 16 Oct 2022 • Hong Chen, Duc Minh Vo, Hiroya Takamura, Yusuke Miyao, Hideki Nakayama
Existing automatic story evaluation methods place a premium on story lexical level coherence, deviating from human preference.
no code implementations • 28 Sep 2022 • Libin Wang, Yulong Wang, Shiyuan Wang, Youheng Liu, Yutao Hu, Longlong Chen, Hong Chen
Tensor Robust Principal Component Analysis (TRPCA), which aims to recover a low-rank tensor corrupted by sparse noise, has attracted much attention in many real applications.
no code implementations • 31 Aug 2022 • Dustin Carrión-Ojeda, Hong Chen, Adrian El Baz, Sergio Escalera, Chaoyu Guan, Isabelle Guyon, Ihsan Ullah, Xin Wang, Wenwu Zhu
We present the design and baseline results for a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on "cross-domain" meta-learning.
no code implementations • 23 Aug 2022 • Zining Chen, Weiqiu Wang, Zhicheng Zhao, Aidong Men, Hong Chen
Recently, out-of-distribution (OOD) generalization has attracted attention to the robustness and generalization ability of deep learning based models, and accordingly, many strategies have been made to address different aspects related to this issue.
no code implementations • 20 Aug 2022 • Yang Zhao, Wenqiang Xu, Xuan Lin, Jingjing Huo, Hong Chen, Zhou Zhao
The task of argument mining aims to detect all possible argumentative components and identify their relationships automatically.
no code implementations • 15 Jun 2022 • Adrian El Baz, Ihsan Ullah, Edesio Alcobaça, André C. P. L. F. Carvalho, Hong Chen, Fabio Ferreira, Henry Gouk, Chaoyu Guan, Isabelle Guyon, Timothy Hospedales, Shell Hu, Mike Huisman, Frank Hutter, Zhengying Liu, Felix Mohr, Ekrem Öztürk, Jan N. van Rijn, Haozhe Sun, Xin Wang, Wenwu Zhu
Although deep neural networks are capable of achieving performance superior to humans on various tasks, they are notorious for requiring large amounts of data and computing resources, restricting their success to domains where such resources are available.
1 code implementation • NAACL 2022 • Rujun Han, Hong Chen, Yufei Tian, Nanyun Peng
Stories or narratives are composed of a sequence of events.
1 code implementation • 18 Apr 2022 • Yunhao Du, Binyu Zhang, Xiangning Ruan, Fei Su, Zhicheng Zhao, Hong Chen
For the textual representation, one global embedding, three local embeddings and a color-type prompt embedding are extracted to represent various granularities of semantic features.
no code implementations • 18 Apr 2022 • Ruixuan Liu, Yanlin Wang, Yang Cao, Lingjuan Lyu, Weike Pan, Yun Chen, Hong Chen
Collecting and training over sensitive personal data raise severe privacy concerns in personalized recommendation systems, and federated learning can potentially alleviate the problem by training models over decentralized user data. However, a theoretically private solution in both the training and serving stages of federated recommendation is essential but still lacking. Furthermore, naively applying differential privacy (DP) to the two stages in federated recommendation would fail to achieve a satisfactory trade-off between privacy and utility due to the high-dimensional characteristics of model gradients and hidden representations. In this work, we propose a federated news recommendation method that achieves better utility in model training and online serving under a DP guarantee. We first clarify the DP definition over behavior data for each round in the life-cycle of federated recommendation systems. Next, we propose a privacy-preserving online serving mechanism under this definition based on the idea of decomposing user embeddings with public basic vectors and perturbing the lower-dimensional combination coefficients.
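The serving mechanism described last can be sketched as decomposing an embedding over a public basis and perturbing only the low-dimensional coefficients. The least-squares decomposition, Laplace mechanism, and the crude sensitivity bound below are toy assumptions, not the paper's calibrated mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_user_embedding(user_emb, basis, epsilon=1.0):
    """Sketch: express a user embedding over public basis vectors,
    add Laplace noise to the combination coefficients only, and
    reconstruct the embedding for release."""
    coeffs, *_ = np.linalg.lstsq(basis, user_emb, rcond=None)   # (k,)
    sensitivity = 2.0 * np.max(np.abs(coeffs))  # loose toy bound, not a proof
    noisy = coeffs + rng.laplace(0.0, sensitivity / epsilon, size=coeffs.shape)
    return basis @ noisy

d, k = 32, 4
basis = rng.normal(size=(d, k))     # public basic vectors
user = rng.normal(size=d)           # private user embedding
released = private_user_embedding(user, basis)
```

Perturbing k coefficients instead of d coordinates is what sidesteps the high-dimensionality problem the abstract mentions.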
no code implementations • CVPR 2022 • Duc Minh Vo, Hong Chen, Akihiro Sugimoto, Hideki Nakayama
We propose an end-to-end Novel Object Captioning with Retrieved vocabulary from External Knowledge method (NOC-REK), which simultaneously learns vocabulary retrieval and caption generation, successfully describing novel objects outside of the training dataset.
no code implementations • 9 Mar 2022 • Xuebin Zhao, Hong Chen, Yingjie Wang, Weifu Li, Tieliang Gong, Yulong Wang, Feng Zheng
Recently, the scheme of model-X knockoffs was proposed as a promising solution to address controlled feature selection under high-dimensional finite-sample settings.
1 code implementation • ACL 2022 • Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, Hong Chen
Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning.
no code implementations • 16 Feb 2022 • Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, Xing Xie
In this way, all the clients can participate in the model learning in FL, and the final model can be big and powerful enough.
no code implementations • 11 Feb 2022 • Hong Chen, Murray Zed Frank
In dynamic capital structure models with an investor break-even condition, the firm's Bellman equation may not generate a contraction mapping, so the standard existence and uniqueness conditions do not apply.
1 code implementation • 7 Jan 2022 • Suyun Zhao, Zhigang Dai, XiZhao Wang, Peng Ni, Hengheng Luo, Hong Chen, Cuiping Li
First, a rule induction method based on consistence degree, called Consistence-based Value Reduction (CVR), is proposed and used as basis to accelerate.
no code implementations • 12 Dec 2021 • Tieliang Gong, Yuxin Dong, Hong Chen, Bo Dong, Chen Li
Subsampling is an important technique to tackle the computational challenges brought by big data.
1 code implementation • 12 Dec 2021 • Yu Feng, Jing Zhang, Xiaokang Zhang, Lemao Liu, Cuiping Li, Hong Chen
Embedding-based methods are popular for Knowledge Base Question Answering (KBQA), but few current models have numerical reasoning skills and thus struggle to answer ordinal constrained questions.
no code implementations • 9 Dec 2021 • Tieliang Gong, Yuxin Dong, Hong Chen, Bo Dong, Wei Feng, Chen Li
Our results show that Markov dependence impacts the generalization error in that the effective sample size is discounted by a multiplicative factor depending on the spectral gap of the underlying Markov chain.
no code implementations • 4 Dec 2021 • Bowen Hao, Hongzhi Yin, Cuiping Li, Hong Chen
As each occasional group has extremely sparse interactions with items, traditional group recommendation methods can not learn high-quality group representations.
no code implementations • 4 Dec 2021 • Bowen Hao, Hongzhi Yin, Jing Zhang, Cuiping Li, Hong Chen
In terms of the pretext task, in addition to considering the intra-correlations of users and items via the embedding reconstruction task, we add an embedding contrastive learning task to capture the inter-correlations of users and items.
1 code implementation • NeurIPS 2021 • Hong Chen, Yudong Chen, Xin Wang, Ruobing Xie, Rui Wang, Feng Xia, Wenwu Zhu
However, learning such disentangled representations from multi-feedback data is challenging because i) multi-feedback is complex: there exist complex relations among different types of feedback (e.g., click, unclick, and dislike) as well as various user intentions, and ii) multi-feedback is noisy: there exists noisy (useless) information in both features and labels, which may deteriorate the recommendation performance.
no code implementations • 14 Nov 2021 • Yongjun Yan, Nan Li, Jinlong Hong, Bingzhao Gao, Hong Chen, Jing Sun, Ziyou Song
However, the comprehensive comparison between different coasting strategies and the online performance of the eco-coasting strategy using road grade preview are still unclear because of the oversimplification and the integer variables in the optimal control problems.
no code implementations • Findings (EMNLP) 2021 • Hong Chen, Hiroya Takamura, Hideki Nakayama
Generating texts in scientific papers requires not only capturing the content contained within the given input but also frequently acquiring external information, called "context".
no code implementations • ICLR 2022 • Yingjie Wang, Xianrui Zhong, Fengxiang He, Hong Chen, DaCheng Tao
Moreover, the error bound for non-stationary time series contains a discrepancy measure for the shifts of the data distributions over time.
no code implementations • INLG (ACL) 2021 • Hong Chen, Raphael Shu, Hiroya Takamura, Hideki Nakayama
In this paper, we focus on planning a sequence of events assisted by event graphs, and use the events to guide the generator.
no code implementations • 5 Feb 2021 • Hong Chen, Yifei Huang, Hiroya Takamura, Hideki Nakayama
To enrich the candidate concepts, a commonsense knowledge graph is created for each image sequence from which the concept candidates are proposed.
Ranked #19 on Visual Storytelling on VIST
no code implementations • 3 Feb 2021 • Liangxi Liu, Xi Jiang, Feng Zheng, Hong Chen, Guo-Jun Qi, Heng Huang, Ling Shao
On the client side, a prior loss that uses the global posterior probabilistic parameters delivered from the server is designed to guide the local training.
no code implementations • 11 Jan 2021 • Hong Chen, Ee Hou Yong
Therefore, to understand the control of social systems with conformity, discrete-time modelling is used and the energy-cost scaling laws are derived.
no code implementations • ICCV 2021 • Pan Du, Suyun Zhao, Hui Chen, Shuwen Chai, Hong Chen, Cuiping Li
However, its performance deteriorates under class distribution mismatch, wherein the unlabeled data contain many samples out of the class distribution of labeled data.
1 code implementation • 28 Dec 2020 • Bowen Hao, Jing Zhang, Cuiping Li, Hong Chen, Hongzhi Yin
On the one hand, the framework enables training multiple supervised ranking models upon the pseudo labels produced by multiple unsupervised ranking models.
2 code implementations • 14 Dec 2020 • Bo Chen, Jing Zhang, Xiaokang Zhang, Xiaobin Tang, Lingfan Cai, Hong Chen, Cuiping Li, Peng Zhang, Jie Tang
In this paper, we propose CODE, which first pre-trains an expert linking model by contrastive learning on AMiner so that it captures the representation and matching patterns of experts without supervised signals; the model is then fine-tuned between AMiner and external sources in an adversarial manner to enhance its transferability.
1 code implementation • 13 Dec 2020 • Bowen Hao, Jing Zhang, Hongzhi Yin, Cuiping Li, Hong Chen
Cold-start problem is a fundamental challenge for recommendation tasks.
no code implementations • NeurIPS 2020 • Yingjie Wang, Hong Chen, Feng Zheng, Chen Xu, Tieliang Gong, Yanhong Chen
For high-dimensional observations in real environments, e.g., Coronal Mass Ejections (CMEs) data, the learning performance of previous methods may degrade seriously due to the complex non-Gaussian noise and the insufficiency of prior knowledge on variable structure.
1 code implementation • 17 Sep 2020 • Ruixuan Liu, Yang Cao, Hong Chen, Ruoyang Guo, Masatoshi Yoshikawa
In this work, by leveraging the privacy amplification effect of the recently proposed shuffle model of differential privacy, we achieve the best of both worlds, i.e., accuracy in the curator model and strong privacy without relying on any trusted party.
no code implementations • 24 Mar 2020 • Ruixuan Liu, Yang Cao, Masatoshi Yoshikawa, Hong Chen
To prevent privacy leakages from gradients that are calculated on users' sensitive data, local differential privacy (LDP) has been considered as a privacy guarantee in federated SGD recently.
1 code implementation • ICONIP 2019 2020 • Hong Chen, Hisashi Koga
Specifically, it complements the edge-label information or the structural information that Graph2vec misses with embeddings of the line graphs.
no code implementations • 22 Nov 2019 • Xinglong Zhang, Jiahang Liu, Xin Xu, Shuyou Yu, Hong Chen
Robust model predictive control (MPC) is a well-known control technique for model-based control with constraints and uncertainties.
1 code implementation • 31 Oct 2019 • Yue Ma, Xiaojie Wang, Zhenjiang Dong, Hong Chen
Dialogue embeddings are learned by an LSTM in the middle of the network and updated by feeding in all turn embeddings.
no code implementations • 12 Dec 2018 • Chongyang Zhang, Guofeng Zhu, Minxin Chen, Hong Chen, Chenjian Wu
The computational complexity of Multiscale Fast Spectral Clustering is O(n log n) and its memory cost is O(m).
no code implementations • EMNLP 2018 • Hong Chen, Zhenhua Fan, Hao Lu, Alan L. Yuille, Shu Rong
We introduce PreCo, a large-scale English dataset for coreference resolution.
3 code implementations • 16 Oct 2018 • Hong Chen, Yifei Huang, Hideki Nakayama
Object co-segmentation is the task of segmenting the same objects from multiple images.
no code implementations • NeurIPS 2017 • Xiaoqian Wang, Hong Chen, Weidong Cai, Dinggang Shen, Heng Huang
Linear regression models have been successfully used for function estimation and model selection in high-dimensional data analysis.
no code implementations • NeurIPS 2017 • Hong Chen, Xiaoqian Wang, Cheng Deng, Heng Huang
Among them, learning models with grouped variables have shown competitive performance for prediction and variable selection.
no code implementations • 5 Nov 2017 • Yulong Wang, Yuan Yan Tang, Luoqing Li, Hong Chen
In this paper, we propose a modal regression based atomic representation and classification (MRARC) framework to alleviate such limitation.
no code implementations • NeurIPS 2016 • Hong Chen, Haifeng Xia, Heng Huang, Weidong Cai
The Nyström method has been used successfully to improve the computational efficiency of kernel ridge regression (KRR).