no code implementations • EMNLP 2020 • Wenxuan Zhang, Yang Deng, Jing Ma, Wai Lam
Product-related question answering platforms nowadays are widely employed in many E-commerce sites, providing a convenient way for potential customers to address their concerns during online shopping.
no code implementations • Findings (EMNLP) 2021 • Zhiwei Yang, Jing Ma, Hechang Chen, Yunke Zhang, Yi Chang
Specifically, we first utilize a two-phase module to generate span representations by aggregating context information based on a bottom-up and top-down transformer network.
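As a rough picture of what a span representation is, the sketch below builds one from contextual token embeddings using boundary tokens plus a mean-pooled interior; it is only a generic stand-in for the two-phase bottom-up and top-down aggregation described above, and all names are illustrative.

```python
# Generic span representation from contextual token embeddings: concatenate the
# boundary token vectors with a mean-pooled interior. A stand-in sketch only.
import torch

def span_representation(token_emb: torch.Tensor, start: int, end: int) -> torch.Tensor:
    """token_emb: (seq_len, hidden). Returns [start token; end token; mean over the span]."""
    pooled = token_emb[start:end + 1].mean(dim=0)                 # pooled span content
    return torch.cat([token_emb[start], token_emb[end], pooled])  # boundary + content features
```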
1 code implementation • 25 Feb 2025 • Hongzhan Lin, Yang Deng, Yuxuan Gu, Wenxuan Zhang, Jing Ma, See-Kiong Ng, Tat-Seng Chua
Large Language Models (LLMs) have significantly advanced fact-checking research.
no code implementations • 19 Feb 2025 • Shuai Niu, Jing Ma, Hongzhan Lin, Liang Bai, Zhihua Wang, Wei Bi, Yida Xu, Guo Li, Xian Yang
Large language models (LLMs) have shown remarkable performance in vision-language tasks, but their application in the medical field remains underexplored, particularly for integrating structured time series data with unstructured clinical notes.
no code implementations • 17 Feb 2025 • Zirui He, Haiyan Zhao, Yiran Qiao, Fan Yang, Ali Payani, Jing Ma, Mengnan Du
We demonstrate how the features we identify can effectively steer model outputs to align with given instructions.
no code implementations • 16 Feb 2025 • Yuefei Chen, Vivek K. Singh, Jing Ma, Ruxiang Tang
Counterfactual reasoning is widely recognized as one of the most challenging and intricate aspects of causality in artificial intelligence.
no code implementations • 13 Feb 2025 • Ruichao Yang, Jing Ma, Wei Gao, Hongzhan Lin
Although rumor detection and stance detection are distinct tasks, they can complement each other.
no code implementations • 27 Jan 2025 • Yuanfu Sun, Zhengnan Ma, Yi Fang, Jing Ma, Qiaoyu Tan
The growing importance of textual and relational systems has driven interest in enhancing large language models (LLMs) for graph-structured data, particularly Text-Attributed Graphs (TAGs), where samples are represented by textual descriptions interconnected by edges.
1 code implementation • 5 Jan 2025 • Peihai Jiang, Xixiang Lyu, Yige Li, Jing Ma
The BTU defense leverages these properties to identify aberrant embedding parameters and subsequently removes backdoor behaviors using a fine-grained unlearning technique.
no code implementations • 17 Dec 2024 • Yuxi Sun, Wei Gao, Jing Ma, Hongzhan Lin, Ziyang Luo, Wenxuan Zhang
This suggests that modeling human moral judgment by emulating human moral strategies is promising for improving the ethical behavior of LLMs.
1 code implementation • 28 Nov 2024 • Rao Fu, Ziyang Luo, Hongzhan Lin, Zhen Ye, Jing Ma
By integrating visual elements and embedded programming logic, ScratchEval requires the model to process both visual information and code structure, thereby comprehensively evaluating its programming intent understanding ability.
no code implementations • 20 Nov 2024 • Ziyang Luo, HaoNing Wu, Dongxu Li, Jing Ma, Mohan Kankanhalli, Junnan Li
To further streamline our evaluation, we introduce VideoAutoBench as an auxiliary benchmark, where human annotators label winners in a subset of VideoAutoArena battles.
1 code implementation • 19 Nov 2024 • Tonmoy Hossain, Jing Ma, Jundong Li, Miaomiao Zhang
In this paper, we introduce a novel framework that for the first time develops invariant shape representation learning (ISRL) to further strengthen the robustness of image classifiers.
no code implementations • 12 Nov 2024 • Chuyi Kong, Ziyang Luo, Hongzhan Lin, Zhiyuan Fan, Yaxin Fan, Yuxi Sun, Jing Ma
The advanced role-playing capabilities of Large Language Models (LLMs) have paved the way for developing Role-Playing Agents (RPAs).
no code implementations • 12 Nov 2024 • Shuai Niu, Jing Ma, Liang Bai, Zhihua Wang, Yida Xu, Yunya Song, Xian Yang
ClinRaGen incorporates a unique knowledge-augmented attention mechanism to merge domain knowledge with time series EHR data, utilizing a stepwise rationale distillation strategy to produce both textual and time series-based clinical rationales.
1 code implementation • 8 Nov 2024 • Jianzhao Huang, Hongzhan Lin, Ziyan Liu, Ziyang Luo, Guang Chen, Jing Ma
The proliferation of Internet memes in the age of social media necessitates effective identification of harmful ones.
no code implementations • 25 Oct 2024 • Yinhan He, Wendy Zheng, Yaochen Zhu, Jing Ma, Saumitra Mishra, Natraj Raman, Ninghao Liu, Jundong Li
Methodologically, we design a significant subgraph generator and a counterfactual subgraph autoencoder in our GlobalGCE, where the subgraphs and the rules can be effectively generated.
1 code implementation • 1 Oct 2024 • Ziyang Luo, Xin Li, Hongzhan Lin, Jing Ma, Lidong Bing
To this end, our study introduces the Adaptive Modular Response Evolution (AMR-Evol) framework, which employs a two-stage process to refine response distillation.
no code implementations • 15 Sep 2024 • Jing Ma
Causal inference has been a pivotal challenge across diverse domains such as medicine and economics, demanding a complicated integration of human knowledge, mathematical reasoning, and data mining capabilities.
no code implementations • 15 Sep 2024 • Jing Ma
Concluding with a discussion on potential future research directions, this review seeks to articulate the continuing development and future potential of causality in enhancing the trustworthiness of graph machine learning.
no code implementations • 28 Aug 2024 • Yiran Qiao, Yu Yin, Chen Chen, Jing Ma
On top of that, we design a causally certified defense strategy to handle adversarial attacks on latent causal factors.
1 code implementation • 20 Aug 2024 • Yuwei Zhao, Ziyang Luo, Yuchen Tian, Hongzhan Lin, Weixiang Yan, Annan Li, Jing Ma
Recent advancements in large language models (LLMs) have showcased impressive code generation capabilities, primarily evaluated through language-to-code benchmarks.
no code implementations • 4 Aug 2024 • Yiren Lu, Jing Ma, Yu Yin
In this work, we introduce a novel RF editing pipeline that significantly enhances consistency by requiring the inpainting of only a single reference image.
no code implementations • 16 Jul 2024 • Yushun Dong, Song Wang, Zhenyu Lei, Zaiyi Zheng, Jing Ma, Chen Chen, Jundong Li
Fairness-aware graph learning has gained increasing attention in recent years.
no code implementations • 20 Jun 2024 • Yaochen Zhu, Yinhan He, Jing Ma, Mengxuan Hu, Sheng Li, Jundong Li
Depending on the type of unobserved variables and the specific CI task, various consequences can be incurred if these latent variables are carelessly handled, such as biased estimation of causal effects, incomplete understanding of causal mechanisms, lack of individual-level causal consideration, etc.
1 code implementation • 17 Jun 2024 • Shengkang Wang, Hongzhan Lin, Ziyang Luo, Zhen Ye, Guang Chen, Jing Ma
Large vision-language models (LVLMs) have significantly improved multimodal reasoning tasks, such as visual question answering and image captioning.
no code implementations • 4 Jun 2024 • Ruichao Yang, Wei Gao, Jing Ma, Hongzhan Lin, Bo Wang
Learning multi-task models for jointly detecting stance and verifying rumors is challenging because it requires training data labeled with stance at the post level and rumor veracity at the claim level, both of which are difficult to obtain.
no code implementations • 29 May 2024 • Zhe Hu, Tuo Liang, Jing Li, Yiren Lu, Yunlai Zhou, Yiran Qiao, Jing Ma, Yu Yin
Through extensive experimentation and analysis of recent commercial or open-sourced large (vision) language models, we assess their capability to comprehend the complex interplay of the narrative humor inherent in these comics.
1 code implementation • 6 May 2024 • Bo Wang, Jing Ma, Hongzhan Lin, Zhiwei Yang, Ruichao Yang, Yuan Tian, Yi Chang
To detect fake news from a sea of diverse, crowded and even competing narratives, in this paper, we propose a novel defense-based explainable fake news detection framework.
1 code implementation • 2 May 2024 • Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang
In the fast-evolving domain of artificial intelligence, large language models (LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance, healthcare, and law: domains characterized by their reliance on professional expertise, challenging data acquisition, high stakes, and stringent regulatory compliance.
1 code implementation • 1 May 2024 • Hongzhan Lin, Zixin Chen, Ziyang Luo, Mingfei Cheng, Jing Ma, Guang Chen
Current methods for Multimodal Sarcasm Target Identification (MSTI) predominantly focus on superficial indicators in an end-to-end manner, overlooking the nuanced understanding of multimodal sarcasm conveyed through both the text and image.
3 code implementations • 15 Apr 2024 • Kaixin Li, Yuchen Tian, Qisheng Hu, Ziyang Luo, Zhiyong Huang, Jing Ma
Programming often involves converting detailed and complex specifications into code, a process during which developers typically utilize visual aids to more effectively convey concepts.
no code implementations • 2 Apr 2024 • Xiang Xiang, Zihan Zhang, Jing Ma, Yao Deng
Parkinson's Disease (PD) is the second most common neurodegenerative disorder.
1 code implementation • 24 Jan 2024 • Hongzhan Lin, Ziyang Luo, Wei Gao, Jing Ma, Bo Wang, Ruichao Yang
Then we propose to fine-tune a small language model as the debate judge for harmfulness inference, to facilitate multimodal fusion between the harmfulness rationales and the intrinsic multimodal information within memes.
Ranked #2 on Hateful Meme Classification on Harm-P
no code implementations • 11 Jan 2024 • Liangwei Yang, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, Philip S. Yu
This paper addresses a fundamental question in artificial neural network (ANN) design: ANNs do not need to be built layer by layer sequentially to guarantee the Directed Acyclic Graph (DAG) property.
no code implementations • 3 Jan 2024 • Hongzhan Lin, Ziyang Luo, Bo Wang, Ruichao Yang, Jing Ma
The exponential growth of social media has profoundly transformed how information is created, disseminated, and absorbed, exceeding any precedent in the digital age.
1 code implementation • CVPR 2024 • Jing Ma
Test-time adaptation (TTA) is a technique to improve the performance of a pre-trained source model on a target distribution without using any labeled data.
no code implementations • 19 Dec 2023 • Hui Wu, Yi Gan, Feng Yuan, Jing Ma, Wei Zhu, Yutao Xu, Hong Zhu, Yuhua Zhu, Xiaoli Liu, Jinghui Gu, Peng Zhao
A customized Scaled-Dot-Product-Attention kernel is designed to match our fusion policy based on the segment KV cache solution.
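For readers unfamiliar with the operation being fused, the plain-NumPy sketch below computes scaled dot-product attention over a KV cache stored as segments; it illustrates only the math, not the customized kernel or fusion policy above, and the shapes and names are assumptions.

```python
# Scaled dot-product attention over a segmented KV cache (illustrative only).
import numpy as np

def sdpa_over_segments(q: np.ndarray, kv_segments: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """q: (heads, d); each segment is (K_seg, V_seg) with shape (heads, seg_len, d)."""
    d = q.shape[-1]
    K = np.concatenate([k for k, _ in kv_segments], axis=1)   # (heads, total_len, d)
    V = np.concatenate([v for _, v in kv_segments], axis=1)
    scores = np.einsum('hd,hld->hl', q, K) / np.sqrt(d)       # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)              # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.einsum('hl,hld->hd', weights, V)                # one output vector per head
```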
1 code implementation • 9 Dec 2023 • Hongzhan Lin, Ziyang Luo, Jing Ma, Long Chen
The age of social media is rife with memes.
1 code implementation • 25 Oct 2023 • Ruichao Yang, Wei Gao, Jing Ma, Hongzhan Lin, Zhiwei Yang
This model only requires bag-level labels for training but is capable of inferring both sentence-level misinformation and article-level veracity, aided by relevant social media conversations that are attentively contextualized with news sentences.
no code implementations • 28 Aug 2023 • Song Wang, Jing Ma, Lu Cheng, Jundong Li
These auxiliary sets contain several labeled training samples that can enhance the model performance regarding fairness in meta-test tasks, thereby allowing for the transfer of learned useful fairness-oriented knowledge to meta-test tasks.
1 code implementation • 18 Aug 2023 • Liangwei Yang, Zhiwei Liu, Chen Wang, Mingdai Yang, Xiaolong Liu, Jing Ma, Philip S. Yu
To address this issue, we propose a novel approach, graph-based alignment and uniformity (GraphAU), that explicitly considers high-order connectivities in the user-item bipartite graph.
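The alignment and uniformity objectives referenced in the name follow the commonly used definitions over L2-normalized embeddings; the PyTorch sketch below shows those generic losses rather than the exact GraphAU objective, and the 0.5 weighting is an illustrative assumption.

```python
# Generic alignment and uniformity losses on L2-normalized user/item embeddings.
import torch
import torch.nn.functional as F

def alignment(user_emb: torch.Tensor, item_emb: torch.Tensor) -> torch.Tensor:
    """Pull embeddings of observed user-item pairs together (rows are matched pairs)."""
    u, i = F.normalize(user_emb, dim=-1), F.normalize(item_emb, dim=-1)
    return (u - i).norm(dim=1).pow(2).mean()

def uniformity(emb: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Encourage embeddings to spread out over the unit hypersphere."""
    x = F.normalize(emb, dim=-1)
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# A typical combined objective:
# loss = alignment(u, i) + 0.5 * (uniformity(u) + uniformity(i))
```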
no code implementations • 17 Jul 2023 • Jing Ma, Chen Chen, Anil Vullikanti, Ritwick Mishra, Gregory Madden, Daniel Borrajo, Jundong Li
In this paper, we study the problem of causal effect estimation with treatment entangled in a graph.
no code implementations • 17 Jul 2023 • Jing Ma, Ruocheng Guo, Aidong Zhang, Jundong Li
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
3 code implementations • 14 Jun 2023 • Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+.
Ranked #7 on Code Generation on CodeContests
no code implementations • 7 Jun 2023 • Hejie Cui, Jiaying Lu, Ran Xu, Shiyu Wang, Wenjing Ma, Yue Yu, Shaojun Yu, Xuan Kan, Chen Ling, Liang Zhao, Zhaohui S. Qin, Joyce C. Ho, Tianfan Fu, Jing Ma, Mengdi Huai, Fei Wang, Carl Yang
This comprehensive review aims to provide an overview of the current state of Healthcare Knowledge Graphs (HKGs), including their construction, utilization models, and applications across various healthcare and biomedical research domains.
1 code implementation • 5 Jun 2023 • Yaochen Zhu, Jing Ma, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li
But since sensitive features may also affect user interests in a fair manner (e.g., race on culture-based preferences), indiscriminately eliminating all the influences of sensitive features inevitably degrades recommendation quality and necessary diversity.
1 code implementation • 8 May 2023 • Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
We demonstrate that our PKG framework can enhance the performance of "black-box" LLMs on a range of domain knowledge-intensive tasks that require factual (+7.9%), tabular (+11.9%), medical (+3.0%), and multimodal (+8.1%) knowledge.
no code implementations • 4 Apr 2023 • Hongzhan Lin, Jing Ma, Ruichao Yang, Zhiwei Yang, Mingfei Cheng
The truth is significantly hampered by massive rumors that spread along with breaking news or popular topics.
no code implementations • 16 Feb 2023 • Xiaoyu Guo, Jing Ma, Arkaitz Zubiaga
Memes have gained popularity as a means to share visual ideas through the Internet and social media by mixing text, images and videos, often for humorous purposes.
1 code implementation • 16 Feb 2023 • Xiaoyu Guo, Jing Ma, Arkaitz Zubiaga
This paper describes the participation of our NUAA-QMUL-AIIT team in the Memotion 3 shared task on meme emotion analysis.
1 code implementation • 6 Feb 2023 • Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
The conventional dense retrieval paradigm relies on encoding images and texts into dense representations using dual-stream encoders; however, it suffers from low retrieval speed in large-scale retrieval scenarios.
1 code implementation • 3 Jan 2023 • Yaochen Zhu, Jing Ma, Jundong Li
Traditional RSs estimate user interests and predict their future behaviors by utilizing correlations in the observational historical activities, their profiles, and the content of interacted items.
1 code implementation • ICCV 2023 • Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
To address this issue, we propose a novel sparse retrieval paradigm for ITR that exploits sparse representations in the vocabulary space for images and texts.
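To illustrate why vocabulary-space sparse representations speed up retrieval, the sketch below scores items with a sparse dot product, as an inverted-index engine would; it assumes each image/text has already been encoded as a non-negative |V|-dimensional sparse vector and is not the paper's exact model.

```python
# Sparse lexical scoring over precomputed vocabulary-space vectors (illustrative).
import numpy as np
from scipy.sparse import csr_matrix

def sparse_retrieve(query_vec: csr_matrix, doc_matrix: csr_matrix, top_k: int = 5) -> np.ndarray:
    """query_vec: (1, |V|); doc_matrix: (num_items, |V|). Returns indices of the best-matching items."""
    scores = np.asarray((doc_matrix @ query_vec.T).todense()).ravel()  # lexical dot-product scores
    return np.argsort(-scores)[:top_k]
```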
1 code implementation • 2 Dec 2022 • Hongzhan Lin, Pengyao Yi, Jing Ma, Haiyun Jiang, Ziyang Luo, Shuming Shi, Ruifang Liu
The spread of rumors along with breaking events seriously hinders the truth in the era of social media.
1 code implementation • 25 Nov 2022 • Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li
In this paper, we study a novel problem of interpreting GNN unfairness through attributing it to the influence of training nodes.
no code implementations • 3 Nov 2022 • Qiuchen Zhang, Jing Ma, Jian Lou, Li Xiong, Xiaoqian Jiang
PATE combines an ensemble of "teacher models" trained on sensitive data and transfers the knowledge to a "student" model through the noisy aggregation of teachers' votes for labeling unlabeled public data which the student model will be trained on.
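As a concrete picture of that noisy aggregation step, here is a minimal sketch of standard PATE-style Laplace vote aggregation; the noise scale and array shapes are illustrative assumptions rather than the exact mechanism used in the paper.

```python
# PATE-style noisy vote aggregation: add Laplace noise to per-class vote counts
# and take the noisy plurality as the label for each unlabeled public example.
import numpy as np

def noisy_aggregate(teacher_preds: np.ndarray, num_classes: int,
                    gamma: float = 0.1, seed: int | None = None) -> np.ndarray:
    """teacher_preds: (num_teachers, num_queries) integer labels. Returns one label per query."""
    rng = np.random.default_rng(seed)
    labels = []
    for votes in teacher_preds.T:                                    # iterate over public queries
        counts = np.bincount(votes, minlength=num_classes).astype(float)
        counts += rng.laplace(scale=1.0 / gamma, size=num_classes)   # perturb the vote histogram
        labels.append(int(np.argmax(counts)))                        # noisy plurality wins
    return np.asarray(labels)

# The student model is then trained only on (public_inputs, noisy_aggregate(...)),
# never touching the teachers' sensitive training data directly.
```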
no code implementations • 16 Oct 2022 • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?".
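A simple way to answer that question for a differentiable classifier is gradient-based search for a small perturbation; the PyTorch sketch below is such a generic baseline, with the model interface, loss weights, and step count as illustrative assumptions (it is not the method proposed in the paper).

```python
# Gradient-based counterfactual search: find a small perturbation of x that makes
# the classifier predict `target`, trading prediction loss against edit sparsity.
import torch
import torch.nn.functional as F

def counterfactual(model, x: torch.Tensor, target: int,
                   steps: int = 200, lr: float = 0.05, lam: float = 0.1) -> torch.Tensor:
    """x: (1, num_features). Returns a perturbed input the model labels as `target`."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target_t = torch.tensor([target])
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), target_t) + lam * delta.norm(p=1)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```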
1 code implementation • 10 Oct 2022 • Qiuchen Zhang, Hong kyu Lee, Jing Ma, Jian Lou, Carl Yang, Li Xiong
The key idea is to decouple the feature projection and message passing via a DP PageRank algorithm which learns the structure information and uses the top-$K$ neighbors determined by the PageRank for feature aggregation.
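To make the decoupling concrete, the sketch below computes (non-private) personalized PageRank scores by power iteration and then aggregates already-projected features over each node's top-K neighbors; it is a simplified, non-DP stand-in for the algorithm described above, with a dense adjacency matrix and illustrative names.

```python
# Decoupled propagation sketch: personalized PageRank scores, then top-K aggregation.
import numpy as np

def personalized_pagerank(A: np.ndarray, alpha: float = 0.15, iters: int = 50) -> np.ndarray:
    """Row i holds the PPR scores of every node with respect to source node i."""
    n = A.shape[0]
    P = A / np.clip(A.sum(axis=1, keepdims=True), 1e-12, None)   # row-normalized transitions
    ppr = np.full((n, n), 1.0 / n)
    for _ in range(iters):
        ppr = alpha * np.eye(n) + (1 - alpha) * ppr @ P          # power iteration with restart
    return ppr

def topk_aggregate(H: np.ndarray, ppr: np.ndarray, k: int = 8) -> np.ndarray:
    """Aggregate already-projected node features H over each node's top-k PPR neighbors."""
    out = np.zeros_like(H)
    for i, scores in enumerate(ppr):
        nbrs = np.argsort(scores)[-k:]                           # top-k neighbors by PPR score
        w = scores[nbrs] / scores[nbrs].sum()
        out[i] = w @ H[nbrs]                                     # weighted average of their features
    return out
```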
1 code implementation • COLING 2022 • Zhiwei Yang, Jing Ma, Hechang Chen, Hongzhan Lin, Ziyang Luo, Yi Chang
Existing fake news detection methods aim to classify a piece of news as true or false and provide veracity explanations, achieving remarkable performances.
Ranked #3 on Fake News Detection on RAWFC
no code implementations • 7 Jul 2022 • Jing Ma, Mengting Wan, Longqi Yang, Jundong Li, Brent Hecht, Jaime Teevan
Hypergraphs provide an effective abstraction for modeling multi-way group interactions among nodes, where each hyperedge can connect any number of nodes.
1 code implementation • CVPR 2024 • Jing Ma, Xiang Xiang, Ke Wang, Yuchuan Wu, Yongbin Li
Black-Box Knowledge Distillation (B2KD) is a formulated problem for cloud-to-edge model compression with invisible data and models hosted on the server.
no code implementations • 24 Apr 2022 • Zheng Huang, Jing Ma, Yushun Dong, Natasha Zhang Foutz, Jundong Li
Notably, LBSNs have offered unparalleled access to abundant heterogeneous relational information about users and POIs (including user-user social relations, such as families or colleagues; and user-POI visiting relations).
2 code implementations • 21 Apr 2022 • Yushun Dong, Jing Ma, Song Wang, Chen Chen, Jundong Li
Recently, algorithmic fairness has been extensively studied in graph-based applications.
no code implementations • Findings (NAACL) 2022 • Ziyang Luo, Yadong Xi, Jing Ma, Zhiwei Yang, Xiaoxi Mao, Changjie Fan, Rongsheng Zhang
In contrast, the Transformer decoder with causal attention masks is naturally sensitive to word order.
1 code implementation • Findings (NAACL) 2022 • Hongzhan Lin, Jing Ma, Liangliang Chen, Zhiwei Yang, Mingfei Cheng, Guang Chen
Massive false rumors emerging along with breaking news or trending topics severely hinder the truth.
no code implementations • 6 Apr 2022 • Ruichao Yang, Jing Ma, Hongzhan Lin, Wei Gao
The diffusion of rumors on microblogs generally follows a propagation tree structure, which provides valuable clues on how an original message is transmitted and responded to by users over time.
no code implementations • 14 Feb 2022 • Ziyang Luo, Zhipeng Hu, Yadong Xi, Rongsheng Zhang, Jing Ma
Different to these heavy-cost models, we introduce a lightweight image captioning framework (I-Tuning), which contains a small number of trainable parameters.
no code implementations • 30 Jan 2022 • Ziyang Luo, Yadong Xi, Rongsheng Zhang, Jing Ma
Before training the captioning models, an extra object detector is first utilized to recognize the objects in the image.
1 code implementation • 10 Jan 2022 • Jing Ma, Ruocheng Guo, Mengting Wan, Longqi Yang, Aidong Zhang, Jundong Li
In this framework, we generate counterfactuals corresponding to perturbations on each node's and their neighbors' sensitive attributes.
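The sketch below shows one simple way such counterfactuals can be materialized when the sensitive attribute is a binary feature column: flip it for the node and, optionally, its neighbors. The column index and helper names are illustrative assumptions, and the framework's losses and learned components are omitted.

```python
# Materialize a counterfactual feature matrix by flipping a binary sensitive
# attribute for a node (and optionally its graph neighbors).
import numpy as np

def counterfactual_features(X: np.ndarray, adj: np.ndarray, node: int, sens_idx: int,
                            flip_neighbors: bool = True) -> np.ndarray:
    """Return a copy of X with the sensitive attribute flipped for `node` (and its neighbors)."""
    X_cf = X.copy()
    X_cf[node, sens_idx] = 1 - X_cf[node, sens_idx]
    if flip_neighbors:
        for nbr in np.flatnonzero(adj[node]):
            X_cf[nbr, sens_idx] = 1 - X_cf[nbr, sens_idx]
    return X_cf

# A fairness term can then penalize the distance between the node's representation
# computed on X and on X_cf.
```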
1 code implementation • 24 Nov 2021 • Xiang Xiang, Yuwen Tan, Qian Wan, Jing Ma
Such images form a new training set (i.e., support set) so that the incremental model can hopefully recognize a basenji (i.e., query) as a basenji next time.
no code implementations • 26 Oct 2021 • Yichen Zhou, Weidong Liu, Jing Ma, Xinghao Zhen, Yonggang Li
Further, to mitigate the impact of MMA, a defense strategy based on multi-index information active disturbance rejection control is proposed to improve the stability and anti-disturbance ability of the power system, which considers the impact factors of both mode damping and disturbance compensation.
1 code implementation • 13 Oct 2021 • Jiangshu Du, Yingtong Dou, Congying Xia, Limeng Cui, Jing Ma, Philip S. Yu
The COVID-19 pandemic poses a great threat to global public health.
no code implementations • EMNLP 2021 • Hongzhan Lin, Jing Ma, Mingfei Cheng, Zhiwei Yang, Liangliang Chen, Guang Chen
Rumors are rampant in the era of social media.
no code implementations • 29 Sep 2021 • Ziyang Luo, Yadong Xi, Jing Ma, Xiaoxi Mao, Changjie Fan
A common limitation of the Transformer encoder's self-attention mechanism is that it cannot automatically capture word-order information, so explicit position encodings need to be fed into the target model.
no code implementations • 3 Sep 2021 • Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Sivasubramanium Bhavani, Joyce C. Ho
Tensor factorization has proven to be an efficient unsupervised learning approach for health data analysis, especially for computational phenotyping, where high-dimensional Electronic Health Records (EHRs) containing patients' history of medical procedures, medications, diagnoses, lab tests, etc., are converted to meaningful and interpretable medical concepts.
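As a small, concrete example of this kind of unsupervised factorization, the sketch below runs a non-negative CP decomposition of a toy patient x procedure x medication count tensor with TensorLy; the tensor shape, rank, and library choice are illustrative assumptions, not the paper's specific method.

```python
# Toy non-negative CP factorization of an EHR-like count tensor with TensorLy.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

tl.set_backend('numpy')
ehr_tensor = tl.tensor(np.random.poisson(0.3, size=(100, 40, 25)).astype(float))  # toy EHR counts

weights, factors = non_negative_parafac(ehr_tensor, rank=5, n_iter_max=200)
patient_factor, procedure_factor, medication_factor = factors
# Each rank-one component couples a group of procedures and medications and can be
# read off as a candidate phenotype; patient_factor gives per-patient loadings.
```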
no code implementations • 22 Aug 2021 • Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Joyce C. Ho
Representation learning on static graph-structured data has shown a significant impact on many real-world applications.
no code implementations • 29 Jul 2021 • Jing Ma, Yiyang Sun, Junjie Liu, Huaxiong Huang, Xiaoshuang Zhou, Shixin Xu
The experimental results showed that the QIDNN model with 7 interactive features achieves a state-of-the-art accuracy of $83.25\%$.
1 code implementation • NeurIPS 2021 • Han Xie, Jing Ma, Li Xiong, Carl Yang
Federated learning has emerged as an important paradigm for training machine learning models in different domains.
no code implementations • 29 May 2021 • Junjie Liu, Yiyang Sun, Jing Ma, Jiachen Tu, Yuhui Deng, Ping He, Huaxiong Huang, Xiaoshuang Zhou, Shixin Xu
Evaluating the risk of stroke is important for the prevention and treatment of stroke in China.
Tasks: BIG-bench Machine Learning, Interpretable Machine Learning
1 code implementation • 29 May 2021 • Jing Ma, Yushun Dong, Zheng Huang, Daniel Mietchen, Jundong Li
Besides, as the confounders may be time-varying during COVID-19 (e. g., vigilance of residents changes in the course of the pandemic), it is even more difficult to capture them.
2 code implementations • 24 Dec 2020 • Zhendong Chu, Jing Ma, Hongning Wang
Crowdsourcing provides a practical way to obtain large amounts of labeled data at a low cost.
Ranked #1 on Image Classification on LabelMe
no code implementations • COLING 2020 • Jing Ma, Wei Gao
Rumors are manufactured with no respect for accuracy, but can circulate quickly and widely by "word-of-post" through social media conversations.
1 code implementation • SEMEVAL 2020 • Xiaoyu Guo, Jing Ma, Arkaitz Zubiaga
This paper describes our contribution to SemEval 2020 Task 8: Memotion Analysis.
no code implementations • 30 Jun 2020 • Liqiang Lin, Qingqing Jia, Zheng Cheng, Yanyan Jiang, Yanwen Guo, Jing Ma
The development of efficient models for predicting specific properties through machine learning is of great importance for the innovation of chemistry and material science.
Ranked #7 on Formation Energy on QM9
no code implementations • 21 Jun 2020 • Jing Ma, Qiuchen Zhang, Joyce C. Ho, Li Xiong
In this paper, we propose SkeTenSmooth, a novel tensor factorization framework that uses adaptive sampling to compress the tensor in a temporally streaming fashion and preserves the underlying global structure.
1 code implementation • 13 Mar 2020 • Wenxuan Zhang, Wai Lam, Yang Deng, Jing Ma
In this paper, we propose the Review-guided Answer Helpfulness Prediction (RAHP) model that not only considers the interactions between QA pairs but also investigates the opinion coherence between the answer and crowds' opinions reflected in the reviews, which is another important factor to identify helpful answers.
no code implementations • 26 Aug 2019 • Jing Ma, Qiuchen Zhang, Jian Lou, Joyce C. Ho, Li Xiong, Xiaoqian Jiang
We propose DPFact, a privacy-preserving collaborative tensor factorization method for computational phenotyping using EHR.
no code implementations • ACL 2019 • Jing Ma, Wei Gao, Shafiq Joty, Kam-Fai Wong
Claim verification is generally a task of verifying the veracity of a given claim, which is critical to many downstream applications.
1 code implementation • ACL 2018 • Jing Ma, Wei Gao, Kam-Fai Wong
Automatic rumor detection is technically very challenging.
no code implementations • SEMEVAL 2017 • Yufei Xie, Maoquan Wang, Jing Ma, Jian Jiang, Zhao Lu
In the main Subtask C, our primary submission was ranked fourth, with a MAP of 13.48 and an accuracy of 97.08.
no code implementations • ACL 2017 • Jing Ma, Wei Gao, Kam-Fai Wong
How does fake news go viral via social media?