no code implementations • NAACL (WNU) 2022 • Zhilin Wang, Anna Jafarpour, Maarten Sap
It is important to define meaningful and interpretable automatic evaluation metrics for open-domain dialog research.
no code implementations • NAACL (WNU) 2022 • Zhilin Wang, Pablo E. Torres
Internet forums such as Reddit offer people a platform to ask for advice when they encounter various issues at work, school or in relationships.
1 code implementation • 16 Nov 2023 • Zhilin Wang, Yi Dong, Jiaqi Zeng, Virginia Adams, Makesh Narsimhan Sreedhar, Daniel Egert, Olivier Delalleau, Jane Polak Scowcroft, Neel Kant, Aidan Swope, Oleksii Kuchaiev
To alleviate this problem, we collect HelpSteer, a multi-attribute helpfulness dataset annotated for the various aspects that make responses helpful.
no code implementations • 20 Oct 2023 • Zhilin Wang, Qin Hu, Xukai Zou
We first uncover a deficiency of similarity metrics: high-dimensional local models, both benign and poisoned, may be evaluated as having the same similarity while differing significantly in their parameter values.
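The deficiency can be illustrated with a toy example (a hypothetical sketch, not code from the paper): cosine similarity is scale-invariant, so a heavily scaled (e.g., poisoned) model update can score identically to a benign one while its parameters differ greatly.

```python
import numpy as np

# Hypothetical illustration: two local updates with identical cosine
# similarity to a reference, but very different parameter values.
reference = np.ones(1000)       # stand-in for a benign global update direction
benign = 0.1 * reference        # small, benign local update
poisoned = 50.0 * reference     # scaled-up (poisoning-style) local update

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Both score 1.0 under cosine similarity ...
print(cosine(reference, benign), cosine(reference, poisoned))
# ... yet their Euclidean distances to the reference differ by ~50x.
print(np.linalg.norm(reference - benign), np.linalg.norm(reference - poisoned))
```

This is why magnitude-aware measures (e.g., Euclidean distance) are often paired with angular ones when screening local models.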
1 code implementation • 9 Oct 2023 • Zhilin Wang, Yu Ying Chiu, Yu Cheung Chiu
Just as computational simulations of atoms, molecules and cells have shaped the way we study the sciences, true-to-life simulations of human-like agents can be valuable tools for studying human behavior.
1 code implementation • 9 Oct 2023 • Yi Dong, Zhilin Wang, Makesh Narsimhan Sreedhar, Xianchao Wu, Oleksii Kuchaiev
Model alignment with human preferences is an essential step in making Large Language Models (LLMs) helpful and consistent with human values.
no code implementations • 19 Mar 2023 • Weizhe Lin, Zhilin Wang, Bill Byrne
The widely used Fact-based Visual Question Answering (FVQA) dataset contains visually-grounded questions that require information retrieval using common sense knowledge graphs to answer.
no code implementations • 2 Apr 2022 • Weizhe Lin, Linjun Shou, Ming Gong, Pei Jian, Zhilin Wang, Bill Byrne, Daxin Jiang
Knowledge graph (KG) based Collaborative Filtering is an effective approach to personalizing recommendation systems for relatively static domains such as movies and books, by leveraging structured information from KG to enrich both item and user representations.
no code implementations • 18 Feb 2022 • Zhilin Wang, Qin Hu, Ruinian Li, Minghui Xu, Zehui Xiong
Since each client has a limited amount of computing resources, the problem of allocating computing resources into training and mining needs to be carefully addressed.
no code implementations • 16 Oct 2021 • Qin Hu, Zhilin Wang, Minghui Xu, Xiuzhen Cheng
Mobile crowdsensing (MCS), which relies on the mobility of a massive number of workers, helps the requestor accomplish various sensing tasks with greater flexibility and lower cost.
no code implementations • 5 Oct 2021 • Zhilin Wang, Qin Hu
Then, we analyze the concrete functions of BCFL from the perspective of mechanism design and illustrate what problems blockchain addresses specifically for FL.
1 code implementation • NLP4ConvAI (ACL) 2022 • Zhilin Wang, Xuhui Zhou, Rik Koncel-Kedziorski, Alex Marin, Fei Xia
Personal attributes represent structured information about a person, such as their hobbies, pets, family, likes and dislikes.
no code implementations • NAACL (NUSE) 2021 • Zhilin Wang, Weizhe Lin, Xiaodong Wu
While many different aspects of human experiences have been studied by the NLP community, none has captured their full richness.
no code implementations • 17 Mar 2020 • Xiaodong Wu, Weizhe Lin, Zhilin Wang, Elena Rastorgueva
Online forums and social media platforms provide noisy but valuable data every day.
3 code implementations • 6 Dec 2019 • Xu Qin, Zhilin Wang
In this paper, we propose a novel end-to-end Neuron Attention Stage-by-Stage Net (NASNet), which can efficiently solve deraining tasks for all types of rain models.
3 code implementations • 18 Nov 2019 • Xu Qin, Zhilin Wang, Yuanchao Bai, Xiaodong Xie, Huizhu Jia
The FFA-Net architecture consists of three key components: 1) A novel Feature Attention (FA) module combines Channel Attention with Pixel Attention mechanism, considering that different channel-wise features contain totally different weighted information and haze distribution is uneven on the different image pixels.
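The two-step attention described above can be sketched minimally in NumPy (an illustrative stand-in, not the FFA-Net implementation, which uses learned convolutions): channel attention re-weights whole feature channels, then pixel attention re-weights individual spatial positions, so differently informative channels and unevenly hazed pixels receive unequal weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))    # (channels, height, width)

# Channel Attention: global average pool -> per-channel gate in (0, 1).
# The gate here is a toy stand-in for FFA-Net's learned conv/MLP layers.
channel_desc = feat.mean(axis=(1, 2))      # shape (8,)
channel_gate = sigmoid(channel_desc)
feat_ca = feat * channel_gate[:, None, None]

# Pixel Attention: per-position gate shared across channels, modelling
# the idea that haze is distributed unevenly over image pixels.
pixel_desc = feat_ca.mean(axis=0)          # shape (16, 16)
pixel_gate = sigmoid(pixel_desc)
feat_fa = feat_ca * pixel_gate[None, :, :]

print(feat_fa.shape)  # (8, 16, 16): same shape, unevenly re-weighted
```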
Ranked #1 on Image Dehazing on KITTI
no code implementations • WS 2019 • Zhilin Wang, Elena Rastorgueva, Weizhe Lin, Xiaodong Wu
This model is built upon the BERT Next Sentence Prediction model and reduces the time complexity for clustering all posts in a corpus from O(n^2) to O(n) with respect to the number of posts.
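The complexity reduction can be sketched as a single-pass scheme (a hedged illustration under assumed details, not the paper's code): rather than scoring all O(n^2) post pairs, each post is compared only to one representative per existing cluster, which is O(n) overall when the number of clusters is bounded. The BERT Next Sentence Prediction scorer is replaced here by a toy cosine similarity over fixed vectors.

```python
import numpy as np

def similarity(a, b):
    # Toy stand-in for the BERT NSP score used in the paper.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_posts(posts, threshold=0.9):
    """Assign each post to the first cluster whose representative is
    similar enough; otherwise start a new cluster."""
    representatives, assignments = [], []
    for post in posts:
        for idx, rep in enumerate(representatives):
            if similarity(post, rep) >= threshold:
                assignments.append(idx)
                break
        else:  # no existing cluster matched
            representatives.append(post)
            assignments.append(len(representatives) - 1)
    return assignments

posts = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
print(cluster_posts(posts))  # -> [0, 0, 1]: first two posts share a cluster
```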