no code implementations • Findings (NAACL) 2022 • Ruibo Liu, Ge Zhang, Xinyu Feng, Soroush Vosoughi
Although current large-scale generative language models (LMs) demonstrate impressive command of factual knowledge, they do not exhibit similar success with respect to human value judgments (e.g., whether or not an LM's generations are moral).
1 code implementation • 14 Jan 2023 • Zhenyu Yang, Ge Zhang, Jia Wu, Jian Yang, Quan Z. Sheng, Shan Xue, Chuan Zhou, Charu Aggarwal, Hao Peng, Wenbin Hu, Edwin Hancock, Pietro Liò
Traditional approaches to learning a set of graphs tend to rely on hand-crafted features, such as substructures.
no code implementations • 1 Jan 2023 • Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X Liu, Soroush Vosoughi
We present Second Thought, a new learning paradigm that enables language models (LMs) to re-align with human values.
1 code implementation • 1 Jan 2023 • Ge Zhang, Yizhi Li, Yaoyao Wu, Linyuan Zhang, Chenghua Lin, Jiayi Geng, Shi Wang, Jie Fu
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, the prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpus, especially for languages with insufficient resources such as Chinese.
no code implementations • 5 Dec 2022 • Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao Ma, Chenghua Lin, Xingran Chen, Anton Ragni, Hanzhi Yin, Zhijie Hu, Haoyu He, Emmanouil Benetos, Norbert Gyenge, Ruibo Liu, Jie Fu
The deep learning community has witnessed an exponentially growing interest in self-supervised learning (SSL).
1 code implementation • 5 Nov 2022 • Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Shi Wang, Anton Ragni, Jie Fu
In addition to verifying the existence of regional bias in LMs, we find that the biases on regional groups can be strongly influenced by the geographical clustering of the groups.
1 code implementation • 4 Nov 2022 • Adam Nik, Ge Zhang, Xingran Chen, Mingyu Li, Jie Fu
This paper details our participation in the Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE) workshop @ EMNLP 2022, where we take part in Subtask 1 of Shared Task 3.
1 code implementation • 31 Oct 2022 • Xingran Chen, Ge Zhang, Adam Nik, Mingyu Li, Jie Fu
In this paper, we present our approach and empirical observations for Cause-Effect Signal Span Detection, Subtask 2 of Shared Task 3 (Tan et al., 2022) at CASE 2022.
no code implementations • 5 Jul 2022 • Ge Zhang
Although neural networks can solve very complex machine-learning problems, the theoretical reason for their generalizability is still not fully understood.
no code implementations • SemEval (NAACL) 2022 • Zhiyong Wang, Ge Zhang, Nineli Lashkarashvili
This paper describes our system for the SemEval-2022 task of matching dictionary glosses to word embeddings.
no code implementations • 31 May 2022 • Ge Zhang, Jia Wu, Jian Yang, Shan Xue, Wenbin Hu, Chuan Zhou, Hao Peng, Quan Z. Sheng, Charu Aggarwal
To frame this survey, we propose a systematic taxonomy covering GLNNs built upon deep neural networks, graph neural networks, and graph pooling.
no code implementations • 21 Nov 2021 • Kaiyuan Liu, Xingyu Li, Yurui Lai, Ge Zhang, Hang Su, Jiachen Wang, Chunxu Guo, Jisong Guan, Yi Zhou
Despite its great success, deep learning suffers severely from a lack of robustness; that is, deep neural networks are highly vulnerable to adversarial attacks, even the simplest ones.
no code implementations • 19 Oct 2021 • Ge Zhang, Shaohui Mei, Mingyang Ma, Yan Feng, Qian Du
Spectral unmixing (SU) expresses the mixed pixels present in hyperspectral images as the product of endmembers and abundances, and has been widely used in hyperspectral imagery analysis.
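The endmember/abundance decomposition mentioned above is usually written as the linear mixing model, y = E a, where the columns of E are endmember spectra and a holds nonnegative, sum-to-one abundances. A minimal NumPy sketch of that standard formulation (a generic illustration with synthetic data, not this paper's specific method):

```python
import numpy as np

# Linear mixing model: each observed pixel spectrum y is modeled as y = E @ a,
# where the columns of E are endmember spectra and a holds the abundances
# (nonnegative, summing to one).

rng = np.random.default_rng(0)

n_bands, n_endmembers = 50, 3
E = rng.random((n_bands, n_endmembers))   # hypothetical endmember library

a_true = np.array([0.6, 0.3, 0.1])        # abundances: nonnegative, sum to 1
y = E @ a_true                            # synthesized noiseless mixed pixel

# Recover abundances with unconstrained least squares, then project back onto
# the simplex constraints by clipping and renormalizing (a crude illustration;
# real SU methods enforce the constraints during optimization).
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
a_hat = np.clip(a_hat, 0, None)
a_hat /= a_hat.sum()

print(np.round(a_hat, 3))
```

In the noiseless case the least-squares estimate recovers the true abundances exactly; with noisy pixels, constrained solvers (fully constrained least squares, nonnegative matrix factorization) are the usual choice.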
no code implementations • 17 May 2021 • Ge Zhang, Or Litany, Srinath Sridhar, Leonidas Guibas
We present StrobeNet, a method for category-level 3D reconstruction of articulating objects from one or more unposed RGB images.
1 code implementation • 7 Feb 2021 • Miguel Ruiz-Garcia, Ge Zhang, Samuel S. Schoenholz, Andrea J. Liu
In underparameterized networks, such dynamical loss functions can lead to successful training for networks that fail to find a deep minimum of the standard cross-entropy loss.
no code implementations • 1 Jan 2021 • Shuhang Wang, Eugene Cheah, Elham Yousef Kalafi, Mercy Asiedu, Alex Benjamin, Vivek Kumar Singh, Ge Zhang, Viksit Kumar, Anthony Edward Samir
Transfer learning often employs all or part of the weights of a pre-trained network to the problem at hand; this limits the flexibility of new neural architectures.
no code implementations • 7 Dec 2020 • Ruibin Yuan, Ge Zhang, Anqiao Yang, Xinyue Zhang
In this paper, we propose to adapt the method of mutual information maximization into the task of Chinese lyrics conditioned melody generation to improve the generation quality and diversity.
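Mutual information maximization in practice is usually implemented through a tractable lower bound such as InfoNCE. The sketch below shows that generic estimator on paired embeddings (the paper's exact objective for lyrics-conditioned melody generation may differ; function and variable names here are illustrative):

```python
import numpy as np

# InfoNCE-style lower bound on the mutual information between paired
# embeddings z_x[i] <-> z_y[i] (e.g., a lyric embedding and its melody
# embedding). Minimizing this loss maximizes the MI lower bound.

def info_nce(z_x, z_y, temperature=0.1):
    """Return the InfoNCE loss for a batch of paired embeddings."""
    # L2-normalize so the logits are scaled cosine similarities.
    z_x = z_x / np.linalg.norm(z_x, axis=1, keepdims=True)
    z_y = z_y / np.linalg.norm(z_y, axis=1, keepdims=True)
    logits = z_x @ z_y.T / temperature    # similarity of every (x, y) pair
    # Cross-entropy with the matching pair (the diagonal) as the target class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Correctly paired embeddings yield a much lower loss than randomly paired ones, which is what drives the two representations to share information.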
no code implementations • 28 Aug 2020 • Ge Zhang, Mike A. Merrill, Yang Liu, Jeffrey Heer, Tim Althoff
Large scale analysis of source code, and in particular scientific source code, holds the promise of better understanding the data science process, identifying analytical best practices, and providing insights to the builders of scientific toolkits.
no code implementations • 17 Jun 2020 • Jianrong Wang, Ge Zhang, Zhen-Yu Wu, XueWei Li, Li Liu
Compared with static views, abundant dynamic properties between video frames are beneficial to refined depth estimation, especially for dynamic objects.