no code implementations • 22 Dec 2018 • Mozhi Zhang, Yoshinari Fujinuma, Jordan Boyd-Graber
Text classification must sometimes be applied in a low-resource language with no labeled training data.
Tasks: Cross-Lingual Document Classification, Document Classification, +3
2 code implementations • ICLR 2020 • Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka
Neural networks have succeeded in many reasoning tasks.
1 code implementation • 4 Jun 2019 • Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, Jordan Boyd-Graber
Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings.
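As context for the orthogonal transformations mentioned above, here is a minimal Procrustes-alignment sketch (a standard baseline, not this paper's contribution; the arrays are hypothetical toy data): given embeddings for seed translation pairs, the best orthogonal map has a closed form via SVD.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal W minimizing ||XW - Y||_F.

    X, Y: (n, d) arrays of source- and target-language embeddings
    for n seed translation pairs (one word vector per row).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)  # closed-form solution via SVD
    return U @ Vt                      # (d, d) orthogonal matrix

# Toy check: recover a random rotation exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
W_true, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Y = X @ W_true
print(np.allclose(X @ procrustes_align(X, Y), Y))  # True
```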
1 code implementation • EMNLP 2020 • Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, Jordan Boyd-Graber
Cross-lingual word embeddings transfer knowledge between languages: models trained on high-resource languages can predict in low-resource languages.
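A minimal sketch of that transfer recipe (a generic zero-shot baseline, not the interactive refinement method this paper proposes; the shared-space vectors are hypothetical): train a linear classifier on averaged high-resource-language vectors, then apply it unchanged to another language embedded in the same space.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical shared CLWE space: English and Spanish words as 3-d vectors.
clwe = {
    "good":  np.array([1.0, 0.2, 0.0]),
    "bad":   np.array([-1.0, 0.1, 0.0]),
    "bueno": np.array([0.9, 0.25, 0.05]),   # lands near "good"
    "malo":  np.array([-0.95, 0.15, 0.0]),  # lands near "bad"
}

def doc_vector(tokens):
    """Represent a document as the mean of its word vectors."""
    return np.mean([clwe[t] for t in tokens if t in clwe], axis=0)

# Train on labeled English documents only...
X_train = np.stack([doc_vector(d) for d in [["good"], ["bad"]]])
clf = LogisticRegression().fit(X_train, [1, 0])

# ...then predict on Spanish with no Spanish labels at all.
print(clf.predict(np.stack([doc_vector(["bueno"]), doc_vector(["malo"])])))
# -> [1 0]
```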
no code implementations • ACL 2020 • Mozhi Zhang, Yoshinari Fujinuma, Michael J. Paul, Jordan Boyd-Graber
Cross-lingual word embeddings (CLWE) are often evaluated on bilingual lexicon induction (BLI).
Tasks: Bilingual Lexicon Induction, Cross-Lingual Word Embeddings, +2
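For reference, BLI scores a CLWE by translating via nearest neighbors in the shared space and checking against a gold dictionary. A minimal precision@1 sketch over hypothetical data (generic, not code from the paper):

```python
import numpy as np

def bli_accuracy(src_emb, tgt_emb, gold):
    """Precision@1 for bilingual lexicon induction.

    src_emb, tgt_emb: {word: vector} dicts in a shared CLWE space;
    gold: {source word: gold translation} test dictionary.
    """
    tgt_words = list(tgt_emb)
    T = np.stack([tgt_emb[w] for w in tgt_words])
    T = T / np.linalg.norm(T, axis=1, keepdims=True)  # unit rows
    hits = 0
    for src, translation in gold.items():
        v = src_emb[src] / np.linalg.norm(src_emb[src])
        hits += tgt_words[int(np.argmax(T @ v))] == translation  # cosine NN
    return hits / len(gold)

# Hypothetical two-word toy space:
src = {"dog": np.array([1.0, 0.0])}
tgt = {"perro": np.array([0.9, 0.1]), "gato": np.array([0.0, 1.0])}
print(bli_accuracy(src, tgt, {"dog": "perro"}))  # 1.0
```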
3 code implementations • ICLR 2021 • Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka
In connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features.
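A toy illustration of that hypothesis (my own sketch, not code from the paper): a ReLU MLP computes a piecewise-linear function, so trained on y = x^2 over [-1, 1] it extrapolates linearly far from the data, while encoding the non-linearity in the features lets a linear model extrapolate exactly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

# Train on y = x^2 over [-1, 1]; test well outside that range.
x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = (x ** 2).ravel()
x_test = np.array([[3.0]])  # true value: 9

# The ReLU MLP extrapolates linearly and typically misses 9 by a wide margin.
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000,
                   random_state=0).fit(x, y)
print(mlp.predict(x_test))

# Encoding the task's non-linearity in the *features* fixes this:
# a linear model on x^2 extrapolates exactly.
lin = LinearRegression().fit(x ** 2, y)
print(lin.predict(x_test ** 2))  # [9.]
```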
no code implementations • NeurIPS 2021 • Jingling Li, Mozhi Zhang, Keyulu Xu, John P. Dickerson, Jimmy Ba
Our framework measures a network's robustness via the predictive power of its representations: the test performance of a linear model trained on the learned representations using a small set of clean labels.
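That measure is essentially a linear probe; a minimal sketch under assumed inputs (frozen features from the trained network plus a small clean-label set; all names and data here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predictive_power(feats, clean_labels, test_feats, test_labels):
    """Test accuracy of a linear probe fit on frozen representations,
    the quantity used as a robustness proxy."""
    probe = LogisticRegression(max_iter=1000).fit(feats, clean_labels)
    return probe.score(test_feats, test_labels)

# Synthetic stand-in for representations from a trained network:
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
labels = (feats[:, 0] > 0).astype(int)
print(predictive_power(feats[:80], labels[:80], feats[80:], labels[80:]))
```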
no code implementations • 10 May 2021 • Keyulu Xu, Mozhi Zhang, Stefanie Jegelka, Kenji Kawaguchi
Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution.
1 code implementation • ACL 2021 • Mozhi Zhang, Wei Wang, Budhaditya Deb, Guoqing Zheng, Milad Shokouhi, Ahmed Hassan Awadallah
Reply suggestion models help users process emails and chats faster.
1 code implementation • 20 May 2023 • Mozhi Zhang, Hang Yan, Yaqian Zhou, Xipeng Qiu
We use prompts that contain entity category information to construct label prototypes, which enables our model to fine-tune with only the support set.
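A rough sketch of the prototype step as I read it (not the released implementation): encode one prompt per entity category to obtain a label prototype, then tag tokens by their nearest prototype. The vectors below stand in for encoder outputs and are hypothetical.

```python
import numpy as np

# Hypothetical prototypes: in the real setup each would be the encoder
# output for a category prompt, e.g. encode("X is a person entity.").
prototypes = {
    "PER": np.array([1.0, 0.0, 0.1]),
    "LOC": np.array([0.0, 1.0, 0.1]),
    "O":   np.array([0.0, 0.0, 1.0]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def tag(token_vec):
    """Label a token with its nearest prototype by cosine similarity."""
    return max(prototypes, key=lambda lbl: cos(token_vec, prototypes[lbl]))

# A token embedding that lands near the PER prototype (hypothetical):
print(tag(np.array([0.9, 0.1, 0.2])))  # PER
```

Fine-tuning on the support set then only needs to pull each support token's embedding toward its gold prototype, which is why no extra labeled data is required.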
2 code implementations • 5 Oct 2023 • Qinyuan Cheng, Tianxiang Sun, Wenwei Zhang, Siyin Wang, Xiangyang Liu, Mozhi Zhang, Junliang He, Mianqiu Huang, Zhangyue Yin, Kai Chen, Xipeng Qiu
We analyze the primary types of hallucination that arise in different kinds of models, along with their causes.
no code implementations • 15 Nov 2023 • Kyle Seelman, Mozhi Zhang, Jordan Boyd-Graber
To facilitate user interaction with these neural topic models, we have developed an interactive interface.
no code implementations • 3 Apr 2024 • Mozhi Zhang, Mianqiu Huang, Rundong Shi, Linsen Guo, Chong Peng, Peng Yan, Yaqian Zhou, Xipeng Qiu
Large language models optimized with techniques like RLHF have achieved good alignment in being helpful and harmless.