Search Results for author: Mozhi Zhang

Found 14 papers, 7 papers with code

Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization

1 code implementation • 4 Jun 2019 • Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, Jordan Boyd-Graber

Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings.

Cross-Lingual Word Embeddings • Translation +2
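The preprocessing idea named in the title — reshaping monolingual spaces so an orthogonal mapping can align them — can be sketched as alternating length normalization with mean centering. A minimal NumPy sketch, assuming word vectors stacked in a matrix; the exact procedure follows the paper, not this snippet:

```python
import numpy as np

def iterative_normalization(X, n_iters=10):
    """Alternate unit-length normalization and mean centering of word
    vectors -- a rough sketch of the preprocessing idea, not the paper's
    exact algorithm."""
    X = X.astype(np.float64).copy()
    for _ in range(n_iters):
        # Scale every word vector to unit length.
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        # Shift all vectors so the corpus mean is zero.
        X -= X.mean(axis=0, keepdims=True)
    return X

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 50)) + 3.0   # toy embeddings with a large shared offset
emb_n = iterative_normalization(emb)
print(np.abs(emb_n.mean(axis=0)).max())   # mean is driven (near) zero
```

Length normalization makes vectors comparable under cosine similarity, while centering removes a shared offset; alternating the two pushes the space toward both properties at once.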

Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization

no code implementations • ACL 2019 • Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, Jordan Boyd-Graber

Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings.

Cross-Lingual Word Embeddings • Translation +2

Interactive Refinement of Cross-Lingual Word Embeddings

1 code implementation • EMNLP 2020 • Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, Jordan Boyd-Graber

Cross-lingual word embeddings transfer knowledge between languages: models trained on high-resource languages can predict in low-resource languages.

Active Learning • Cross-Lingual Word Embeddings +3
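The transfer claim in the abstract — train in a high-resource language, predict in a low-resource one — can be illustrated on toy data, assuming the two embedding spaces are already (approximately) aligned. All names and numbers below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n = 8, 200
w = rng.normal(size=dim)                      # hidden "label" direction
src = rng.normal(size=(n, dim))               # high-resource language vectors
y = (src @ w > 0).astype(int)                 # labels defined in source space
# Low-resource vectors: same points up to small alignment error.
tgt = src + rng.normal(0.0, 0.05, (n, dim))

# Least-squares linear classifier fit ONLY on the source language.
W = np.linalg.lstsq(src, 2.0 * y - 1.0, rcond=None)[0]
acc = ((tgt @ W > 0).astype(int) == y).mean()
print(round(acc, 2))
```

Because translations land near the same point after alignment, the decision boundary learned on the source language carries over almost unchanged to the target language.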

How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks

3 code implementations • ICLR 2021 • Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka

Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features.

How Does a Neural Network's Architecture Impact Its Robustness to Noisy Labels?

no code implementations • NeurIPS 2021 • Jingling Li, Mozhi Zhang, Keyulu Xu, John P. Dickerson, Jimmy Ba

Our framework measures a network's robustness via the predictive power in its representations -- the test performance of a linear model trained on the learned representations using a small set of clean labels.

Learning with noisy labels
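The robustness measure described above — test performance of a linear model fit on learned representations using a small set of clean labels — can be sketched with a linear probe. The features here are synthetic stand-ins for network representations, and the ridge-regression probe is an assumption for illustration, not the paper's exact setup:

```python
import numpy as np

def probe_accuracy(train_feats, train_labels, test_feats, test_labels, l2=1e-3):
    """Fit a ridge-regression linear probe on one-hot labels and report
    test accuracy -- a simple stand-in for 'predictive power'."""
    n_classes = train_labels.max() + 1
    Y = np.eye(n_classes)[train_labels]        # one-hot targets
    X = train_feats
    # Closed-form ridge solution: (X^T X + l2 I)^{-1} X^T Y
    W = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ Y)
    preds = (test_feats @ W).argmax(axis=1)
    return (preds == test_labels).mean()

rng = np.random.default_rng(0)
# Toy "representations": one Gaussian cluster per class.
feats = np.concatenate([rng.normal(-1, 1, (200, 16)), rng.normal(1, 1, (200, 16))])
labels = np.repeat([0, 1], 200)
idx = rng.permutation(400)
feats, labels = feats[idx], labels[idx]
# Only 50 "clean" labels are used to fit the probe.
acc = probe_accuracy(feats[:50], labels[:50], feats[50:], labels[50:])
print(round(acc, 3))
```

If the representations separate the classes well, even a tiny clean-label budget yields a high-accuracy probe; representations scrambled by noisy labels would not.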

Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth

no code implementations • 10 May 2021 • Keyulu Xu, Mozhi Zhang, Stefanie Jegelka, Kenji Kawaguchi

Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution.

PromptNER: A Prompting Method for Few-shot Named Entity Recognition via k Nearest Neighbor Search

1 code implementation • 20 May 2023 • Mozhi Zhang, Hang Yan, Yaqian Zhou, Xipeng Qiu

We use prompts that contain entity category information to construct label prototypes, which enables our model to fine-tune with only the support set.

few-shot-ner • Few-shot NER +4
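The prototype-plus-nearest-neighbor idea in the abstract and title can be sketched as: average support-token embeddings per entity type to form prototypes, then label query tokens by their nearest prototype. Everything below (embedding dimension, cluster centers) is toy data, not the paper's model:

```python
import numpy as np

def build_prototypes(support_embs, support_labels, n_types):
    """One prototype per entity type: the mean of that type's
    support-token embeddings (a sketch of the prototype idea)."""
    return np.stack([support_embs[support_labels == t].mean(axis=0)
                     for t in range(n_types)])

def nearest_prototype(query_embs, prototypes):
    """Assign each query token the type of its nearest prototype
    (1-nearest-neighbor under Euclidean distance)."""
    d = np.linalg.norm(query_embs[:, None, :] - prototypes[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
# Toy support set: 3 entity types as well-separated 2-D clusters.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
support_labels = np.repeat([0, 1, 2], 10)
support_embs = centers[support_labels] + rng.normal(0, 0.3, (30, 2))
protos = build_prototypes(support_embs, support_labels, 3)
# Query tokens drawn near types 2, 0, and 1 respectively.
queries = centers[[2, 0, 1]] + rng.normal(0, 0.3, (3, 2))
print(nearest_prototype(queries, protos))
```

Because classification reduces to a nearest-neighbor lookup against a handful of prototypes, no classifier head has to be retrained for new entity types — only the support set changes.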

Labeled Interactive Topic Models

no code implementations • 15 Nov 2023 • Kyle Seelman, Mozhi Zhang, Jordan Boyd-Graber

To facilitate user interaction with these neural topic models, we have developed an interactive interface.

Topic Models

Calibrating the Confidence of Large Language Models by Eliciting Fidelity

no code implementations • 3 Apr 2024 • Mozhi Zhang, Mianqiu Huang, Rundong Shi, Linsen Guo, Chong Peng, Peng Yan, Yaqian Zhou, Xipeng Qiu

Large language models optimized with techniques like RLHF have achieved good alignment in being helpful and harmless.

Language Modelling
