Search Results for author: Yuhui Zhang

Found 21 papers, 9 papers with code

Machine Learning-guided Lipid Nanoparticle Design for mRNA Delivery

no code implementations · 2 Aug 2023 · Daisy Yi Ding, Yuhui Zhang, Yuan Jia, Jiuzhi Sun

While RNA technologies hold immense therapeutic potential in a range of applications from vaccination to gene editing, the broad implementation of these technologies is hindered by the challenge of delivering these agents effectively.

FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs

1 code implementation · 8 Jun 2023 · Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Salman Avestimehr, Chaoyang He

This paper introduces FedMLSecurity, a benchmark designed to simulate adversarial attacks and corresponding defense mechanisms in Federated Learning (FL).

Federated Learning

Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models

1 code implementation · 27 May 2023 · Yuhui Zhang, Michihiro Yasunaga, Zhengping Zhou, Jeff Z. HaoChen, James Zou, Percy Liang, Serena Yeung

Language models have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data.

Question Answering

Denoising Cosine Similarity: A Theory-Driven Approach for Efficient Representation Learning

no code implementations · 19 Apr 2023 · Takumi Nakagawa, Yutaro Sanada, Hiroki Waida, Yuhui Zhang, Yuichiro Wada, Kōsaku Takanashi, Tomonori Yamada, Takafumi Kanamori

To this end, inspired by recent works on denoising and the success of the cosine-similarity-based objective functions in representation learning, we propose the denoising Cosine-Similarity (dCS) loss.

Denoising · Representation Learning

Deep Clustering with a Constraint for Topological Invariance based on Symmetric InfoNCE

no code implementations · 6 Mar 2023 · Yuhui Zhang, Yuichiro Wada, Hiroki Waida, Kaito Goto, Yusaku Hino, Takafumi Kanamori

To address this problem, we propose a constraint based on symmetric InfoNCE, which helps the deep clustering objective train the model effectively on datasets with both simple and complex topologies.

Clustering · Deep Clustering

DGP-Net: Dense Graph Prototype Network for Few-Shot SAR Target Recognition

no code implementations · 19 Feb 2023 · Xiangyu Zhou, QianRu Wei, Yuhui Zhang

The inevitable feature deviation in synthetic aperture radar (SAR) images, caused by the special imaging principle (depression angle variation), leads to poor recognition accuracy, especially in few-shot learning (FSL).

Few-Shot Learning

Adapting Pre-trained Vision Transformers from 2D to 3D through Weight Inflation Improves Medical Image Segmentation

1 code implementation · 8 Feb 2023 · Yuhui Zhang, Shih-Cheng Huang, Zhengping Zhou, Matthew P. Lungren, Serena Yeung

Given the prevalence of 3D medical imaging technologies such as MRI and CT that are widely used in diagnosing and treating diverse diseases, 3D segmentation is one of the fundamental tasks of medical image analysis.

Image Segmentation · Medical Image Segmentation +2

Diagnosing and Rectifying Vision Models using Language

1 code implementation · 8 Feb 2023 · Yuhui Zhang, Jeff Z. HaoChen, Shih-Cheng Huang, Kuan-Chieh Wang, James Zou, Serena Yeung

Our proposed method can discover high-error data slices, identify influential attributes and further rectify undesirable model behaviors, without requiring any visual data.

Contrastive Learning

Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning

2 code implementations · 3 Mar 2022 · Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, James Zou

Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization.

Contrastive Learning · Fairness +2

Language Models as Recommender Systems: Evaluations and Limitations

no code implementations · NeurIPS Workshop ICBINB 2021 · Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, Hao Wang

Pre-trained language models (PLMs) such as BERT and GPT learn general text representations and encode extensive world knowledge; thus, they can be efficiently and accurately adapted to various downstream tasks.

Movie Recommendation · Session-Based Recommendations

On the Opportunities and Risks of Foundation Models

3 code implementations · 16 Aug 2021 · Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Collision dominated, ballistic, and viscous regimes of terahertz plasmonic detection by graphene

no code implementations · 21 Dec 2020 · Yuhui Zhang, Michael S. Shur

When the kinematic viscosity (ν) is above a certain critical value, ν_NR, the plasmonic FETs always operate in the viscous non-resonant regime regardless of channel length (L).

Applied Physics · Plasma Physics

Inducing Grammar from Long Short-Term Memory Networks by Shapley Decomposition

no code implementations · ACL 2020 · Yuhui Zhang, Allen Nie

The principle of compositionality has deep roots in linguistics: the meaning of an expression is determined by its structure and the meanings of its constituents.

Enhancing Transformer with Sememe Knowledge

no code implementations · WS 2020 · Yuhui Zhang, Chenghao Yang, Zhengping Zhou, Zhiyuan Liu

While large-scale pretraining has achieved great success in many NLP tasks, it has not been fully studied whether external linguistic knowledge can improve data-driven models.

Language Modelling

Jiuge: A Human-Machine Collaborative Chinese Classical Poetry Generation System

no code implementations · ACL 2019 · Guo Zhipeng, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, Jiannan Liang, Huimin Chen, Yuhui Zhang, Ruoyu Li

By exposing the options of poetry genres, styles and revision modes, Jiuge, acting as a professional assistant, allows constant and active participation of users in poetic creation.


Large-scale Generative Modeling to Improve Automated Veterinary Disease Coding

no code implementations · 29 Nov 2018 · Yuhui Zhang, Allen Nie, James Zou

We compare the performance of our model with several baselines in a challenging cross-hospital setting with substantial domain shift.
