Search Results for author: Yuhui Zhang

Found 31 papers, 15 papers with code

Compact SPICE model for TeraFET resonant detectors

no code implementations • 27 Jul 2024 • Xueqing Liu, Yuhui Zhang, Trond Ytterdal, Michael Shur

By calculating the equivalent components of each channel segment (conductance, capacitance, and inductance) from the voltages at the segment's nodes, our model accommodates non-uniform variations along the channel.
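The segmented transmission-line idea in the snippet above can be sketched in a few lines. The expressions below are generic gradual-channel and kinetic-inductance formulas standing in for the paper's model, and every constant and parameter value is illustrative:

```python
import numpy as np

# Illustrative constants (not the paper's values)
Q_E = 1.602e-19            # electron charge [C]
M_EFF = 0.063 * 9.109e-31  # GaAs-like effective mass [kg]

def segment_rlc(v_nodes, v_gate, v_th, c_ox, mu, width, seg_len):
    """Per-segment conductance, capacitance, and kinetic inductance.

    v_nodes: channel potentials at the segment boundaries [V].
    Uses textbook gradual-channel expressions as a stand-in for the
    paper's compact model.
    """
    v_mid = 0.5 * (v_nodes[:-1] + v_nodes[1:])                     # midpoint potential
    n_s = np.maximum(c_ox * (v_gate - v_th - v_mid) / Q_E, 1e12)   # sheet density [1/m^2]
    g = Q_E * n_s * mu * width / seg_len                           # conductance [S]
    c = c_ox * width * seg_len                                     # gate capacitance [F]
    l_kin = M_EFF * seg_len / (Q_E**2 * n_s * width)               # kinetic inductance [H]
    return g, c, l_kin

# Divide a 100 nm channel into 20 segments; the bias drop along the channel
# makes each segment's RLC values different, i.e., non-uniform.
n_seg = 20
v_nodes = np.linspace(0.0, 0.05, n_seg + 1)
g, c, l_kin = segment_rlc(v_nodes, v_gate=0.5, v_th=0.2,
                          c_ox=1e-2, mu=0.8, width=1e-6, seg_len=100e-9 / n_seg)
print(g[:3], c, l_kin[:3])
```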

Robust VAEs via Generating Process of Noise Augmented Data

no code implementations • 26 Jul 2024 • Hiroo Irobe, Wataru Aoki, Kimihiro Yamazaki, Yuhui Zhang, Takumi Nakagawa, Hiroki Waida, Yuichiro Wada, Takafumi Kanamori

Advancing defensive mechanisms against adversarial attacks in generative models is a critical research topic in machine learning.

μ-Bench: A Vision-Language Benchmark for Microscopy Understanding

1 code implementation • 1 Jul 2024 • Alejandro Lozano, Jeffrey Nirschl, James Burgess, Sanket Rajan Gupte, Yuhui Zhang, Alyssa Unell, Serena Yeung-Levy

Recent advances in microscopy have enabled the rapid generation of terabytes of image data in cell biology and biomedical research.

Cell Detection · Classification +2

Heterogeneous Federated Learning with Splited Language Model

no code implementations • 24 Mar 2024 • Yifan Shi, Yuhui Zhang, Ziyue Huang, Xiaofeng Yang, Li Shen, Wei Chen, Xueqian Wang

Federated Split Learning (FSL) is a promising distributed learning paradigm in practice, which gathers the strengths of both Federated Learning (FL) and Split Learning (SL) paradigms, to ensure model privacy while diminishing the resource overhead of each client, especially on large transformer models in a resource-constrained environment, e.g., Internet of Things (IoT).

Federated Learning · Language Modelling
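For readers unfamiliar with the paradigm, here is a minimal sketch of one federated split learning round under generic assumptions (client-side "front" sub-models, a shared server-side "back", FedAvg over the fronts); it is not the paper's specific method:

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Each client keeps a small "front"; the server hosts the shared "back".
def make_front():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU())

server_back = nn.Sequential(nn.Linear(64, 10))
clients = [make_front() for _ in range(4)]
opt_back = torch.optim.SGD(server_back.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for front in clients:
    opt_front = torch.optim.SGD(front.parameters(), lr=0.1)
    x = torch.randn(8, 32)                  # client's private batch
    y = torch.randint(0, 10, (8,))
    smashed = front(x)                      # only activations leave the client
    logits = server_back(smashed)           # server completes the forward pass
    loss = loss_fn(logits, y)
    opt_front.zero_grad(); opt_back.zero_grad()
    loss.backward()                         # gradients flow back across the cut
    opt_front.step(); opt_back.step()

# FedAvg over the client-side fronts, as in federated learning
avg = copy.deepcopy(clients[0].state_dict())
for key in avg:
    avg[key] = torch.stack([c.state_dict()[key] for c in clients]).mean(0)
for c in clients:
    c.load_state_dict(avg)
```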

VideoAgent: Long-form Video Understanding with Large Language Model as Agent

1 code implementation • 15 Mar 2024 • Xiaohan Wang, Yuhui Zhang, Orr Zohar, Serena Yeung-Levy

Long-form video understanding represents a significant challenge within computer vision, demanding a model capable of reasoning over long multi-modal sequences.

Language Modelling · Large Language Model +2

Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data

1 code implementation • 16 Jan 2024 • Yuhui Zhang, Elaine Sui, Serena Yeung-Levy

However, this assumption is under-explored due to the poorly understood geometry of the multi-modal contrastive space, where a modality gap exists.

Text-to-Image Generation · Video Captioning

Describing Differences in Image Sets with Natural Language

1 code implementation • CVPR 2024 • Lisa Dunlap, Yuhui Zhang, Xiaohan Wang, Ruiqi Zhong, Trevor Darrell, Jacob Steinhardt, Joseph E. Gonzalez, Serena Yeung-Levy

To aid in this discovery process, we explore the task of automatically describing the differences between two sets of images, which we term Set Difference Captioning.

Language Modelling
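As a rough illustration of Set Difference Captioning, the sketch below ranks candidate difference descriptions by how much better they match set A than set B. Every component is a hypothetical stand-in: the candidate list would come from an LLM proposer and `text_feat` from a real image-text encoder, neither of which the snippet above specifies:

```python
import numpy as np

rng = np.random.default_rng(0)

candidates = ["contains snow", "taken at night", "shows more than one animal"]

def text_feat(text):
    # Placeholder text encoder: deterministic random features per string.
    return np.random.default_rng(abs(hash(text)) % 2**32).standard_normal(64)

set_a = rng.standard_normal((20, 64))  # placeholder features for image set A
set_b = rng.standard_normal((20, 64))  # placeholder features for image set B

def gap(text):
    # How much better a candidate matches set A than set B, on average.
    t = text_feat(text)
    return (set_a @ t).mean() - (set_b @ t).mean()

print(max(candidates, key=gap))
```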

Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation

no code implementations • 27 Nov 2023 • Yuhui Zhang, Brandon McKinzie, Zhe Gan, Vaishaal Shankar, Alexander Toshev

Recent advances in image tokenizers, such as VQ-VAE, have enabled text-to-image generation using auto-regressive methods, similar to language modeling.

Language Modelling · Text-to-Image Generation
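The mechanics of auto-regressive text-to-image generation over VQ tokens can be illustrated as follows: caption tokens and image tokens share one sequence, and a causal transformer emits image tokens one at a time. The toy model, vocabulary sizes, and greedy decoding below are placeholders, not any paper's configuration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

TEXT_VOCAB, IMG_VOCAB, D = 100, 512, 64
embed = nn.Embedding(TEXT_VOCAB + IMG_VOCAB, D)
block = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(block, num_layers=2)
to_logits = nn.Linear(D, IMG_VOCAB)

text = torch.randint(0, TEXT_VOCAB, (1, 8))   # tokenized caption
image = text.new_empty((1, 0))                # image tokens, generated next

for _ in range(16):                           # e.g. a 4x4 grid of VQ tokens
    seq = torch.cat([text, image + TEXT_VOCAB], dim=1)
    mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
    h = backbone(embed(seq), mask=mask)       # causal "language modeling" pass
    next_tok = to_logits(h[:, -1]).argmax(-1, keepdim=True)   # greedy decoding
    image = torch.cat([image, next_tok], dim=1)

# `image` now holds VQ codebook indices; a VQ-VAE decoder would map them to pixels.
print(image)
```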

Can large language models provide useful feedback on research papers? A large-scale empirical analysis

1 code implementation • 3 Oct 2023 • Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Ding, Xinyu Yang, Kailas Vodrahalli, Siyu He, Daniel Smith, Yian Yin, Daniel McFarland, James Zou

We first quantitatively compared GPT-4's generated feedback with human peer reviewer feedback in 15 Nature family journals (3,096 papers in total) and the ICLR machine learning conference (1,709 papers).

Machine Learning-guided Lipid Nanoparticle Design for mRNA Delivery

no code implementations • 2 Aug 2023 • Daisy Yi Ding, Yuhui Zhang, Yuan Jia, Jiuzhi Sun

While RNA technologies hold immense therapeutic potential in a range of applications from vaccination to gene editing, the broad implementation of these technologies is hindered by the challenge of delivering these agents effectively.

FedSecurity: Benchmarking Attacks and Defenses in Federated Learning and Federated LLMs

1 code implementation • 8 Jun 2023 • Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He

This paper introduces FedSecurity, an end-to-end benchmark that serves as a supplementary component of the FedML library for simulating adversarial attacks and corresponding defense mechanisms in Federated Learning (FL).

Benchmarking · Federated Learning

Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models

1 code implementation • 27 May 2023 • Yuhui Zhang, Michihiro Yasunaga, Zhengping Zhou, Jeff Z. HaoChen, James Zou, Percy Liang, Serena Yeung

Language models have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data.

Negation · Question Answering +1

Denoising Cosine Similarity: A Theory-Driven Approach for Efficient Representation Learning

no code implementations • 19 Apr 2023 • Takumi Nakagawa, Yutaro Sanada, Hiroki Waida, Yuhui Zhang, Yuichiro Wada, Kōsaku Takanashi, Tomonori Yamada, Takafumi Kanamori

To this end, inspired by recent works on denoising and the success of the cosine-similarity-based objective functions in representation learning, we propose the denoising Cosine-Similarity (dCS) loss.

Denoising · Representation Learning
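A minimal sketch of a denoising objective built on cosine similarity, assuming the loss pulls the representation of a noise-augmented input toward that of its clean counterpart (with a stop-gradient on the clean branch); the paper's exact dCS formulation may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x_clean = torch.randn(128, 32)
x_noisy = x_clean + 0.3 * torch.randn_like(x_clean)   # additive Gaussian noise

z_noisy = encoder(x_noisy)
with torch.no_grad():                                 # clean branch as target
    z_clean = encoder(x_clean)
loss = (1 - F.cosine_similarity(z_noisy, z_clean, dim=-1)).mean()
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```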

Deep Clustering with a Constraint for Topological Invariance based on Symmetric InfoNCE

no code implementations • 6 Mar 2023 • Yuhui Zhang, Yuichiro Wada, Hiroki Waida, Kaito Goto, Yusaku Hino, Takafumi Kanamori

To address the problem, we propose a constraint utilizing symmetric InfoNCE, which helps the deep clustering objective in this scenario train the model to be effective on datasets with complex as well as non-complex topologies.

Clustering · Deep Clustering
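The symmetric InfoNCE building block itself is standard (the two-directional, CLIP-style contrastive loss); a minimal version is below. How the paper wires it into the deep clustering objective as a constraint is not shown here:

```python
import torch
import torch.nn.functional as F

def symmetric_info_nce(z1, z2, temperature=0.1):
    """Symmetric InfoNCE over two batches of paired embeddings."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # pairwise similarities
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
print(float(symmetric_info_nce(z1, z2)))
```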

DGP-Net: Dense Graph Prototype Network for Few-Shot SAR Target Recognition

no code implementations • 19 Feb 2023 • Xiangyu Zhou, QianRu Wei, Yuhui Zhang

The inevitable feature deviation of synthetic aperture radar (SAR) image due to the special imaging principle (depression angle variation) leads to poor recognition accuracy, especially in few-shot learning (FSL).

Few-Shot Learning

Diagnosing and Rectifying Vision Models using Language

1 code implementation • 8 Feb 2023 • Yuhui Zhang, Jeff Z. HaoChen, Shih-Cheng Huang, Kuan-Chieh Wang, James Zou, Serena Yeung

Our proposed method can discover high-error data slices, identify influential attributes and further rectify undesirable model behaviors, without requiring any visual data.

Contrastive Learning
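A sketch of how a vision model can be diagnosed "without requiring any visual data", assuming the cross-modal transferability this line of work relies on: a linear head trained on CLIP image embeddings can instead be probed with CLIP text embeddings, so candidate failure modes can be screened with text alone. The head below is untrained and hypothetical:

```python
import torch
import clip  # OpenAI CLIP; pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Stand-in for a linear classifier previously trained on CLIP *image*
# embeddings. Because CLIP aligns the two modalities, we can query it
# with *text* embeddings describing hypothesized data slices.
head = torch.nn.Linear(512, 2).to(device)  # e.g. {0: "cat", 1: "dog"}

probes = ["a photo of a cat", "a sketch of a cat", "a cat in the snow"]
with torch.no_grad():
    z = model.encode_text(clip.tokenize(probes).to(device)).float()
    z = z / z.norm(dim=-1, keepdim=True)
    preds = head(z).argmax(-1)

for text, pred in zip(probes, preds.tolist()):
    print(f"{text!r} -> predicted class {pred}")
```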

Adapting Pre-trained Vision Transformers from 2D to 3D through Weight Inflation Improves Medical Image Segmentation

1 code implementation • 8 Feb 2023 • Yuhui Zhang, Shih-Cheng Huang, Zhengping Zhou, Matthew P. Lungren, Serena Yeung

Given the prevalence of 3D medical imaging technologies such as MRI and CT that are widely used in diagnosing and treating diverse diseases, 3D segmentation is one of the fundamental tasks of medical image analysis.

Image Segmentation · Medical Image Segmentation +2
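Weight inflation itself has a standard recipe (repeat the 2D kernel along the new depth axis and rescale), sketched below; the paper's exact initialization details may differ:

```python
import torch

def inflate_2d_to_3d(w2d, depth):
    """Inflate a 2D conv/patch-embedding kernel to 3D (I3D-style).

    Repeats the kernel along the new depth axis and rescales by 1/depth,
    so a 3D input that is constant along depth yields the same activations
    as the original 2D model.
    """
    w3d = w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1)  # (out, in, D, H, W)
    return w3d / depth

w2d = torch.randn(768, 3, 16, 16)      # e.g. a ViT patch-embedding kernel
w3d = inflate_2d_to_3d(w2d, depth=4)   # kernel for 16x16x4 volumetric patches
print(w3d.shape)                        # torch.Size([768, 3, 4, 16, 16])
```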

Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning

2 code implementations • 3 Mar 2022 • Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, James Zou

Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization.

Contrastive Learning · Fairness +2
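The gap can be summarized by the offset between the centers of the normalized image and text embeddings. A minimal measurement sketch, with random features standing in for real encoder outputs:

```python
import torch
import torch.nn.functional as F

def modality_gap(image_embs, text_embs):
    # Offset between the centers of the L2-normalized embedding clouds.
    img = F.normalize(image_embs, dim=-1).mean(0)
    txt = F.normalize(text_embs, dim=-1).mean(0)
    return (img - txt).norm().item()

# Stand-in embeddings; in practice these come from a CLIP-style encoder pair.
image_embs = torch.randn(1000, 512) + 0.5   # shared offset mimics the gap
text_embs = torch.randn(1000, 512) - 0.5
print(f"gap size: {modality_gap(image_embs, text_embs):.3f}")
```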

Language Models as Recommender Systems: Evaluations and Limitations

no code implementations • NeurIPS Workshop ICBINB 2021 • Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, Hao Wang

Pre-trained language models (PLMs) such as BERT and GPT learn general text representations and encode extensive world knowledge; thus, they can be efficiently and accurately adapted to various downstream tasks.

Movie Recommendation · Session-Based Recommendations +1

On the Opportunities and Risks of Foundation Models

2 code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Collision dominated, ballistic, and viscous regimes of terahertz plasmonic detection by graphene

no code implementations • 21 Dec 2020 • Yuhui Zhang, Michael S. Shur

When the kinematic viscosity (ν) is above a certain critical value, ν_NR, the plasmonic FET always operates in the viscous non-resonant regime, regardless of channel length (L).

Applied Physics · Plasma Physics

Inducing Grammar from Long Short-Term Memory Networks by Shapley Decomposition

no code implementations • ACL 2020 • Yuhui Zhang, Allen Nie

The principle of compositionality has deep roots in linguistics: the meaning of an expression is determined by its structure and the meanings of its constituents.

Enhancing Transformer with Sememe Knowledge

no code implementations • WS 2020 • Yuhui Zhang, Chenghao Yang, Zhengping Zhou, Zhiyuan Liu

While large-scale pretraining has achieved great success in many NLP tasks, it has not been fully studied whether external linguistic knowledge can improve data-driven models.

Language Modelling

Jiuge: A Human-Machine Collaborative Chinese Classical Poetry Generation System

no code implementations • ACL 2019 • Guo Zhipeng, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, Jiannan Liang, Huimin Chen, Yuhui Zhang, Ruoyu Li

By exposing the options of poetry genres, styles and revision modes, Jiuge, acting as a professional assistant, allows constant and active participation of users in poetic creation.

Cultural Vocal Bursts Intensity Prediction

Large-scale Generative Modeling to Improve Automated Veterinary Disease Coding

no code implementations • 29 Nov 2018 • Yuhui Zhang, Allen Nie, James Zou

We compare the performance of our model with several baselines in a challenging cross-hospital setting with substantial domain shift.
