no code implementations • 30 Jan 2025 • Haiyang Huang, Tianhui Meng, Weijia Jia
In this paper, we jointly consider prompt security, service latency, and system resource optimization in Edge-Cloud LLM (EC-LLM) systems under various prompt attacks.
no code implementations • 19 Jan 2025 • Qiuxia Wu, Haiyang Huang, Kunming Su, Zhiyong Wang, Kun Hu
Despite achieving encouraging results, a significant issue remains: these methods often overlook the variability in point clouds sampled from a single 3D object surface.
2 code implementations • 19 Dec 2024 • Yingfan Wang, Yiyang Sun, Haiyang Huang, Cynthia Rudin
Dimension reduction (DR) algorithms have proven to be extremely useful for gaining insight into large-scale high-dimensional datasets, particularly for finding clusters in transcriptomic data.
1 code implementation • 24 Nov 2024 • Haiyang Huang, Yingfan Wang, Cynthia Rudin
To explain this, we provide evidence that parameterized approaches lack the ability to repulse negative pairs, and that the choice of loss function also has an impact.
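As a rough illustration of the repulsion idea mentioned above (not the paper's actual objective), the sketch below shows a toy pairwise loss that both attracts neighbor pairs and explicitly repulses sampled negative pairs; the names `pair_loss` and `net` are hypothetical.

```python
# Illustrative sketch only, assuming a parametric encoder `net` that maps data
# points to low-dimensional embeddings. The key ingredient is the explicit
# repulsion term on sampled negative (non-neighbor) pairs.
import torch

def pair_loss(emb_i, emb_j, emb_neg):
    """emb_i, emb_j: embeddings of neighbor pairs; emb_neg: embeddings of
    randomly sampled non-neighbors paired with emb_i. All shapes [n, d]."""
    d_pos = ((emb_i - emb_j) ** 2).sum(dim=1)    # squared distances of positive pairs
    d_neg = ((emb_i - emb_neg) ** 2).sum(dim=1)  # squared distances of negative pairs
    attract = d_pos / (1.0 + d_pos)              # pull neighbors together
    repulse = 1.0 / (1.0 + d_neg)                # push non-neighbors apart
    return (attract + repulse).mean()

# Hypothetical usage: loss = pair_loss(net(x_i), net(x_j), net(x_neg)); loss.backward()
```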
no code implementations • 10 Mar 2023 • Haiyang Huang, Newsha Ardalani, Anna Sun, Liu Ke, Hsien-Hsin S. Lee, Anjali Sridhar, Shruti Bhosale, Carole-Jean Wu, Benjamin Lee
We propose three optimization techniques to mitigate sources of inefficiency, namely (1) Dynamic gating, (2) Expert Buffering, and (3) Expert load balancing.
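For context on the first of these ideas, here is a minimal sketch of dynamic gating in the generic top-k mixture-of-experts sense, not the paper's implementation: tokens are dispatched only to the experts the router actually selects, so idle experts consume no compute. The function name `dynamic_dispatch` and the `router`/`experts` modules are assumptions for illustration, and gate weights are omitted for brevity.

```python
# Sketch of top-k routing with on-demand expert execution (simplified:
# expert outputs are summed without gate-probability weighting).
import torch

def dynamic_dispatch(tokens, router, experts, k=1):
    # tokens: [n, d]; router: nn.Linear(d, n_experts); experts: list of nn.Modules
    logits = router(tokens)                    # [n, n_experts] routing scores
    topk = logits.topk(k, dim=-1).indices      # chosen expert ids per token
    out = torch.zeros_like(tokens)
    for e_id, expert in enumerate(experts):
        mask = (topk == e_id).any(dim=-1)      # tokens routed to this expert
        if mask.any():                         # skip experts with no tokens
            out[mask] += expert(tokens[mask])
    return out
```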
no code implementations • 22 Apr 2022 • Haiyang Huang, Zhi Chen, Cynthia Rudin
Experimental results provide evidence that our method can discover multiple concepts within a single image and outperforms state-of-the-art unsupervised methods on complex datasets such as Cityscapes and COCO-Stuff.
no code implementations • 20 Mar 2021 • Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Interpretability in machine learning (ML) is crucial for high-stakes decisions and troubleshooting.
2 code implementations • 8 Dec 2020 • Yingfan Wang, Haiyang Huang, Cynthia Rudin, Yaron Shaposhnik
In this work, our main goal is to understand what aspects of DR methods are important for preserving both local and global structure: it is difficult to design a better method without a true understanding of the choices we make in our algorithms and their empirical impact on the lower-dimensional embeddings they produce.
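One simple way to make the local-versus-global distinction concrete, shown in the hedged sketch below (an illustration, not the paper's evaluation protocol), is to score any embedding with neighborhood trustworthiness for local structure and rank correlation of pairwise distances for global structure; PCA and the random data stand in for an arbitrary DR method and dataset.

```python
# Illustrative scoring of local vs. global structure preservation for a DR embedding.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))            # placeholder for a high-dimensional dataset
Y = PCA(n_components=2).fit_transform(X)  # any DR embedding could be scored here

local_score = trustworthiness(X, Y, n_neighbors=10)  # near 1: local neighborhoods preserved
global_score, _ = spearmanr(pdist(X), pdist(Y))      # near 1: global distance ranks preserved
print(f"local={local_score:.3f}, global={global_score:.3f}")
```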
Ranked #2 on Data Augmentation on GA1457