no code implementations • 2 Sep 2024 • Guanglei Zhou, Bhargav Korrapati, Gaurav Rajavendra Reddy, Jiang Hu, Yiran Chen, Dipto G. Thakurta
Generation of VLSI layout patterns is essential for a wide range of Design For Manufacturability (DFM) studies.
no code implementations • 12 Jun 2024 • Jiaojiao Zhang, Jiang Hu, Anthony Man-Cho So, Mikael Johansson
Many machine learning tasks, such as principal component analysis and low-rank matrix completion, give rise to manifold optimization problems.
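For context (an illustration, not taken from the paper), PCA already exhibits the manifold structure: extracting the top-p principal directions of a covariance matrix A is a smooth problem over the Stiefel manifold,
\[
\max_{X \in \mathrm{St}(n,p)} \operatorname{tr}\!\left(X^\top A X\right),
\qquad
\mathrm{St}(n,p) = \{\, X \in \mathbb{R}^{n \times p} : X^\top X = I_p \,\}.
\]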
no code implementations • 19 Mar 2024 • Jiang Hu, Quanzheng Li
Our key observation is that the associated generalized Fisher information matrix is either low-rank or of extremely small scale.
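As a generic sketch of why such structure matters (standard linear algebra, not the paper's algorithm; the function name is ours), a damped Fisher matrix of the form G G^T + lam*I with a thin factor G can be applied in inverse to a gradient in O(n k^2) time via the Woodbury identity:

```python
import numpy as np

def natural_gradient_lowrank(G, lam, grad):
    """Apply (G @ G.T + lam * I)^{-1} to grad via the Woodbury identity.

    G    : (n, k) thin factor of a low-rank Fisher approximation, k << n
    lam  : positive damping constant
    grad : (n,) Euclidean gradient vector

    Cost is O(n k^2) instead of the O(n^3) of a dense solve.
    """
    n, k = G.shape
    # Woodbury: (lam*I + G G^T)^{-1} = I/lam - G (lam*I_k + G^T G)^{-1} G^T / lam
    small = lam * np.eye(k) + G.T @ G          # k x k system only
    correction = G @ np.linalg.solve(small, G.T @ grad) / lam
    return grad / lam - correction
```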
1 code implementation • 24 Sep 2023 • Sekeun Kim, Kyungsang Kim, Jiang Hu, Cheng Chen, Zhiliang Lyu, Ren Hui, Sunghwan Kim, Zhengliang Liu, Aoxiao Zhong, Xiang Li, Tianming Liu, Quanzheng Li
The Segment Anything Model (SAM) has gained significant attention for its robust generalization capabilities across diverse downstream tasks.
1 code implementation • 16 Sep 2023 • Cheng Chen, Juzheng Miao, Dufan Wu, Zhiling Yan, Sekeun Kim, Jiang Hu, Aoxiao Zhong, Zhengliang Liu, Lichao Sun, Xiang Li, Tianming Liu, Pheng-Ann Heng, Quanzheng Li
The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks.
no code implementations • 4 Sep 2023 • Jiaojiao Zhang, Jiang Hu, Mikael Johansson
We propose a novel algorithm for solving the composite Federated Learning (FL) problem.
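For orientation, the composite FL problem is conventionally written as
\[
\min_{x \in \mathbb{R}^d} \ \frac{1}{n}\sum_{i=1}^{n} f_i(x) + g(x),
\]
where \(f_i\) is the smooth loss held by client \(i\) and \(g\) is a shared nonsmooth regularizer such as an \(\ell_1\) penalty (notation ours).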
no code implementations • 29 Aug 2023 • Zhengliang Liu, Yiwei Li, Peng Shu, Aoxiao Zhong, Longtao Yang, Chao Ju, Zihao Wu, Chong Ma, Jie Luo, Cheng Chen, Sekeun Kim, Jiang Hu, Haixing Dai, Lin Zhao, Dajiang Zhu, Jun Liu, Wei Liu, Dinggang Shen, Tianming Liu, Quanzheng Li, Xiang Li
This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning.
no code implementations • 31 Mar 2023 • Jinxin Wang, Jiang Hu, Shixiang Chen, Zengde Deng, Anthony Man-Cho So
We focus on a class of non-smooth optimization problems over the Stiefel manifold in the decentralized setting, where a connected network of $n$ agents cooperatively minimize a finite-sum objective function with each component being weakly convex in the ambient Euclidean space.
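In symbols (notation ours), this problem class reads
\[
\min_{X \in \mathrm{St}(d,r)} \ \frac{1}{n}\sum_{i=1}^{n} f_i(X),
\]
where agent \(i\) holds \(f_i\), each \(f_i\) is weakly convex in the ambient Euclidean space, and the agents communicate only over the edges of a connected network.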
no code implementations • 16 Mar 2023 • Jiang Hu, Kangkang Deng, Na Li, Quanzheng Li
With a computationally efficient approximation of the second-order information, natural gradient methods have been successful in solving large-scale structured optimization problems.
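For reference, the update these methods approximate is the classical natural gradient step
\[
\theta_{k+1} = \theta_k - \eta_k\, F(\theta_k)^{-1} \nabla L(\theta_k),
\]
where \(F\) is the (generalized) Fisher information matrix and \(\eta_k\) the step size; forming and inverting \(F\) exactly is the cost that efficient approximations of the second-order information aim to avoid.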
no code implementations • 15 Jul 2022 • Jiang Hu, Ruicheng Ao, Anthony Man-Cho So, MingHan Yang, Zaiwen Wen
Moreover, we show that if the loss function satisfies certain convexity and smoothness conditions and the input-output map satisfies a Riemannian Jacobian stability condition, then our proposed method enjoys a local linear -- or, under the Lipschitz continuity of the Riemannian Jacobian of the input-output map, even quadratic -- rate of convergence.
no code implementations • 30 Mar 2022 • Jingyu Pan, Chen-Chia Chang, Zhiyao Xie, Ang Li, Minxue Tang, Tunhou Zhang, Jiang Hu, Yiran Chen
To further strengthen the results, we co-design a customized ML model, FLNet, and its personalization scheme for the decentralized training scenario.
no code implementations • 19 Feb 2021 • Lang Feng, Jiayi Huang, Jeff Huang, Jiang Hu
Data-Flow Integrity (DFI) is a well-known approach to effectively detecting a wide range of software attacks.
Hardware Architecture
no code implementations • 3 Dec 2020 • Chen-Chia Chang, Jingyu Pan, Tunhou Zhang, Zhiyao Xie, Jiang Hu, Weiyi Qi, Chun-Wei Lin, Rongjian Liang, Joydeep Mitra, Elias Fallon, Yiran Chen
The rise of machine learning technology inspires a boom of its applications in electronic design automation (EDA) and helps improve the degree of automation in chip designs.
no code implementations • 27 Nov 2020 • Zhiyao Xie, Rongjian Liang, Xiaoqing Xu, Jiang Hu, Yixiao Duan, Yiran Chen
Net length is a key proxy metric for optimizing timing and power across various stages of a standard digital design flow.
no code implementations • 26 Nov 2020 • Zhiyao Xie, Haoxing Ren, Brucek Khailany, Ye Sheng, Santosh Santosh, Jiang Hu, Yiran Chen
Moreover, the proposed CNN model is general and transferable to different designs.
no code implementations • 26 Nov 2020 • Zhiyao Xie, Guan-Qi Fang, Yu-Hung Huang, Haoxing Ren, Yanqing Zhang, Brucek Khailany, Shao-Yun Fang, Jiang Hu, Yiran Chen, Erick Carvajal Barboza
Experimental results on benchmark circuits show that our approach achieves a 25% improvement in design quality or a 37% reduction in sampling cost compared to a random forest method, which is the kernel of a highly cited previous work.
no code implementations • 26 Nov 2020 • Zhiyao Xie, Hai Li, Xiaoqing Xu, Jiang Hu, Yiran Chen
The IR drop constraint is a fundamental requirement enforced in almost all chip designs.
no code implementations • 17 Nov 2020 • Erick Carvajal Barboza, Sara Jacob, Mahesh Ketkar, Michael Kishinevsky, Paul Gratz, Jiang Hu
Design bugs that affect processor performance rather than its functionality are especially difficult to catch, particularly in new microarchitectures.
1 code implementation • 3 Sep 2018 • Jiang Hu, Bo Jiang, Lin Lin, Zaiwen Wen, Yaxiang Yuan
In particular, we are interested in applications in which the Euclidean Hessian itself consists of a computationally cheap part and a significantly more expensive part.
Optimization and Control
2 code implementations • 7 Aug 2017 • Jiang Hu, Andre Milzarek, Zaiwen Wen, Yaxiang Yuan
Optimization on Riemannian manifolds arises widely in eigenvalue computation, density functional theory, Bose-Einstein condensates, low-rank nearest correlation estimation, image registration, and signal processing.
Optimization and Control
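For readers new to the area, a minimal sketch of a single Riemannian gradient step on the Stiefel manifold (tangent-space projection followed by a QR retraction) is given below; this is a textbook building block, not the adaptive regularized Newton method proposed in the paper, and the function name is ours:

```python
import numpy as np

def stiefel_gradient_step(X, euclid_grad, step):
    """One Riemannian gradient step on the Stiefel manifold St(n, p).

    X           : (n, p) point satisfying X^T X = I_p
    euclid_grad : (n, p) Euclidean gradient of the objective at X
    step        : step size
    """
    # Project the Euclidean gradient onto the tangent space at X
    sym = (X.T @ euclid_grad + euclid_grad.T @ X) / 2
    riem_grad = euclid_grad - X @ sym
    # Retract back onto the manifold via a QR decomposition
    Q, R = np.linalg.qr(X - step * riem_grad)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0          # resolve the sign ambiguity of the QR factors
    return Q * d             # rescale columns so the retraction is well defined
```

Newton-type methods build on the same projection-and-retraction template, replacing the projected gradient with a (regularized) Newton direction computed in the tangent space.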