no code implementations • 24 May 2024 • Haoze He, Juncheng Billy Li, Xuan Jiang, Heather Miller
In this work, we introduce a method for selecting sparse sub-matrices, aiming to minimize the performance gap between parameter-efficient fine-tuning (PEFT) and full fine-tuning (FT) while also reducing both the computational and memory costs of fine-tuning.
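A minimal sketch of the general idea, not the paper's actual selection procedure: score fixed-size sub-matrices of each weight by gradient magnitude on a calibration batch, then mask gradients so the optimizer only updates the selected blocks. The block size, the L1 score, and the hook-based masking are all illustrative assumptions.

```python
import torch

def block_mask_by_grad(grad, block=64, keep_frac=0.05):
    """Score each (block x block) sub-matrix by the L1 norm of its gradient
    and keep the top fraction. Assumes both dims are divisible by `block`."""
    rows, cols = grad.shape[0] // block, grad.shape[1] // block
    scores = grad.abs().reshape(rows, block, cols, block).sum(dim=(1, 3))
    k = max(1, int(keep_frac * scores.numel()))
    flat = torch.zeros(scores.numel(), dtype=torch.bool, device=grad.device)
    flat[torch.topk(scores.flatten(), k).indices] = True
    # Expand the block-level mask back to the weight's full shape.
    return flat.reshape(rows, cols).repeat_interleave(block, 0).repeat_interleave(block, 1)

def sparsify_updates(model, masks):
    """Zero gradients outside the selected sub-matrices, so only those
    blocks get fine-tuned; `masks` maps parameter names to boolean masks."""
    for name, p in model.named_parameters():
        if name in masks:
            p.register_hook(lambda g, m=masks[name]: g * m)
```

In this sketch, one backward pass on a calibration batch supplies the gradients used for scoring; a real implementation would also avoid allocating optimizer state for the frozen entries in order to realize the memory savings the abstract describes.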
2 code implementations • 5 Oct 2023 • Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, Christopher Potts
The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks.
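As a toy illustration of what "stacking LM calls into a pipeline" means (framework-agnostic; `call_lm` is a placeholder for any chat-completion client, not DSPy's API):

```python
def call_lm(prompt: str) -> str:
    """Stand-in for an actual LM client call."""
    raise NotImplementedError("plug in your LM client here")

def answer(question: str) -> str:
    # Stage 1: one LM call rewrites the task into a focused search query.
    query = call_lm(f"Rewrite as a short search query: {question}")
    # Stage 2: a second LM call answers conditioned on stage 1's output.
    return call_lm(f"Given the search query '{query}', answer: {question}")
```

DSPy's pitch is to replace hand-written prompt strings like these with declarative, parameterized modules whose prompts can then be optimized automatically against a metric.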
no code implementations • 8 Apr 2020 • Matthew Weidner, Heather Miller, Christopher Meiklejohn
Although it reproduces common CRDT semantics, the semidirect product can be viewed as a restricted kind of operational transformation, thus forming a bridge between these two opposing techniques for constructing replicated data types.
Distributed, Parallel, and Cluster Computing
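A minimal sketch using the add/mult register, a standard running example for the semidirect product (the vector-clock bookkeeping and class names here are illustrative, and broadcast is assumed to be causally ordered): each mult acts on every add concurrent with it, turning add(x) into add(x*y), so replicas converge regardless of delivery order.

```python
def happened_before(a, b):
    """Vector-clock causality: a < b iff a <= b componentwise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

class AddMultRegister:
    def __init__(self, rid, n_replicas):
        self.rid, self.vc, self.value = rid, [0] * n_replicas, 0
        self.mults = []  # (vector clock, factor) for every mult applied so far

    def local(self, kind, arg):
        self.vc[self.rid] += 1
        op = (kind, arg, list(self.vc))
        self._apply(op)
        return op  # broadcast to the other replicas

    def receive(self, op):
        self.vc = [max(a, b) for a, b in zip(self.vc, op[2])]
        self._apply(op)

    def _apply(self, op):
        kind, arg, vc = op
        if kind == "mult":
            self.value *= arg           # also scales concurrent adds seen earlier
            self.mults.append((vc, arg))
        else:  # add: transform its operand by every concurrent mult
            x = arg
            for mvc, factor in self.mults:
                if not happened_before(mvc, vc):
                    x *= factor
            self.value += x
```

The loop in `_apply` is where the "restricted operational transformation" reading shows up: mult operations rewrite concurrent add operations, but never the other way around.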
1 code implementation • 7 Feb 2018 • Christopher Meiklejohn, Heather Miller
Partisan is a topology-agnostic distributed programming model and distribution layer that supports several network topologies for different application scenarios: full mesh, peer-to-peer, client-server, and publish-subscribe.
Distributed, Parallel, and Cluster Computing
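Partisan itself is implemented in Erlang; the Python sketch below (all names invented for illustration) only conveys the architectural idea in the abstract: application code sends through one forwarding API, while the overlay that decides which peers actually carry traffic is swapped at configuration time.

```python
from abc import ABC, abstractmethod

class Overlay(ABC):
    @abstractmethod
    def peers(self, members, self_id):
        """Return the peers this node communicates with directly."""

class FullMesh(Overlay):
    def peers(self, members, self_id):
        return [m for m in members if m != self_id]  # direct link to everyone

class ClientServer(Overlay):
    def __init__(self, server):
        self.server = server
    def peers(self, members, self_id):
        # Clients talk only to the server; the server talks to all clients.
        if self_id == self.server:
            return [m for m in members if m != self_id]
        return [self.server]

class Node:
    def __init__(self, node_id, members, overlay):
        self.id, self.members, self.overlay = node_id, members, overlay
    def forward(self, msg, send):
        # Same call site whichever topology is configured.
        for peer in self.overlay.peers(self.members, self.id):
            send(peer, msg)
```

A peer-to-peer overlay such as a partial mesh would slot in as another `Overlay` implementation without touching `Node.forward`.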