Search Results for author: Purvish Jajal

Found 5 papers, 5 papers with code

Detecting Music Performance Errors with Transformers

1 code implementation · 3 Jan 2025 · Benjamin Shiue-Hal Chou, Purvish Jajal, Nicholas John Eliopoulos, Tim Nadolsky, Cheng-Yun Yang, Nikita Ravi, James C. Davis, Kristen Yeon-Ji Yun, Yung-Hsiang Lu

Beginner musicians often struggle to identify specific errors in their performances, such as playing incorrect notes or rhythms.

Token Turing Machines are Efficient Vision Models

1 code implementation · 11 Sep 2024 · Purvish Jajal, Nick John Eliopoulos, Benjamin Shiue-Hal Chou, George K. Thiruvathukal, James C. Davis, Yung-Hsiang Lu

By ensuring that there are fewer process tokens than memory tokens, we are able to reduce the inference time of the network while maintaining its accuracy.

Image Classification · Semantic Segmentation

Pruning One More Token is Enough: Leveraging Latency-Workload Non-Linearities for Vision Transformers on the Edge

1 code implementation · 1 Jul 2024 · Nick John Eliopoulos, Purvish Jajal, James C. Davis, Gaowen Liu, George K. Thiruvathukal, Yung-Hsiang Lu

For similar latency (within 5.2% or 7ms) across devices we achieve 78.6%-84.5% ImageNet1K accuracy, while the state-of-the-art, Token Merging, achieves 45.8%-85.4%.

Analysis of Failures and Risks in Deep Learning Model Converters: A Case Study in the ONNX Ecosystem

1 code implementation · 30 Mar 2023 · Purvish Jajal, Wenxin Jiang, Arav Tewari, Erik Kocinare, Joseph Woo, Anusha Sarraf, Yung-Hsiang Lu, George K. Thiruvathukal, James C. Davis

We find that the node conversion stage of a model converter accounts for ~75% of the defects, and 33% of reported failures are related to semantically incorrect models.
