no code implementations • 6 Dec 2023 • Haixun Wang, Taesik Na
Instead of converting unstructured data (web pages, customer reviews, etc.) to structured data, we convert structured data (product inventory, catalogs, taxonomies, etc.) into textual data, which can be easily integrated into the text corpus used to train LLMs.
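A minimal sketch of this structured-to-text idea: render a catalog record as a plain-English sentence that can be appended to a pre-training corpus. The schema, field names, and template below are illustrative assumptions, not the paper's actual format.

```python
# Hypothetical sketch: serialize a structured catalog entry into natural-language
# text for an LLM training corpus. Schema and template are assumptions.

def product_to_text(product: dict) -> str:
    """Render one catalog record as a plain-English sentence."""
    attrs = ", ".join(f"{k}: {v}" for k, v in product.get("attributes", {}).items())
    return (
        f"{product['name']} is a product in the {product['category']} category. "
        f"It costs ${product['price']:.2f}. Attributes: {attrs}."
    )

record = {
    "name": "Organic Fuji Apple",
    "category": "Fresh Fruit",
    "price": 0.79,
    "attributes": {"unit": "each", "organic": "yes"},
}
print(product_to_text(record))
```

The appeal of this direction is that the resulting sentences need no special handling: they join the ordinary text stream the LLM is trained on.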
no code implementations • 11 Nov 2023 • Xiaochen Wang, Xiao Xiao, Ruhan Zhang, Xuan Zhang, Taesik Na, Tejaswi Tenneti, Haixun Wang, Fenglong Ma
Efficient and accurate product relevance assessment is critical for user experiences and business success.
no code implementations • 12 Sep 2022 • Yuqing Xie, Taesik Na, Xiao Xiao, Saurav Manchanda, Young Rao, Zhihong Xu, Guanghua Shu, Esther Vasiete, Tejaswi Tenneti, Haixun Wang
To train the model efficiently on noisy data, we propose a self-adversarial learning method and a cascade training method.
no code implementations • NeurIPS 2020 • Bita Darvish Rouhani, Daniel Lo, Ritchie Zhao, Ming Liu, Jeremy Fowers, Kalin Ovtcharov, Anna Vinogradsky, Sarah Massengill, Lita Yang, Ray Bittner, Alessandro Forin, Haishan Zhu, Taesik Na, Prerak Patel, Shuai Che, Lok Chand Koppaka, Xia Song, Subhojit Som, Kaustav Das, Saurabh T, Steve Reinhardt, Sitaram Lanka, Eric Chung, Doug Burger
In this paper, we explore the limits of Microsoft Floating Point (MSFP), a new class of datatypes developed for production cloud-scale inferencing on custom hardware.
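MSFP is a block floating-point style format in which a group of values shares a single exponent while each element keeps only a narrow mantissa. A rough NumPy sketch of that shared-exponent idea (bit widths, rounding, and saturation behavior here are illustrative assumptions, not MSFP's actual encoding):

```python
import numpy as np

def to_block_fp(values: np.ndarray, mantissa_bits: int = 4) -> np.ndarray:
    """Quantize a block of floats to a shared-exponent representation and
    decode back, illustrating the precision loss of block floating point.
    (Sketch only; bit widths and rounding are assumptions, not MSFP's spec.)"""
    max_abs = np.max(np.abs(values))
    if max_abs == 0:
        return np.zeros_like(values)
    # One exponent is shared by the whole block, taken from the largest element.
    shared_exp = int(np.floor(np.log2(max_abs)))
    scale = 2.0 ** (shared_exp - mantissa_bits + 1)
    # Each element keeps only a small signed integer mantissa (saturating).
    max_mag = 2 ** mantissa_bits - 1
    mantissas = np.clip(np.round(values / scale), -max_mag, max_mag)
    return mantissas * scale
```

Small-magnitude elements in a block dominated by a large value lose the most precision, which is the central accuracy trade-off such formats make in exchange for cheap, dense hardware arithmetic.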
no code implementations • ICLR 2019 • Taesik Na, Minah Lee, Burhan A. Mudassar, Priyabrata Saha, Jong Hwan Ko, Saibal Mukhopadhyay
We evaluate our proposed method on various machine learning tasks, including object detection on the MS-COCO 2014 dataset, multiple object tracking on the MOT-Challenge dataset, and human activity classification on the UCF-101 dataset.
no code implementations • 11 Feb 2018 • Jong Hwan Ko, Taesik Na, Mohammad Faisal Amir, Saibal Mukhopadhyay
Lossless or lossy encoding of the feature space is proposed to enhance the maximum input rate supported by the edge platform and/or reduce its energy consumption.
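One simple form of lossy feature-space encoding is uniform quantization of intermediate activations before they leave the edge device. The sketch below assumes an 8-bit uniform scheme for illustration; the paper's actual encoder may differ.

```python
import numpy as np

def encode_features(feat: np.ndarray, bits: int = 8):
    """Lossily encode a feature map as unsigned integer codes plus a
    (min, step) pair. Uniform quantization is an illustrative assumption."""
    fmin, fmax = float(feat.min()), float(feat.max())
    if fmax == fmin:                      # constant map: nothing to encode
        return np.zeros(feat.shape, dtype=np.uint8), fmin, 0.0
    step = (fmax - fmin) / (2 ** bits - 1)
    codes = np.round((feat - fmin) / step).astype(np.uint8)
    return codes, fmin, step

def decode_features(codes: np.ndarray, fmin: float, step: float) -> np.ndarray:
    """Reconstruct an approximate feature map from the integer codes."""
    return codes.astype(np.float32) * step + fmin
```

Transmitting 8-bit codes instead of 32-bit floats cuts the edge-to-host bandwidth by roughly 4x, at the cost of a reconstruction error bounded by half a quantization step per element.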
no code implementations • 12 Oct 2017 • Duckhwan Kim, Taesik Na, Sudhakar Yalamanchili, Saibal Mukhopadhyay
This paper presents NeuroTrainer, an intelligent memory module with in-memory accelerators that forms the building block of a scalable architecture for energy-efficient training of deep neural networks.
Hardware Architecture
1 code implementation • ICLR 2018 • Taesik Na, Jong Hwan Ko, Saibal Mukhopadhyay
Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not against unknown iterative attacks.
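The canonical one-step attack in this setting is the fast gradient sign method (FGSM): perturb the input by a fixed step in the direction of the sign of the loss gradient. A minimal NumPy sketch on a binary logistic-regression loss (the model and parameters are illustrative, not the paper's setup):

```python
import numpy as np

def fgsm_attack(x: np.ndarray, y: float, w: np.ndarray, b: float,
                eps: float = 0.1) -> np.ndarray:
    """One-step FGSM adversarial example for binary logistic regression.
    Moves x by eps along the sign of the cross-entropy loss gradient.
    (Illustrative sketch; the paper trains deep networks, not this model.)"""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)
```

Adversarial training injects such examples into each training batch; because FGSM takes only a single gradient step, a model hardened against it can still be broken by iterative attacks that take many small steps, which is the gap this paper targets.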