no code implementations • 16 Sep 2024 • Joseph Suh, Suhong Moon, Minwoo Kang, David M. Chan
Assessing personality traits using large language models (LLMs) has emerged as an interesting and challenging area of research.
1 code implementation • 2 Sep 2024 • Suhong Moon, Siddharth Jha, Lutfi Eren Erdogan, Sehoon Kim, Woosang Lim, Kurt Keutzer, Amir Gholami
To address those challenges, we present a novel framework for generating synthetic data for tool retrieval applications and an efficient data-driven tool retrieval strategy using small encoder models.
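A minimal sketch of the general idea of retrieving tools with a small encoder model, assuming the sentence-transformers package; the checkpoint name, tool descriptions, and query are placeholders for illustration, not the paper's actual pipeline or data.

```python
# Hedged sketch: encoder-based tool retrieval by cosine similarity.
# The model name and tool catalogue below are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # a small encoder model

tools = {
    "get_weather": "Return the current weather for a given city.",
    "send_email": "Send an email to a recipient with a subject and body.",
    "search_flights": "Search for flights between two airports on a date.",
}

# Pre-compute embeddings for all tool descriptions once.
tool_names = list(tools.keys())
tool_embs = encoder.encode(
    list(tools.values()), convert_to_tensor=True, normalize_embeddings=True
)

def retrieve_tools(query: str, top_k: int = 2):
    """Return the top-k tools most similar to the query."""
    query_emb = encoder.encode(
        query, convert_to_tensor=True, normalize_embeddings=True
    )
    scores = util.cos_sim(query_emb, tool_embs)[0]
    best = scores.topk(top_k).indices.tolist()
    return [tool_names[i] for i in best]

print(retrieve_tools("Will it rain in Paris tomorrow?"))
```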
1 code implementation • 1 Sep 2024 • Lutfi Eren Erdogan, Nicholas Lee, Siddharth Jha, Sehoon Kim, Ryan Tabrizi, Suhong Moon, Coleman Hooper, Gopala Anumanchipalli, Kurt Keutzer, Amir Gholami
Recent large language models (LLMs) have enabled the development of advanced agentic systems that can integrate various tools and APIs to fulfill user queries through function calling.
1 code implementation • 9 Jul 2024 • Suhong Moon, Marwa Abdulhai, Minwoo Kang, Joseph Suh, Widyadewi Soedarmadji, Eran Kohen Behar, David M. Chan
Large language models (LLMs) are trained from vast repositories of text authored by millions of distinct authors, reflecting an enormous diversity of human traits.
1 code implementation • 7 Dec 2023 • Sehoon Kim, Suhong Moon, Ryan Tabrizi, Nicholas Lee, Michael W. Mahoney, Kurt Keutzer, Amir Gholami
To address this, we introduce LLMCompiler, which executes functions in parallel to efficiently orchestrate multiple function calls.
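A toy illustration of the core idea of executing independent function calls in parallel rather than sequentially; this is not the LLMCompiler implementation, and the tools and dependency structure are invented for the example.

```python
# Hedged sketch: independent tool calls are dispatched concurrently;
# a dependent call runs only after its inputs are ready.
import asyncio

async def search_weather(city: str) -> str:
    await asyncio.sleep(1.0)  # stand-in for a slow API call
    return f"weather({city})"

async def search_population(city: str) -> str:
    await asyncio.sleep(1.0)
    return f"population({city})"

async def summarize(a: str, b: str) -> str:
    return f"summary({a}, {b})"

async def main():
    # The two searches are independent, so they run in parallel;
    # the summary depends on both and runs afterwards.
    weather, population = await asyncio.gather(
        search_weather("Berlin"),
        search_population("Berlin"),
    )
    print(await summarize(weather, population))

asyncio.run(main())
```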
1 code implementation • NeurIPS 2023 • Sehoon Kim, Karttikeya Mangalam, Suhong Moon, Jitendra Malik, Michael W. Mahoney, Amir Gholami, Kurt Keutzer
To address this, we propose Big Little Decoder (BiLD), a framework that can improve inference efficiency and latency for a wide range of text generation applications.
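A hedged toy sketch of the small-draft / large-fallback idea behind this kind of framework: a small model decodes cheaply and hands a step to the large model only when its confidence is low. The two "models" here are stand-in functions, and the real framework also involves a rollback policy and actual language models.

```python
# Hedged sketch: small model decodes; fall back to the large model
# whenever the small model's confidence drops below a threshold.
import random

def small_model_step(context):
    # Stand-in for a cheap draft model: returns (token, confidence).
    return random.choice("abcde"), random.uniform(0.3, 1.0)

def large_model_step(context):
    # Stand-in for the expensive, more accurate model.
    return random.choice("abcde"), 1.0

def generate(max_tokens: int = 20, fallback_threshold: float = 0.6) -> str:
    context = []
    for _ in range(max_tokens):
        token, confidence = small_model_step(context)
        if confidence < fallback_threshold:
            # Small model is unsure: let the large model produce this token.
            token, _ = large_model_step(context)
        context.append(token)
    return "".join(context)

print(generate())
```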
no code implementations • 7 Dec 2022 • Seongbeom Park, Suhong Moon, Jinkyu Kim
Text-to-image generation methods produce high-resolution, high-quality images, but they should not produce images whose content is inappropriate from the perspective of commonsense morality.
1 code implementation • 10 Nov 2022 • Yujin Jeong, Seongbeom Park, Suhong Moon, Jinkyu Kim
Here, we propose a model that predicts visual commonsense immorality in a zero-shot manner.
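For intuition, a hedged illustration of zero-shot image classification with an off-the-shelf vision-language model (CLIP); the paper trains its own joint embedding, so the checkpoint, prompts, and image path below are assumptions made purely for the example.

```python
# Hedged sketch: score an image against "moral" vs. "immoral" text prompts
# with CLIP, without any task-specific training.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a morally acceptable scene",
           "a photo of an immoral scene"]
image = Image.open("example.jpg")  # placeholder path

inputs = processor(text=prompts, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)[0]
print({p: float(v) for p, v in zip(prompts, probs)})
```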
no code implementations • 7 Jul 2022 • Suhong Moon, Domas Buracas, Seunghyun Park, Jinkyu Kim, John Canny
It also uses a purely dynamic, local dispersive force (Brownian motion) that outperforms other methods and requires no knowledge of other particles' coordinates.
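A minimal sketch of dispersion via Brownian motion: each particle adds an independent random perturbation to its position, so the cluster spreads without any particle-to-particle communication. The noise scale, step count, and initial distribution are illustrative and not taken from the paper.

```python
# Hedged sketch: Brownian-motion dispersive force, local to each particle.
import numpy as np

rng = np.random.default_rng(0)
num_particles, dim = 50, 2
positions = rng.uniform(-0.1, 0.1, size=(num_particles, dim))  # start clustered

sigma, dt, steps = 0.05, 1.0, 100
for _ in range(steps):
    # Brownian kick: zero-mean Gaussian noise, independent per particle,
    # requiring no knowledge of other particles' coordinates.
    positions += sigma * np.sqrt(dt) * rng.standard_normal((num_particles, dim))

# The swarm disperses over time.
pairwise = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
print("mean pairwise distance:", pairwise.mean())
```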