no code implementations • 28 Jan 2025 • Vivaan Sandwar, Bhav Jain, Rishan Thangaraj, Ishaan Garg, Michael Lam, Kevin Zhu
Debate is a common form of human communication geared toward problem-solving because of its efficiency.
1 code implementation • 18 Dec 2024 • Vijay Goyal, Mustafa Khan, Aprameya Tirupati, Harveer Saini, Michael Lam, Kevin Zhu
We find that Ground Truth prompting results in a 55% performance increase on GSM8K for a distilled Llama 3.1 8B Instruct compared to the same model distilled without prompting.
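A minimal sketch of how ground-truth prompting for distillation data could look, in Python; the prompt wording and helper names are illustrative assumptions, not the authors' exact templates. The idea is that the teacher is shown the gold answer while generating the reasoning that later becomes the student's training target.

# Minimal sketch of "ground truth prompting" for distillation data generation.
# Prompt wording and field names are illustrative assumptions, not the paper's
# exact templates: the teacher sees the gold answer while producing the
# reasoning used to train the student.

def build_teacher_prompt(question: str, gold_answer: str) -> str:
    """Ask the teacher to explain a problem while conditioning on the answer."""
    return (
        f"Question: {question}\n"
        f"The correct final answer is {gold_answer}.\n"
        "Explain, step by step, how to reach this answer."
    )

def build_student_example(question: str, teacher_rationale: str) -> dict:
    """The student sees only the question; the rationale becomes the target."""
    return {"prompt": f"Question: {question}\nAnswer:", "completion": teacher_rationale}

if __name__ == "__main__":
    q = "Natalia sold clips to 48 friends in April and half as many in May. How many clips did she sell altogether?"
    print(build_teacher_prompt(q, "72"))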
no code implementations • 29 Oct 2024 • Shrey Shah, Alex Lin, Scott Lin, Josh Patel, Michael Lam, Kevin Zhu
Accurate damage prediction is crucial for disaster preparedness and response strategies, particularly given the frequent earthquakes in Turkey.
no code implementations • 25 Oct 2024 • Ray Li, Tanishka Bagade, Kevin Martinez, Flora Yasmin, Grant Ayala, Michael Lam, Kevin Zhu
Large language models (LLMs) have achieved a degree of success in generating coherent and contextually relevant text, yet they remain prone to a significant challenge known as hallucination: producing information that is not substantiated by the input or external knowledge.
1 code implementation • 23 Oct 2024 • William Cagas, Chan Ko, Blake Hsiao, Shryuk Grandhi, Rishi Bhattacharya, Kevin Zhu, Michael Lam
The proliferation of machine learning models in diverse clinical applications has led to a growing need for high-fidelity, medical image training data.
no code implementations • 1 Sep 2024 • Patricia Dao, Jashmitha Sappa, Saanvi Terala, Tyson Wong, Michael Lam, Kevin Zhu
Traditional crime prediction techniques are slow and inefficient at generating predictions as crime rises rapidly.
no code implementations • 27 Aug 2024 • Samir Kassam, Angelo Markham, Katie Vo, Yashas Revanakara, Michael Lam, Kevin Zhu
Preoperative Magnetic Resonance Imaging (MRI) images are often ineffective during surgery due to factors such as brain shift, which alters the position of brain structures and tumors.
no code implementations • 26 Aug 2024 • Elysia Shi, Adithri Manda, London Chowdhury, Runeema Arun, Kevin Zhu, Michael Lam
When used to detect signs of depressive disorder, AI models habitually draw premature conclusions.
1 code implementation • ICLR 2020 • Hao Li, Pratik Chaudhari, Hao Yang, Michael Lam, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
Our findings challenge common fine-tuning practices and encourage deep learning practitioners to rethink the hyperparameters used for fine-tuning.
no code implementations • 2 Aug 2019 • Cuong V. Nguyen, Alessandro Achille, Michael Lam, Tal Hassner, Vijay Mahadevan, Stefano Soatto
As an application, we apply our procedure to study two properties of a task sequence: (1) total complexity and (2) sequential heterogeneity.
1 code implementation • ICCV 2019 • Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, Pietro Perona
We demonstrate that this embedding is capable of predicting task similarities that match our intuition about semantic and taxonomic relations between different visual tasks (e.g., tasks based on classifying different types of plants are similar). We also demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a new task.
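A minimal sketch of how such an embedding can drive feature-extractor selection, assuming task embeddings are plain vectors and cosine distance stands in for the paper's own task distance: pick the expert whose source-task embedding is closest to the new task's embedding.

# Minimal sketch: select a pre-trained feature extractor by nearest task
# embedding. The distance choice (cosine) and the toy vectors are assumptions
# for illustration; the paper defines its own embedding and distance.

import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_expert(new_task_embedding: np.ndarray, expert_embeddings: dict) -> str:
    """Return the expert whose source-task embedding is closest to the new task."""
    return min(expert_embeddings,
               key=lambda name: cosine_distance(new_task_embedding, expert_embeddings[name]))

if __name__ == "__main__":
    experts = {"plants": np.array([0.9, 0.1, 0.0]), "vehicles": np.array([0.1, 0.8, 0.3])}
    print(select_expert(np.array([0.85, 0.2, 0.05]), experts))  # -> "plants"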
1 code implementation • CVPR 2017 • Behrooz Mahasseni, Michael Lam, Sinisa Todorovic
The summarizer is an autoencoder long short-term memory network (LSTM) that first selects video frames and then decodes the resulting summary to reconstruct the input video.
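A minimal PyTorch sketch of such a summarizer, assuming soft (score-weighted) frame selection and illustrative layer sizes rather than the paper's exact architecture: one LSTM scores frames for selection, and a second LSTM decodes the selected frames to reconstruct the input features.

# Minimal sketch of an LSTM-autoencoder summarizer: an encoder LSTM scores
# frames for selection, and a decoder LSTM reconstructs the video features
# from the score-weighted frames. Sizes and the soft-selection trick are
# illustrative assumptions.

import torch
import torch.nn as nn

class LSTMSummarizer(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.selector = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)                # per-frame importance
        self.decoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.reconstruct = nn.Linear(hidden, feat_dim)   # back to frame features

    def forward(self, frames: torch.Tensor):
        # frames: (batch, time, feat_dim) deep features of video frames
        h, _ = self.selector(frames)
        scores = torch.sigmoid(self.score(h))            # (batch, time, 1)
        selected = frames * scores                       # soft frame selection
        d, _ = self.decoder(selected)
        return self.reconstruct(d), scores               # reconstruction + scores

if __name__ == "__main__":
    x = torch.randn(2, 30, 512)
    recon, s = LSTMSummarizer()(x)
    print(recon.shape, s.shape)  # (2, 30, 512) and (2, 30, 1)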
no code implementations • CVPR 2017 • Michael Lam, Behrooz Mahasseni, Sinisa Todorovic
This motivates us to formulate our problem as a sequential search for informative parts over a deep feature map produced by a deep Convolutional Neural Network (CNN).
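A minimal sketch of one way such a sequential search could proceed, greedily picking the highest-scoring cells of a feature map; the informativeness score (L2 norm of the feature vector) and the greedy rule are assumptions for illustration, not the paper's learned search policy.

# Minimal sketch: greedy sequential search for informative parts over a CNN
# feature map. At each step, pick the unvisited cell with the highest
# activation energy; the scoring rule and grid granularity are assumptions.

import numpy as np

def sequential_part_search(feature_map: np.ndarray, num_parts: int = 3):
    """feature_map: (H, W, C) activations; returns a list of (row, col) parts."""
    scores = np.linalg.norm(feature_map, axis=-1)   # informativeness per cell
    visited = np.zeros(scores.shape, dtype=bool)
    parts = []
    for _ in range(num_parts):
        masked = np.where(visited, -np.inf, scores)
        idx = np.unravel_index(np.argmax(masked), scores.shape)
        parts.append(idx)
        visited[idx] = True
    return parts

if __name__ == "__main__":
    fmap = np.random.rand(7, 7, 256)                # e.g., a conv5 feature map
    print(sequential_part_search(fmap))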
no code implementations • CVPR 2015 • Michael Lam, Janardhan Rao Doppa, Sinisa Todorovic, Thomas G. Dietterich
The mainstream approach to structured prediction problems in computer vision is to learn an energy function such that the solution minimizes that function.
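In LaTeX form, the inference problem this describes (notation assumed for illustration, not taken from the paper): the predicted structured output is the minimizer of a learned energy over candidate outputs,

% Energy-based structured prediction: inference minimizes the learned energy.
\hat{y}(x) = \arg\min_{y \in \mathcal{Y}} E(x, y; \theta)

where x is the input (e.g., an image), \mathcal{Y} is the space of structured outputs (e.g., segmentations), and E is the energy function with learned parameters \theta.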