no code implementations • 20 Oct 2023 • Xinyu Hu, Pengfei Tang, Simiao Zuo, Zihan Wang, Bowen Song, Qiang Lou, Jian Jiao, Denis Charles
In Evoke, there are two instances of the same LLM: one acts as a reviewer (LLM-Reviewer) that scores the current prompt; the other acts as an author (LLM-Author) that edits the prompt based on the edit history and the reviewer's feedback.
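A minimal sketch of the author-reviewer loop described above. The `call_llm` helper is a hypothetical stand-in for any chat-completion client, and the prompt templates are illustrative, not the paper's exact wording:

```python
# Sketch of an Evoke-style author-reviewer loop. `call_llm` is a
# hypothetical placeholder for an LLM API call; templates are illustrative.

def call_llm(instruction: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def evoke_optimize(initial_prompt: str, iterations: int = 5) -> str:
    prompt, history = initial_prompt, []
    for _ in range(iterations):
        # LLM-Reviewer: score the current prompt and explain its weaknesses.
        feedback = call_llm(f"Score this prompt from 1-10 and critique it:\n{prompt}")
        # LLM-Author: revise the prompt using the feedback and the edit history.
        prompt = call_llm(
            "Rewrite the prompt below to address the feedback.\n"
            f"Edit history: {history}\nFeedback: {feedback}\nPrompt: {prompt}"
        )
        history.append((prompt, feedback))
    return prompt
```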
1 code implementation • 13 Jul 2023 • Hong Sun, Xue Li, Yinchuan Xu, Youkow Homma, Qi Cao, Min Wu, Jian Jiao, Denis Charles
This paper presents AutoHint, a novel framework for automatic prompt engineering and optimization for Large Language Models (LLMs).
no code implementations • 30 Jun 2023 • Simiao Zuo, Pengfei Tang, Xinyu Hu, Qiang Lou, Jian Jiao, Denis Charles
For model-free enhancement, we collect unlabeled web queries to augment domain knowledge, and we collect web search results to enrich the information in ad queries.
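An illustrative sketch of the two model-free steps just described. The function names, the `[SEP]` delimiter, and the data shapes are assumptions for illustration, not the paper's implementation:

```python
# Illustrative sketch: unlabeled web queries broaden the training corpus,
# and web search snippets are appended to ad queries as extra context.

def enrich_ad_query(ad_query: str, search_snippets: list[str], k: int = 3) -> str:
    # Concatenate the top-k search result snippets so the model sees
    # richer context for the (often short) ad query.
    context = " ".join(search_snippets[:k])
    return f"{ad_query} [SEP] {context}"

def build_training_corpus(labeled_ads: list[str], unlabeled_web: list[str]) -> list[str]:
    # Unlabeled web queries augment domain knowledge (e.g., for pretraining
    # or self-training); here we simply pool them with the labeled data.
    return labeled_ads + unlabeled_web
```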
1 code implementation • 15 Dec 2022 • Simiao Zuo, Xiaodong Liu, Jian Jiao, Denis Charles, Eren Manavoglu, Tuo Zhao, Jianfeng Gao
Specifically, we place an SSM in the bottom layer of SPADE, and we employ efficient local attention methods for the other layers.
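A schematic PyTorch sketch of that layer layout: a state space model (SSM) at the bottom for global context, with windowed local attention above. The SSM here is a toy diagonal linear recurrence, not the paper's parameterization, and the class names are invented for illustration:

```python
# Toy SPADE-like stack: an SSM bottom layer plus local-attention layers.
import torch
import torch.nn as nn

class ToySSM(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.decay = nn.Parameter(torch.rand(d_model))  # per-channel state decay

    def forward(self, x):  # x: (batch, seq, d_model)
        h, out = torch.zeros_like(x[:, 0]), []
        for t in range(x.size(1)):
            h = torch.sigmoid(self.decay) * h + x[:, t]  # simple linear scan
            out.append(h)
        return torch.stack(out, dim=1)

class LocalAttention(nn.Module):
    def __init__(self, d_model: int, window: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.window = window

    def forward(self, x):
        # Band mask restricts each token to a local window of neighbors.
        idx = torch.arange(x.size(1))
        mask = (idx[None, :] - idx[:, None]).abs() > self.window
        return self.attn(x, x, x, attn_mask=mask)[0]

class SpadeLikeStack(nn.Module):
    def __init__(self, d_model: int = 128, n_local_layers: int = 3):
        super().__init__()
        self.bottom = ToySSM(d_model)  # global, attention-free bottom layer
        self.local = nn.ModuleList(LocalAttention(d_model) for _ in range(n_local_layers))

    def forward(self, x):
        x = self.bottom(x)
        for layer in self.local:
            x = x + layer(x)  # residual local-attention layers
        return x
```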
no code implementations • 12 Sep 2022 • Xue Li, Wei Shen, Denis Charles
In this paper, we propose TEDL, a two-stage learning approach that quantifies uncertainty for deep learning models in classification tasks. It is inspired by our findings from experiments with Evidential Deep Learning (EDL), a recently proposed uncertainty quantification approach based on Dempster-Shafer theory.
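For background, a minimal sketch of the EDL quantities the abstract refers to (following Sensoy et al., 2018): the network emits non-negative per-class "evidence" that parameterizes a Dirichlet distribution, whose spread yields an uncertainty score. TEDL's two-stage training itself is not reproduced here:

```python
# EDL-style uncertainty from per-class evidence (Sensoy et al., 2018).
import torch

def edl_uncertainty(evidence: torch.Tensor):
    """evidence: (batch, num_classes), non-negative (e.g., softplus output)."""
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # total evidence S
    probs = alpha / strength                    # expected class probabilities
    k = evidence.size(-1)
    uncertainty = k / strength.squeeze(-1)      # u = K / S, in (0, 1]
    return probs, uncertainty
```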
1 code implementation • 27 Oct 2021 • Joseph J. Pfeiffer III, Denis Charles, Davis Gilton, Young Hun Jung, Mehul Parsana, Erik Anderson
We introduce a secure multi-party compute (MPC) protocol that utilizes "helper" parties to train models, so that once data leaves the browser, no downstream system can individually construct a complete picture of the user activity.
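For intuition, a toy version of additive secret sharing, the standard building block behind MPC protocols of this kind: the browser splits each value into random shares, one per helper, so no single party can reconstruct the user's data alone. The paper's actual protocol is more involved; this only shows the core primitive:

```python
# Toy additive secret sharing over a prime field.
import secrets

MOD = 2**61 - 1  # arithmetic shares modulo a prime

def share(value: int, n_helpers: int) -> list[int]:
    shares = [secrets.randbelow(MOD) for _ in range(n_helpers - 1)]
    shares.append((value - sum(shares)) % MOD)  # shares sum to value mod MOD
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MOD

# Additive shares support secure aggregation: summing each helper's shares
# across users yields shares of the total without exposing individual values.
assert reconstruct(share(42, 3)) == 42
```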
no code implementations • 17 Oct 2020 • Shuxi Zeng, Murat Ali Bayir, Joseph J. Pfeiffer III, Denis Charles, Emre Kiciman
We describe a causal transfer random forest (CTRF) that combines existing training data with a small amount of data from a randomized experiment to train a model that is robust to feature shifts and therefore transfers to a new targeting distribution.
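A simplified illustration of combining a large observational dataset with a small randomized-experiment dataset, in the spirit of the CTRF setup. The paper grows tree splits informed by the randomized data; the sketch below shows only the cruder baseline of pooling with sample weights, using scikit-learn:

```python
# Baseline sketch: pool observational and randomized data, upweighting the
# randomized samples, whose feature-label relationships are stable under
# distribution shift. Not the paper's split-selection procedure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_pooled_forest(X_obs, y_obs, X_rand, y_rand, rand_weight: float = 5.0):
    X = np.vstack([X_obs, X_rand])
    y = np.concatenate([y_obs, y_rand])
    w = np.concatenate([np.ones(len(y_obs)), rand_weight * np.ones(len(y_rand))])
    return RandomForestClassifier(n_estimators=100).fit(X, y, sample_weight=w)
```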
no code implementations • 15 Oct 2020 • Razieh Nabi, Joel Pfeiffer, Murat Ali Bayir, Denis Charles, Emre Kiciman
This assumption is violated in settings where units are related through a network of dependencies.
no code implementations • 18 Mar 2020 • Aniket Anand Deshmukh, Abhimanu Kumar, Levi Boyles, Denis Charles, Eren Manavoglu, Urun Dogan
In the usual self-supervision setting, we derive implicit labels for a secondary task from the training data itself.
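A classic example of such implicit labels is the rotation pretext task (Gidaris et al., 2018), sketched below as background; it is not necessarily the secondary task used in this paper. The labels are the rotations we apply, so no human annotation is needed:

```python
# Rotation pretext task: the applied rotation is the implicit label.
import numpy as np

def make_rotation_task(images: np.ndarray):
    """images: (n, h, w, c), assumed square (h == w). Returns rotated
    images and implicit labels in {0, 1, 2, 3}."""
    xs, ys = [], []
    for img in images:
        k = np.random.randint(4)        # rotation in multiples of 90 degrees
        xs.append(np.rot90(img, k=k))   # the transformed input
        ys.append(k)                    # the implicit (self-supervised) label
    return np.stack(xs), np.array(ys)
```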
no code implementations • 18 Feb 2020 • Abhimanu Kumar, Aniket Anand Deshmukh, Urun Dogan, Denis Charles, Eren Manavoglu
We show a faster convergence rate with valid transformations for convex as well as a certain family of non-convex objectives, along with a proof of convergence to the original set of optima.
1 code implementation • 12 Sep 2018 • Rishabh Iyer, Nimit Acharya, Tanuja Bompada, Denis Charles, Eren Manavoglu
Through extensive experiments, we demonstrate the utility of our OL framework, how the two OL schemes relate to each other, and how they trade off between new and historical data.
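A minimal sketch of one way an online-learning update can trade off new against historical data: exponential forgetting on the accumulated weights. The forgetting factor is the illustrative knob here; the paper's two OL schemes differ in how they weight history, which this toy update only gestures at:

```python
# One online logistic-regression step with decayed history (illustrative).
import numpy as np

def online_logistic_step(w, x, y, lr: float = 0.1, forget: float = 0.99):
    """SGD step on a new example (x, y in {0, 1}) with forgetting.

    forget < 1 shrinks the old weights, so recent examples dominate;
    forget = 1 weights all history equally.
    """
    p = 1.0 / (1.0 + np.exp(-w @ x))   # predicted probability
    grad = (p - y) * x                 # logistic loss gradient
    return forget * w - lr * grad
```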
no code implementations • 18 Apr 2018 • John Moore, Joel Pfeiffer, Kai Wei, Rishabh Iyer, Denis Charles, Ran Gilad-Bachrach, Levi Boyles, Eren Manavoglu
In real world systems, the predictions of deployed Machine Learned models affect the training data available to build subsequent models.
no code implementations • 16 Sep 2014 • Patrice Simard, David Chickering, Aparna Lakshmiratan, Denis Charles, Leon Bottou, Carlos Garcia Jurado Suarez, David Grangier, Saleema Amershi, Johan Verwey, Jina Suh
Based on the machine's output, the teacher can revise the definition of the task or make it more precise.