Search Results for author: Mayukh Das

Found 14 papers, 3 papers with code

Quantifying Bias from Decoding Techniques in Natural Language Generation

1 code implementation COLING 2022 Mayukh Das, Wolf-Tilo Balke

To this end, we also analyze the trade-off between bias scores and human-annotated generation quality throughout the decoder space.

Text Generation
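
The "decoder space" above refers to the hyperparameters of decoding techniques such as greedy search, top-k, nucleus, and temperature sampling. As a hedged illustration of one such technique, here is a minimal top-k sampler with temperature; the toy logits and parameter values are assumptions for the example, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_sample(logits, k=3, temperature=1.0, rng=rng):
    """Sample a token id with top-k + temperature decoding -- one point
    in the space of decoding hyperparameters (illustrative only)."""
    logits = np.asarray(logits, dtype=float) / temperature
    top = np.argsort(logits)[-k:]                  # keep the k highest-scoring tokens
    probs = np.exp(logits[top] - logits[top].max())  # softmax over the kept tokens
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

vocab_logits = [2.0, 0.5, -1.0, 1.5, 0.0]
token = top_k_sample(vocab_logits, k=2, temperature=0.7)
print(token)  # one of the two highest-logit ids: 0 or 3
```

Varying k and temperature moves the sampler through the decoder space the paper analyzes for bias.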

COIN: Chance-Constrained Imitation Learning for Uncertainty-aware Adaptive Resource Oversubscription Policy

no code implementations 13 Jan 2024 Lu Wang, Mayukh Das, Fangkai Yang, Chao Duo, Bo Qiao, Hang Dong, Si Qin, Chetan Bansal, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang

We address the challenge of learning safe and robust decision policies in the presence of uncertainty, in the context of the real scientific problem of adaptive resource oversubscription, to enhance resource efficiency while ensuring safety against the risk of resource congestion.

Imitation Learning
Management
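
A chance constraint bounds the probability of an unsafe outcome (here, resource congestion) rather than forbidding it outright. The Monte Carlo sketch below picks the most aggressive oversubscription ratio whose estimated congestion probability stays under a threshold; the utilization distribution, capacity, and candidate grid are illustrative assumptions, not the COIN method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def violation_probability(oversub_ratio, usage_samples, capacity=1.0):
    """Monte Carlo estimate of P(demand > capacity) when resources
    are oversubscribed by `oversub_ratio` (illustrative only)."""
    demand = usage_samples * oversub_ratio
    return float(np.mean(demand > capacity))

def safest_aggressive_ratio(usage_samples, epsilon=0.05, candidates=None):
    """Pick the largest oversubscription ratio whose estimated
    congestion probability stays below the chance constraint epsilon."""
    if candidates is None:
        candidates = np.linspace(1.0, 3.0, 21)
    feasible = [r for r in candidates
                if violation_probability(r, usage_samples) <= epsilon]
    return max(feasible) if feasible else 1.0

# Simulated historical utilization of a resource pool (fraction of capacity).
usage = rng.beta(2, 8, size=10_000)  # mostly low utilization, long tail
ratio = safest_aggressive_ratio(usage, epsilon=0.05)
print(f"chosen oversubscription ratio: {ratio:.2f}")
```

The chance constraint (violation probability at most epsilon) is what lets the policy be aggressive about efficiency while keeping congestion risk bounded.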

TreeDNN: A Deep Container Network

no code implementations7 Dec 2022 Brijraj Singh, Swati Gupta, Mayukh Das, Praveen Doreswamy Naidu, Sharan Kumar Allur

TreeDNN helps in training the model with multiple datasets simultaneously, where each branch of the tree may need a different training dataset.

Multi-Task Learning
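
The shared-trunk, per-branch training described above can be sketched as a forward pass in which each dataset's batch flows through common parameters before its own head; the shapes, two-task setup, and random weights below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

# A shared "trunk" followed by task-specific "branches", loosely in the
# spirit of a tree-structured network (illustrative shapes and tasks).
trunk_w = rng.normal(size=(16, 8)) * 0.1          # shared parameters
branch_w = {
    "task_a": rng.normal(size=(8, 3)) * 0.1,      # 3-class head
    "task_b": rng.normal(size=(8, 5)) * 0.1,      # 5-class head
}

def forward(x, task):
    """Route an input through the shared trunk, then through the branch
    belonging to the dataset/task it came from."""
    shared = relu(x @ trunk_w)
    return shared @ branch_w[task]

# Each branch can be trained on its own dataset while sharing the trunk.
batch_a = rng.normal(size=(4, 16))
batch_b = rng.normal(size=(2, 16))
print(forward(batch_a, "task_a").shape)  # (4, 3)
print(forward(batch_b, "task_b").shape)  # (2, 5)
```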

AutoCoMet: Smart Neural Architecture Search via Co-Regulated Shaping Reinforcement

no code implementations29 Mar 2022 Mayukh Das, Brijraj Singh, Harsh Kanti Chheda, Pawan Sharma, Pradeep NS

Designing suitable deep model architectures for AI-driven on-device apps and features, on par with rapidly evolving mobile hardware and increasingly complex target scenarios, is a difficult task.

Neural Architecture Search

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

2 code implementations6 Dec 2021 Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.

Data Augmentation
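
NL-Augmenter collects many such task-sensitive transformations. As a hedged sketch (these helpers are illustrative stand-ins, not NL-Augmenter's actual API), here are two simple perturbations of the kind used to probe model robustness:

```python
import random

def word_dropout(sentence, p=0.2, rng=None):
    """Randomly drop words -- a simple augmentation of the kind such
    frameworks collect (illustrative, not NL-Augmenter's API)."""
    rng = rng or random.Random(0)
    words = sentence.split()
    kept = [w for w in words if rng.random() > p]
    return " ".join(kept) if kept else words[0]

def char_swap(sentence, rng=None):
    """Swap two adjacent characters in one random word (typo-style noise)."""
    rng = rng or random.Random(0)
    words = sentence.split()
    i = rng.randrange(len(words))
    w = words[i]
    if len(w) > 1:
        j = rng.randrange(len(w) - 1)
        words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

original = "data augmentation improves robustness evaluation"
for transform in (word_dropout, char_swap):
    print(transform(original))
```

Running a model over such perturbed inputs, and comparing against its clean-input behavior, is one way the framework supports robustness evaluation.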

User Friendly Automatic Construction of Background Knowledge: Mode Construction from ER Diagrams

1 code implementation16 Dec 2019 Alexander L. Hayes, Mayukh Das, Phillip Odom, Sriraam Natarajan

One of the key advantages of Inductive Logic Programming systems is the ability of the domain experts to provide background knowledge as modes that allow for efficient search through the space of hypotheses.

Inductive Logic Programming

One-Shot Induction of Generalized Logical Concepts via Human Guidance

no code implementations15 Dec 2019 Mayukh Das, Nandini Ramanan, Janardhan Rao Doppa, Sriraam Natarajan

First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and its generalization.

Inductive Logic Programming
Valid

Knowledge-augmented Column Networks: Guiding Deep Learning with Advice

no code implementations31 May 2019 Mayukh Das, Devendra Singh Dhami, Yang Yu, Gautam Kunapuli, Sriraam Natarajan

Recently, deep models have had considerable success in several tasks, especially with low-level representations.

BIG-bench Machine Learning

Human-Guided Column Networks: Augmenting Deep Learning with Advice

no code implementations ICLR 2019 Mayukh Das, Yang Yu, Devendra Singh Dhami, Gautam Kunapuli, Sriraam Natarajan

While extremely successful in several applications, especially with low-level representations, most deep models still face open challenges from sparse, noisy samples and structured domains (with multiple objects and interactions).

Human-Guided Learning of Column Networks: Augmenting Deep Learning with Advice

no code implementations15 Apr 2019 Mayukh Das, Yang Yu, Devendra Singh Dhami, Gautam Kunapuli, Sriraam Natarajan

Recently, deep models have been successfully applied in several applications, especially with low-level representations.

Preference-Guided Planning: An Active Elicitation Approach

no code implementations 19 Apr 2018 Mayukh Das, Phillip Odom, Md. Rakibul Islam, Janardhan Rao Doppa, Dan Roth, Sriraam Natarajan

Planning with preferences has been employed extensively to quickly generate high-quality plans.
