no code implementations • 7 Feb 2025 • Kush R. Varshney
We point out the similarities and differences between oral composition and LLM generation, and comment on the implications for society and AI policy.
no code implementations • 15 Jan 2025 • Kush R. Varshney, Zahra Ashktorab, Djallel Bouneffouf, Matthew Riemer, Justin D. Weisz
Much of the research on AI alignment seeks to align large language models and other foundation models to the context-less and generic values of helpfulness, harmlessness, and honesty.
1 code implementation • 10 Dec 2024 • Inkit Padhi, Manish Nagireddy, Giandomenico Cornacchia, Subhajit Chaudhury, Tejaswini Pedapati, Pierre Dognin, Keerthiram Murugesan, Erik Miehling, Martín Santillán Cooper, Kieran Fraser, Giulio Zizzo, Muhammad Zaid Hameed, Mark Purcell, Michael Desmond, Qian Pan, Zahra Ashktorab, Inge Vejsbjerg, Elizabeth M. Daly, Michael Hind, Werner Geyer, Ambrish Rawat, Kush R. Varshney, Prasanna Sattigeri
We introduce the Granite Guardian models, a suite of safeguards designed to provide risk detection for prompts and responses, enabling safe and responsible use in combination with any large language model (LLM).
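A guardian model of this kind is typically run as a lightweight classifier over the prompt and the candidate response, before and after the main LLM call. The sketch below illustrates that wrapping pattern with a generic Hugging Face text-classification pipeline; the model path, "RISKY" label scheme, and `llm` callable are hypothetical placeholders, not the actual Granite Guardian interface.

```python
# Minimal sketch of guardian-style screening around an LLM call.
# The model path and "RISKY" label are hypothetical placeholders,
# not the real Granite Guardian interface.
from transformers import pipeline

risk_detector = pipeline(
    "text-classification",
    model="path/to/risk-detector",  # hypothetical safeguard model
)

def safe_generate(prompt: str, llm) -> str:
    """Screen the prompt, generate with the LLM, then screen the response."""
    if risk_detector(prompt)[0]["label"] == "RISKY":
        return "Request declined by safety filter."
    response = llm(prompt)
    if risk_detector(response)[0]["label"] == "RISKY":
        return "Response withheld by safety filter."
    return response
```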
no code implementations • 20 Oct 2024 • Hangzhi Guo, Pranav Narayanan Venkit, Eunchae Jang, Mukund Srinath, Wenbo Zhang, Bonam Mingole, Vipul Gupta, Kush R. Varshney, S. Shyam Sundar, Amulya Yadav
The widespread adoption of large language models (LLMs) and generative AI (GenAI) tools across diverse applications has amplified the importance of addressing societal biases inherent within these technologies.
no code implementations • 23 Sep 2024 • Ambrish Rawat, Stefan Schoepf, Giulio Zizzo, Giandomenico Cornacchia, Muhammad Zaid Hameed, Kieran Fraser, Erik Miehling, Beat Buesser, Elizabeth M. Daly, Mark Purcell, Prasanna Sattigeri, Pin-Yu Chen, Kush R. Varshney
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge and put a focus on adversarial threats in natural language and multi-modal systems.
no code implementations • 19 Aug 2024 • Inkit Padhi, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Manish Nagireddy, Pierre Dognin, Kush R. Varshney
Aligning large language models (LLMs) to value systems has emerged as a significant area of research within the fields of AI and NLP.
no code implementations • 19 Mar 2024 • Pierre Dognin, Jesus Rios, Ronny Luss, Inkit Padhi, Matthew D Riemer, Miao Liu, Prasanna Sattigeri, Manish Nagireddy, Kush R. Varshney, Djallel Bouneffouf
Developing value-aligned AI agents is a complex undertaking and an ongoing challenge in the field of AI.
no code implementations • 15 Mar 2024 • Conor M. Artman, Aditya Mate, Ezinne Nwankwo, Aliza Heching, Tsuyoshi Idé, Jiří Navrátil, Karthikeyan Shanmugam, Wei Sun, Kush R. Varshney, Lauri Goldkind, Gidi Kroch, Jaclyn Sawyer, Ian Watson
We developed a common algorithmic solution addressing the problem of resource-constrained outreach encountered by social change organizations with different missions and operations: Breaking Ground, an organization that helps individuals experiencing homelessness in New York transition to permanent housing, and Leket, the national food bank of Israel, which rescues food from farms and elsewhere to feed the hungry.
no code implementations • 9 Mar 2024 • Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Kirushikesh DB, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Nishtha Madaan, Sameep Mehta, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspoon, Marcel Zalmanovici
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations.
no code implementations • 13 Feb 2024 • Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Yuguang Yao, Chris Yuhao Liu, Xiaojun Xu, Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, Yang Liu
We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning.
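A common baseline in this line of work combines gradient ascent on a "forget" set with ordinary training on a "retain" set, so the model loses the targeted knowledge without collapsing elsewhere. The PyTorch sketch below shows one such update step under the assumption of a Hugging Face-style causal LM whose batches include labels so that the forward pass returns a `.loss`; it is a simplified stand-in, not the specific methods surveyed in the paper.

```python
# Simplified LLM unlearning step: ascend the loss on forget-set data while
# descending it on retain-set data (a common baseline, not any one paper's recipe).
# Batches are dicts with input_ids, attention_mask, and labels.
import torch

def unlearning_step(model, forget_batch, retain_batch, optimizer, alpha=1.0):
    optimizer.zero_grad()
    forget_loss = model(**forget_batch).loss   # loss on data to be forgotten
    retain_loss = model(**retain_batch).loss   # loss on data to be preserved
    loss = -alpha * forget_loss + retain_loss  # ascend on forget, descend on retain
    loss.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```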
no code implementations • 25 Jan 2024 • William Kidder, Jason D'Cruz, Kush R. Varshney
We propose that the method of empathy has special significance for honoring the right to be an exception that is distinct from the value of predictive accuracy, at which LLMs excel.
no code implementations • 10 Sep 2023 • Kush R. Varshney
Prior work has explicated the coloniality of artificial intelligence (AI) development and deployment through mechanisms such as extractivism, automation, sociological essentialism, surveillance, and containment.
no code implementations • 22 May 2023 • Ioana Baldini, Chhavi Yadav, Manish Nagireddy, Payel Das, Kush R. Varshney
We demonstrate that the newly created dataset BBNLI-next is more challenging than BBNLI: on average, BBNLI-next reduces the accuracy of state-of-the-art NLI models from 95.3%, as observed on BBNLI, to a strikingly low 57.5%.
no code implementations • 2 Apr 2023 • Baihan Lin, Djallel Bouneffouf, Guillermo Cecchi, Kush R. Varshney
By incorporating psychotherapy and reinforcement learning techniques, the framework enables AI chatbots to learn and adapt to human preferences and values in a safe and ethical way, contributing to the development of a more human-centric and responsible AI.
no code implementations • 17 Feb 2023 • Manish Nagireddy, Moninder Singh, Samuel C. Hoffman, Evaline Ju, Karthikeyan Natesan Ramamurthy, Kush R. Varshney
In this paper, focusing specifically on compositions of functions arising from the different pillars, we aim to reduce this gap, develop new insights for trustworthy ML, and answer questions such as the following.
no code implementations • 13 Dec 2022 • Prasanna Sattigeri, Soumya Ghosh, Inkit Padhi, Pierre Dognin, Kush R. Varshney
The dropping of training points is done in principle, but in practice does not require the model to be refit.
no code implementations • 2 Nov 2022 • Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh
Interpretable and explainable machine learning has seen a recent surge of interest.
no code implementations • 13 Oct 2022 • Sourya Basu, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, Payel Das
We also provide a novel group-theoretic definition for fairness in NLG.
1 code implementation • 22 Aug 2022 • Zhenhuan Yang, Yan Lok Ko, Kush R. Varshney, Yiming Ying
We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.
no code implementations • 22 Jan 2022 • Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying
We further provide its utility analysis in the nonconvex-strongly-concave setting, which is the first-ever-known result in terms of the primal population risk.
no code implementations • NeurIPS 2021 • Isha Puri, Amit Dhurandhar, Tejaswini Pedapati, Karthikeyan Shanmugam, Dennis Wei, Kush R. Varshney
We experiment on nonlinear synthetic functions and are able to accurately model as well as estimate feature attributions and even higher order terms in some cases, which is a testament to the representational power as well as interpretability of such architectures.
no code implementations • 20 Oct 2021 • Q. Vera Liao, Kush R. Varshney
In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI works that take human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI.
no code implementations • 29 Sep 2021 • Moninder Singh, Gevorg Ghalachyan, Kush R. Varshney, Reginald E. Bryant
To ensure trust in AI models, it is becoming increasingly apparent that evaluation of models must be extended beyond traditional performance metrics, like accuracy, to other dimensions, such as fairness, explainability, adversarial robustness, and distribution shift.
no code implementations • 24 Sep 2021 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations.
no code implementations • 18 Aug 2021 • Kahini Wadhawan, Payel Das, Barbara A. Han, Ilya R. Fischhoff, Adrian C. Castellanos, Arvind Varsani, Kush R. Varshney
Current methods for viral discovery target evolutionarily conserved proteins that accurately identify virus families but remain unable to distinguish the zoonotic potential of newly discovered viruses.
2 code implementations • Findings (ACL) 2021 • Diego Garcia-Olano, Yasumasa Onoe, Ioana Baldini, Joydeep Ghosh, Byron C. Wallace, Kush R. Varshney
Pre-trained language models induce dense entity representations that offer strong performance on entity-centric NLP tasks, but such representations are not immediately interpretable.
1 code implementation • 2 Jun 2021 • Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jiri Navratil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang
In this paper, we describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
no code implementations • 29 Jan 2021 • Tim Draws, Zoltán Szlávik, Benjamin Timmermans, Nava Tintarev, Kush R. Varshney, Michael Hind
Systems aiming to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them.
no code implementations • 1 Jan 2021 • Lu Cheng, Kush R. Varshney, Huan Liu
In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that aims to examine the subjects of AI indifference and the need for socially responsible AI algorithms, define the objectives, and introduce the means by which we may achieve these objectives.
no code implementations • 22 Dec 2020 • Kartik Ahuja, Amit Dhurandhar, Kush R. Varshney
Non-convex optimization problems are challenging to solve; the success and computational expense of a gradient descent algorithm or variant depend heavily on the initialization strategy.
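A standard way to cope with this sensitivity is multi-start: run gradient descent from several random initial points and keep the best local minimum found. The sketch below shows that baseline on a toy non-convex function; the paper proposes a more principled initialization strategy than random restarts.

```python
# Multi-start gradient descent on a non-convex 1-D objective: the solution
# found depends strongly on where each run starts. Baseline illustration only.
import numpy as np

f = lambda x: np.sin(3 * x) + 0.1 * x ** 2      # non-convex objective
grad = lambda x: 3 * np.cos(3 * x) + 0.2 * x    # its derivative

def gradient_descent(x0, lr=0.01, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

rng = np.random.default_rng(0)
starts = rng.uniform(-5, 5, size=10)
solutions = [gradient_descent(x0) for x0 in starts]
best = min(solutions, key=f)
print("best local minimum:", best, "value:", f(best))
```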
3 code implementations • ICLR 2021 • Kartik Ahuja, Jun Wang, Amit Dhurandhar, Karthikeyan Shanmugam, Kush R. Varshney
Recently, invariant risk minimization (IRM) was proposed as a promising solution to address out-of-distribution (OOD) generalization.
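IRM augments the usual empirical risk with a penalty that pushes the model to be simultaneously optimal in every training environment. A minimal IRMv1-style objective, where the penalty is the squared gradient of each environment's risk with respect to a dummy scale on the logits, can be sketched in PyTorch as follows; this illustrates the IRM objective itself, not the sample-complexity analysis in the paper.

```python
# IRMv1-style objective: per-environment risk plus the squared gradient of
# that risk with respect to a fixed dummy scale multiplying the logits.
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, env_batches, lam=1.0):
    total = 0.0
    for x, y in env_batches:                  # one (x, y) batch per environment
        logits = model(x)
        total = total + F.cross_entropy(logits, y) + lam * irm_penalty(logits, y)
    return total / len(env_batches)
```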
no code implementations • 15 Oct 2020 • Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit Dhurandhar, Richard Tomsett
We then conduct a second user experiment, which shows that our time allocation strategy with explanation can effectively de-anchor the human and improve collaborative performance when the AI model has low confidence and is incorrect.
no code implementations • 10 Jun 2020 • Sainyam Galhotra, Karthikeyan Shanmugam, Prasanna Sattigeri, Kush R. Varshney
In this work, we consider fairness in the integration component of data management, aiming to identify features that improve prediction without adding any bias to the dataset.
3 code implementations • ICML 2020 • Kartik Ahuja, Karthikeyan Shanmugam, Kush R. Varshney, Amit Dhurandhar
The standard risk minimization paradigm of machine learning is brittle when operating in environments whose test distributions are different from the training distribution due to spurious correlations.
no code implementations • 5 Feb 2020 • Yunfeng Zhang, Rachel K. E. Bellamy, Kush R. Varshney
Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern.
no code implementations • 18 Nov 2019 • Shivashankar Subramanian, Ioana Baldini, Sushma Ravichandran, Dmitriy A. Katz-Rogozhnikov, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Kush R. Varshney, Annmarie Wang, Pradeep Mangalath, Laura B. Kleiman
More than 200 generic drugs approved by the U.S. Food and Drug Administration for non-cancer indications have shown promise for treating cancer.
no code implementations • 9 Nov 2019 • Samuel C. Maina, Reginald E. Bryant, William Ogallo, Robert-Florian Samoilescu, Kush R. Varshney, Komminist Weldemariam
Our evaluation confirmed that the approach was able to produce synthetic datasets that preserved a high level of subgroup differentiation as identified initially in the original dataset.
no code implementations • 30 Oct 2019 • Michiel A. Bakker, Duy Patrick Tu, Humberto Riverón Valdés, Krishna P. Gummadi, Kush R. Varshney, Adrian Weller, Alex Pentland
We introduce a framework for dynamic adversarial discovery of information (DADI), motivated by a scenario where information (a feature set) is used by third parties with unknown objectives.
no code implementations • 29 Oct 2019 • Newton M. Kinyanjui, Timothy Odonga, Celia Cintas, Noel C. F. Codella, Rameswar Panda, Prasanna Sattigeri, Kush R. Varshney
We find that the majority of the data in the two datasets have ITA values between 34.5° and 48°, which are associated with lighter skin, consistent with under-representation of darker-skinned populations in these datasets.
no code implementations • ICML 2020 • Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush R. Varshney
Moreover, the same classifier yields the lack of a trade-off with respect to ideal distributions while yielding a trade-off when accuracy is measured with respect to the given (possibly biased) dataset.
no code implementations • 25 Sep 2019 • Hazar Yueksel, Kush R. Varshney, Brian Kingsbury
Using this condition, we formulate an optimization problem to learn a more general classification function.
no code implementations • 8 Sep 2019 • Yaoli Mao, Dakuo Wang, Michael Muller, Kush R. Varshney, Ioana Baldini, Casey Dugan, Aleksandra Mojsilović
Our findings suggest that, besides glitches in the collaboration-readiness, technology-readiness, and coupling-of-work dimensions, the tensions that arise while building common ground influence the collaboration outcomes and then persist in the actual collaboration process.
2 code implementations • 6 Sep 2019 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability.
1 code implementation • 9 Jul 2019 • Michael Oberst, Fredrik D. Johansson, Dennis Wei, Tian Gao, Gabriel Brat, David Sontag, Kush R. Varshney
Overlap between treatment groups is required for non-parametric estimation of causal effects.
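In practice, overlap (positivity) is often assessed by fitting a propensity model and checking that estimated treatment probabilities stay away from 0 and 1 for both groups. The snippet below is a minimal diagnostic of that kind using scikit-learn; it illustrates the overlap condition, not the characterization method developed in the paper.

```python
# Overlap diagnostic: estimate propensity scores and report the fraction of
# units whose probability of treatment is near 0 or 1 (poor overlap).
import numpy as np
from sklearn.linear_model import LogisticRegression

def overlap_report(X, treatment, eps=0.05):
    propensity = LogisticRegression(max_iter=1000).fit(X, treatment)
    scores = propensity.predict_proba(X)[:, 1]
    violations = (scores < eps) | (scores > 1 - eps)
    return {
        "min_score": float(scores.min()),
        "max_score": float(scores.max()),
        "fraction_outside_overlap": float(violations.mean()),
    }
```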
no code implementations • 5 Jun 2019 • Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović
Using machine learning in high-stakes applications often requires predictions to be accompanied by explanations comprehensible to the domain user, who has ultimate responsibility for decisions and outcomes.
no code implementations • 8 May 2019 • Chirag Nagpal, Dennis Wei, Bhanukiran Vinzamuri, Monica Shekhar, Sara E. Berger, Subhro Das, Kush R. Varshney
The dearth of prescribing guidelines for physicians is one key driver of the current opioid epidemic in the United States.
no code implementations • 14 Dec 2018 • Pranay K. Lohia, Karthikeyan Natesan Ramamurthy, Manish Bhide, Diptikalyan Saha, Kush R. Varshney, Ruchir Puri
Whereas previous post-processing approaches for increasing the fairness of predictions of biased classifiers address only group fairness, we propose a method for increasing both individual and group fairness.
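One simple way to reason about individual fairness in post-processing is counterfactual: flip the protected attribute of an unprivileged-group sample and check whether the classifier's decision changes. The sketch below applies that idea naively; it is a rough stand-in to convey the intuition, not the paper's actual algorithm.

```python
# Naive post-processing sketch: for unprivileged-group samples, flip the
# protected attribute and, if the prediction changes, substitute the
# counterfactual prediction. Illustrative only, not the paper's method.
import numpy as np

def debias_predictions(clf, X, protected_col, unpriv_value, priv_value):
    preds = clf.predict(X)
    for i in range(len(X)):
        if X[i, protected_col] == unpriv_value:
            x_cf = X[i].copy()
            x_cf[protected_col] = priv_value           # counterfactual twin
            cf_pred = clf.predict(x_cf.reshape(1, -1))[0]
            if cf_pred != preds[i]:                    # individually biased sample
                preds[i] = cf_pred
    return preds
```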
no code implementations • 30 Nov 2018 • Vidya Muthukumar, Tejaswini Pedapati, Nalini Ratha, Prasanna Sattigeri, Chai-Wah Wu, Brian Kingsbury, Abhishek Kumar, Samuel Thomas, Aleksandra Mojsilovic, Kush R. Varshney
Recent work shows unequal performance of commercial face classification services in the gender classification task across intersectional groups defined by skin type and gender.
no code implementations • 12 Nov 2018 • Michael Hind, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, Kush R. Varshney
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions.
no code implementations • 3 Nov 2018 • Minh N. B. Nguyen, Samuel Thomas, Anne E. Gattiker, Sujatha Kashyap, Kush R. Varshney
We introduce SimplerVoice: a key message and visual description generator system to help low-literate adults navigate the information-dense world with confidence, on their own.
13 code implementations • 3 Oct 2018 • Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang
Such architectural design and abstractions enable researchers and developers to extend the toolkit with their new algorithms and improvements, and to use it for performance benchmarking.
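For context, a typical workflow with the AI Fairness 360 toolkit measures a dataset-level fairness metric, applies a mitigation algorithm, and measures again. The snippet below sketches that flow; it assumes a pre-loaded aif360 BinaryLabelDataset with a 'sex' protected attribute, and the exact class and method names should be checked against the toolkit's current API.

```python
# Sketch of an AIF360 workflow: metric -> mitigation -> metric.
# Assumes `dataset` is an aif360 BinaryLabelDataset with a 'sex' attribute.
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

priv = [{"sex": 1}]
unpriv = [{"sex": 0}]

metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("Disparate impact before:", metric.disparate_impact())

rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unpriv,
                                         privileged_groups=priv)
print("Disparate impact after:", metric_transf.disparate_impact())
```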
no code implementations • 22 Sep 2018 • Ravi Kiran Raman, Roman Vaculin, Michael Hind, Sekou L. Remy, Eleftheria K. Pissadaki, Nelson Kibichii Bore, Roozbeh Daneshvar, Biplav Srivastava, Kush R. Varshney
Large-scale computational experiments, often running over weeks and over large datasets, are used extensively in fields such as epidemiology, meteorology, computational biology, and healthcare to understand phenomena, and design high-stakes policies affecting everyday health and economy.
no code implementations • 22 Aug 2018 • Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, Karthikeyan Natesan Ramamurthy, Darrell Reimer, Alexandra Olteanu, David Piorkowski, Jason Tsay, Kush R. Varshney
We envision such documents to contain purpose, performance, safety, security, and provenance information to be completed by AI service providers for examination by consumers.
no code implementations • 16 Jul 2018 • Evan Patterson, Ioana Baldini, Aleksandra Mojsilovic, Kush R. Varshney
Your computer is continuously executing programs, but does it really understand them?
no code implementations • 3 Jul 2018 • Been Kim, Kush R. Varshney, Adrian Weller
This is the Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), which was held in Stockholm, Sweden, July 14, 2018.
no code implementations • 25 Jun 2018 • Kush R. Varshney, Prashant Khanduri, Pranay Sharma, Shan Zhang, Pramod K. Varshney
Such arguments, however, fail to acknowledge that the overall decision-making system is composed of two entities: the learned model and a human who fuses together model outputs with his or her own information.
no code implementations • 29 May 2018 • Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilovic
The adoption of machine learning in high-stakes applications such as healthcare and law has lagged in part because predictions are not accompanied by explanations comprehensible to the domain user, who often holds the ultimate responsibility for decisions and outcomes.
1 code implementation • 25 May 2018 • Karthikeyan Natesan Ramamurthy, Kush R. Varshney, Krishnan Mody
We propose the labeled Čech complex, the plain labeled Vietoris-Rips complex, and the locally scaled labeled Vietoris-Rips complex to perform persistent homology inference of decision boundaries in classification tasks.
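The labeled complexes restrict which simplices are formed according to class labels. As a rough illustration of the underlying persistent homology machinery (without the labeling refinement), one can compute plain Vietoris-Rips persistence on points sampled near a decision boundary with the ripser package:

```python
# Plain Vietoris-Rips persistence on points near a circular decision boundary.
# The paper's labeled complexes additionally use class labels to select
# simplices; this sketch omits that refinement.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
boundary_points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
boundary_points += 0.05 * rng.standard_normal(boundary_points.shape)

diagrams = ripser(boundary_points, maxdim=1)["dgms"]
print("H0 features:", len(diagrams[0]), "H1 features:", len(diagrams[1]))
```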
no code implementations • 24 May 2018 • Prasanna Sattigeri, Samuel C. Hoffman, Vijil Chenthamarakshan, Kush R. Varshney
In this paper, we introduce the Fairness GAN, an approach for generating a dataset that is plausibly similar to a given multimedia dataset, but is more fair with respect to protected attributes in allocative decision making.
no code implementations • 24 May 2018 • Bernat Guillen Pegueroles, Bhanukiran Vinzamuri, Karthikeyan Shanmugam, Steve Hedden, Jonathan D. Moyer, Kush R. Varshney
Almost all existing Granger causal algorithms condition on a large number of variables (all but two variables) to test for effects between a pair of variables.
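For contrast, the textbook bivariate Granger test conditions only on the two series' own lags; conditioning on many additional variables is exactly what the paper scrutinizes. A minimal bivariate test with statsmodels looks as follows, illustrating the baseline rather than the paper's proposed algorithm.

```python
# Baseline bivariate Granger causality test: does x help predict y beyond
# y's own lags? statsmodels tests whether the second column Granger-causes
# the first.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):                      # y depends on its own lag and lagged x
    y[t] = 0.6 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()

data = pd.DataFrame({"y": y, "x": x})
results = grangercausalitytests(data[["y", "x"]], maxlag=2)
```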
1 code implementation • 16 Apr 2018 • Alexandra Olteanu, Carlos Castillo, Jeremy Boy, Kush R. Varshney
In this paper, we focus on quantifying the impact of violent events on various types of hate speech, from offensive and derogatory to intimidation and explicit calls for violence.
no code implementations • 29 Mar 2018 • Kush R. Varshney
This essay examines how what is considered to be artificial intelligence (AI) has changed over time and come to intersect with the expertise of the author.
no code implementations • NeurIPS 2017 • Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, Kush R. Varshney
Non-discrimination is a recognized objective in algorithmic decision making.
no code implementations • 16 Nov 2017 • Tejas Dharamsi, Payel Das, Tejaswini Pedapati, Gregory Bramble, Vinod Muthusamy, Horst Samulowitz, Kush R. Varshney, Yuvaraj Rajamanickam, John Thomas, Justin Dauwels
In this work, we present a cloud-based deep neural network approach to provide decision support for non-specialist physicians in EEG analysis and interpretation.
no code implementations • 5 Nov 2017 • Dennis Wei, Karthikeyan Natesan Ramamurthy, Kush R. Varshney
Preserving the privacy of individuals by protecting their sensitive attributes is an important consideration during microdata release.
no code implementations • 8 Aug 2017 • Been Kim, Dmitry M. Malioutov, Kush R. Varshney, Adrian Weller
This is the Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017), which was held in Sydney, Australia, August 10, 2017.
1 code implementation • 11 Apr 2017 • Flavio P. Calmon, Dennis Wei, Karthikeyan Natesan Ramamurthy, Kush R. Varshney
Non-discrimination is a recognized objective in algorithmic decision making.
no code implementations • 5 Oct 2016 • Kush R. Varshney, Homa Alemzadeh
Machine learning algorithms increasingly influence our decisions and interact with us in all parts of our daily lives.
no code implementations • 8 Jul 2016 • Been Kim, Dmitry M. Malioutov, Kush R. Varshney
This is the Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), which was held in New York, NY, June 23, 2016.
no code implementations • 8 Jul 2016 • Kush R. Varshney
This is the Proceedings of the ICML Workshop on #Data4Good: Machine Learning in Social Good Applications, which was held on June 24, 2016 in New York.
no code implementations • 18 Jun 2016 • Guolong Su, Dennis Wei, Kush R. Varshney, Dmitry M. Malioutov
As a contribution to interpretable machine learning research, we develop a novel optimization framework for learning accurate and sparse two-level Boolean rules.
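A two-level Boolean rule in disjunctive normal form is an OR of ANDs over binarized features. The snippet below shows how such a rule is evaluated and how its accuracy and sparsity would be scored; it illustrates the hypothesis class only, not the optimization framework itself.

```python
# Evaluate a two-level (OR-of-ANDs) Boolean rule over 0/1 features and
# score its accuracy and sparsity. Hypothesis-class illustration only.
import numpy as np

def predict_dnf(X_bin, clauses):
    """X_bin: (n, d) 0/1 matrix. clauses: list of lists of feature indices.
    A sample is positive if ALL features in ANY clause equal 1."""
    preds = np.zeros(X_bin.shape[0], dtype=int)
    for clause in clauses:
        preds |= np.all(X_bin[:, clause] == 1, axis=1).astype(int)
    return preds

X_bin = np.array([[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 1]])
y = np.array([1, 1, 0, 1])
rule = [[0], [1, 2]]                  # predict 1 if feature 0, or features 1 AND 2
y_hat = predict_dnf(X_bin, rule)
accuracy = (y_hat == y).mean()
sparsity = sum(len(c) for c in rule)  # total number of literals in the rule
print(accuracy, sparsity)
```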
no code implementations • 15 Jun 2016 • Prasanna Sattigeri, Aurélie Lozano, Aleksandra Mojsilović, Kush R. Varshney, Mahmoud Naghshineh
Innovation is among the key factors driving a country's economic and social growth.
no code implementations • 21 Apr 2016 • Aleksandr Y. Aravkin, Kush R. Varshney, Liu Yang
Matrix factorization is a key component of collaborative filtering-based recommendation systems because it allows us to complete sparse user-by-item ratings matrices under a low-rank assumption that encodes the belief that similar users give similar ratings and that similar items garner similar ratings.
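Concretely, the low-rank assumption means approximating the observed entries of the user-by-item ratings matrix R by a product of two thin factor matrices. A minimal gradient-descent completion sketch is below; it is a generic illustration of this idea, not the robust factorization studied in the paper.

```python
# Minimal low-rank matrix factorization on observed ratings via gradient steps.
import numpy as np

def factorize(R, mask, rank=2, lr=0.01, reg=0.1, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        E = mask * (R - U @ V.T)           # error on observed entries only
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U, V

# Toy 4x3 ratings matrix; zeros are treated as missing via the mask.
R = np.array([[5, 3, 0], [4, 0, 1], [1, 1, 5], [0, 1, 4]], dtype=float)
mask = (R > 0).astype(float)
U, V = factorize(R, mask)
print(np.round(U @ V.T, 1))                # completed ratings matrix
```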
no code implementations • 16 Jan 2016 • Kush R. Varshney
Machine learning algorithms are increasingly influencing our decisions and interacting with us in all parts of our daily lives.
no code implementations • 23 Nov 2015 • Guolong Su, Dennis Wei, Kush R. Varshney, Dmitry M. Malioutov
Experiments show that the two-level rules can yield noticeably better performance than one-level rules due to their dramatically larger modeling capacity, and the two algorithms based on the Hamming distance formulation are generally superior to the other two-level rule learning methods in our comparison.
1 code implementation • 5 Nov 2013 • Lav R. Varshney, Florian Pinel, Kush R. Varshney, Debarun Bhattacharjya, Angela Schoergendorfer, Yi-Min Chee
Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process.