no code implementations • 29 Jan 2025 • Mateusz Nowak, Wojciech Jarosz, Peter Chin
The standard 3D Gaussian Splatting model struggles to represent view-dependent content, since it cannot differentiate an object within the scene from the light interacting with its specular surfaces, which produce highlights or reflections.
1 code implementation • 16 Jan 2025 • Tobias Fiedler, Leon Hermann, Florian Müller, Sarel Cohen, Peter Chin, Tobias Friedrich, Eilon Vaadia
In contrast, labeled data and pre-trained models for the closely related task of speech recognition from audio are widely available.
no code implementations • 2 Dec 2024 • Ryan Yu, Mateusz Nowak, Qintong Xie, Michelle Yilin Feng, Peter Chin
Current approximate Coarse Correlated Equilibrium (CCE) algorithms struggle to approximate equilibria for games in large stochastic environments, even though they are theoretically guaranteed to converge to a strong solution concept.
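The convergence guarantee mentioned above typically comes from no-regret dynamics: in self-play, regret matching drives the empirical distribution of joint play toward the set of coarse correlated equilibria. A minimal sketch of that idea in a tiny matrix game — the game (matching pennies), payoff functions, and parameter values here are illustrative assumptions, not details from the paper:

```python
import random

def regret_matching(payoffs, n_actions, iters=20000, seed=0):
    """Two-player self-play regret matching; the empirical distribution
    of joint play converges to the set of coarse correlated equilibria."""
    rng = random.Random(seed)
    regrets = [[0.0] * n_actions for _ in range(2)]
    counts = [[0] * n_actions for _ in range(2)]
    for _ in range(iters):
        acts = []
        for p in range(2):
            pos = [max(r, 0.0) for r in regrets[p]]  # positive-part regrets
            if sum(pos) > 0:
                a = rng.choices(range(n_actions), weights=pos)[0]
            else:
                a = rng.randrange(n_actions)  # uniform when no positive regret
            acts.append(a)
            counts[p][a] += 1
        for p in range(2):  # accumulate regret vs. every fixed alternative
            got = payoffs[p](acts[p], acts[1 - p])
            for alt in range(n_actions):
                regrets[p][alt] += payoffs[p](alt, acts[1 - p]) - got
    return [[c / iters for c in counts[p]] for p in range(2)]

# Matching pennies: player 0 wants to match, player 1 wants to mismatch.
p0 = lambda a, b: 1.0 if a == b else -1.0
p1 = lambda a, b: -1.0 if a == b else 1.0
freqs = regret_matching([p0, p1], 2)
# marginal play frequencies approach the uniform equilibrium (0.5, 0.5)
```

The scalability issue the abstract points to is visible even here: each iteration enumerates every alternative action, which becomes impractical in large stochastic games.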
1 code implementation • 7 Nov 2024 • Moshik Hershcovitch, Andrew Wood, Leshem Choshen, Guy Girmonsky, Roy Leibovitz, Ilias Ennmouri, Michal Malka, Peter Chin, Swaminathan Sundararaman, Danny Harnik
With the growth of model sizes and the scale of their deployment, their sheer size burdens the infrastructure, requiring more network bandwidth and more storage to accommodate them.
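One generic lossless trick in this space is byte-plane grouping: trained float32 weights have highly repetitive sign/exponent bytes, which compress far better when grouped into their own stream. This is a minimal sketch of that general idea, not the paper's actual method; the function names and example data are hypothetical:

```python
import struct
import zlib

def byte_group_compress(weights):
    """Pack float32 weights, split into four byte-planes (byte i of every
    value grouped together), and zlib-compress each plane separately."""
    raw = struct.pack(f"<{len(weights)}f", *weights)
    planes = [raw[i::4] for i in range(4)]  # little-endian: plane 3 holds sign/exponent
    return [zlib.compress(p) for p in planes]

def byte_group_decompress(streams, n):
    """Invert the transform: decompress each plane and re-interleave bytes."""
    planes = [zlib.decompress(s) for s in streams]
    raw = bytes(planes[j][i] for i in range(n) for j in range(4))
    return list(struct.unpack(f"<{n}f", raw))

ws = [0.001 * k for k in range(1000)]
round_trip = byte_group_decompress(byte_group_compress(ws), len(ws))
```

The round trip is lossless at float32 precision; the gain comes entirely from reordering bytes so the entropy coder sees long runs of near-identical exponent bytes.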
no code implementations • 22 Oct 2024 • Aditya Vikram Singh, Ethan Rathbun, Emma Graham, Lisa Oakley, Simona Boboila, Alina Oprea, Peter Chin
Recent advances in multi-agent reinforcement learning (MARL) have created opportunities to solve complex real-world tasks.
Tasks: Multi-agent Reinforcement Learning, Reinforcement Learning
no code implementations • 3 Oct 2024 • Thang Nguyen, Peter Chin, Yu-Wing Tai
In this paper, we introduce Reward-RAG, a novel approach designed to enhance the Retrieval-Augmented Generation (RAG) model through Reward-Driven Supervision.
no code implementations • 3 Oct 2024 • Pedro Colon-Hernandez, Nanxi Liu, Chelsea Joe, Peter Chin, Claire Yin, Henry Lieberman, Yida Xin, Cynthia Breazeal
Generating commonsense assertions within a given story context remains a difficult task for modern language models.
1 code implementation • 25 Sep 2024 • Junyan Cheng, Peter Chin
In this study, we propose a novel asset pricing approach, LLM Agent-based Asset Pricing Models (AAPM), which fuses qualitative discretionary investment analysis from LLM agents and quantitative manual financial economic factors to predict excess asset returns.
no code implementations • 14 Jun 2024 • Ryan Yu, Alex Olshevsky, Peter Chin
In this study we answer the question: can we take tree search algorithms trained through self-play from perfect information settings and adapt them to simultaneous move games without significant loss of performance?
1 code implementation • 5 Apr 2024 • Moshik Hershcovitch, Leshem Choshen, Andrew Wood, Ilias Enmouri, Peter Chin, Swaminathan Sundararaman, Danny Harnik
With the growth of model sizes and the scale of their deployment, their sheer size burdens the infrastructure, requiring more network bandwidth and more storage to accommodate them.
1 code implementation • 4 Feb 2024 • Mustafa Hajij, Mathilde Papillon, Florian Frantzen, Jens Agerberg, Ibrahem AlJabea, Rubén Ballester, Claudio Battiloro, Guillermo Bernárdez, Tolga Birdal, Aiden Brent, Peter Chin, Sergio Escalera, Simone Fiorellino, Odin Hoff Gardaa, Gurusankar Gopalakrishnan, Devendra Govil, Josef Hoppe, Maneel Reddy Karri, Jude Khouja, Manuel Lecha, Neal Livesay, Jan Meißner, Soham Mukherjee, Alexander Nikitin, Theodore Papamarkou, Jaro Prílepok, Karthikeyan Natesan Ramamurthy, Paul Rosen, Aldo Guzmán-Sáenz, Alessandro Salatiello, Shreyas N. Samaga, Simone Scardapane, Michael T. Schaub, Luca Scofano, Indro Spinelli, Lev Telyatnikov, Quang Truong, Robin Walters, Maosheng Yang, Olga Zaghen, Ghada Zamzmi, Ali Zia, Nina Miolane
We introduce TopoX, a Python software suite that provides reliable and user-friendly building blocks for computing and machine learning on topological domains that extend graphs: hypergraphs, simplicial, cellular, path and combinatorial complexes.
no code implementations • 13 Aug 2023 • Quang Truong, Peter Chin
Graph Neural Networks (GNNs), despite achieving remarkable performance across different tasks, are theoretically bounded by the 1-Weisfeiler-Lehman test, resulting in limitations in terms of graph expressivity.
Ranked #7 on Graph Classification on NCI109
no code implementations • 3 Aug 2023 • Junyan Cheng, Peter Chin
Bridging the huge disparity between neural and symbolic representations could enable the incorporation of symbolic thinking into neural networks at a fundamental level.
no code implementations • 10 Feb 2023 • Pedro Colon-Hernandez, Henry Lieberman, Yida Xin, Claire Yin, Cynthia Breazeal, Peter Chin
Contextualized or discourse-aware commonsense inference is the task of generating coherent commonsense assertions (i.e., facts) from a given story and a particular sentence from that story.
no code implementations • 5 Dec 2022 • Weiyu Zong, Mingqian Feng, Griffin Heyrich, Peter Chin
However, when these novel methods are applied to high-dimensional multivariate forecasting problems, their performance is severely limited by practical training times and reasonable GPU memory configurations.
no code implementations • 28 Sep 2022 • Chau Pham, Trung Dang, Peter Chin
Persistence diagrams (PDs), often characterized as sets of birth and death times of homology classes, are known to provide a topological representation of a graph's structure, which is often useful in machine learning tasks.
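For 0-dimensional homology on a graph filtered by edge weight, the persistence diagram can be computed with a simple union-find: every vertex is born at 0, and each edge that merges two components kills one bar at that edge's weight. A minimal sketch under those assumptions (edge format and names are illustrative):

```python
def zero_dim_persistence(n_vertices, edges):
    """0-dimensional persistence of a graph filtered by edge weight.
    edges: iterable of (weight, u, v). Every vertex is born at 0; when an
    edge first connects two components, one bar dies at that weight."""
    parent = list(range(n_vertices))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    diagram = []
    for w, u, v in sorted(edges):  # process edges in filtration order
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            diagram.append((0.0, w))  # a component dies at this weight
    survivors = len({find(i) for i in range(n_vertices)})
    diagram += [(0.0, float("inf"))] * survivors  # essential classes
    return diagram

# A 4-vertex path with weights 1, 2, 3: three merges plus one infinite bar.
pd = zero_dim_persistence(4, [(1.0, 0, 1), (2.0, 1, 2), (3.0, 2, 3)])
```

Higher-dimensional diagrams require full boundary-matrix reduction, but this 0-dimensional case already captures the "birth and death of homology classes" picture the abstract refers to.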
no code implementations • 9 Jul 2022 • Trung Dang, Simon Kornblith, Huy Thong Nguyen, Peter Chin, Maryam Khademi
In this work, we study different approaches to self-supervised pretraining of object detection models.
no code implementations • 10 Mar 2022 • Laura Greige, Fernando De Mesentier Silva, Meredith Trotter, Chris Lawrence, Peter Chin, Dilip Varadarajan
In the context of competitive multiplayer games, collusion happens when two or more teams decide to collaborate towards a common goal, with the intention of gaining an unfair advantage from this cooperation.
no code implementations • 21 Feb 2022 • Andrew Wood, Moshik Hershcovitch, Daniel Waddington, Sarel Cohen, Meredith Wolf, Hongjun Suh, Weiyu Zong, Peter Chin
Dimensionality reduction algorithms are frequently used to support downstream tasks in machine learning and data science, and also serve as exploratory methods for understanding complex phenomena.
no code implementations • 21 Feb 2022 • Andrew Wood, Moshik Hershcovitch, Daniel Waddington, Sarel Cohen, Peter Chin
Bayesian inference allows machine learning models to express uncertainty.
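The simplest concrete illustration of that claim is a conjugate update: with a Beta prior on a success rate, the posterior's standard deviation quantifies how uncertain the model still is, and it shrinks as data accumulates. A minimal sketch (the prior and data values are illustrative, not from the paper):

```python
import math

def beta_posterior(successes, failures, a=1.0, b=1.0):
    """Conjugate Beta-Binomial update starting from a Beta(a, b) prior.
    Returns the posterior mean and standard deviation of the rate --
    the standard deviation is the model's expressed uncertainty."""
    a2, b2 = a + successes, b + failures
    mean = a2 / (a2 + b2)
    var = a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1))
    return mean, math.sqrt(var)

m_small, s_small = beta_posterior(3, 1)      # 4 observations
m_large, s_large = beta_posterior(300, 100)  # 400 observations, same rate
# same observed rate, but far more data -> much smaller uncertainty
```

Full Bayesian neural networks replace this closed-form update with approximate posterior inference over weights, which is where the computational cost arises.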
no code implementations • 5 Jan 2022 • Hieu Le, Hans Walker, Dung Tran, Peter Chin
Although deep neural networks achieve strong performance on classification tasks, recent studies have shown that well-trained networks can be fooled by adding subtle perturbations.
no code implementations • 8 Dec 2021 • Trung Dang, Dung Tran, Peter Chin, Kazuhito Koishida
Unsupervised Zero-Shot Voice Conversion (VC) aims to modify the speaker characteristic of an utterance to match an unseen target speaker without relying on parallel training data.
1 code implementation • NeurIPS 2021 • Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays
Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or they can be reconstructed jointly with model inputs by using Gradients Matching [Zhu et al. '19] with additional knowledge about the current state of the model.
Tasks: Automatic Speech Recognition (ASR)
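The analytic label revelation mentioned in the abstract has a simple core: for softmax cross-entropy, the gradient with respect to the last-layer bias is p − one_hot(label), and since every softmax probability is strictly positive, the single negative entry identifies the true label. A minimal single-example sketch of that observation (logit values are illustrative):

```python
import math

def softmax(z):
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def bias_gradient(logits, label):
    """Gradient of softmax cross-entropy w.r.t. the last-layer bias:
    p - one_hot(label)."""
    p = softmax(logits)
    return [pi - (1.0 if i == label else 0.0) for i, pi in enumerate(p)]

def reveal_label(grad):
    """The unique negative entry of the shared bias gradient is the label."""
    return min(range(len(grad)), key=lambda i: grad[i])

g = bias_gradient([0.2, -1.3, 0.9], label=1)
recovered = reveal_label(g)  # -> 1
```

With batching, averaged gradients blur this signal, which is why joint reconstruction methods such as Gradients Matching are needed in the general case.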
no code implementations • 22 Oct 2021 • Simon Alford, Anshula Gandhi, Akshay Rangamani, Andrzej Banburski, Tony Wang, Sylee Dandekar, John Chin, Tomaso Poggio, Peter Chin
More specifically, we extend existing execution-guided program synthesis approaches with deductive reasoning based on function inverse semantics to enable a neural-guided bidirectional search algorithm.
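The bidirectional idea above can be sketched on a toy integer domain: execute operators forward from the input while applying their inverse semantics backward from the output, and stitch a program together where the frontiers meet. The operators and domain below are illustrative assumptions, far simpler than the actual program-synthesis setting:

```python
# Each op: (forward semantics, inverse semantics returning 0+ preimages).
OPS = {
    "inc": (lambda x: x + 1, lambda y: [y - 1]),
    "dbl": (lambda x: 2 * x, lambda y: [y // 2] if y % 2 == 0 else []),
}

def bidirectional_synthesize(start, target, depth=6):
    """Meet-in-the-middle program search: fwd maps value -> program from
    start; bwd maps value -> program that reaches target from it."""
    fwd = {start: []}
    bwd = {target: []}
    for _ in range(depth):
        if set(fwd) & set(bwd):
            break
        nxt = {}
        for v, prog in fwd.items():          # expand forward frontier
            for name, (f, _) in OPS.items():
                w = f(v)
                if w not in fwd and w not in nxt:
                    nxt[w] = prog + [name]
        fwd.update(nxt)
        if set(fwd) & set(bwd):
            break
        nxt = {}
        for v, prog in bwd.items():          # expand backward via inverses
            for name, (_, inv) in OPS.items():
                for w in inv(v):
                    if w not in bwd and w not in nxt:
                        nxt[w] = [name] + prog
        bwd.update(nxt)
    for v in set(fwd) & set(bwd):            # stitch at a meeting point
        return fwd[v] + bwd[v]
    return None

prog = bidirectional_synthesize(3, 14)
```

Note how the inverse of `dbl` prunes odd values outright; this is the kind of deductive pruning that makes the backward direction useful.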
no code implementations • 19 Oct 2021 • Michael R. Douglas, Michael Simkin, Omri Ben-Eliezer, Tianqi Wu, Peter Chin, Trung V. Dang, Andrew Wood
Their relative success is often credited in the literature to their ability to learn logical rules between the relations.
1 code implementation • Findings (ACL) 2021 • Pedro Colon-Hernandez, Yida Xin, Henry Lieberman, Catherine Havasi, Cynthia Breazeal, Peter Chin
Retrofitting is a technique used to move word vectors closer together or further apart in their space to reflect their relationships in a Knowledge Base (KB).
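A minimal sketch of one common retrofitting scheme (in the spirit of Faruqui et al., 2015, not necessarily the variant this paper builds on): each vector is iteratively pulled toward the average of its KB neighbors while staying anchored to its original embedding. The toy vocabulary and hyperparameters are illustrative:

```python
def retrofit(vectors, edges, alpha=1.0, beta=1.0, iters=10):
    """Iteratively move each vector toward its KB neighbors' average,
    anchored to the original embedding with weight alpha."""
    orig = {w: list(v) for w, v in vectors.items()}
    vecs = {w: list(v) for w, v in vectors.items()}
    nbrs = {w: [] for w in vectors}
    for a, b in edges:  # undirected KB relations
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        for w, ns in nbrs.items():
            if not ns:
                continue  # words outside the KB keep their vectors
            denom = alpha + beta * len(ns)
            vecs[w] = [
                (alpha * o + beta * sum(vecs[n][d] for n in ns)) / denom
                for d, o in enumerate(orig[w])
            ]
    return vecs

emb = {"cat": [1.0, 0.0], "feline": [0.0, 1.0], "car": [5.0, 5.0]}
new = retrofit(emb, [("cat", "feline")])
# "cat" and "feline" move toward each other; "car" is untouched
```

The anchoring term is what distinguishes retrofitting from plain graph smoothing: vectors reflect the KB without discarding their distributional content.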
no code implementations • 16 May 2021 • Xiao Wang, Wei Jiang, Wei Wang, Shan Liu, Brian Kulis, Peter Chin
The key idea is to replace the image to be compressed with a substitutional one that outperforms the original one in a desired way.
1 code implementation • 15 Apr 2021 • Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays
We show that a dropout rate of 0.2 can reduce the speaker identity accuracy to 0% top-1 (0.5% top-5).
Tasks: Automatic Speech Recognition (ASR)
no code implementations • 1 Feb 2021 • Yida Xin, Henry Lieberman, Peter Chin
We revisit the challenging problem of resolving prepositional-phrase (PP) attachment ambiguity.
no code implementations • 26 Apr 2020 • Andrew Wood, Ali Sydney, Peter Chin, Bishal Thapa, Ryan Ross
As a result, we have developed GymFG, which couples and extends a high-fidelity, open-source flight simulator with a robust agent-learning framework to facilitate learning of more complex tasks.
no code implementations • 28 Feb 2020 • Laura Greige, Peter Chin
We apply our model to FlipIt, a two-player security game in which both players, the attacker and the defender, compete for ownership of a shared resource and only receive information on the current state of the game upon making a move.
no code implementations • 19 Feb 2020 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin
Designing effective defenses against adversarial attacks is a crucial topic, as deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars.
no code implementations • 18 Feb 2020 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin
Recent study of adversarial attacks has revealed the vulnerability of modern deep learning models.
1 code implementation • 20 Aug 2019 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin
However, one critical drawback of current defenses is that the robustness enhancement is at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy.
no code implementations • 1 Aug 2019 • Jacob Harer, Chris Reale, Peter Chin
We applied this architecture to correction tasks in both the source code and natural language domains.
no code implementations • 13 Sep 2018 • Siyue Wang, Xiao Wang, Pu Zhao, Wujie Wen, David Kaeli, Peter Chin, Xue Lin
Based on observations of how the test dropout rate affects test accuracy and attack success rate, we propose a defensive dropout algorithm that determines an optimal test dropout rate given the neural network model and the attacker's strategy for generating adversarial examples. We also investigate the mechanism behind the outstanding defense effects achieved by the proposed defensive dropout.
no code implementations • NeurIPS 2018 • Jacob Harer, Onur Ozdemir, Tomo Lazovich, Christopher P. Reale, Rebecca L. Russell, Louis Y. Kim, Peter Chin
Motivated by the problem of automated repair of software vulnerabilities, we propose an adversarial learning approach that maps from one discrete source domain to another target domain without requiring paired labeled examples or source and target domains to be bijections.
no code implementations • 14 Feb 2018 • Jacob A. Harer, Louis Y. Kim, Rebecca L. Russell, Onur Ozdemir, Leonard R. Kosta, Akshay Rangamani, Lei H. Hamilton, Gabriel I. Centeno, Jonathan R. Key, Paul M. Ellingwood, Erik Antelman, Alan Mackay, Marc W. McConley, Jeffrey M. Opper, Peter Chin, Tomo Lazovich
We then compare methods applied directly to source code with methods applied to artifacts extracted from the build process, finding that source-based models perform better.