no code implementations • 2 Sep 2024 • Nan Jiang, Md Nasim, Yexiang Xue

Chaos theory indicates that small changes in the initial conditions of a dynamical system can result in vastly different trajectories, which necessitates maintaining a large set of initial conditions for each trajectory.

1 code implementation • 30 Aug 2024 • Nan Jiang, Jinzhao Li, Yexiang Xue

In reinforcement learning, Reverse Experience Replay (RER) is a recently proposed algorithm that attains better sample complexity than the classic experience replay method.
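The core idea of Reverse Experience Replay is simple to sketch: rather than sampling stored transitions uniformly, replay the latest transitions from the end of a trajectory back to its start, so reward information propagates backward quickly. A minimal illustration (class and method names are hypothetical, not from the paper):

```python
from collections import deque

class ReverseReplayBuffer:
    """Toy buffer that yields stored transitions in reverse order.

    Reverse Experience Replay performs updates on transitions from the
    end of a trajectory back to its start, so reward signals propagate
    backward faster than with uniform sampling.
    """

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # transition = (state, action, reward, next_state)
        self.buffer.append(transition)

    def reverse_batches(self, batch_size):
        # Walk the stored transitions from newest to oldest.
        items = list(self.buffer)[::-1]
        for i in range(0, len(items), batch_size):
            yield items[i:i + batch_size]

buf = ReverseReplayBuffer()
for t in range(5):
    buf.add((f"s{t}", "a", 0.0, f"s{t + 1}"))

# The most recent transition is replayed first.
order = [batch[0][0] for batch in buf.reverse_batches(batch_size=1)]
```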

1 code implementation • 1 Feb 2024 • Nan Jiang, Md Nasim, Yexiang Xue

We propose Vertical Symbolic Regression using Deep Policy Gradient (VSR-DPG) and demonstrate that VSR-DPG can recover ground-truth equations involving multiple input variables, significantly beyond both deep reinforcement learning-based approaches and previous VSR variants.

no code implementations • 19 Dec 2023 • Nan Jiang, Md Nasim, Yexiang Xue

The first few steps in vertical discovery are significantly cheaper than the horizontal path, as their search is in reduced hypothesis spaces involving a small set of variables.

no code implementations • 7 Nov 2023 • Maxwell Joseph Jacobson, Yexiang Xue

Meta Reinforcement Learning (Meta RL) trains agents that adapt to fast-changing environments and tasks.

no code implementations • 13 Oct 2023 • Maxwell Joseph Jacobson, Yexiang Xue

SPRING embeds a neural and symbolic integrated spatial reasoning module inside the deep generative network.

no code implementations • 16 Sep 2023 • Jinzhao Li, Nan Jiang, Yexiang Xue

Solving SMC is challenging because of its highly intractable nature ($\text{NP}^{\text{PP}}$-complete), which couples statistical inference with symbolic reasoning.

1 code implementation • 13 Sep 2023 • Nan Jiang, Yexiang Xue

A selection scheme, similar to the one used to pick good symbolic equations in genetic programming, is implemented to ensure that promising experiment schedules eventually win out over average ones.
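The kind of selection scheme referred to here can be illustrated with standard tournament selection from genetic programming (the function below is a generic sketch, not the paper's implementation):

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """One tournament-selection step, as commonly used in genetic
    programming: draw k candidates at random and keep the fittest,
    so promising individuals (here, experiment schedules) tend to
    win out over average ones.
    """
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)

rng = random.Random(0)
population = list(range(10))  # toy "schedules" scored by their value
# With k equal to the population size, the fittest individual must win.
best = tournament_select(population, fitness=lambda x: x, k=10, rng=rng)
```

Smaller tournament sizes (e.g. `k=3`) trade selection pressure for diversity, which is the usual knob in such schemes.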

no code implementations • 13 Sep 2023 • Md Nasim, Yexiang Xue

This decomposition enables efficient learning when the source of the updates consists of gradually changing terms across large areas (sparse in the frequency domain) in addition to a few rapid updates concentrated in a small set of "interfacial" regions (sparse in the value domain).

no code implementations • 13 Sep 2023 • Md Nasim, Anter El-Azab, Xinghang Zhang, Yexiang Xue

Phase-Field-Lab combines (i) a streamlined annotation tool which reduces the annotation time (by ~50-75%), while increasing annotation accuracy compared to baseline; (ii) an end-to-end neural model which automatically learns phase field models from data by embedding phase field simulation and existing domain knowledge into learning; and (iii) novel interfaces and visualizations to integrate our platform into the scientific discovery cycle of domain scientists.

no code implementations • 29 Aug 2023 • Md Masudur Rahman, Yexiang Xue

An additional goal of the generator is to perturb the observation, which maximizes the agent's probability of taking a different action.

1 code implementation • 25 May 2023 • Nan Jiang, Yexiang Xue

CVGP starts by fitting simple expressions involving a small set of independent variables using genetic programming, under controlled experiments where the other variables are held constant.
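As a toy illustration of such a controlled experiment (the data and ground-truth expression are invented for this sketch): hold one variable fixed, vary another, and fit a simple one-variable expression first, before later freeing the remaining variables.

```python
import numpy as np

# Ground truth (hidden from the learner): y = 3*x1 + 2*x2.
# Controlled experiment: hold x2 fixed at 0.5 and vary only x1,
# so a one-variable model over x1 suffices for this round.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, size=50)
x2_fixed = 0.5
y = 3.0 * x1 + 2.0 * x2_fixed

# Fit y = slope * x1 + intercept; the fixed x2 term is absorbed
# into the intercept (2 * 0.5 = 1.0), to be resolved later when
# x2 is allowed to vary.
slope, intercept = np.polyfit(x1, y, deg=1)
```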

no code implementations • 27 Apr 2023 • Md Masudur Rahman, Yexiang Xue

Data augmentation can provide a performance boost to RL agents by mitigating the effect of overfitting.

no code implementations • 2 Feb 2023 • Md Masudur Rahman, Yexiang Xue

Our approach is to estimate the value function from prior computations, such as from the Q-network learned in DQN or the value function trained for different but related environments.

1 code implementation • 14 Dec 2022 • Md Masudur Rahman, Yexiang Xue

We observed that in many settings, RPO increases the policy entropy early in training and then maintains a certain level of entropy throughout the training period.

1 code implementation • 1 Dec 2022 • Nan Jiang, Yi Gu, Yexiang Xue

Contrastive divergence is then applied to separate these samples from those in the training set.
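In general form, a contrastive-divergence-style update raises the model's score on training data while lowering it on the model's own samples. A minimal sketch for a log-linear energy model E(x) = -theta . phi(x) (function names and the feature map are invented for illustration):

```python
import numpy as np

def cd_gradient(data, model_samples, features):
    """Contrastive-divergence-style gradient estimate for an energy
    model with E(x) = -theta . features(x): the positive phase comes
    from training data, the negative phase from samples drawn from
    the model, so ascending this gradient separates the two.
    """
    pos = np.mean([features(x) for x in data], axis=0)           # data term
    neg = np.mean([features(x) for x in model_samples], axis=0)  # model term
    return pos - neg

feats = lambda x: np.array([x, x * x])
# Data sits at 1.0, model samples at 0.0: the gradient pushes theta
# to assign lower energy (higher probability) near the data.
g = cd_gradient(data=[1.0, 1.0], model_samples=[0.0, 0.0], features=feats)
```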

no code implementations • 24 Oct 2022 • Maxwell J. Jacobson, Daniela Chanci Arrubla, Maria Romeo Tricas, Gayle Gordillo, Yexiang Xue, Chandan Sen, Juan Wachs

Using this framework, we discover that B-mode ultrasound classifiers can be enhanced by supplying textural features.

1 code implementation • 13 Oct 2022 • Md Masudur Rahman, Yexiang Xue

Unlike existing methods, which apply data augmentation to the input when learning the value and policy functions, our method uses data augmentation to compute a bootstrap advantage estimation.
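One way to picture the difference (this sketch assumes a one-step TD advantage and an invented `augment` function; the paper's actual estimator may differ): average the TD error over several augmented views of the observations, rather than feeding augmented inputs straight into the policy and value losses.

```python
import numpy as np

def augmented_advantage(value_fn, obs, reward, next_obs, gamma=0.99,
                        augment=lambda o, rng: o + rng.normal(0.0, 0.01, o.shape),
                        n_views=8, seed=0):
    """Hypothetical sketch: estimate a one-step advantage by averaging
    the TD error over several augmented views of the observations.
    The default `augment` adds small Gaussian noise; image-based
    agents would use random shifts or crops instead.
    """
    rng = np.random.default_rng(seed)
    deltas = []
    for _ in range(n_views):
        o, no = augment(obs, rng), augment(next_obs, rng)
        deltas.append(reward + gamma * value_fn(no) - value_fn(o))
    return float(np.mean(deltas))

# Sanity check with an identity "augmentation" and a linear value function:
# the estimate reduces to the plain one-step TD error.
adv = augmented_advantage(lambda o: float(o.sum()),
                          obs=np.zeros(2), reward=1.0, next_obs=np.ones(2),
                          gamma=0.5, augment=lambda o, rng: o)
```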

no code implementations • 11 Aug 2022 • Nan Jiang, Dhivya Eswaran, Choon Hui Teo, Yexiang Xue, Yesh Dattatreya, Sujay Sanghavi, Vishy Vishwanathan

We consider text retrieval within dense representational space in real-world settings such as e-commerce search where (a) document popularity and (b) diversity of queries associated with a document have a skewed distribution.

1 code implementation • 15 Jul 2022 • Md Masudur Rahman, Yexiang Xue

Deep Reinforcement Learning (RL) agents often overfit the training environment, leading to poor generalization performance.

no code implementations • 22 Mar 2022 • Fan Ding, Yijie Wang, Jianzhu Ma, Yexiang Xue

Here we propose XOR-PGD, a novel algorithm based on Projected Gradient Descent (PGD) coupled with the XOR sampler, which is guaranteed to solve the constrained stochastic convex optimization problem at a linear convergence rate, given a properly chosen step size.
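Stripped of the XOR sampler, the PGD backbone is a short loop: step along the negative gradient, then project back onto the feasible set. A generic sketch with an exact gradient and a box constraint (not the paper's setting, where the XOR sampler would supply the gradient estimates):

```python
import numpy as np

def pgd(grad, project, x0, step=0.1, iters=100):
    """Plain projected gradient descent: take a gradient step, then
    project back onto the feasible set. XOR-PGD couples this loop
    with an XOR-based sampler to estimate the stochastic gradient;
    here the gradient is given exactly for illustration.
    """
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Minimize (x - 2)^2 subject to x in [-1, 1]:
# the unconstrained minimum is 2, so PGD converges to the boundary 1.
grad = lambda x: 2.0 * (x - 2.0)
project = lambda x: np.clip(x, -1.0, 1.0)
x_star = pgd(grad, project, x0=np.array(0.0))
```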

no code implementations • NeurIPS 2021 • Chonghao Sima, Yexiang Xue

The advancement of deep neural networks over the last decade has enabled progress in scientific knowledge discovery in the form of learning Partial Differential Equations (PDEs) directly from experiment data.

no code implementations • 6 Oct 2021 • Nan Jiang, Chen Luo, Vihan Lakshman, Yesh Dattatreya, Yexiang Xue

In addition, FLAN does not require any annotated data or supervised learning.

no code implementations • 29 Sep 2021 • Md Masudur Rahman, Yexiang Xue

An additional goal of the generator is to perturb the observation, which maximizes the agent's probability of taking a different action.

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Maosen Zhang, Nan Jiang, Lei Li, Yexiang Xue

Generating natural language under complex constraints is a principled formulation towards controllable text generation.

no code implementations • 13 Oct 2019 • Fan Ding, Hanjing Wang, Ashish Sabharwal, Yexiang Xue

On a suite of UAI inference challenge benchmarks, it saves 81.5% of WISH queries while retaining the quality of results.

1 code implementation • 3 Mar 2019 • Naveen Madapana, Md Masudur Rahman, Natalia Sanchez-Tamayo, Mythra V. Balakuntala, Glebys Gonzalez, Jyothsna Padmakumar Bindu, L. N. Vishnunandan Venkatesh, Xingguang Zhang, Juan Barragan Noguera, Thomas Low, Richard Voyles, Yexiang Xue, Juan Wachs

It comprises a set of surgical robotic skills collected during a surgical training task using three robotic platforms: the Taurus II robot, the simulated Taurus II robot, and the YuMi robot.


no code implementations • NeurIPS 2018 • Yexiang Xue, Yang Yuan, Zhitian Xu, Ashish Sabharwal

Neural models operating over structured spaces such as knowledge graphs require a continuous embedding of the discrete elements of this space (such as entities) as well as the relationships between them.

1 code implementation • 7 May 2018 • Junwen Bai, Zihang Lai, Runzhe Yang, Yexiang Xue, John Gregoire, Carla Gomes

We propose imitation refinement, a novel approach to refine imperfect input patterns, guided by a pre-trained classifier incorporating prior knowledge from simulated theoretical data, such that the refined patterns imitate the ideal data.

no code implementations • ICML 2018 • Di Chen, Yexiang Xue, Carla P. Gomes

The multivariate probit model (MVP) is a popular classic model for studying binary responses of multiple entities.

no code implementations • 18 Nov 2017 • Johan Bjorck, Yiwei Bai, Xiaojian Wu, Yexiang Xue, Mark C. Whitmore, Carla Gomes

Cascades represent rapid changes in networks.

no code implementations • 17 Sep 2017 • Luming Tang, Yexiang Xue, Di Chen, Carla P. Gomes

Multi-Entity Dependence Learning (MEDL) explores conditional correlations among multiple entities.

no code implementations • 23 May 2017 • Xiaojian Wu, Yexiang Xue, Bart Selman, Carla P. Gomes

In this paper, we consider a more realistic setting where multiple edges are not independent due to natural disasters or regional events that make the states of multiple edges stochastically correlated.

no code implementations • NeurIPS 2016 • Yexiang Xue, Zhiyuan Li, Stefano Ermon, Carla P. Gomes, Bart Selman

Arising from many applications at the intersection of decision making and machine learning, Marginal Maximum A Posteriori (Marginal MAP) Problems unify the two main classes of inference, namely maximization (optimization) and marginal inference (counting), and are believed to have higher complexity than both of them.

1 code implementation • 3 Oct 2016 • Yexiang Xue, Junwen Bai, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Johan Bjorck, Liane Longpre, Santosh K. Suram, Robert B. van Dover, John Gregoire, Carla P. Gomes

A key problem in materials discovery, the phase map identification problem, involves the determination of the crystal phase diagram from the materials' composition and structural characterization data.

no code implementations • 28 Sep 2016 • Di Chen, Yexiang Xue, Shuo Chen, Daniel Fink, Carla Gomes

Additionally, we demonstrate the benefit of using a deep neural network to extract features within the embedding and show how they improve the predictive performance of species distribution modelling.

no code implementations • 17 Aug 2015 • Yexiang Xue, Stefano Ermon, Ronan Le Bras, Carla P. Gomes, Bart Selman

The ability to represent complex high dimensional probability distributions in a compact form is one of the key insights in the field of graphical models.
