Search Results for author: Weiwei Yang

Found 21 papers, 7 papers with code

Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models

1 code implementation · 15 Dec 2023 · Xin Jin, Jonathan Larson, Weiwei Yang, Zhiqiang Lin

Binary code summarization, while invaluable for understanding code semantics, is challenging due to its labor-intensive nature.

Benchmarking · Code Summarization · +2

A Statistical Turing Test for Generative Models

no code implementations · 16 Sep 2023 · Hayden Helm, Carey E. Priebe, Weiwei Yang

Implicit in these efforts is an assumption that the generation properties of a human are different from those of the machine.
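
As a loose illustration of that assumption only (not the statistical Turing test proposed in the paper), one could run a permutation two-sample test on feature vectors extracted from human- and machine-generated samples; the feature arrays below are synthetic stand-ins.

```python
# Illustrative only: permutation two-sample test probing whether human- and
# machine-generated samples share a distribution. NOT the paper's test.
import numpy as np

def two_sample_permutation_test(human_feats, machine_feats, n_perm=10_000, seed=0):
    """p-value for H0: both feature sets come from the same distribution.

    Test statistic: Euclidean distance between the two group means.
    Inputs are arrays of shape (n_samples, n_features).
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(human_feats), np.asarray(machine_feats)
    pooled = np.vstack([x, y])
    n_x = len(x)
    observed = np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))

    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel samples at random
        stat = np.linalg.norm(pooled[:n_x].mean(axis=0) - pooled[n_x:].mean(axis=0))
        count += stat >= observed
    return (count + 1) / (n_perm + 1)

# Synthetic "embeddings" standing in for text features (hypothetical).
human = np.random.default_rng(1).normal(0.0, 1.0, size=(200, 16))
machine = np.random.default_rng(2).normal(0.3, 1.0, size=(200, 16))
print(two_sample_permutation_test(human, machine))
```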

Efficient Reinforcement Learning Through Trajectory Generation

1 code implementation · 30 Nov 2022 · Wenqi Cui, Linbin Huang, Weiwei Yang, Baosen Zhang

Off-policy and Offline RL methods have been proposed to reduce the number of interactions with the physical environment by learning control policies from historical data.

LEMMA · Offline RL · +2
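
The abstract above refers to offline RL that learns control policies from logged historical data. A minimal sketch of that general idea, assuming a small discrete MDP and a hypothetical transition log (this is not the paper's trajectory-generation method), is tabular fitted Q-iteration:

```python
# Minimal offline RL sketch: fitted Q-iteration on a logged dataset of
# (state, action, reward, next_state) tuples, with no further environment
# interaction. Illustrative only; not the paper's trajectory-generation method.
import numpy as np

def fitted_q_iteration(dataset, n_states, n_actions, gamma=0.95, n_iters=200):
    """dataset: list of (s, a, r, s_next) tuples with integer states/actions."""
    q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        q_new = q.copy()
        # Tabular "regression": average Bellman targets per (s, a) seen in the log.
        targets = {}
        for s, a, r, s_next in dataset:
            targets.setdefault((s, a), []).append(r + gamma * q[s_next].max())
        for (s, a), ts in targets.items():
            q_new[s, a] = np.mean(ts)
        q = q_new
    return q.argmax(axis=1)  # greedy policy from the learned Q-values

# Hypothetical logged data from a 3-state, 2-action chain environment.
log = [(0, 1, 0.0, 1), (1, 1, 0.0, 2), (2, 0, 1.0, 2), (1, 0, 0.0, 0), (0, 0, 0.0, 0)]
print(fitted_q_iteration(log, n_states=3, n_actions=2))
```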

Deep Learning with Label Noise: A Hierarchical Approach

no code implementations · 28 May 2022 · Li Chen, Ningyuan Huang, Cong Mu, Hayden S. Helm, Kate Lytvynets, Weiwei Yang, Carey E. Priebe

Our hierarchical approach improves upon regular deep neural networks in learning with label noise.

Meta-Learning

Mental State Classification Using Multi-graph Features

no code implementations · 25 Feb 2022 · Guodong Chen, Hayden S. Helm, Kate Lytvynets, Weiwei Yang, Carey E. Priebe

We consider the problem of extracting features from passive, multi-channel electroencephalogram (EEG) devices for downstream inference tasks related to high-level mental states such as stress and cognitive load.

Classification · EEG · +4
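
One common, simple way to obtain graph-style features from multi-channel EEG windows (an assumed pipeline for illustration, not necessarily the paper's feature set) is to build a channel-by-channel correlation graph per window and summarize it:

```python
# Sketch: turn one windowed multi-channel EEG segment into simple graph features
# via channel-channel correlations. Not necessarily the paper's feature set.
import numpy as np

def correlation_graph_features(window, threshold=0.5):
    """window: array of shape (n_channels, n_samples) for one time window.

    Returns mean |correlation|, edge density above the threshold, and the
    per-channel degree in the thresholded graph.
    """
    corr = np.corrcoef(window)                 # (n_channels, n_channels)
    np.fill_diagonal(corr, 0.0)
    abs_corr = np.abs(corr)
    adjacency = (abs_corr >= threshold).astype(float)
    degrees = adjacency.sum(axis=1)
    n = len(corr)
    density = adjacency.sum() / (n * (n - 1))
    return np.concatenate([[abs_corr.sum() / (n * (n - 1)), density], degrees])

# Hypothetical 8-channel, 2-second window sampled at 128 Hz.
rng = np.random.default_rng(0)
window = rng.standard_normal((8, 256))
print(correlation_graph_features(window).shape)   # (2 + 8,) features per window
```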

A Frequency Domain Approach to Predict Power System Transients

1 code implementation · 1 Nov 2021 · Wenqi Cui, Weiwei Yang, Baosen Zhang

System topology and fault information are encoded by taking a multi-dimensional Fourier transform, allowing us to leverage the fact that the trajectories are sparse both in time and spatial frequencies.

Numerical Integration
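
A rough illustration of the sparsity idea in the abstract above (a sketch under assumed array shapes, not the paper's encoding or prediction model) is to take a 2-D FFT of a time-by-bus trajectory matrix and keep only the largest coefficients:

```python
# Sketch: compress a (time x bus) trajectory matrix by keeping only the largest
# 2-D Fourier coefficients, exploiting sparsity in temporal and spatial
# frequencies. Illustrative only; the paper's model is more involved.
import numpy as np

def sparsify_trajectory(trajectory, keep_fraction=0.05):
    """trajectory: array of shape (n_timesteps, n_buses). Returns a reconstruction."""
    spectrum = np.fft.fft2(trajectory)
    magnitudes = np.abs(spectrum).ravel()
    k = max(1, int(keep_fraction * magnitudes.size))
    cutoff = np.partition(magnitudes, -k)[-k]     # k-th largest magnitude
    sparse_spectrum = np.where(np.abs(spectrum) >= cutoff, spectrum, 0.0)
    return np.real(np.fft.ifft2(sparse_spectrum))

# Hypothetical damped oscillation across 10 buses over 200 time steps.
t = np.linspace(0, 1, 200)[:, None]
buses = np.arange(10)[None, :]
traj = np.exp(-3 * t) * np.sin(2 * np.pi * (5 * t + 0.1 * buses))
recon = sparsify_trajectory(traj)
print(np.max(np.abs(traj - recon)))   # reconstruction error from the kept 5% of coefficients
```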

Learning without gradient descent encoded by the dynamics of a neurobiological model

no code implementations · 16 Mar 2021 · Vivek Kurien George, Vikash Morar, Weiwei Yang, Jonathan Larson, Bryan Tower, Shweti Mahajan, Arkin Gupta, Christopher White, Gabriel A. Silva

The success of state-of-the-art machine learning is essentially all based on different variations of gradient descent algorithms that minimize some version of a cost or loss function.

BIG-bench Machine Learning
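
For contrast with the gradient-free approach the abstract describes, here is the baseline it refers to: plain gradient descent minimizing a loss function (a generic sketch, not code from the paper):

```python
# The baseline the abstract contrasts against: vanilla gradient descent on a
# loss function, here mean squared error for a linear model.
import numpy as np

def gradient_descent_least_squares(X, y, lr=0.01, n_steps=2000):
    """Minimize ||Xw - y||^2 / n by repeatedly stepping against the gradient."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(n_steps):
        grad = 2.0 / n * X.T @ (X @ w - y)   # gradient of the mean squared error
        w -= lr * grad                       # step opposite the gradient
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.standard_normal(100)
print(gradient_descent_least_squares(X, y))  # approximately recovers true_w
```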

Inducing a hierarchy for multi-class classification problems

no code implementations · 20 Feb 2021 · Hayden S. Helm, Weiwei Yang, Sujeeth Bharadwaj, Kate Lytvynets, Oriana Riva, Christopher White, Ali Geisa, Carey E. Priebe

In applications where categorical labels follow a natural hierarchy, classification methods that exploit the label structure often outperform those that do not.

Classification · Clustering · +2
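
A minimal illustration of exploiting a label hierarchy (a generic two-stage construction with a given coarse grouping, not the paper's method for inducing the hierarchy) is to predict a coarse group first and then a fine class within it:

```python
# Sketch of hierarchy-aware classification: predict a coarse group, then a fine
# class within the predicted group. Generic two-stage construction; the coarse
# grouping is assumed known here, unlike the induced hierarchy in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_hierarchical(X, y_fine, coarse_of):
    """coarse_of maps each fine label to its coarse group (>=2 fine labels per group assumed)."""
    y_coarse = np.array([coarse_of[label] for label in y_fine])
    top = LogisticRegression(max_iter=1000).fit(X, y_coarse)
    leaf = {g: LogisticRegression(max_iter=1000).fit(X[y_coarse == g], y_fine[y_coarse == g])
            for g in np.unique(y_coarse)}
    return top, leaf

def predict_hierarchical(top, leaf, X):
    groups = top.predict(X)
    return np.array([leaf[g].predict(x[None, :])[0] for x, g in zip(X, groups)])

# Hypothetical data: 4 fine classes grouped into 2 coarse groups.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5)) + np.repeat(np.eye(4, 5) * 3, 100, axis=0)
y = np.repeat(np.arange(4), 100)
coarse_of = {0: "A", 1: "A", 2: "B", 3: "B"}
top, leaf = fit_hierarchical(X, y, coarse_of)
print((predict_hierarchical(top, leaf, X) == y).mean())   # training accuracy
```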

A partition-based similarity for classification distributions

no code implementations · 12 Nov 2020 · Hayden S. Helm, Ronak D. Mehta, Brandon Duderstadt, Weiwei Yang, Christopher M. White, Ali Geisa, Joshua T. Vogelstein, Carey E. Priebe

Herein we define a measure of similarity between classification distributions that is both principled from the perspective of statistical pattern recognition and useful from the perspective of machine learning practitioners.

Classification · General Classification · +2
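
As a loose illustration of comparing two classifiers' predictive distributions on shared reference data (this is not the partition-based similarity defined in the paper), one could average a divergence between their posterior outputs:

```python
# Loose illustration: compare two classifiers by averaging the total variation
# distance between their predicted class-probability vectors on a reference set.
# This is NOT the partition-based similarity defined in the paper.
import numpy as np

def mean_total_variation(probs_a, probs_b):
    """probs_a, probs_b: arrays of shape (n_samples, n_classes), rows summing to 1."""
    return 0.5 * np.abs(np.asarray(probs_a) - np.asarray(probs_b)).sum(axis=1).mean()

# Hypothetical posteriors from two models on the same 5 reference points.
p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5], [0.7, 0.3], [0.1, 0.9]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.6, 0.4], [0.2, 0.8]])
print(mean_total_variation(p1, p2))  # 0 = identical predictive behaviour, 1 = maximally different
```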

Multiple Network Embedding for Anomaly Detection in Time Series of Graphs

1 code implementation · 23 Aug 2020 · Guodong Chen, Jesús Arroyo, Avanti Athreya, Joshua Cape, Joshua T. Vogelstein, Youngser Park, Chris White, Jonathan Larson, Weiwei Yang, Carey E. Priebe

We examine two related, complementary inference tasks: the detection of anomalous graphs within a time series, and the detection of temporally anomalous vertices.

Methodology
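
A simple spectral-embedding sketch in the spirit of the two tasks named in the abstract above (not the paper's multiple-network embedding) is to embed each adjacency matrix separately, align consecutive embeddings, and score how far graphs and vertices move over time:

```python
# Sketch: rank-d adjacency spectral embedding per graph, Procrustes alignment of
# consecutive embeddings, then change scores for graphs and vertices. Simplified;
# not the paper's multiple-network-embedding method.
import numpy as np

def spectral_embedding(adjacency, d=2):
    """Rank-d embedding from the top-d eigenpairs of a symmetric adjacency matrix."""
    vals, vecs = np.linalg.eigh(adjacency)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def procrustes_align(curr, prev):
    """Rotate `curr` onto `prev` to remove the embedding's rotation/sign ambiguity."""
    u, _, vt = np.linalg.svd(curr.T @ prev)
    return curr @ u @ vt

def change_scores(graphs, d=2):
    """graphs: list of (n, n) symmetric adjacency matrices on a shared vertex set."""
    embeddings = [spectral_embedding(a, d) for a in graphs]
    graph_scores, vertex_scores = [], []
    for prev, curr in zip(embeddings, embeddings[1:]):
        moved = np.linalg.norm(procrustes_align(curr, prev) - prev, axis=1)
        vertex_scores.append(moved)        # large values flag temporally anomalous vertices
        graph_scores.append(moved.mean())  # large values flag anomalous graphs
    return np.array(graph_scores), np.array(vertex_scores)
```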

Omnidirectional Transfer for Quasilinear Lifelong Learning

1 code implementation · 27 Apr 2020 · Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe

But striving to avoid forgetting sets the goal unnecessarily low: the goal of lifelong learning, whether biological or artificial, should be to improve performance on all tasks (including past and future) with any new data.

Federated Learning · Transfer Learning
