Search Results for author: Jennifer Tang

Found 6 papers, 2 papers with code

Estimating True Beliefs from Declared Opinions

no code implementations · 26 Oct 2023 · Jennifer Tang, Aviv Adler, Amir Ajorlou, Ali Jadbabaie

To address this, Jadbabaie et al. formulated the interacting Pólya urn model of opinion dynamics under social pressure, studied it on complete-graph social networks using an aggregate estimator, and found that this estimator converges to the inherent beliefs unless majority pressure pushes the network to consensus.

Stochastic Opinion Dynamics under Social Pressure in Arbitrary Networks

no code implementations · 18 Aug 2023 · Jennifer Tang, Aviv Adler, Amir Ajorlou, Ali Jadbabaie

To study this, Jadbabaie et al. introduced the interacting Pólya urn model, in which each agent has two kinds of opinion: an inherent belief, which is fixed and hidden from the other agents, and a declared opinion, which at each step is sampled from a distribution that depends on the agent's inherent belief and on her neighbors' past declared opinions (the social pressure component) and is then communicated to her neighbors.
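
The dynamics described above lend themselves to a short simulation. Below is a minimal sketch under assumed specifics (a ring network, binary opinions, and a declaration probability proportional to pooled neighbor counts plus a constant bias toward the agent's inherent belief); these choices are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 20, 500
# Arbitrary network: a ring here, since the paper allows general graphs.
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

theta = rng.integers(0, 2, size=n)   # fixed inherent beliefs in {0, 1}, hidden from others
counts = np.ones((n, 2))             # urn-style counts of each agent's own past declarations

for t in range(T):
    declared = np.empty(n, dtype=int)
    for i in range(n):
        pooled = counts[nbrs[i]].sum(axis=0)   # neighbors' past declarations (social pressure)
        weights = pooled.copy()
        weights[theta[i]] += 2.0               # assumed constant pull toward the inherent belief
        declared[i] = int(rng.random() < weights[1] / weights.sum())
    for i in range(n):
        counts[i, declared[i]] += 1.0          # record this round's declarations

print(np.round(counts[:, 1] / counts.sum(axis=1), 2))  # declared-opinion-1 frequency per agent
print(theta)                                           # versus the hidden inherent beliefs
```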

Data-Driven Blind Synchronization and Interference Rejection for Digital Communication Signals

1 code implementation · 11 Sep 2022 · Alejandro Lancho, Amir Weiss, Gary C. F. Lee, Jennifer Tang, Yuheng Bu, Yury Polyanskiy, Gregory W. Wornell

We study the potential of data-driven deep learning methods for separation of two communication signals from an observation of their mixture.
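
As a concrete picture of the setting, here is a minimal sketch of a single-channel mixture of two digital signals; the toy QPSK waveforms, the interference amplitude `kappa`, and the do-nothing baseline are illustrative assumptions, not the RF datasets or learned models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpsk(n_sym, rng):
    """Unit-power QPSK symbols (a toy stand-in for a communication signal)."""
    bits = rng.integers(0, 2, size=(n_sym, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

n = 4096
s = qpsk(n, rng)                       # signal of interest
b = qpsk(n, rng) * np.exp(1j * 0.3)    # interference with an unknown phase offset
kappa = 1.0                            # interference amplitude (0 dB relative to s)

y = s + kappa * b                      # the single-channel observation to be separated

# Do-nothing baseline: output the mixture itself as the estimate of s.
print(f"baseline MSE without separation: {np.mean(np.abs(y - s) ** 2):.3f}")  # ~ kappa**2
```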

Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation

1 code implementation · 22 Aug 2022 · Gary C. F. Lee, Amir Weiss, Alejandro Lancho, Jennifer Tang, Yuheng Bu, Yury Polyanskiy, Gregory W. Wornell

We study the problem of single-channel source separation (SCSS), and focus on cyclostationary signals, which arise in a variety of application domains.
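
To make the cyclostationarity concrete, here is a toy illustration (an assumption-laden sketch, not the paper's setup): pulse-shaping i.i.d. BPSK symbols at L samples per symbol makes the instantaneous power of the waveform periodic with period L, producing a spectral line at cycle frequency 1/L that a separator can exploit.

```python
import numpy as np

rng = np.random.default_rng(2)

L, n_sym = 8, 2048                                  # samples per symbol, number of symbols
pulse = np.sin(np.pi * (np.arange(L) + 0.5) / L)    # half-sine pulse of length L
symbols = rng.choice([-1.0, 1.0], size=n_sym)       # i.i.d. BPSK symbols

x = (symbols[:, None] * pulse[None, :]).ravel()     # pulse-shaped waveform
x += 0.1 * rng.standard_normal(x.size)              # a little additive noise

# Look for spectral lines in the instantaneous power |x|^2 (cyclostationarity signature).
p2 = np.abs(x) ** 2
spec = np.abs(np.fft.rfft(p2 - p2.mean()))
freqs = np.fft.rfftfreq(p2.size)                    # cycle frequencies in cycles/sample
peak = freqs[int(np.argmax(spec))]
print(f"strongest cycle frequency ~ {peak:.4f}  (symbol rate 1/L = {1 / L:.4f})")
```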

Generative Visual Rationales

no code implementations · 4 Apr 2018 · Jarrel Seah, Jennifer Tang, Andy Kitchen, Jonathan Seah

For each prediction, we generate visual rationales by optimizing a latent representation to minimize the prediction of disease while constrained by a similarity measure in image space.

Thinking like a machine — generating visual rationales through latent space optimization

no code implementations · ICLR 2018 · Jarrel Seah, Jennifer Tang, Andy Kitchen, Jonathan Seah

For each prediction, we generate visual rationales for positive classifications by optimizing a latent representation to minimize the probability of disease while constrained by a similarity measure in image space.
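
A hedged sketch of the latent-space optimization described in the two entries above: adjust a latent code z so that the decoded image lowers the classifier's disease probability while a pixel-space penalty keeps it close to the original, and read the pixel-wise difference as the visual rationale. The tiny linear decoder and classifier below are stand-ins, not the paper's trained networks.

```python
import torch

torch.manual_seed(0)

latent_dim, side = 16, 32
img_dim = side * side

# Stand-in decoder G: z -> image, and classifier f: image -> P(disease).
G = torch.nn.Sequential(torch.nn.Linear(latent_dim, img_dim), torch.nn.Sigmoid())
f = torch.nn.Sequential(torch.nn.Linear(img_dim, 1), torch.nn.Sigmoid())
for p in list(G.parameters()) + list(f.parameters()):
    p.requires_grad_(False)            # both networks stay fixed; only z is optimized

x = torch.rand(1, img_dim)             # the input "image" (flattened), illustrative
z = torch.zeros(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
lam = 10.0                             # weight of the image-space similarity constraint

for step in range(200):
    opt.zero_grad()
    x_hat = G(z)
    # Push the predicted disease probability down while staying close to x.
    loss = f(x_hat).squeeze() + lam * torch.mean((x_hat - x) ** 2)
    loss.backward()
    opt.step()

rationale = (x - G(z)).detach().reshape(side, side)  # difference map = visual rationale
print(float(rationale.abs().mean()))
```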
