Search Results for author: Alexander Soen

Found 9 papers, 3 papers with code

Linking Across Data Granularity: Fitting Multivariate Hawkes Processes to Partially Interval-Censored Data

1 code implementation • 3 Nov 2021 • Pio Calderon, Alexander Soen, Marian-Andrei Rizoiu

The multivariate Hawkes process (MHP) is widely used for analyzing data streams that interact with each other, where events generate new events within their own dimension (via self-excitation) or across different dimensions (via cross-excitation).
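The self- and cross-excitation structure described above can be sketched with an exponential-kernel MHP intensity. All parameter names below (`mu`, `alpha`, `beta`) are illustrative choices for a minimal sketch, not taken from the paper:

```python
import numpy as np

def mhp_intensity(t, history, mu, alpha, beta):
    """Conditional intensity of a multivariate Hawkes process with
    exponential kernels. `history` is a list of (time, dim) events;
    `mu` (baseline rates), `alpha` (excitation matrix), and `beta`
    (decay rate) are illustrative parameters, not from the paper."""
    lam = mu.copy()
    for s, j in history:
        if s < t:
            # alpha[i, j]: influence of past events in dim j on dim i
            lam += alpha[:, j] * beta * np.exp(-beta * (t - s))
    return lam

mu = np.array([0.2, 0.1])
alpha = np.array([[0.3, 0.1],   # diagonal: self-excitation
                  [0.2, 0.4]])  # off-diagonal: cross-excitation
beta = 1.0
events = [(0.5, 0), (1.0, 1)]
lam = mhp_intensity(2.0, events, mu, alpha, beta)
```

Diagonal entries of `alpha` capture self-excitation and off-diagonal entries cross-excitation; the fitting problem the paper addresses arises when some dimensions are only observed as interval-censored counts rather than event times.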

Fair Densities via Boosting the Sufficient Statistics of Exponential Families

1 code implementation • 1 Dec 2020 • Alexander Soen, Hisham Husain, Richard Nock

Furthermore, when the weak learners are specified to be decision trees, the sufficient statistics of the learned distribution can be examined to provide clues on sources of (un)fairness.

Fairness

UNIPoint: Universally Approximating Point Processes Intensities

no code implementations • 28 Jul 2020 • Alexander Soen, Alexander Mathews, Daniel Grixti-Cheng, Lexing Xie

The proof connects the well-known Stone-Weierstrass Theorem for function approximation, the uniform density of non-negative continuous functions under a transfer function, the formulation of the parameters of a piecewise-continuous function as a dynamical system, and a recurrent neural network implementation for capturing the dynamics.

Open-Ended Question Answering • Point Processes +1

Interval-censored Hawkes processes

no code implementations • 16 Apr 2021 • Marian-Andrei Rizoiu, Alexander Soen, Shidi Li, Pio Calderon, Leanne Dong, Aditya Krishna Menon, Lexing Xie

We propose the multi-impulse exogenous function, for when the exogenous events are observed as event times, and the latent homogeneous Poisson process exogenous function, for when the exogenous events are presented as interval-censored volumes.

Point Processes

On the Variance of the Fisher Information for Deep Learning

no code implementations • NeurIPS 2021 • Alexander Soen, Ke Sun

In the realm of deep learning, the Fisher information matrix (FIM) gives novel insights and useful tools to characterize the loss landscape, perform second-order optimization, and build geometric learning theories.

Learning Theory

Fair Wrapping for Black-box Predictions

1 code implementation • 31 Jan 2022 • Alexander Soen, Ibrahim Alabdulmohsin, Sanmi Koyejo, Yishay Mansour, Nyalleng Moorosi, Richard Nock, Ke Sun, Lexing Xie

We introduce a new family of techniques to post-process ("wrap") a black-box classifier in order to reduce its bias.
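As a rough illustration of post-processing a black-box classifier, the sketch below applies group-specific decision thresholds to a frozen score function. This is a generic thresholding sketch for intuition only, not the wrapping technique introduced in the paper:

```python
import numpy as np

def wrap_classifier(predict_proba, group_thresholds):
    """Illustrative post-processing 'wrapper': keep the black-box
    score function frozen and apply a group-specific decision
    threshold on top of it. `group_thresholds` is a hypothetical
    parameter, not part of the paper's method."""
    def wrapped(x, group):
        return int(predict_proba(x) >= group_thresholds[group])
    return wrapped

# Toy black-box scorer (a sigmoid over a 1-D input).
black_box = lambda x: 1.0 / (1.0 + np.exp(-x))
clf = wrap_classifier(black_box, {"a": 0.5, "b": 0.4})
```

The black box is never retrained; only its outputs are transformed, which is the defining property of post-processing approaches to bias reduction.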

Fairness

Sampled Transformer for Point Sets

no code implementations • 28 Feb 2023 • Shidi Li, Christian Walder, Alexander Soen, Lexing Xie, Miaomiao Liu

The sparse transformer can reduce the computational complexity of the self-attention layers to $O(n)$, whilst still being a universal approximator of continuous sequence-to-sequence functions.

Inductive Bias

Tempered Calculus for ML: Application to Hyperbolic Model Embedding

no code implementations • 6 Feb 2024 • Richard Nock, Ehsan Amid, Frank Nielsen, Alexander Soen, Manfred K. Warmuth

Most mathematical distortions used in ML are fundamentally integral in nature: $f$-divergences, Bregman divergences, (regularized) optimal transport distances, integral probability metrics, geodesic distances, etc.
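One concrete instance of such a distortion is the Bregman divergence, generated by a convex function f. The sketch below is standard background on the definition, not the paper's construction:

```python
import numpy as np

def bregman(f, grad_f, p, q):
    """Bregman divergence D_f(p, q) = f(p) - f(q) - <grad f(q), p - q>,
    i.e. the gap between f at p and its linearization around q."""
    return f(p) - f(q) - np.dot(grad_f(q), p - q)

# The generator f(x) = ||x||^2 recovers squared Euclidean distance.
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2.0 * x
p, q = np.array([1.0, 2.0]), np.array([0.0, 1.0])
d = bregman(f, grad_f, p, q)  # equals ||p - q||^2 = 2.0
```

Swapping the generator f (e.g. negative entropy instead of the squared norm) yields other familiar distortions such as the KL divergence, which is what makes this family a useful running example for the integral-based viewpoint above.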

Tradeoffs of Diagonal Fisher Information Matrix Estimators

no code implementations • 8 Feb 2024 • Alexander Soen, Ke Sun

The Fisher information matrix characterizes the local geometry in the parameter space of neural networks.
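A minimal Monte Carlo sketch of estimating Fisher information in a 1-D toy model where the exact value is known. This is illustrative background on the quantity being estimated, not the estimators analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x ~ N(theta, 1), so the score function is
# d/dtheta log p(x | theta) = x - theta, and the exact
# Fisher information is E[score^2] = 1.
theta = 0.5

def score(x, theta):
    return x - theta

# Monte Carlo estimate of F = E[score^2] from model samples.
samples = rng.normal(theta, 1.0, size=100_000)
F_hat = np.mean(score(samples, theta) ** 2)
```

For neural networks the same expectation is taken over per-parameter scores, and the tradeoffs the paper studies concern how accurately the diagonal of that matrix can be estimated from finite samples.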
