Search Results for author: Yo Joong Choe

Found 12 papers, 9 papers with code

Combining Evidence Across Filtrations

no code implementations · 15 Feb 2024 · Yo Joong Choe, Aaditya Ramdas

An e-process quantifies the accumulated evidence against a composite null hypothesis over a sequence of outcomes.

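To make the definition concrete, here is a minimal sketch (illustrative only; the paper's contribution is combining e-processes built in different filtrations): for a simple null, the running product of likelihood ratios is an e-process, and it grows as evidence against the null accumulates.

```python
def eprocess_coin(outcomes, p_alt=0.75, p_null=0.5):
    """Accumulate evidence against the simple null 'P(heads) = p_null'
    by multiplying likelihood ratios against a fixed alternative.
    For a simple null this product is a nonnegative martingale, hence
    an e-process; large values are evidence against the null."""
    e, path = 1.0, []
    for x in outcomes:  # x in {0, 1}: tails/heads
        num = p_alt if x else 1 - p_alt
        den = p_null if x else 1 - p_null
        e *= num / den
        path.append(e)
    return path

# Mostly-heads data inflates the evidence against a fair coin.
print(eprocess_coin([1, 1, 0, 1, 1])[-1])  # → 2.53125
```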

The Linear Representation Hypothesis and the Geometry of Large Language Models

1 code implementation · 7 Nov 2023 · Kiho Park, Yo Joong Choe, Victor Veitch

Using this causal inner product, we show how to unify all notions of linear representation.

Tasks: counterfactual, Sentence
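A hypothetical sketch of the kind of object involved: an inner product of the form ⟨x, y⟩ = xᵀA y, here taking A to be the inverse sample covariance of toy "unembedding" vectors. This whitening choice is an illustrative assumption, not necessarily the paper's exact construction.

```python
import numpy as np

def causal_ip(x, y, unembed):
    """Inner product <x, y> = x^T A y with A the inverse sample
    covariance of the rows of `unembed` (toy unembedding vectors).
    An illustrative candidate, not the paper's definitive choice."""
    A = np.linalg.inv(np.cov(unembed, rowvar=False))
    return float(x @ A @ y)

rng = np.random.default_rng(0)
U = rng.normal(size=(50, 3))            # 50 toy unembedding vectors in R^3
x, y = rng.normal(size=3), rng.normal(size=3)
print(causal_ip(x, y, U))               # symmetric: equals causal_ip(y, x, U)
```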

Counterfactually Comparing Abstaining Classifiers

1 code implementation · NeurIPS 2023 · Yo Joong Choe, Aditya Gangrade, Aaditya Ramdas

When evaluating black-box abstaining classifiers, however, we lack a principled approach that accounts for what the classifier would have predicted on its abstentions.

Tasks: Causal Inference, counterfactual (+1)
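One way to see the difficulty: without assumptions about what would have happened on the abstained inputs, a score such as accuracy is only partially identified. A minimal sketch of Manski-style worst/best-case bounds (illustrative only, not the paper's estimator):

```python
def accuracy_bounds(preds, labels):
    """Bound the counterfactual accuracy of an abstaining classifier:
    count every abstention (pred is None) as wrong for the lower bound
    and as right for the upper bound. Illustrative identification
    bounds, not the paper's evaluation method."""
    n = len(preds)
    correct = sum(p == y for p, y in zip(preds, labels) if p is not None)
    abstained = sum(p is None for p in preds)
    return correct / n, (correct + abstained) / n

print(accuracy_bounds([1, None, 0, 1], [1, 0, 0, 0]))  # → (0.5, 0.75)
```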

Comparing Sequential Forecasters

1 code implementation · 30 Sep 2021 · Yo Joong Choe, Aaditya Ramdas

Consider two forecasters, each making a single prediction for a sequence of events over time.

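As a toy illustration of the quantity being compared (a plain point estimate; the paper constructs time-uniform confidence sequences for such average score differentials):

```python
def avg_brier_diff(p1, p2, outcomes):
    """Running average difference in Brier scores between two
    probability forecasters of binary outcomes. Negative values mean
    forecaster 1 has been more accurate so far. A plain point
    estimate of the average score differential, nothing more."""
    total, path = 0.0, []
    for t, (a, b, y) in enumerate(zip(p1, p2, outcomes), start=1):
        total += (a - y) ** 2 - (b - y) ** 2
        path.append(total / t)
    return path

# Forecaster 1 is sharper on two events that both occur.
print(avg_brier_diff([0.9, 0.8], [0.5, 0.5], [1, 1])[-1])  # negative: forecaster 1 ahead
```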

An Empirical Study of Invariant Risk Minimization

1 code implementation · 10 Apr 2020 · Yo Joong Choe, Jiyeon Ham, Kyubyong Park

Invariant risk minimization (IRM) (Arjovsky et al., 2019) is a recently proposed framework designed for learning predictors that are invariant to spurious correlations across different training environments.

Tasks: Text Classification
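A minimal sketch of the IRMv1 penalty for a linear featurizer with squared loss, following the formulation in Arjovsky et al. (2019): per environment, square the gradient of the risk with respect to a scalar classifier w multiplying the featurizer's output, evaluated at w = 1 (the function and variable names here are mine):

```python
import numpy as np

def irmv1_penalty(envs, phi):
    """IRMv1-style invariance penalty for squared loss.
    envs: list of (X, y) per training environment; phi: linear
    featurizer weights. For each environment, take the gradient of
    the risk E[(w * f - y)^2] w.r.t. the scalar classifier w at
    w = 1 and square it; sum over environments."""
    penalty = 0.0
    for X, y in envs:
        f = X @ phi                         # featurizer output
        grad = np.mean(2.0 * (f - y) * f)   # d/dw E[(w f - y)^2] at w = 1
        penalty += grad ** 2
    return penalty
```

A predictor that already fits an environment exactly contributes zero penalty there, which is what "invariant across environments" demands of the optimal classifier on top of the features.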

word2word: A Collection of Bilingual Lexicons for 3,564 Language Pairs

2 code implementations · LREC 2020 · Yo Joong Choe, Kyubyong Park, Dongwoo Kim

We wrap our dataset and model in an easy-to-use Python library, which supports downloading and retrieving top-k word translations in any of the supported language pairs as well as computing top-k word translations for custom parallel corpora.

Tasks: Sentence, Translation
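The library's documented entry point is along the lines of `Word2word("en", "fr")("apple")`, which returns the top translations; since that requires downloading a lexicon, the self-contained sketch below mimics only the top-k retrieval step over a toy co-occurrence table (the data and function name are hypothetical, and the real ranking uses corrected co-occurrence statistics rather than raw counts):

```python
from collections import Counter

def topk_translations(cooc_counts, src_word, k=5):
    """Return the k highest-count target-side candidates for
    src_word from a toy co-occurrence table. A hypothetical stand-in
    for word2word-style top-k lookup, not the library's algorithm."""
    return [w for w, _ in Counter(cooc_counts.get(src_word, {})).most_common(k)]

toy = {"apple": {"pomme": 7, "fruit": 3, "tarte": 1}}
print(topk_translations(toy, "apple", k=2))  # → ['pomme', 'fruit']
```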

Discovery of Natural Language Concepts in Individual Units of CNNs

1 code implementation · ICLR 2019 · Seil Na, Yo Joong Choe, Dong-Hyun Lee, Gunhee Kim

Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret.

Tasks: Concept Alignment, General Classification (+1)

Local White Matter Architecture Defines Functional Brain Dynamics

no code implementations · 22 Apr 2018 · Yo Joong Choe, Sivaraman Balakrishnan, Aarti Singh, Jean M. Vettel, Timothy Verstynen

If communication efficiency is fundamentally constrained by the integrity along the entire length of a white matter bundle, then variability in the functional dynamics of brain networks should be associated with variability in the local connectome.

Tasks: Variable Selection
