Search Results for author: Felix X. Yu

Found 24 papers, 9 papers with code

Designing Category-Level Attributes for Discriminative Visual Recognition

no code implementations CVPR 2013 Felix X. Yu, Liangliang Cao, Rogerio S. Feris, John R. Smith, Shih-Fu Chang

In this paper, we propose a novel formulation to automatically design discriminative "category-level attributes", which can be efficiently encoded by a compact category-attribute matrix.

Attribute Transfer Learning +1
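A minimal sketch of the category-attribute idea, with a hypothetical hand-set matrix A (the paper's contribution is learning such a matrix so that its rows are discriminative):

    import numpy as np

    # Hypothetical example: k = 3 categories described by m = 3 attributes.
    # A[i, j] = strength of attribute j for category i.
    A = np.array([[1.0, 0.0, 1.0],   # category 0
                  [0.0, 1.0, 1.0],   # category 1
                  [1.0, 1.0, 0.0]])  # category 2

    def classify(attr_scores):
        # attr_scores: m attribute-classifier outputs for one image;
        # score each category by how well its attribute signature matches.
        return int(np.argmax(A @ attr_scores))

    print(classify(np.array([0.9, 0.1, 0.8])))  # -> 0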

$\propto$SVM for learning with label proportions

no code implementations 4 Jun 2013 Felix X. Yu, Dong Liu, Sanjiv Kumar, Tony Jebara, Shih-Fu Chang

We study the problem of learning with label proportions in which the training data is provided in groups and only the proportion of each class in each group is known.

On Learning from Label Proportions

1 code implementation 24 Feb 2014 Felix X. Yu, Krzysztof Choromanski, Sanjiv Kumar, Tony Jebara, Shih-Fu Chang

Learning from Label Proportions (LLP) is a learning setting, where the training data is provided in groups, or "bags", and only the proportion of each class in each bag is known.

Marketing
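Both label-proportion entries above share the same setting; below is a minimal numpy sketch of the simpler proportion-matching idea they build on (the ∝SVM instead treats the unknown instance labels as latent variables in a large-margin objective; names here are hypothetical):

    import numpy as np

    def proportion_loss(probs, bag_ids, bag_proportions):
        # probs: (n, k) per-instance class probabilities from a model
        # bag_ids: (n,) bag index of each instance
        # bag_proportions: (B, k) known class proportions per bag
        loss = 0.0
        for b, target in enumerate(bag_proportions):
            pred = probs[bag_ids == b].mean(axis=0)  # predicted bag proportion
            loss += np.sum((pred - target) ** 2)     # match the known proportion
        return loss / len(bag_proportions)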

Circulant Binary Embedding

no code implementations 13 May 2014 Felix X. Yu, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang

To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix.
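The key computational point is that a circulant matrix-vector product is a circular convolution, so it costs O(d log d) via the FFT instead of O(d^2). A minimal sketch (the sign flips s follow the paper's use of a random diagonal to reduce correlation; exact details may differ):

    import numpy as np

    def circulant_binary_embedding(x, c, s):
        # c: first column of the circulant matrix; s: random +/-1 flips.
        proj = np.fft.ifft(np.fft.fft(c) * np.fft.fft(s * x)).real
        return np.sign(proj)

    rng = np.random.default_rng(0)
    d = 8
    x, c = rng.normal(size=d), rng.normal(size=d)
    s = rng.choice([-1.0, 1.0], size=d)
    print(circulant_binary_embedding(x, c, s))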

Video Event Detection by Inferring Temporal Instance Labels

no code implementations CVPR 2014 Kuan-Ting Lai, Felix X. Yu, Ming-Syan Chen, Shih-Fu Chang

To solve this problem, we propose a large-margin formulation which treats the instance labels as hidden latent variables, and simultaneously infers the instance labels as well as the instance-level classification model.

Event Detection

An exploration of parameter redundancy in deep networks with circulant projections

no code implementations ICCV 2015 Yu Cheng, Felix X. Yu, Rogerio S. Feris, Sanjiv Kumar, Alok Choudhary, Shih-Fu Chang

We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection.
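The same FFT trick as in Circulant Binary Embedding applies here: a d x d dense layer stores d^2 weights, while a circulant layer stores only the d-vector r that defines it. A hedged sketch (bias and nonlinearity handling are illustrative):

    import numpy as np

    def circulant_fc(x, r, bias):
        # O(d log d) replacement for a dense d x d projection W @ x.
        return np.fft.ifft(np.fft.fft(r) * np.fft.fft(x)).real + bias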

Compact Nonlinear Maps and Circulant Extensions

no code implementations 12 Mar 2015 Felix X. Yu, Sanjiv Kumar, Henry Rowley, Shih-Fu Chang

This leads to much more compact maps without hurting performance.

On Binary Embedding using Circulant Matrices

no code implementations 20 Nov 2015 Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang

To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix.

Fast Orthogonal Projection Based on Kronecker Product

no code implementations ICCV 2015 Xu Zhang, Felix X. Yu, Ruiqi Guo, Sanjiv Kumar, Shengjin Wang, Shih-Fu Chang

We propose a family of structured matrices to speed up orthogonal projections for high-dimensional data commonly seen in computer vision applications.

Image Retrieval Quantization
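A sketch of why a Kronecker-structured projection is fast: (A kron B) @ vec(X) can be computed with two small matrix multiplications rather than one huge one. Dimensions below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 16, 16                                  # input dim d = m * n
    A = np.linalg.qr(rng.normal(size=(m, m)))[0]   # small orthogonal factors
    B = np.linalg.qr(rng.normal(size=(n, n)))[0]

    x = rng.normal(size=m * n)
    y_naive = np.kron(A, B) @ x                    # O(d^2) dense projection
    y_fast = (A @ x.reshape(m, n) @ B.T).ravel()   # O(d (m + n)) structured
    print(np.allclose(y_naive, y_fast))            # True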

Federated Learning: Strategies for Improving Communication Efficiency

no code implementations ICLR 2018 Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon

We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model.

Federated Learning Quantization
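A minimal sketch of the server-side aggregation described above, in the style of federated averaging (the paper's actual contribution is compressing the communicated updates, which this sketch omits):

    import numpy as np

    def aggregate(client_updates, client_sizes):
        # Average client updates, weighting each client by its data size.
        w = np.asarray(client_sizes, dtype=float)
        w /= w.sum()
        return sum(wi * ui for wi, ui in zip(w, client_updates))

    # new_global_model = old_global_model + aggregate(updates, sizes)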

Orthogonal Random Features

no code implementations NeurIPS 2016 Felix X. Yu, Ananda Theertha Suresh, Krzysztof Choromanski, Daniel Holtmann-Rice, Sanjiv Kumar

We present an intriguing discovery related to Random Fourier Features: in Gaussian kernel approximation, replacing the random Gaussian matrix by a properly scaled random orthogonal matrix significantly decreases kernel approximation error.
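A sketch of the Orthogonal Random Features construction under the usual Gaussian-kernel setup (d and sigma are hypothetical): draw a random orthogonal matrix, then rescale its rows so their norms match the chi-distributed norms of Gaussian rows:

    import numpy as np

    rng = np.random.default_rng(0)
    d, sigma = 64, 1.0

    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))         # random orthogonal
    S = np.linalg.norm(rng.normal(size=(d, d)), axis=1)  # chi(d) row norms
    W = (S[:, None] * Q) / sigma                         # ORF matrix

    b = rng.uniform(0.0, 2.0 * np.pi, size=d)
    z = lambda x: np.sqrt(2.0 / d) * np.cos(W @ x + b)   # k(x, y) ~= z(x) @ z(y)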

Distributed Mean Estimation with Limited Communication

no code implementations ICML 2017 Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, H. Brendan McMahan

Motivated by the need for distributed learning and optimization algorithms with low communication cost, we study communication efficient algorithms for distributed mean estimation.

Quantization
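One building block studied in this line of work is stochastic binary quantization: send two scalars plus one bit per coordinate, and the dequantized vector stays unbiased. A minimal sketch (the paper's refinements, e.g. random rotations and variable-length coding, are omitted):

    import numpy as np

    def stochastic_binary_quantize(x, rng):
        x_min, x_max = x.min(), x.max()
        p = (x - x_min) / (x_max - x_min + 1e-12)  # prob. of sending x_max
        bits = rng.random(x.shape) < p             # one bit per coordinate
        return np.where(bits, x_max, x_min)        # E[output] = x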

Learning Discriminative and Transformation Covariant Local Feature Detectors

1 code implementation CVPR 2017 Xu Zhang, Felix X. Yu, Svebor Karaman, Shih-Fu Chang

Specifically, we extend the covariant constraint proposed by Lenc and Vedaldi by defining the concepts of "standard patch" and "canonical feature" and leverage these to train a novel robust covariant detector.

Image Retrieval

Learning Spread-out Local Feature Descriptors

2 code implementations ICCV 2017 Xu Zhang, Felix X. Yu, Sanjiv Kumar, Shih-Fu Chang

We propose a simple, yet powerful regularization technique that can be used to significantly improve both the pairwise and triplet losses in learning local feature descriptors.
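A sketch of the spread-out idea: descriptors of non-matching pairs should behave like uniformly spread unit vectors, whose dot products have mean 0 and second moment 1/d. The regularizer below penalizes deviations from both (a paraphrase of the paper's construction, not its exact form):

    import numpy as np

    def spread_out_reg(d1, d2):
        # d1, d2: (n, d) L2-normalized descriptors of non-matching pairs.
        dots = np.sum(d1 * d2, axis=1)
        m1, m2 = dots.mean(), (dots ** 2).mean()
        return m1 ** 2 + max(0.0, m2 - 1.0 / d1.shape[1])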

Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling

1 code implementation 26 Jun 2018 Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi, Felix X. Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, Sanjiv Kumar

Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1-3x) compared to the previous state-of-the-art methods.

Extreme Multi-Label Classification Multi-Label Learning +1
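A hedged sketch of gradient unrolling for this problem: fix a small number of iterations of a sparse decoder as the reconstruction network, then train the measurement matrix A by backpropagating through those steps (the paper unrolls an l1-minimization decoder; the thresholding decoder below is a simplified stand-in, and an autodiff framework would handle the training):

    import numpy as np

    def unrolled_decoder(y, A, steps=5, step_size=0.1, sparsity=0.05):
        # Recover a sparse x from measurements y = A @ x.
        x = A.T @ y                                    # initial estimate
        for _ in range(steps):
            x = x + step_size * A.T @ (y - A @ x)      # gradient step
            thresh = np.quantile(np.abs(x), 1 - sparsity)
            x = np.where(np.abs(x) >= thresh, x, 0.0)  # keep largest entries
        return x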

AdaCliP: Adaptive Clipping for Private SGD

1 code implementation 20 Aug 2019 Venkatadheeraj Pichapati, Ananda Theertha Suresh, Felix X. Yu, Sashank J. Reddi, Sanjiv Kumar

Motivated by this, differentially private stochastic gradient descent (SGD) algorithms for training machine learning models have been proposed.

BIG-bench Machine Learning Privacy Preserving
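For context, a minimal sketch of the baseline DP-SGD step that AdaCliP refines: clip each per-example gradient, average, and add Gaussian noise. (AdaCliP's contribution, adapting the clipping transform per coordinate to shrink the added noise, is not shown.)

    import numpy as np

    def private_gradient(per_example_grads, clip_norm, noise_mult, rng):
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]       # bound each example
        avg = np.mean(clipped, axis=0)
        noise = rng.normal(scale=noise_mult * clip_norm / len(clipped),
                           size=avg.shape)           # calibrated Gaussian noise
        return avg + noise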

Pre-training Tasks for Embedding-based Large-scale Retrieval

no code implementations ICLR 2020 Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, Sanjiv Kumar

We consider the large-scale query-document retrieval problem: given a query (e.g., a question), return the set of relevant documents (e.g., paragraphs containing the answer) from a large document corpus.

Information Retrieval Link Prediction +1
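At inference time, embedding-based retrieval reduces to an inner-product search between a query embedding and precomputed document embeddings (the paper's focus is pre-training the two-tower encoders that produce them). A minimal sketch with hypothetical inputs:

    import numpy as np

    def retrieve(query_emb, doc_embs, k=10):
        scores = doc_embs @ query_emb   # inner-product relevance
        return np.argsort(-scores)[:k]  # indices of the top-k documents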

Federated Learning with Only Positive Labels

1 code implementation ICML 2020 Felix X. Yu, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar

We consider learning a multi-class classification model in the federated setting, where each user has access to the positive data associated with only a single class.

Federated Learning Multi-class Classification

Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces

no code implementations 12 May 2021 Ankit Singh Rawat, Aditya Krishna Menon, Wittawat Jitkrittum, Sadeep Jayasumana, Felix X. Yu, Sashank Reddi, Sanjiv Kumar

Negative sampling schemes enable efficient training given a large number of classes, by offering a means to approximate a computationally expensive loss function that takes all labels into account.

Retrieval
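A sketch of one common negative-sampling estimator of the full softmax loss, with the logQ correction for the proposal distribution q; score_fn is a hypothetical model scoring function, and the paper's analysis concerns how the choice of q biases this kind of approximation:

    import numpy as np

    def sampled_softmax_loss(score_fn, x, pos_label, q, rng, m=50):
        # q: (num_classes,) proposal distribution over labels.
        negs = rng.choice(len(q), size=m, p=q, replace=True)
        labels = np.concatenate(([pos_label], negs))
        logits = score_fn(x, labels) - np.log(q[labels])  # logQ correction
        return -logits[0] + np.log(np.sum(np.exp(logits)))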

FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients

no code implementations 28 Jan 2022 Jianyu Wang, Hang Qi, Ankit Singh Rawat, Sashank Reddi, Sagar Waghmare, Felix X. Yu, Gauri Joshi

In classical federated learning, the clients contribute to the overall training by communicating local updates for the underlying model on their private data to a coordinating server.

Federated Learning

Automatic Engineering of Long Prompts

no code implementations 16 Nov 2023 Cho-Jui Hsieh, Si Si, Felix X. Yu, Inderjit S. Dhillon

Large language models (LLMs) have demonstrated remarkable capabilities in solving complex open-domain tasks, guided by comprehensive instructions and demonstrations provided in the form of prompts.

Prompt Engineering
