We show that STAMINA outperforms the prior SOTA for the setting of text-to-image continual customization on a 50-concept benchmark composed of landmarks and human faces, with no stored replay data.
We show that C-LoRA not only outperforms several baselines for our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but that we achieve a new state-of-the-art in the well-established rehearsal-free continual learning setting for image classification.
We show that existing deep keyword spotting systems can be improved by Successive Refinement, in which the system first classifies whether the input audio is speech, then whether the input is keyword-like, and finally which keyword was uttered.
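The staged decision process described above can be sketched as a simple cascade. This is a hypothetical illustration, not the paper's implementation; `speech_clf`, `keyword_clf`, and `which_keyword_clf` are stand-in callables returning probabilities.

```python
import numpy as np

def successive_refinement(audio_feats, speech_clf, keyword_clf, which_keyword_clf,
                          speech_thresh=0.5, keyword_thresh=0.5):
    """Return a keyword index, or None if rejected at an earlier stage."""
    if speech_clf(audio_feats) < speech_thresh:      # stage 1: speech vs. non-speech
        return None
    if keyword_clf(audio_feats) < keyword_thresh:    # stage 2: keyword-like vs. not
        return None
    probs = which_keyword_clf(audio_feats)           # stage 3: which keyword
    return int(np.argmax(probs))
```

Each stage only runs if the previous one accepts, so easy negatives (silence, non-keyword speech) are rejected cheaply before the full keyword classifier is invoked.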
Such object navigation tasks usually require large-scale training in visual environments with labeled objects, which generalizes poorly to novel objects in unknown environments.
We first develop a graph-based ranking to measure the importance of attention heads; the extracted importance information is then integrated into an optimization-based procedure that imposes heterogeneous structured sparsity patterns on ViT models.
However, standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.
In other words, the optimization objective of SVD is not aligned with the trained model's task accuracy.
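The misalignment noted above can be made concrete: truncated SVD minimizes Frobenius reconstruction error, which weights every matrix entry equally regardless of how much that parameter affects task accuracy. A minimal sketch:

```python
import numpy as np

def truncated_svd(W, k):
    """Rank-k approximation minimizing Frobenius error, with all entries weighted equally."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_lowrank = truncated_svd(W, 4)
# The reconstruction error is penalized identically whether an entry feeds a
# task-critical neuron or an unimportant one; aligning this objective with
# task accuracy requires importance weighting, which plain SVD lacks.
```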
Knowledge distillation (KD) is a widely used strategy for transferring learned knowledge from one neural network model to another.
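The standard formulation of this transfer (in the style of Hinton et al.) trains the student to match the teacher's temperature-softened output distribution via a KL divergence term. A minimal sketch, not tied to any specific paper in this list:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, T)   # teacher soft targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

In practice this term is combined with the ordinary cross-entropy loss on the hard labels, weighted by a mixing coefficient.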
In contrast to previous works, our model decomposes alignment into multiple levels, learning better correlations without requiring additional data or annotations.
Most existing continual learning approaches suffer from low accuracy and performance fluctuation, especially when the distributions of old and new data are significantly different.
The key primitive is a Dictionary-Lookup Transformation (DLT), proposed to replace the Linear Transformation (LT) in multi-modal detectors: each LT weight matrix is approximately factorized into a smaller dictionary, indices, and coefficients.
Knowledge distillation, weight pruning, and quantization are the main directions in model compression.
We are the first to propose a method that performs well on both OOD detection and calibration under different types of distribution shift.
To this end, we theoretically derive two score functions for OOD detection, a covariate-shift score and a concept-shift score, based on a decomposition of the KL divergence, and propose a geometrically inspired method (Geometric ODIN) that improves OOD detection under both shifts using only in-distribution data.
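The decomposition alluded to here is consistent with the chain rule of the KL divergence, which splits the joint divergence into a marginal term over inputs and a conditional term over labels:

\[
D_{\mathrm{KL}}\bigl(p(x,y)\,\|\,q(x,y)\bigr)
= \underbrace{D_{\mathrm{KL}}\bigl(p(x)\,\|\,q(x)\bigr)}_{\text{covariate shift}}
+ \underbrace{\mathbb{E}_{p(x)}\Bigl[D_{\mathrm{KL}}\bigl(p(y\mid x)\,\|\,q(y\mid x)\bigr)\Bigr]}_{\text{concept shift}}
\]

The first term captures a shift in the input distribution (covariate shift), while the second captures a shift in the label-given-input relation (concept shift); the correspondence of each term to the paper's two scores is an interpretation, not a quotation.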
DictFormer significantly reduces the redundancy in the transformer's parameters by replacing them with a compact shared dictionary, a few unshared coefficients, and indices.
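The dictionary idea in the two snippets above can be sketched as follows: each row of a weight matrix is approximated as a coefficient-weighted sum of a few atoms looked up from a small shared dictionary. The names (`dictionary`, `indices`, `coeffs`) and shapes are illustrative assumptions, not DictFormer's actual layout.

```python
import numpy as np

def dlt_reconstruct(dictionary, indices, coeffs):
    """Rebuild an approximate weight matrix from (dictionary, indices, coefficients).

    dictionary: (num_atoms, dim) shared dictionary atoms
    indices:    (rows, t)        atom ids selected per output row
    coeffs:     (rows, t)        per-atom coefficients
    """
    # each output row i = sum_j coeffs[i, j] * dictionary[indices[i, j]]
    return np.einsum('rt,rtd->rd', coeffs, dictionary[indices])
```

The storage saving comes from replacing `rows * dim` floats with `num_atoms * dim` shared floats plus `rows * t` small integer indices and coefficients, with `num_atoms` and `t` both much smaller than `rows`.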
This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection.
Modern computer vision applications suffer from catastrophic forgetting when incrementally learning new concepts over time.
In this paper, we introduce a novel multi-step spoken language understanding system based on adversarial learning that can leverage multi-round user feedback to update slot values.
A cryptographic neural network inference service is an efficient way to allow two parties to execute neural network inference without revealing either party’s data or model.
We design a Dirichlet Prior RNN to model high-order uncertainty, which degenerates to a softmax layer for RNN model training.
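The "degenerates to a softmax layer" property can be illustrated with a standard construction for Dirichlet-based output layers: if the concentration parameters are `alpha = exp(logits)`, the mean of the Dirichlet equals the softmax of the logits, so ordinary cross-entropy training still applies while the total concentration carries an uncertainty signal. This is a hedged sketch of that general construction, not the paper's exact model.

```python
import numpy as np

def dirichlet_mean(logits):
    """Mean of Dirichlet(alpha) with alpha = exp(logits): alpha / sum(alpha)."""
    alpha = np.exp(np.asarray(logits, dtype=float))
    return alpha / alpha.sum()

def softmax(logits):
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# dirichlet_mean(logits) == softmax(logits), so the expected prediction is
# the usual softmax; sum(alpha) additionally measures the model's confidence.
```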
Existing open-domain dialogue generation models are usually trained to mimic the gold response in the training set using cross-entropy loss on the vocabulary.
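The training objective being criticized is the per-step cross-entropy between the model's vocabulary distribution and the single gold token, which is what pushes models to mimic the gold response. A minimal sketch:

```python
import numpy as np

def cross_entropy_loss(logits, gold_ids):
    """Mean per-step cross-entropy over the vocabulary.

    logits:   (steps, vocab) unnormalized scores
    gold_ids: (steps,)       gold token id at each step
    """
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(gold_ids)), gold_ids].mean())
```

Because only the gold token's log-probability is rewarded at each step, equally valid alternative responses receive no credit, which is the limitation such abstracts typically motivate against.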
Third, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
Cryptography and Security Computers and Society
Text-based interactive recommendation provides richer user feedback and has demonstrated advantages over traditional interactive recommender systems.
Deep neural networks have attained remarkable performance when applied to data that comes from the same distribution as that of the training set, but can significantly degrade otherwise.
ProgModel consists of a novel context gate that transfers previously learned knowledge to a small expanded component, while enabling this new component to be trained quickly on new data.
Recurrent neural network (RNN) based joint intent classification and slot tagging models have achieved tremendous success in recent years for building spoken language understanding and dialog systems.
We present SkillBot, which takes a first step toward enabling end users to teach new skills to personal assistants (PAs).
Deep neural networks require a pre-defined vocabulary to vectorize text inputs.
Many vision and language models suffer from poor visual grounding - often falling back on easy-to-learn language priors rather than basing their decisions on visual concepts in the image.
The most effective algorithms are based on sequence-to-sequence ("encoder-decoder") architectures, and generate the intents and semantic tags either with separate models or with a joint model.
With the recent rapid development of deep learning, deep neural networks have been widely adopted in many real-life applications.
The results show that our approach leverages such simple user information to outperform state-of-the-art approaches by 0.25% for intent detection and 0.31% for slot filling using standard training data.
Learning intents and slot labels from user utterances is a fundamental step in all spoken language understanding (SLU) and dialog systems.
We present a system, CRUISE, that guides ordinary software developers to build a high quality natural language understanding (NLU) engine from scratch.
The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states.