Moreover, this task can be used to improve visual question generation and visual question answering.
Further, other homophilous nodes excluded from the neighborhood are ignored during information aggregation.
However, most methods only learn a fixed representation for each feature without considering the varying importance of each feature under different contexts, resulting in inferior performance.
The most important motivation of this research is that, based on the cross-correlation theorem, the image convolution can be replaced with a straightforward element-wise multiplication in the frequency domain, which substantially reduces the computational complexity.
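As a hedged illustration of this shortcut (the 8x8 "image" and zero-padded 3x3 kernel below are invented for the demo, not taken from the paper), the NumPy sketch checks that circular cross-correlation computed by conjugate element-wise multiplication of spectra matches a direct sliding-window computation:

```python
import numpy as np

def cross_correlate_fft(x, k):
    # Cross-correlation theorem: F(corr(x, k)) = conj(F(k)) * F(x),
    # so correlation becomes element-wise multiplication in frequency space.
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(k)) * np.fft.fft2(x)))

def cross_correlate_direct(x, k):
    # Naive circular cross-correlation: out[i, j] = sum_{u, v} x[i+u, j+v] * k[u, v]
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for u in range(h):
                for v in range(w):
                    out[i, j] += x[(i + u) % h, (j + v) % w] * k[u, v]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))          # toy "image"
k = np.zeros((8, 8))
k[:3, :3] = rng.standard_normal((3, 3))  # zero-padded 3x3 kernel
assert np.allclose(cross_correlate_fft(x, k), cross_correlate_direct(x, k))
```

The direct loop costs O(N^2 M^2) multiplications per image, while the FFT route costs O(N^2 log N), which is the complexity reduction the sentence above refers to.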
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
To the best of our knowledge, we are the first to build a practical dynamic runtime scheduler that combines tensor swapping and tensor recomputation without user oversight.
To this end, we propose a cross-metric multi-dimensional root cause analysis method, named CMMD, which consists of two key components: 1) relationship modeling, which utilizes a graph neural network (GNN) to model the unknown, complex calculation relationships among metrics and aggregation functions among dimensions from historical data; 2) root cause localization, which adopts a genetic algorithm to efficiently and effectively dive into the raw data and localize the abnormal dimension(s) once KPI anomalies are detected.
Fine-tuning pretrained models is a common practice in domain generalization (DG) tasks.
To alleviate the huge search cost caused by the expanded search space, three strategies are adopted. First, an adaptive pruning strategy iteratively trims the average model size in the population without compromising performance.
We propose a simple but powerful data-driven framework for solving highly challenging visual deep reinforcement learning (DRL) tasks.
Continuous-depth neural networks, such as Neural Ordinary Differential Equations (ODEs), bridge deep neural networks and dynamical systems and have attracted great interest from the machine learning and data science communities in recent years.
To address these issues, we propose an RL-enhanced GNN explainer, RG-Explainer, which consists of three main components: starting point selection, iterative graph generation, and stopping criteria learning.
In this paper, to eliminate the effort of tuning the momentum-related hyperparameter, we propose a new adaptive momentum inspired by the optimal choice of the heavy-ball momentum for quadratic optimization.
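For context, the classical optimal heavy-ball parameters for a quadratic whose Hessian eigenvalues lie in [mu, L] are alpha = 4/(sqrt(L) + sqrt(mu))^2 and beta = ((sqrt(L) - sqrt(mu))/(sqrt(L) + sqrt(mu)))^2. The sketch below (eigenvalues chosen purely for illustration; this is the textbook baseline, not the paper's adaptive rule) verifies that these choices drive a quadratic to its minimizer:

```python
import numpy as np

# Heavy-ball on a diagonal quadratic f(x) = 0.5 * x^T diag(eigs) x.
eigs = np.array([1.0, 10.0, 100.0])          # illustrative curvatures
mu, L = eigs.min(), eigs.max()
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x = np.ones(3)
x_prev = x.copy()
for _ in range(200):
    grad = eigs * x                          # gradient of the quadratic
    # heavy-ball step: gradient descent plus a momentum term
    x, x_prev = x - alpha * grad + beta * (x - x_prev), x

assert np.linalg.norm(x) < 1e-6             # converges to the minimizer 0
```

The paper's contribution, as described above, is to choose the momentum adaptively rather than from known mu and L.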
Distributed data-parallel training has been widely used for natural language processing (NLP) neural network models.
The distributed stochastic gradient descent (SGD) approach has been widely used in large-scale deep learning, and the gradient collective communication method is vital to the training scalability of distributed deep learning systems.
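The semantics of that collective step can be mimicked in a single process (the worker count, dimensions, and learning rate below are illustrative): each worker contributes a gradient from its data shard, an all-reduce sums them, and every replica applies the identical averaged update:

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim = 4, 6
local_grads = rng.standard_normal((n_workers, dim))  # one gradient per worker's shard

reduced = local_grads.sum(axis=0)      # all-reduce (sum) across workers
avg_grad = reduced / n_workers         # divide by world size

weights = np.zeros(dim)
weights -= 0.1 * avg_grad              # every replica applies the same update
assert np.allclose(avg_grad, local_grads.mean(axis=0))
```

In a real system the sum is computed by a collective such as ring all-reduce rather than by gathering all gradients in one place, which is exactly where the scalability concern above arises.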
Ensemble learning, which can consistently improve the prediction performance in supervised learning, has drawn increasing attention in reinforcement learning (RL).
A good state representation is crucial to reinforcement learning (RL) while an ideal representation is hard to learn only with signals from the RL objective.
Real-world data is often generated by some complex distribution, which can be approximated by a composition of multiple simpler distributions.
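As a tiny worked example of such a composition (the weights, means, and scales are invented for the demo), a two-component Gaussian mixture already produces a bimodal distribution whose mean is the weighted sum of the component means:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.3, 0.7])   # mixing proportions
means = np.array([-2.0, 3.0])    # component means
stds = np.array([0.5, 1.0])      # component scales

comp = rng.choice(2, size=100_000, p=weights)   # pick a component per sample
samples = rng.normal(means[comp], stds[comp])   # sample from the chosen component

# Mixture mean = sum_i w_i * mu_i = 0.3 * (-2) + 0.7 * 3 = 1.5
assert abs(samples.mean() - 1.5) < 0.05
```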
How to make intelligent decisions is a central problem in machine learning and cognitive science.
Autoregressive generative models are commonly used, especially for those tasks involving sequential data.
Specifically, we explicitly consider the difference between the online and offline data and apply an adaptive update scheme accordingly, i.e., a pessimistic update strategy for the offline dataset and a greedy, non-pessimistic update scheme for the online dataset.
The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change.
In this paper, we endeavor to obtain a better understanding of GCN-based CF methods via the lens of graph signal processing.
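From the graph-signal-processing viewpoint, one GCN-style propagation step multiplies the node signal by the symmetrically normalized adjacency with self-loops, which acts as a low-pass filter. The sketch below (a 4-node path graph, chosen purely for illustration) shows that one propagation step reduces the total variation of a high-frequency signal:

```python
import numpy as np

# Adjacency of a 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                     # add self-loops
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))       # D^{-1/2} (A + I) D^{-1/2}

x = np.array([1.0, -1.0, 1.0, -1.0])      # maximally high-frequency signal
x_smooth = S @ x                          # one GCN-style propagation step

# Total variation drops: propagation behaves as a low-pass graph filter.
assert np.abs(np.diff(x_smooth)).sum() < np.abs(np.diff(x)).sum()
```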
IIB significantly outperforms IRM on synthetic datasets where pseudo-invariant features and geometric skews occur, showing the effectiveness of the proposed formulation in overcoming the failure modes of IRM.
Based on the proposed quality measurement, we propose a deep Tiny Face Quality network (tinyFQnet) to learn a quality prediction function from data.
Hence, the key is to make full use of rich interaction information among streamers, users, and products.
One-bit matrix completion is an important class of positive-unlabeled (PU) learning problems in which the observations consist of only positive examples, e.g., in top-N recommender systems.
Graph pooling, which summarizes the information in a large graph into a compact form, is essential in hierarchical graph representation learning.
In collaborative filtering (CF) algorithms, the optimal models are usually learned by globally minimizing the empirical risks averaged over all the observed data.
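A minimal sketch of that setup, assuming plain matrix factorization trained with SGD on squared error (the sizes, rank, and learning rate below are illustrative): the quantity being driven down is exactly the empirical risk averaged over the observed entries:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 20, 15, 3
U_true = rng.standard_normal((n_users, rank))
V_true = rng.standard_normal((n_items, rank))
R = U_true @ V_true.T                    # noiseless ground-truth ratings
observed = [(u, i) for u in range(n_users) for i in range(n_items)
            if rng.random() < 0.6]       # ~60% of entries are observed

U = 0.1 * rng.standard_normal((n_users, rank))
V = 0.1 * rng.standard_normal((n_items, rank))
lr = 0.05
for epoch in range(300):
    for u, i in observed:
        err = R[u, i] - U[u] @ V[i]      # residual on one observed rating
        # SGD step on the squared error for this single entry
        U[u], V[i] = U[u] + lr * err * V[i], V[i] + lr * err * U[u]

# Empirical risk averaged over all observed data, as in the sentence above.
train_mse = np.mean([(R[u, i] - U[u] @ V[i]) ** 2 for u, i in observed])
assert train_mse < 0.05
```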
Graph neural networks (GNNs) have proven mature enough for handling graph-structured data in node-level graph representation learning tasks.
By monitoring how varying the resolution affects the quality of high-dimensional video analytics features, and hence the accuracy of the video analytics results, the proposed end-to-end optimization framework learns the best non-myopic policy for dynamically controlling the resolution of input video streams to globally optimize energy efficiency.
In this paper, we investigate the decentralized statistical inference problem, where a network of agents cooperatively recover a (structured) vector from private noisy samples without centralized coordination.
The stability and generalization of stochastic gradient-based methods provide valuable insights into understanding the algorithmic performance of machine learning models.
In recent years, Deep Learning Alternating Minimization (DLAM), which applies alternating minimization to the penalty form of deep neural network training, has been developed as an alternative algorithm to overcome several drawbacks of Stochastic Gradient Descent (SGD) algorithms.
Many works concentrate on reducing language bias, which causes models to answer questions while ignoring the visual content and language context.
Referring expression comprehension (REC) aims to localize the region that a phrase refers to in a given image.
Distant supervision provides a means to create a large amount of weakly labeled data at low cost for relation classification.
First, we provide a finite sample bound for both classification and regression problems under Semi-DA.
In real world applications like healthcare, it is usually difficult to build a machine learning prediction model that works universally well across different institutions.
We also provide novel update rules and theoretical convergence analysis.
Typically, the performance of TD(0) and TD($\lambda$) is very sensitive to the choice of stepsizes.
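The sensitivity is easy to see on a toy Markov reward process (the two-state chain, discount factor, and stepsizes below are invented for the demo): a small constant stepsize settles near the true value, while an aggressive one leaves much noisier estimates:

```python
import numpy as np

def td0(alpha, steps=5000, gamma=0.9, seed=0):
    # TD(0) on a 2-state Markov reward process: from either state, jump to
    # state 0 or 1 uniformly at random; the reward is the next state's index.
    rng = np.random.default_rng(seed)
    V = np.zeros(2)
    s = 0
    for _ in range(steps):
        s_next = int(rng.integers(2))
        r = float(s_next)
        V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD(0) update
        s = s_next
    return V

# True values solve V = 0.5 + 0.9 * mean(V), so V* = 5 in both states.
V_small = td0(alpha=0.05)
V_large = td0(alpha=0.8)   # same chain, large stepsize: far noisier estimates
assert np.allclose(V_small, 5.0, atol=1.0)
```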
It is challenging for a weakly supervised object detection network to precisely predict the positions of objects, since no instance-level category annotations are available.
In this paper, we propose a general proximal incremental aggregated gradient algorithm, which contains various existing algorithms including the basic incremental aggregated gradient method.
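The basic incremental aggregated gradient method that this generalizes can be sketched on a toy separable quadratic (curvatures and targets invented for the demo): keep a table of per-component gradients, refresh one stale entry per iteration, and step along their sum; the proximal variant would additionally apply the prox of the nonsmooth term after each step:

```python
import numpy as np

c = np.array([1.0, 2.0, 4.0])    # curvatures: f_i(x) = 0.5 * c_i * (x_i - b_i)^2
b = np.array([3.0, -1.0, 0.5])   # the overall minimizer is x* = b
x = np.zeros(3)

G = np.zeros((3, 3))             # gradient table: row i stores grad f_i
for i in range(3):
    G[i, i] = c[i] * (x[i] - b[i])

alpha = 0.1
for k in range(2000):
    i = k % 3                            # cyclically pick one component
    G[i, i] = c[i] * (x[i] - b[i])       # refresh its (possibly stale) gradient
    x = x - alpha * G.sum(axis=0)        # step along the aggregated gradient

assert np.allclose(x, b, atol=1e-8)
```

All other rows of the table stay stale between refreshes, which is exactly the trait that distinguishes incremental aggregated methods from full-gradient descent.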
Rapid progress has been made in the field of reading comprehension and question answering, where several systems have achieved human parity in some simplified settings.
Traditional approaches to the task of ACE event extraction usually depend on manually annotated data, which is often laborious to create and limited in size.
This paper considers the reading comprehension task in which multiple documents are given as input.
Open-domain targeted sentiment analysis aims to detect opinion targets along with their sentiment polarities from a sentence.
Most teacher-student frameworks based on knowledge distillation (KD) depend on a strong congruent constraint at the instance level.
Focusing on discriminative spatiotemporal feature learning, we propose the Information Fused Temporal Transformation Network (IF-TTN) for action recognition on top of the popular Temporal Segment Network (TSN) framework.
The proposed FSN can make dense predictions at frame-level for a video clip using both spatial and temporal context information.
In this paper, we consider a class of nonconvex problems with linear constraints appearing frequently in the area of image processing.
Collaborative filtering (CF) is a popular technique in today's recommender systems, and matrix approximation-based CF methods have achieved great success in both rating prediction and top-N recommendation tasks.
For objective functions satisfying a relaxed strongly convex condition, the linear convergence is established under weaker assumptions on the step size and inertial parameter than made in the existing literature.
Support vector machines (SVMs) with sparsity-inducing nonconvex penalties have received considerable attention for their ability to perform automatic classification and variable selection simultaneously.
Although current reading comprehension systems have achieved significant advances, their promising performance is often obtained at the cost of ensembling numerous models.
Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred.
LRM is a general method for real-time detectors, as it mines hard examples using the final feature map, which exists in all real-time detectors.
Depthwise convolutions provide significant performance benefits owing to the reduction in both parameters and mult-adds.
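The reduction is easy to quantify with a back-of-the-envelope count (the 128-channel, 3x3 layer below is an arbitrary example): replacing a standard convolution with a depthwise convolution plus a 1x1 pointwise convolution shrinks the parameter count by roughly 8-9x at this size:

```python
# Parameter counts for one layer with C_in = C_out = 128 and 3x3 kernels.
c_in, c_out, k = 128, 128, 3

standard = c_in * c_out * k * k                     # standard convolution
depthwise_separable = c_in * k * k + c_in * c_out   # depthwise + 1x1 pointwise

assert standard == 147456
assert depthwise_separable == 17536                 # ~8.4x fewer parameters
# Per-position mult-adds scale the same way, which is the saving noted above.
```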
However, our studies show that submatrices with different ranks can coexist in the same user-item rating matrix, so approximations with fixed ranks cannot fully describe the internal structure of the rating matrix, leading to inferior recommendation accuracy.
A recently proposed work exploits Convolutional-Deconvolutional-Convolutional (CDC) filters to upsample the predictions of 3D ConvNets, making it possible to perform per-frame action predictions and achieving promising performance in temporal action localization.
S-OHEM exploits OHEM with stratified sampling, a widely-adopted sampling technique, to choose the training examples according to this influence during hard example mining, and thus enhance the performance of object detectors.
Object detection is an important task in computer vision. A variety of methods have been proposed, but methods using only weak labels still do not achieve satisfactory results. In this paper, we propose a new framework that uses the output of a weakly supervised method as pseudo-strong labels to train a strongly supervised model. A weakly supervised method is treated as a black box to generate class-specific bounding boxes on the training set. A de-noising method is then applied to the noisy bounding boxes, and the de-noised pseudo-strong labels are used to train a strongly supervised object detection network. The whole framework remains weakly supervised because the entire process uses only image-level labels. Experimental results on PASCAL VOC 2007 demonstrate the validity of our framework: we achieve 43.4% mean average precision, compared to 39.5% for the previous best result and 34.5% for the initial method. The framework is simple and distinct, and can easily be applied to other methods.