Trending Research

Ordered by GitHub stars accumulated over the last 3 days
1
Phrase-Based & Neural Unsupervised Machine Translation
Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semi-supervised and supervised approaches leveraging the paucity of available bitexts.
397
3.82 stars / hour
 Paper  Code
2
Unsupervised Machine Translation Using Monolingual Corpora Only
Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data.
397
3.82 stars / hour
 Paper  Code
3
Word Translation Without Parallel Data
State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation.
397
3.82 stars / hour
 Paper  Code
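One building block this paper shares with supervised cross-lingual methods is the orthogonal Procrustes mapping of one embedding space onto another given a seed dictionary. Below is a minimal numpy sketch of that step only (not the adversarial initialization the paper also proposes), assuming `X` and `Y` hold embeddings of paired seed words; the names, shapes, and toy data are illustrative.

```python
import numpy as np

def procrustes_mapping(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F over orthogonal matrices.

    X: (n, d) source-language embeddings of seed pairs; Y: (n, d) target-language
    embeddings of the same pairs. Closed form: W = U V^T from the SVD of X^T Y.
    """
    u, _, vt = np.linalg.svd(X.T @ Y)
    return u @ vt

# Toy data: Y is a rotated, slightly noisy copy of X, so the rotation is recoverable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
Y = X @ np.linalg.qr(rng.normal(size=(50, 50)))[0] + 0.01 * rng.normal(size=(200, 50))
W = procrustes_mapping(X, Y)
print(np.abs(X @ W - Y).mean())   # small residual: the rotation is recovered
```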
4
Sliced Recurrent Neural Networks
Recurrent neural networks are hard to parallelize because of their recurrent structure, so training them is time-consuming. In this paper, we introduce sliced recurrent neural networks (SRNNs), which can be parallelized by slicing the sequences into many subsequences.
132
1.33 stars / hour
 Paper  Code
5
Efficient Neural Architecture Search with Network Morphism
Network morphism, which keeps the functionality of a neural network while changing its neural architecture, could be helpful for NAS by enabling a more efficient training during the search. In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search by introducing a neural network kernel and a tree-structured acquisition function optimization algorithm.
2,471
0.84 stars / hour
 Paper  Code
6
Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination
Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so. We study whether this observation can be extended beyond the conventional domain of supervised learning: Can we learn a good feature representation that captures apparent similarity among instances, instead of classes, by merely asking the feature to be discriminative of individual instances?
29
0.40 stars / hour
 Paper  Code
7
Improving Generalization via Scalable Neighborhood Component Analysis
Current major approaches to visual recognition follow an end-to-end formulation that classifies an input image into one of a pre-determined set of semantic categories. Parametric softmax classifiers are a common choice for such a closed world with fixed categories, especially when big labeled datasets are available during training.
29
0.40 stars / hour
 Paper  Code
8
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards.
107,618
0.39 stars / hour
 Paper  Code
9
Generative Choreography using Deep Learning
Recent advances in deep learning have enabled the extraction of high-level features from raw sensor data, which has opened up new possibilities in many fields, including computer-generated choreography. In this paper we present chor-rnn, a system for generating novel choreographic material in the nuanced choreographic language and style of an individual choreographer.
275
0.36 stars / hour
 Paper  Code
10
models
Models and examples built with TensorFlow
40,033
0.36 stars / hour
 Paper  Code
11
T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks
Current methods for single-image depth estimation use training datasets with real image-depth pairs or stereo pairs, which are not easy to acquire. We propose a framework, trained on synthetic image-depth pairs and unpaired real images, that comprises an image translation network for enhancing realism of input images, followed by a depth prediction network.
74
0.33 stars / hour
 Paper  Code
12
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
97
0.31 stars / hour
 Paper  Code
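The core idea above is the connectivity pattern: each layer receives the concatenation of all preceding feature maps as input. A minimal plain-numpy sketch of that pattern, assuming a generic `layer_fn` standing in for the paper's BN-ReLU-Conv layers; the toy layer and `growth_rate` below are illustrative assumptions, not the published architecture.

```python
import numpy as np

def dense_block(x, num_layers, layer_fn):
    """Dense connectivity: every layer sees the concatenation of all earlier outputs.

    x: (n, c0) input features; layer_fn: callable mapping (n, c) -> (n, growth_rate).
    """
    features = [x]
    for _ in range(num_layers):
        concatenated = np.concatenate(features, axis=1)   # all previous feature maps
        features.append(layer_fn(concatenated))
    return np.concatenate(features, axis=1)

# Toy layer: a random linear map plus ReLU, standing in for BN-ReLU-Conv.
rng = np.random.default_rng(0)
def toy_layer(h, growth_rate=12):
    w = rng.normal(size=(h.shape[1], growth_rate)) / np.sqrt(h.shape[1])
    return np.maximum(h @ w, 0.0)

out = dense_block(rng.normal(size=(4, 16)), num_layers=3, layer_fn=toy_layer)
print(out.shape)   # (4, 16 + 3 * 12) = (4, 52)
```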
13
tensor2tensor
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
4,725
0.27 stars / hour
 Paper  Code
14
MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network
In this paper, we present MultiPoseNet, a novel bottom-up multi-person pose estimation architecture that combines a multi-task model with a novel assignment method. MultiPoseNet can jointly handle person detection, keypoint detection, person segmentation and pose estimation problems.
53
0.25 stars / hour
 Paper  Code
15
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront.
202
0.24 stars / hour
 Paper  Code
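A minimal single-head numpy sketch of the masked self-attention described above, assuming a binary adjacency matrix with self-loops; multi-head attention and the output nonlinearity used in the paper are omitted, and `W` and `a` are illustrative parameters rather than trained weights.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_layer(h, adj, W, a):
    """Single-head masked self-attention over a graph, in the spirit of GAT.

    h: (n, f_in) node features; adj: (n, n) binary adjacency with self-loops;
    W: (f_in, f_out) shared transform; a: (2 * f_out,) attention vector.
    """
    z = h @ W
    f_out = z.shape[1]
    # Unnormalized attention e_ij = LeakyReLU(a^T [z_i || z_j]) for every node pair.
    e = leaky_relu(np.add.outer(z @ a[:f_out], z @ a[f_out:]))
    e = np.where(adj > 0, e, -np.inf)            # mask: attend only over neighbours
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # row-wise softmax over neighbours
    return att @ z                               # weighted aggregation of neighbours

# Toy usage on a 3-node path graph.
rng = np.random.default_rng(0)
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
out = gat_layer(rng.normal(size=(3, 4)), adj,
                W=rng.normal(size=(4, 8)), a=rng.normal(size=(16,)))
print(out.shape)   # (3, 8)
```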
16
Consistent Individualized Feature Attribution for Tree Ensembles
Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature's assigned importance when the true impact of that feature actually increases.
1,838
0.23 stars / hour
 Paper  Code
17
Axiomatic Attribution for Deep Networks
We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms, Sensitivity and Implementation Invariance, that attribution methods ought to satisfy.
1,838
0.23 stars / hour
 Paper  Code
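The attribution method this paper derives from the two axioms is integrated gradients: scale (input minus baseline) by the path integral of gradients from the baseline to the input. A minimal numpy sketch using a Riemann-sum approximation, assuming a generic `grad_fn` in place of a real network's gradient; the toy model below is illustrative only.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Approximate integrated gradients with a midpoint Riemann sum.

    x, baseline: input and reference point (same shape);
    grad_fn: callable returning the gradient of the model output w.r.t. its input.
    """
    alphas = (np.arange(steps) + 0.5) / steps             # midpoints in (0, 1)
    grads = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * grads                          # per-feature attributions

# Toy model f(x) = (w . x)^2, with its analytic gradient for illustration.
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda x: 2.0 * (w @ x) * w
attr = integrated_gradients(np.array([1.0, 1.0, 1.0]), np.zeros(3), grad_fn)
print(attr, attr.sum())   # completeness: sum is approximately f(x) - f(baseline) = 0.25
```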
18
SmoothGrad: removing noise by adding noise
Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision.
1,838
0.23 stars / hour
 Paper  Code
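The title describes the whole method: average the gradient-based saliency map over several noisy copies of the input. A minimal numpy sketch, assuming a generic `grad_fn`; the noise level and sample count are illustrative defaults rather than values from the paper.

```python
import numpy as np

def smoothgrad(x, grad_fn, noise_std=0.1, n_samples=25, seed=0):
    """Average the input gradient over noisy copies of the input (SmoothGrad).

    x: input array; grad_fn: callable returning the gradient of the class score
    w.r.t. the input; noise_std: standard deviation of the added Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + rng.normal(scale=noise_std, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy usage with the analytic gradient of f(x) = sum(x**2).
saliency = smoothgrad(np.array([0.5, -1.0, 2.0]), grad_fn=lambda x: 2.0 * x)
print(saliency)   # close to [1.0, -2.0, 4.0]
```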
19
Axiomatic Attribution for Deep Networks
We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms, Sensitivity and Implementation Invariance, that attribution methods ought to satisfy.
1,837
0.23 stars / hour
 Paper  Code
20
SmoothGrad: removing noise by adding noise
Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision.
1,837
0.23 stars / hour
 Paper  Code
21
Neural Network Encapsulation
To resolve this limitation, we approximate the routing process with two branches: a master branch that collects primary information from its direct contact in the lower layer, and an aide branch that replenishes the master based on pattern variants encoded in other lower capsules. Motivated by routing that seeks agreement between higher-level and lower-level capsules, we extend the mechanism to compensate for the rapid loss of information in nearby layers.
16
0.22 stars / hour
 Paper  Code
22
Dynamic Routing Between Capsules
We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules.
16
0.22 stars / hour
 Paper  Code
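A minimal numpy sketch of the routing-by-agreement loop described above, assuming the prediction vectors `u_hat` from lower to higher capsules are already given; the squash nonlinearity keeps each output capsule's length in (0, 1) so it can act as an existence probability. Shapes and iteration count are illustrative.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Shrink vector length into (0, 1) while preserving direction."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Routing by agreement.

    u_hat: (num_lower, num_upper, dim) prediction vectors from lower capsules.
    Returns (num_upper, dim) output capsule vectors.
    """
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))                        # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)    # coupling coefficients
        s = np.einsum("ij,ijd->jd", c, u_hat)                   # weighted sum per upper capsule
        v = squash(s)                                           # output capsule vectors
        b = b + np.einsum("ijd,jd->ij", u_hat, v)               # agreement update
    return v

v = dynamic_routing(np.random.default_rng(0).normal(size=(6, 3, 8)))
print(np.linalg.norm(v, axis=-1))   # lengths lie in (0, 1)
```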
23
GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
This task therefore requires a high-level understanding of the mapping between the input source gesture and the output target gesture. The generated images are high-quality and photo-realistic, allowing them to be used as data augmentation to improve the performance of a hand gesture classifier.
11
0.21 stars / hour
 Paper  Code
24
Detectron
FAIR's research platform for object detection research, implementing popular algorithms like Mask R-CNN and RetinaNet.
15,787
0.20 stars / hour
 Paper  Code
25
Deep Neural Decision Trees
In this work, we present Deep Neural Decision Trees (DNDT) -- tree models realised by neural networks. A DNDT is intrinsically interpretable, as it is a tree.
24
0.19 stars / hour
 Paper  Code
26
Ray: A Distributed Framework for Emerging AI Applications
The next generation of AI applications will continuously interact with the environment and learn from these interactions. These applications impose new and demanding systems requirements, both in terms of performance and flexibility.
3,909
0.18 stars / hour
 Paper  Code
27
Attention U-Net: Learning Where to Look for the Pancreas
We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task.
161
0.18 stars / hour
 Paper  Code
28
Attention-Gated Networks for Improving Ultrasound Scan Plane Detection
In this work, we apply an attention-gated network to real-time automated scan plane detection for fetal ultrasound screening. We show that, when the base network has a high capacity, the incorporated attention mechanism can provide efficient object localisation while improving the overall performance.
161
0.18 stars / hour
 Paper  Code
29
Acquisition of Localization Confidence for Accurate Object Detection
Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. The proposed network learns a localization confidence, which improves the NMS procedure by preserving accurately localized bounding boxes.
179
0.17 stars / hour
 Paper  Code
30
Masked Autoregressive Flow for Density Estimation
Autoregressive models are among the best performing neural density estimators. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow.
966
0.17 stars / hour
 Paper  Code
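Under one affine autoregressive transform, the density of x is evaluated by inverting each dimension given the previous ones and correcting with the log-determinant. A minimal numpy sketch with a toy conditioner standing in for the MADE networks the paper stacks; the conditioner here is a hypothetical illustration, not the paper's parameterization.

```python
import numpy as np

def maf_log_density(x, conditioner):
    """Log-density under one affine autoregressive transform with a standard
    normal base distribution.

    x: (d,) input; conditioner(x, i) returns (mu_i, alpha_i) computed only from x[:i].
    """
    d = x.shape[0]
    u = np.empty(d)
    log_det = 0.0
    for i in range(d):
        mu, alpha = conditioner(x, i)           # depends only on x[:i]
        u[i] = (x[i] - mu) * np.exp(-alpha)     # invert the affine transform
        log_det -= alpha                        # log |det dU/dx| = -sum(alpha_i)
    log_base = -0.5 * np.sum(u ** 2) - 0.5 * d * np.log(2 * np.pi)
    return log_base + log_det

# Toy conditioner standing in for a MADE network (hypothetical, for illustration).
def toy_conditioner(x, i):
    prefix = x[:i]
    return 0.5 * prefix.sum(), 0.1 * float(i)

print(maf_log_density(np.array([0.3, -1.2, 0.7]), toy_conditioner))
```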
31
TensorFlow Distributions
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation. Building on two basic abstractions, it offers flexible building blocks for probabilistic computation.
966
0.17 stars / hour
 Paper  Code
32
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
Prediction accuracy has long been the sole standard for comparing the performance of different image classification models, including in the ImageNet competition. However, recent studies have highlighted the lack of robustness of well-trained deep neural networks to adversarial examples.
21
0.16 stars / hour
 Paper  Code
33
RippleNet: Propagating User Preferences on the Knowledge Graph for Recommender Systems
To address the sparsity and cold start problem of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information.
23
0.16 stars / hour
 Paper  Code
34
Learning-based Video Motion Magnification
We show that the learned filters achieve high-quality results on real videos, with fewer ringing artifacts and better noise characteristics than previous methods. While our model is not trained with temporal filters, we found that the temporal filters can be used with our extracted representations up to a moderate magnification, enabling a frequency-based motion selection.
16
0.16 stars / hour
 Paper  Code
35
Diverse Image-to-Image Translation via Disentangled Representations
In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time.
115
0.16 stars / hour
 Paper  Code
36
Implementing Neural Turing Machines
Neural Turing Machines (NTMs) are an instance of Memory Augmented Neural Networks, a new class of recurrent neural networks which decouple computation from memory by introducing an external memory unit. Our implementation learns to solve three sequential learning tasks from the original NTM paper.
347
0.15 stars / hour
 Paper  Code
37
Neural Turing Machines
We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.
347
0.15 stars / hour
 Paper  Code
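The attentional access to external memory mentioned above includes content-based addressing: compare a key emitted by the controller against every memory slot, softmax the similarities, and read a weighted sum. A minimal numpy sketch of that read operation only; write heads and location-based addressing are omitted, and `beta` (the key strength) is an illustrative parameter.

```python
import numpy as np

def content_read(memory, key, beta=1.0, eps=1e-8):
    """Content-based addressing: cosine similarity -> softmax -> weighted read.

    memory: (n_slots, slot_dim); key: (slot_dim,); beta: key strength.
    """
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps
    )
    w = np.exp(beta * sims)
    w = w / w.sum()                      # attention weights over memory slots
    return w @ memory, w                 # read vector and the addressing weights

rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 5))
noisy_key = memory[3] + 0.01 * rng.normal(size=5)
read_vec, weights = content_read(memory, key=noisy_key, beta=5.0)
print(weights.argmax())   # slot 3 receives the highest weight
```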
38
Horovod: fast and easy distributed deep learning in TensorFlow
Training modern deep learning models requires large amounts of computation, often provided by GPUs; scaling to many GPUs requires inter-GPU communication, and depending on the particular methods employed, this communication may entail anywhere from negligible to significant overhead.
3,442
0.15 stars / hour
 Paper  Code
39
An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks.
964
0.15 stars / hour
 Paper  Code
40
Compositional Attention Networks for Machine Reasoning
We present the MAC network, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. MAC moves away from monolithic black-box neural architectures towards a design that encourages both transparency and versatility.
170
0.15 stars / hour
 Paper  Code
41
Enriching Word Vectors with Subword Information
Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. A vector representation is associated with each character $n$-gram; words are represented as the sum of these representations.
15,217
0.14 stars / hour
 Paper  Code
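A minimal sketch of the subword scheme: a word vector is the sum of the vectors of its character n-grams (lengths 3 to 6 in the paper) plus the word itself, with boundary symbols added. The dictionary of n-gram vectors below is a toy stand-in; the released implementation hashes n-grams into a fixed-size table and learns the vectors during training.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word wrapped in boundary symbols, e.g. '<where>'."""
    wrapped = f"<{word}>"
    grams = [wrapped[i:i + n]
             for n in range(n_min, n_max + 1)
             for i in range(len(wrapped) - n + 1)]
    return grams + [wrapped]                    # the full word is kept as well

def word_vector(word, ngram_vectors, dim=10):
    """Word representation = sum of its character n-gram vectors."""
    return sum((ngram_vectors[g] for g in char_ngrams(word) if g in ngram_vectors),
               start=np.zeros(dim))

# Toy table of n-gram vectors (illustration only; real vectors come from training).
rng = np.random.default_rng(0)
ngram_vectors = {g: rng.normal(size=10) for g in char_ngrams("where")}
print(char_ngrams("where")[:4])                 # ['<wh', 'whe', 'her', 'ere']
print(word_vector("where", ngram_vectors).shape)
```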
42
FastText.zip: Compressing text classification models
We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings.
15,217
0.14 stars / hour
 Paper  Code
43
Bag of Tricks for Efficient Text Classification
This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation.
15,217
0.14 stars / hour
 Paper  Code
44
DensePose: Dense Human Pose Estimation In The Wild
In this work, we establish dense correspondences between an RGB image and a surface-based representation of the human body, a task we refer to as dense human pose estimation. We first gather dense correspondences for 50K persons appearing in the COCO dataset by introducing an efficient annotation pipeline.
3,215
0.14 stars / hour
 Paper  Code
45
AllenNLP: A Deep Semantic Natural Language Processing Platform
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily.
2,964
0.14 stars / hour
 Paper  Code
46
Emergence of Locomotion Behaviours in Rich Environments
The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. We demonstrate this principle for locomotion -- behaviours that are known for their sensitivity to the choice of reward.
42
0.13 stars / hour
 Paper  Code
47
Proximal Policy Optimization Algorithms
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates.
42
0.13 stars / hour
 Paper  Code
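The objective that enables multiple epochs of minibatch updates is the clipped surrogate: take the minimum of the probability-ratio-weighted advantage and its clipped counterpart. A minimal numpy sketch of the per-batch objective; the log-probabilities and advantages below are toy numbers, not output of a real policy.

```python
import numpy as np

def ppo_clipped_objective(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate objective of PPO (to be maximized).

    new_logp, old_logp: log-probabilities of the taken actions under the new and
    old policies; advantages: estimated advantages for the same actions.
    """
    ratio = np.exp(new_logp - old_logp)                   # r_t = pi_new / pi_old
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))

# Toy numbers: the clipped term caps the incentive for large policy updates.
print(ppo_clipped_objective(
    new_logp=np.array([-0.1, -2.0]),
    old_logp=np.array([-0.9, -0.7]),
    advantages=np.array([1.0, -1.0]),
))
```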
48
MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network
In this paper, we present MultiPoseNet, a novel bottom-up multi-person pose estimation architecture that combines a multi-task model with a novel assignment method. MultiPoseNet can jointly handle person detection, keypoint detection, person segmentation and pose estimation problems.
48
0.13 stars / hour
 Paper  Code
49
A guide to convolution arithmetic for deep learning
We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed convolutional layers, as well as the relationship between convolutional and transposed convolutional layers.
4,339
0.13 stars / hour
 Paper  Code
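The relationships the guide works through reduce to the output-size formula o = floor((i + 2p - k) / s) + 1 for a convolution with input size i, kernel k, padding p and stride s, together with its transposed-convolution counterpart. A short worked example in Python (no output padding assumed for the transposed case).

```python
def conv_output_size(i, k, p=0, s=1):
    """Spatial output size of a convolution: floor((i + 2p - k) / s) + 1."""
    return (i + 2 * p - k) // s + 1

def transposed_conv_output_size(i, k, p=0, s=1):
    """Spatial output size of the associated transposed convolution
    (no output padding): s * (i - 1) + k - 2p."""
    return s * (i - 1) + k - 2 * p

# A 'same' convolution keeps 32x32 with k=3, p=1, s=1; striding halves it.
print(conv_output_size(32, k=3, p=1, s=1))             # 32
print(conv_output_size(32, k=3, p=1, s=2))             # 16
print(transposed_conv_output_size(16, k=3, p=1, s=2))  # 31 (stride-2 upsampling)
```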
50
Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research
The purpose of this technical report is two-fold. First of all, it introduces a suite of challenging continuous control tasks (integrated with OpenAI Gym) based on currently existing robotics hardware.
13,218
0.13 stars / hour
 Paper  Code
51
OpenAI Gym
OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms.
13,218
0.13 stars / hour
 Paper  Code
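The common interface mentioned above is the reset/step loop. A minimal usage sketch against the classic Gym API; the step return signature changed in later releases, so details may differ with your installed version.

```python
import gym

# Classic interaction loop against the common Gym interface.
env = gym.make("CartPole-v1")
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()            # replace with a learned policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print(total_reward)
```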
52
Quantized Densely Connected U-Nets for Efficient Landmark Localization
In this paper, we propose quantized densely connected U-Nets for efficient visual landmark localization. Finally, to reduce memory consumption and avoid high-precision operations in both training and testing, we further quantize the weights, inputs, and gradients of our localization network to low bit-width numbers.
54
0.12 stars / hour
 Paper  Code
53
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class.
206
0.12 stars / hour
 Paper  Code
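A minimal numpy sketch of the episode-level classification rule: each class prototype is the mean of its embedded support examples, and queries are assigned to the nearest prototype by squared Euclidean distance. The embeddings are assumed to come from some trained encoder; the toy 2-d points below are illustrative.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, num_classes):
    """Class prototype = mean of the embedded support examples of that class."""
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(num_classes)
    ])

def classify(query_embeddings, protos):
    """Assign each query to the prototype with the smallest squared distance."""
    d2 = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Toy 2-way, 3-shot episode in a 2-d embedding space (illustration only).
support = np.array([[0.0, 0.1], [0.2, -0.1], [0.1, 0.0],
                    [2.0, 2.1], [1.9, 2.0], [2.1, 1.9]])
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, num_classes=2)
print(classify(np.array([[0.05, 0.05], [2.0, 2.0]]), protos))   # [0 1]
```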
54
Mask R-CNN
Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection.
7,149
0.12 stars / hour
 Paper  Code
55
Focal Loss for Dense Object Detection
We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training.
1,444
0.11 stars / hour
 Paper  Code
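The reshaped cross entropy is FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t), which shrinks the loss of well-classified examples. A minimal numpy sketch for the binary case with the commonly used gamma = 2 and alpha = 0.25.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-8):
    """Binary focal loss: cross entropy scaled by (1 - p_t)^gamma.

    p: predicted probability of the positive class; y: labels in {0, 1}.
    """
    p_t = np.where(y == 1, p, 1.0 - p)                  # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps))

# A confidently correct example contributes far less than a hard one.
print(focal_loss(np.array([0.95]), np.array([1])))   # ~3e-5
print(focal_loss(np.array([0.30]), np.array([1])))   # ~0.15
```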
56
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss.
4,894
0.11 stars / hour
 Paper  Code
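Because the training data are unpaired, the adversarial loss alone is under-constrained, so the method adds a cycle-consistency term: translating to the other domain and back should recover the original image. A minimal numpy sketch of that term, assuming generic generator callables `G` and `F`; the adversarial discriminators are omitted.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency: translate and translate back, in both directions.

    G maps domain X -> Y, F maps Y -> X; x, y are batches from the two domains.
    """
    forward = np.abs(F(G(x)) - x).mean()    # x -> G(x) -> F(G(x)) should recover x
    backward = np.abs(G(F(y)) - y).mean()   # y -> F(y) -> G(F(y)) should recover y
    return forward + backward

# Toy linear "generators" that happen to be exact inverses of each other.
G = lambda x: 2.0 * x + 1.0
F = lambda y: (y - 1.0) / 2.0
rng = np.random.default_rng(0)
print(cycle_consistency_loss(rng.normal(size=(4, 3)),
                             rng.normal(size=(4, 3)), G, F))   # ~0 (exact inverses)
```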
57
Image-to-Image Translation with Conditional Adversarial Networks
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping.
4,894
0.11 stars / hour
 Paper  Code
58
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model.
4,364
0.11 stars / hour
 Paper  Code
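The paper's method (LIME) explains one prediction by sampling perturbations around the input, weighting them by proximity, and fitting a small weighted linear surrogate whose coefficients form the explanation. A minimal numpy sketch for tabular inputs with a generic black-box `predict_fn`; the Gaussian sampling and kernel used here are simplifications, not the released implementation.

```python
import numpy as np

def explain_instance(x, predict_fn, n_samples=500, scale=1.0, kernel_width=1.0, seed=0):
    """Fit a locally weighted linear surrogate around x (in the spirit of LIME).

    predict_fn: black-box model returning a scalar score for a batch of inputs.
    Returns per-feature coefficients of the local surrogate.
    """
    rng = np.random.default_rng(seed)
    samples = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    targets = predict_fn(samples)
    # Proximity weights: closer perturbations matter more.
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([samples, np.ones((n_samples, 1))]) * np.sqrt(weights)[:, None]
    b = targets * np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]                                      # drop the intercept

# Toy black box: near x, only the first feature really matters.
black_box = lambda X: 3.0 * X[:, 0] + 0.1 * np.sin(X[:, 1])
print(explain_instance(np.array([1.0, 2.0, -1.0]), black_box))
```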
59
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images.
3,833
0.11 stars / hour
 Paper  Code
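The files are distributed in the same gzipped IDX format as MNIST: a small big-endian header followed by the raw pixel or label bytes. A minimal parsing sketch, assuming the standard file names from the dataset repository are already downloaded into the working directory.

```python
import gzip
import numpy as np

def load_idx_images(path):
    """Read an IDX image file: 16-byte header (magic, count, rows, cols), then pixels."""
    with gzip.open(path, "rb") as f:
        magic, count, rows, cols = np.frombuffer(f.read(16), dtype=">i4")
        assert magic == 2051, "not an IDX image file"
        pixels = np.frombuffer(f.read(), dtype=np.uint8)
    return pixels.reshape(count, rows, cols)

def load_idx_labels(path):
    """Read an IDX label file: 8-byte header (magic, count), then one byte per label."""
    with gzip.open(path, "rb") as f:
        magic, count = np.frombuffer(f.read(8), dtype=">i4")
        assert magic == 2049, "not an IDX label file"
        return np.frombuffer(f.read(), dtype=np.uint8)

# Standard Fashion-MNIST file names (download them from the dataset repository first).
images = load_idx_images("train-images-idx3-ubyte.gz")
labels = load_idx_labels("train-labels-idx1-ubyte.gz")
print(images.shape, labels.shape)   # (60000, 28, 28) (60000,)
```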
60
FaceNet: A Unified Embedding for Face Recognition and Clustering
Despite significant recent advances in the field of face recognition, implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%.
5,481
0.11 stars / hour
 Paper  Code
61
Deep Speech: Scaling up end-to-end speech recognition
We present a state-of-the-art speech recognition system developed using end-to-end deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments.
7,539
0.11 stars / hour
 Paper  Code
62
Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image
We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations.
1,173
0.11 stars / hour
 Paper  Code
63
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks.
1,173
0.11 stars / hour
 Paper  Code
64
Unsupervised Monocular Depth Estimation with Left-Right Consistency
Learning-based methods have shown very promising results for the task of depth estimation in single images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches.
722
0.10 stars / hour
 Paper  Code
65
Reconfigurable Inverted Index
Existing approximate nearest neighbor search systems suffer from two fundamental problems that are of practical importance but have not received sufficient attention from the research community. Owing to the linear layout, the data structure can be dynamically adjusted after new items are added, maintaining the fast speed of the system.
15
0.10 stars / hour
 Paper  Code
66
MojiTalk: Generating Emotional Responses at Scale
Generating emotional language is a key step towards building empathetic natural language processing agents. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis.
43
0.10 stars / hour
 Paper  Code
67
Understanding Convolution for Semantic Segmentation
Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation.
408
0.10 stars / hour
 Paper  Code
68
MatchZoo: A Toolkit for Deep Text Matching
In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods. In this paper, we introduce the MatchZoo toolkit that aims to facilitate the designing, comparing and sharing of deep text matching models.
1,398
0.10 stars / hour
 Paper  Code
69
Deep Randomized Ensembles for Metric Learning
Learning embedding functions, which map semantically related inputs to nearby locations in a feature space, supports a variety of classification and information retrieval tasks. In this work, we propose a novel, generalizable and fast method to define a family of embedding functions that can be used as an ensemble to give improved results.
5
0.09 stars / hour
 Paper  Code
70
The Devil of Face Recognition is in the Noise
With the original datasets and cleaned subsets, we profile and analyze the label noise properties of MegaFace and MS-Celeb-1M, and we study the association between different types of noise, i.e., label flips and outliers, and the accuracy of face recognition models.
134
0.09 stars / hour
 Paper  Code
71
Analogical Reasoning on Chinese Morphological and Semantic Relations
Analogical reasoning is effective in capturing linguistic regularities. This paper proposes an analogical reasoning task on Chinese.
2,360
0.09 stars / hour
 Paper  Code
72
Are GANs Created Equal? A Large-Scale Study
Generative adversarial networks (GAN) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others.
477
0.09 stars / hour
 Paper  Code
73
The GAN Landscape: Losses, Architectures, Regularization, and Normalization
Generative Adversarial Networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of "tricks".
477
0.09 stars / hour
 Paper  Code
74
Familia: An Open-Source Toolkit for Industrial Topic Modeling
Familia is an open-source toolkit for pragmatic topic modeling in industry. Familia abstracts the utilities of topic modeling in industry as two paradigms: semantic representation and semantic matching.
1,361
0.09 stars / hour
 Paper  Code
75
NiftyNet: a deep-learning platform for medical imaging
NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. It enables researchers to rapidly develop and distribute deep learning solutions for these applications, or to extend the platform to new ones.
578
0.09 stars / hour
 Paper  Code