Philosophy

116 papers with code • 1 benchmark • 1 dataset

Most implemented papers

Learning Spatio-Temporal Representation with Pseudo-3D Residual Networks

ZhaofanQiu/pseudo-3d-residual-networks ICCV 2017

In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating $3\times3\times3$ convolutions with $1\times3\times3$ convolutional filters on spatial domain (equivalent to 2D CNN) plus $3\times1\times1$ convolutions to construct temporal connections on adjacent feature maps in time.
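The factorization described above can be sketched in PyTorch. This is a hedged illustration only: the channel count, the serial "spatial then temporal" ordering (the paper's P3D-A variant), and the class name are assumptions for clarity, not the repository's exact configuration.

```python
import torch
import torch.nn as nn

class P3DBlock(nn.Module):
    """Sketch of a pseudo-3D residual block: a 3x3x3 convolution is
    factorized into a 1x3x3 spatial convolution (2D-CNN-like) followed
    by a 3x1x1 temporal convolution over adjacent frames."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x3x3: convolution over the spatial dimensions only
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), bias=False)
        # 3x1x1: convolution connecting adjacent feature maps in time
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), bias=False)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Serial ordering: spatial, then temporal, plus a residual skip
        out = self.relu(self.spatial(x))
        out = self.temporal(out)
        return self.relu(out + x)

# Input layout: (batch, channels, frames, height, width)
x = torch.randn(2, 16, 8, 32, 32)
y = P3DBlock(16)(x)
```

The padding choices keep the output the same shape as the input, so the block can be dropped into a residual stack in place of a full 3×3×3 convolution at a fraction of the parameter cost.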

AXNet: ApproXimate computing using an end-to-end trainable neural network

PengZhenghao/AXNet 27 Jul 2018

To guarantee the approximation quality, existing works deploy two neural networks (NNs), e.g., an approximator and a predictor.
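The approximator/predictor pattern that sentence refers to can be sketched as follows. Everything here is illustrative: the "exact" function, the surrogate, and the simple threshold predictor are stand-ins, not AXNet's actual networks or API.

```python
import numpy as np

def exact(x):
    # Stand-in for an expensive exact computation
    return np.sin(x)

def approximator(x):
    # Cheap surrogate: cubic Taylor expansion of sin around 0
    return x - x**3 / 6.0

def predictor(x, tol=0.35):
    # Decides whether the approximation is safe to use; a simple
    # input-domain rule stands in for a learned classifier here
    return np.abs(x) < tol

def run(x):
    # Use the cheap approximation where the predictor accepts it,
    # falling back to the exact computation otherwise
    return np.where(predictor(x), approximator(x), exact(x))

xs = np.linspace(-1.0, 1.0, 9)
max_err = np.max(np.abs(run(xs) - exact(xs)))
```

The design point is that accuracy is preserved by construction: wherever the predictor rejects, the exact path runs, so the approximation error is bounded by the predictor's acceptance region.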

Expert Concept-Modeling Ground Truth Construction for Word Embeddings Evaluation in Concept-Focused Domains

yoortwijn/quine-ground-truth COLING 2020

We present a novel, domain expert-controlled, replicable procedure for the construction of concept-modeling ground truths with the aim of evaluating the application of word embeddings.

How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact

zhijing-jin/NLP4SocialGood_Papers Findings (ACL) 2021

We lay the foundations via the moral philosophy definition of social good, propose a framework to evaluate the direct and indirect real-world impact of NLP tasks, and adopt the methodology of global priorities research to identify priority causes for NLP research.

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma NA 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

FlowX: Towards Explainable Graph Neural Networks via Message Flows

divelab/DIG 26 Jun 2022

We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms.

Wayformer: Motion Forecasting via Simple & Efficient Attention Networks

zhejz/hptr 12 Jul 2022

In this paper, we present Wayformer, a family of attention-based architectures for motion forecasting that are simple and homogeneous.

Active-Learning-as-a-Service: An Automatic and Efficient MLOps System for Data-Centric AI

mlsysops/active-learning-as-a-service 19 Jul 2022

In data-centric AI, active learning (AL) plays a vital role, but current AL tools 1) require users to manually select AL strategies, and 2) cannot perform AL tasks efficiently.
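One of the AL strategies such tools choose among is least-confidence uncertainty sampling, which can be sketched in a few lines. The function name and the toy probabilities below are illustrative, not part of the tool's API.

```python
import numpy as np

def least_confidence(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k unlabeled samples whose top predicted class
    probability is lowest, i.e., where the model is least confident
    and a human label is most informative."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Fake softmax outputs for 4 unlabeled samples over 3 classes
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.60, 0.30, 0.10],
                  [0.34, 0.33, 0.33]])
picked = least_confidence(probs, k=2)  # indices of the 2 least-confident samples
```

Automatically choosing between strategies like this and, e.g., diversity-based sampling, rather than asking the user to pick one, is the gap the paper's system aims to close.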

Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness

pantheon5100/deacl 22 Jul 2022

Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.