Human annotations play a crucial role in machine learning (ML) research and development.
This paper offers a retrospective of what we learnt from organizing the workshop *Ethical Considerations in Creative Applications of Computer Vision* at the CVPR 2021 conference and, prior to that, a series of workshops on *Computer Vision for Fashion, Art and Design* at ECCV 2018, ICCV 2019, and CVPR 2020.
Despite the foundational role of benchmarking practices in this field, relatively little attention has been paid to the dynamics of benchmark dataset use and reuse, within or across machine learning subcommunities.
There is a tendency across different subfields in AI to valorize a small collection of influential benchmarks.
Specifically, we examine what dataset documentation communicates about the underlying values of vision data and the larger practices and goals of computer vision as a field.
Datasets have played a foundational role in the advancement of machine learning research.
In this paper, we introduce a rigorous framework for dataset development transparency which supports decision-making and accountability.
However, overall accuracy hides disproportionately high errors on a small subset of examples; we call this subset Compression Identified Exemplars (CIE).
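One way to make this concrete: if per-example predictions from both the original and the compressed model are available, the examples on which they disagree can be flagged directly. The sketch below is illustrative only; `find_cies` is a hypothetical helper, and the paper's exact CIE criterion (e.g. aggregating over multiple compressed runs) may differ.

```python
import numpy as np

def find_cies(full_preds, compressed_preds):
    """Flag candidate Compression Identified Exemplars (CIEs):
    examples where the compressed model's prediction diverges from
    the full model's. Illustrative definition only.

    Both arguments are 1-D integer class-prediction arrays of equal
    length. Returns a boolean mask over the examples.
    """
    full_preds = np.asarray(full_preds)
    compressed_preds = np.asarray(compressed_preds)
    return full_preds != compressed_preds

# Toy illustration: both models score 5/6 overall, yet their errors
# fall on different examples, which aggregate accuracy alone hides.
labels = np.array([0, 1, 1, 0, 2, 2])
full = np.array([0, 1, 1, 0, 2, 0])   # 5/6 correct
comp = np.array([0, 1, 1, 0, 0, 2])   # also 5/6 correct
mask = find_cies(full, comp)
print(int(mask.sum()))  # → 2 disagreement examples
```

Auditing accuracy on the `mask` subset separately from the rest is what exposes the disproportionate error concentration that overall accuracy conceals.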
Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models.
The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives.
Although essential to revealing biased performance, well-intentioned attempts at algorithmic auditing can harm the very populations these measures are meant to protect.
Facial analysis models are increasingly used in applications that have serious impacts on people's lives, ranging from authentication to surveillance tracking.
In hierarchical reinforcement learning a major challenge is determining appropriate low-level policies.
We consider the multi-agent reinforcement learning setting with imperfect information in which each agent is trying to maximize its own utility.
Sample generations are both varied and sharp, even many frames into the future, and compare favorably to those from existing approaches.
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss.
In this paper we introduce a generative parametric model capable of producing high quality samples of natural images.
We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks.