Author ORCID Identifier

Jonathon Penney: 0000-0001-9570-0146

Document Type

Article

Publication Date

12-3-2020

Source Publication

2020 Workshop on Navigating the Broader Impacts of AI Research, Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020)

Abstract

This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as “real world.” Despite this framing, however, we found that the physical or real-world testing conducted was minimal, provided few details about the subjects tested, and was often performed as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, then critique the physical domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.