The research by Zuxuan Wu, Ser-Nam Lim, Larry Davis, and Tom Goldstein [1] studies the art and science of creating adversarial attacks against object detectors. Most work on real-world adversarial attacks has focused on classifiers, which assign a single holistic label to an entire image, rather than on detectors, which locate objects within an image. Detectors evaluate thousands of candidate bounding boxes within the image, with different locations, sizes, and aspect ratios. To fool an object detector, the attack must fool every one of these candidate boxes, which is much harder than fooling the single output of a classifier.
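To illustrate why this is harder, the sketch below shows the general shape of a patch attack against a detector: instead of pushing down one classification score, the optimizer has to drive down the objectness score of every candidate box at once, for example by penalizing the highest remaining score. This is a minimal, hypothetical example using a toy detector head; it is not the authors' actual model or training code.

```python
import torch

# Toy "detector head": maps an image to objectness scores for many
# candidate boxes (one score per cell of a coarse grid). This stands
# in for a real detector such as YOLO; it is only a placeholder used
# to show the shape of the loss over all candidate boxes.
class ToyDetectorHead(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.pool = torch.nn.AdaptiveAvgPool2d((25, 40))   # 1000 "boxes"
        self.score = torch.nn.Conv2d(3, 1, kernel_size=1)

    def forward(self, img):
        # img: (B, 3, H, W) -> objectness per candidate box, shape (B, 1000)
        return self.score(self.pool(img)).flatten(1)

detector = ToyDetectorHead().eval()
image = torch.rand(1, 3, 400, 640)                        # scene with a person
patch = torch.rand(1, 3, 100, 100, requires_grad=True)    # the printed pattern

optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    # Paste the patch onto the image (fixed location for simplicity;
    # a realistic attack would randomize placement, scale, and lighting).
    attacked = image.clone()
    attacked[:, :, 150:250, 270:370] = patch.clamp(0, 1)

    scores = detector(attacked)

    # Key point: the loss must suppress ALL candidate boxes, not a
    # single output, e.g. by penalizing the highest objectness score.
    loss = scores.max()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```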
This sweater is more than just a warm garment for the winter: it carries an adversarial pattern that evades the most common object detectors.
This is what it looks like in real time:
[1] Invisibility Cloak (2015). University of Maryland. Retrieved October 3, 2022, from https://www.cs.umd.edu/~tomg/projects/invisible.