AnCaps
ANARCHO-CAPITALISTS
Bitch-Slapping Statists For Fun & Profit Based On The Non-Aggression Principle
 

 

 Dressing for the Surveillance Age

CovOps

Female
Location : Ether-Sphere
Job/hobbies : Irrationality Exterminator
Humor : Über Serious

PostSubject: Dressing for the Surveillance Age   Fri Mar 13, 2020 11:05 pm

Tom Goldstein, an associate professor of computer science at the University of Maryland, took an “invisibility cloak” from a pile on a chair in his office and pulled it on over his head. To my eye, it looked like a baggy sweatshirt made of glossy polyester, printed with garish colors in formless shapes that, far from turning Goldstein invisible, made him impossible to miss.
It was mid-January. Early that morning, in my search for a suitable outfit to thwart the all-seeing eyes of surveillance machines, I had taken the train from New York City to College Park. As I rode the subway from Brooklyn to Penn Station, and then boarded Amtrak for my trip south, I counted the CCTV cameras; at least twenty-six caught me going and returning. When you come from a small town, as I do, where everyone knows your face, public anonymity—the ability to disappear into a crowd—is one of the great pleasures of city living. As cities become surveillance centers, packed with cameras that always see, public anonymity could vanish. Is there anything fashion can do?
I could have worn a surgical mask on my trip, ostensibly for health reasons; reports of an unexplained pneumonia outbreak in China were making the news, and I’d spotted a woman on the C train in an N95 respirator mask, which had a black, satiny finish. Later, when I spoke to Arun Ross, a computer-vision researcher at Michigan State University, he told me that a surgical mask alone might not block enough of my face’s pixels in a digital shot to prevent a face-recognition system from making a match; some algorithms can reconstruct the occluded parts of people’s faces. As the coronavirus spread through China, SenseTime, a Chinese A.I. company, claimed to have developed an algorithm that not only can match a surgically masked face with the wearer’s un-occluded face but can also use thermal imaging to detect an elevated temperature and discern whether that person is wearing a mask. For my purposes, a full-face covering, like the Guy Fawkes mask made popular by the “V for Vendetta” graphic novels and films, would have done the trick, but I doubted whether Amtrak would have let me on the train. During Occupy Wall Street, New York enforced old anti-mask laws to prevent protesters from wearing masks.
Goldstein’s invisibility cloak clashed with the leopard-print, cell-signal-blocking Faraday pouch, made by Silent Pocket, in which I carried my phone so that my location couldn’t be tracked. As a luxury item, the cloak also fell far short of the magnificent Jammer Coat, a prototype of anti-surveillance outerwear that I had slipped on a few weeks earlier at Coop Himmelb(l)au, an architecture studio in Vienna. The Jammer Coat, a one-of-a-kind, ankle-length garment with a soft finish and flowing sleeves, like an Arabic thawb, is lined with cellular-blocking metallic fabric and covered with patterns that vaguely resemble body parts, which could potentially render the personal technology underneath invisible to electronic-object detectors. Swaddled in the cushy coat, I could at least pretend to be the absolute master of my personal information, even if its designers, Wolf and Sophie Prix, wouldn’t let me leave the studio in it.
However, the invisibility cloak, while not as runway-ready as some surveillance-wear, did have one great advantage over other fashion items that aim to confuse the algorithms that control surveillance systems: the cloak’s designer was an algorithm.
To put together a Jammer outfit for my style of dressing—something like stealth streetwear—I first needed to understand how machines see. In Maryland, Goldstein told me to step in front of a video camera that projected my live image onto a large flat screen mounted on the wall of his office in the Iribe Center, the university’s hub for computer science and engineering. The screen showed me in my winter weeds of dark denim, navy sweater, and black sneakers. My image was being run through an object detector called YOLO (You Only Look Once), a vision system widely employed in robots and in CCTV. I looked at the camera, and that image passed along my optic nerve and into my brain.
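To give a rough sense of what YOLO does under the hood: a single neural network sweeps the whole frame once and emits candidate boxes with class scores. Below is a minimal sketch of that pipeline using OpenCV’s DNN module and the freely published YOLOv3 weights; the version and file names are my assumptions, since the article doesn’t say which build Goldstein’s lab ran.

Code:
# Minimal sketch: a pretrained YOLOv3 detector on one webcam frame, via
# OpenCV's DNN module. File names (yolov3.cfg, yolov3.weights, coco.names)
# are assumptions; the article does not specify the exact setup.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().splitlines()
out_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)          # live camera, as in Goldstein's demo
ok, frame = cap.read()
if ok:
    # YOLO expects a square, normalized input; 416x416 is the standard size.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(out_layers):
        for det in output:          # det = [cx, cy, w, h, objectness, scores...]
            scores = det[5:]
            class_id = int(scores.argmax())
            if scores[class_id] > 0.5:
                print(classes[class_id], float(scores[class_id]))
cap.release()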
On the train trip down to Maryland, I watched trees pass by my window, I glanced at other passengers, and I read my book, all without being aware of the incredibly intricate processing taking place in my brain. Photoreceptors in our retinas capture images, turning light into electrical signals that travel along the optic nerve. The primary visual cortex, in the occipital lobe, at the rear of the head, then sends these signals onward, carrying information about edges, colors, and motion. As they pass through a series of hierarchical cerebral layers, the brain reassembles them into objects, which are in turn stitched together into complex scenes. Finally, the visual memory system in the prefrontal cortex recognizes them as trees, people, or my book. All of this happens in about two hundred milliseconds.
Building machines that can process and recognize images as accurately as a human has been, along with teaching machines to read, speak, and write our language, a holy grail of artificial-intelligence research since the early sixties. These machines don’t see holistically, either—they see in pixels, the minute grains of light that make up a photographic image. At the dawn of A.I., engineers tried to “handcraft” computer programs to extract the useful information in the pixels that would signal to the machine what kind of object it was looking at. This was often achieved by extracting information about the orientation of edges in an image, because edges appear the same under different lighting conditions. Programmers tried to summarize the content of an image by calculating a small list of numbers, called “features,” which describe the orientation of edges, as well as textures, colors, and shapes.
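To make “features” concrete: one of the best-known handcrafted descriptors of this kind is the histogram of oriented gradients, or HOG, which reduces an image to a vector of edge-orientation statistics. HOG is my example, not one the article names; the sketch below uses scikit-image.

Code:
# A classic "handcrafted" feature of the kind described above: HOG
# summarizes an image as a short vector of edge-orientation statistics,
# computed per small cell and normalized over neighboring blocks.
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import hog

image = rgb2gray(data.astronaut())   # any grayscale image will do
features = hog(
    image,
    orientations=9,            # bin edge directions into 9 orientations
    pixels_per_cell=(8, 8),    # one local histogram per 8x8 cell
    cells_per_block=(2, 2),    # normalize over 2x2 blocks of cells
)
print(features.shape)  # a compact edge-orientation summary of the whole image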
But the pioneers soon encountered a problem. The human brain has a remarkable ability, as it processes an object’s components, to save the useful content, while throwing away “nuisance variables,” like lighting, shadows, and viewpoint. A.I. researchers couldn’t describe exactly what makes a cat recognizable as a cat, let alone code this into a mathematical formula that was unaffected by the infinitely variable conditions and scenes in which a cat might appear. It was impossible to code the cognitive leap that our brains make when we generalize. Somehow, we know it’s a cat, even when we catch only a partial glimpse of it, or see one in a cartoon.
Researchers around the world, including those at the University of Maryland, spent decades training machines to see cats, among other things, but, until 2010, computer vision, or C.V., still had an error rate of around thirty per cent, roughly six times higher than a typical person’s. After 9/11, there was much talk of “smart” CCTV cameras that could recognize faces, but the technology worked only when the images were passport-quality; it failed on faces “in the wild”—that is, out in the real world. Human-level object recognition was thought to be an untouchable problem, somewhere over the scientific horizon.
A revolution was coming, however. Within five years, machines could recognize objects with not just human but superhuman accuracy, thanks to deep learning, the now ubiquitous approach to A.I., in which algorithms that process input data learn through multiple trial-and-error cycles. In deep-learning-based computer vision, feature extraction and mapping are done by a neural network, a constellation of artificial neurons. Trained on a large database of images of objects or faces, a neural net learns to correctly recognize the objects or faces it then encounters. Only in recent years have sufficiently large digitized data sets and vast cloud-based computing resources become available to make this data- and power-thirsty approach work. Billions of trial-and-error cycles might be required for an algorithm to figure out not only what a cat looks like but what kind of cat it is.
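What one of those “trial-and-error cycles” looks like in code: present a batch of labeled images, measure the error, and nudge every weight slightly downhill. The sketch below is a toy PyTorch loop, purely illustrative, with random stand-in data rather than any real training set or any system from the article.

Code:
# Minimal sketch of the trial-and-error loop described above: a tiny
# convolutional network adjusted toward correct labels, one error signal
# at a time. Assumes 3x32x32 images and 10 possible classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),      # 10 object classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)    # stand-in for a batch of photos
labels = torch.randint(0, 10, (8,))   # stand-in for their true classes

for step in range(100):               # each pass is one trial-and-error cycle
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                   # measure the error...
    optimizer.step()                  # ...and adjust the weights slightly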
[Cartoon: “A guest sinks into quicksand during a Q. & A. session.”]
“Computer-vision problems that scientists said wouldn’t be overcome in our lifetime were solved in a couple of years,” Goldstein had told me when we first met, in New York. He added, “The reason the scientific community is so shocked by these results—they ripple through everything—is that we have this tool that achieves humanlike performance that nobody ever thought we would have. And suddenly not only do we have it but it does things that are way crazier than we could have imagined. It’s sort of mind-blowing.”

https://www.newyorker.com/magazine/2020/03/16/dressing-for-the-surveillance-age