  Homeland Security’s Lunatic ‘Pre-Crime’ Screening Will Never Work

CovOps
Gender : Female
Location : Ether-Sphere
Job/hobbies : Irrationality Exterminator
Humor : Über Serious

PostSubject: Homeland Security’s Lunatic ‘Pre-Crime’ Screening Will Never Work (Thu Apr 19, 2012 7:02 am)

The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology: crazy, straight-out-of-sci-fi pre-crime detection and prevention software that may come to an airport security screening checkpoint near you someday soon. Yet again the threat of terrorism is being used to justify the introduction of super-creepy invasions of privacy, leading us one step closer to a turn-key totalitarian state. This may sound alarmist, but in cases like this a little alarm is warranted. FAST will remotely monitor physiological and behavioral cues, like elevated heart rate, eye movement, body temperature, facial patterns, and body language, and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions. There are several major flaws with a program like this, any one of which should be enough to condemn attempts of this kind to the dustbin. Let's look at them in turn.

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage suggests, that turns out to be an incredibly difficult thing to do. Here is why: let's assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. (Terrorists are probably much rarer than that, or we would see far more acts of terrorism, given the daily throughput of the global transportation system.) Now let's imagine the FAST algorithm correctly classifies 99.99 percent of observations, an incredibly high rate of accuracy for any big-data-based predictive model. Even at this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. And given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the one guilty party would be a non-trivial, and invasive, task.
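
The arithmetic is easy to check for yourself. Here is a minimal sketch using the assumed numbers from the paragraph above (a 1-in-1,000,000 base rate and a 99.99 percent accurate screener); nothing in it comes from FAST itself:

```python
# A minimal sketch of the false-positive paradox described above.
# Assumed numbers come straight from the paragraph: a base rate of
# 1 terrorist per 1,000,000 travelers and a 99.99%-accurate screener.

def flagged_counts(population, base_rate, accuracy):
    """Return (true positives, false positives) for a screener that
    classifies both classes correctly with probability `accuracy`."""
    terrorists = population * base_rate
    innocents = population - terrorists
    true_positives = terrorists * accuracy        # real threats caught
    false_positives = innocents * (1 - accuracy)  # innocents flagged
    return true_positives, false_positives

tp, fp = flagged_counts(population=1_000_000,
                        base_rate=1 / 1_000_000,
                        accuracy=0.9999)
print(f"true positives:  {tp:.2f}")   # ~1
print(f"false positives: {fp:.2f}")   # ~100
print(f"share of flags that are real threats: {tp / (tp + fp):.2%}")  # ~1%
```

Roughly 100 people get flagged per million screened, and only about 1 percent of them are actual threats.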

Of course, FAST has nowhere near a 99.99 percent accuracy rate. Much of the work being done here is presumably classified, but a write-up in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means, since both the write-up and the DHS documentation (all PDFs) are unclear; there are two plausible interpretations. The first is that the current iteration of FAST correctly classifies 70 percent of the people it observes, which would produce false positives at an abysmal rate given the rarity of terrorists in the population. The second is that FAST will call a terrorist a terrorist 70 percent of the time. This reading tells us nothing about the rate of false positives, but that rate would likely be quite high. In either case, the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
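
To make the two readings concrete, here is another hedged sketch. The 1-in-1,000,000 base rate carries over from the thought experiment above, and the 1 percent false-positive rate used in the second reading is purely hypothetical, since the reporting gives no such figure:

```python
# Contrasting the two readings of the reported 70% figure, per million screened.
# Assumption: a 1-in-1,000,000 base rate, as in the thought experiment above.

population = 1_000_000
terrorists = 1
innocents = population - terrorists

# Reading 1: 70% overall accuracy (both classes classified correctly 70% of the time).
accuracy = 0.70
tp_reading1 = terrorists * accuracy
fp_reading1 = innocents * (1 - accuracy)   # ~300,000 innocents flagged

# Reading 2: 70% sensitivity (catches 7 of 10 terrorists); false-positive rate
# unreported. Even a modest assumed 1% FPR still buries the signal.
assumed_fpr = 0.01                         # hypothetical, not from the article
tp_reading2 = terrorists * 0.70
fp_reading2 = innocents * assumed_fpr      # ~10,000 innocents flagged

for label, tp, fp in [("overall accuracy", tp_reading1, fp_reading1),
                      ("sensitivity only", tp_reading2, fp_reading2)]:
    print(f"{label}: {fp:,.0f} false positives per {tp:.2f} true positives")
```

Either way, flagged innocents outnumber real threats by four to six orders of magnitude.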

The second major problem with FAST is the experimental methodology being used to develop it. According to a DHS privacy impact assessment of the research, the technology is being tested in a lab setting using volunteer subjects. These volunteers are sorted into two groups, one of which is “explicitly instructed to carry out a disruptive act, so that the researchers and the participant (but not the experimental screeners) already know that the participant has malintent.” The experimental screeners then use the results from the FAST sensors to try to identify the participants with malintent. Presumably this is where that 70 percent number comes from.
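
For what it's worth, a quick simulation shows how a 70 percent lab figure can arise under this kind of protocol, and why it says little about the field. All the numbers here are assumptions (a 50/50 split of malintent and control volunteers, a screener that is right 70 percent of the time per subject); only the 70 percent figure itself comes from the reporting:

```python
# A hedged simulation of the lab protocol described above. Assumed numbers:
# a 50/50 split between "malintent" and control volunteers, and a screener
# whose per-subject accuracy is 70%. Nothing beyond the 70% figure comes
# from the actual DHS study.
import random

random.seed(0)

def run_trial(n_subjects=1000, accuracy=0.70, malintent_share=0.5):
    correct = 0
    for _ in range(n_subjects):
        has_malintent = random.random() < malintent_share
        # The screener classifies each subject correctly with probability `accuracy`.
        guess = has_malintent if random.random() < accuracy else not has_malintent
        correct += (guess == has_malintent)
    return correct / n_subjects

print(f"lab accuracy: {run_trial():.1%}")  # ~70%, matching the reported figure
```

At a 50/50 lab base rate, 70 percent accuracy sounds respectable; at a 1-in-1,000,000 field base rate, the same detector drowns in false positives, as the arithmetic above shows.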

The validity of this procedure rests on the assumption that volunteers instructed by researchers to “have malintent” are a reasonable facsimile of real-life terrorists in the field. That seems like quite a leap. Without actual intent to commit a terrorist act, something these volunteers necessarily lack, it is hard to produce test observations that mimic the subtle cues a real terrorist might show. If anything, the act of instructing a volunteer to have malintent would make that intent seem acceptable within the testing conditions, thereby altering the subtle cues the subject exhibits. Without a legitimate sample exhibiting the actual characteristics being screened for, a near-impossible proposition for this project, we should be extremely wary of any claimed results.

http://www.theatlantic.com/technology/archive/2012/04/homeland-securitys-pre-crime-screening-will-never-work/255971/
_________________
Anarcho-Capitalist, AnCaps Forum, Ancapolis, OZschwitz Contraband
“The state calls its own violence law, but that of the individual, crime.”-- Max Stirner
"Remember: Evil exists because good men don't kill the government officials committing it." -- Kurt Hofmann