AnCaps
ANARCHO-CAPITALISTS
Bitch-Slapping Statists For Fun & Profit Based On The Non-Aggression Principle
 

 

 New Research Warns of ‘Normal Accident’ From AI in Modern Warfare

Author: CovOps

Subject: New Research Warns of ‘Normal Accident’ From AI in Modern Warfare
Posted: Thu May 23, 2019 11:14 pm

Artificial intelligence provides vital solutions to some very complex problems but remains a looming threat as we consider its applications on the battlefield. New research details the impending risks and pins the fate of humanity on complex choices we will face in the near future.






A new paper from private intelligence and defense company ASRC Federal and the University of Maryland takes a deep dive into what could happen if we, as a society, choose to employ artificial intelligence in modern warfare. While many aspects of the paper focus on bleak and sobering scenarios that end in the eradication of humanity, it concludes that this technology will exist and that humanity can survive alongside it if we make the right choices. Nevertheless, the researchers touch on two important inevitabilities: the development of weaponized artificial intelligence and an impending “normal accident” (more commonly known as a system accident) related to this technology.

Normal accidents, such as the Three Mile Island accident cited in the paper, occur through the implementation of complex technologies we can’t fully understand—at least, not yet. Artificial intelligence may fit the definition better than anything people have ever created. While we understand how AI works, we struggle to explain how it arrives at its conclusions for the same reason we struggle to do the same with people. With so many variables in the system to monitor, let alone analyze, the researchers raise an important concern about creating further abstraction from our tools of war:
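To make the "normal accident" idea concrete, here is a toy, hypothetical sketch (not from the paper and not a real plant model): two automated safety rules that are each sound in isolation, but that interact with a single stuck-high sensor to cascade into failure. All names and thresholds are invented for illustration.

```python
# Hypothetical "normal accident" sketch: each safety rule is safe alone,
# but their interaction under one sensor fault drives the system to failure.

def run_plant(pressure_sensor_faulty, use_vent_rule=True, use_coolant_rule=True):
    pressure, temperature = 50, 50   # nominal operating point (arbitrary units)
    coolant_on = True
    for _ in range(10):              # ten control-loop ticks
        # A faulty sensor reads stuck-high regardless of true pressure.
        reading = 120 if pressure_sensor_faulty else pressure
        if use_vent_rule and reading > 100:
            pressure -= 30           # rule A: vent when pressure reads high
        if use_coolant_rule and pressure < 20:
            coolant_on = False       # rule B: cut coolant at low pressure
        if not coolant_on:
            temperature += 25        # without coolant, heat accumulates
    return temperature < 150         # True = safe, False = failure
```

With the fault present, rule A repeatedly vents against a phantom overpressure, driving real pressure low enough to trip rule B, which cuts coolant and lets the system overheat; disable either rule and the fault is tolerated. Neither component "failed" in the ordinary sense, which is exactly what makes such accidents normal.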

Quote :
An extension of the human-on-the-loop approach is the human-initiated “fire and forget” approach to battlefield AI. Once the velocity and dimensionality of the battlespace increase beyond human comprehension, human involvement will be limited to choosing the temporal and physical bounds of behavior desired in an anticipated context. Depending on the immediacy or unpredictability of the threat, engaging the system manually at the onset of hostilities may be impossible.
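The bounded autonomy the quote describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's design: the human pre-authorizes a temporal window and a physical region, and the system may act autonomously only inside that envelope, since per-event human approval is assumed too slow.

```python
# Hypothetical sketch of pre-authorized "fire and forget" bounds:
# the human chooses temporal and physical limits in advance; the system
# then decides on its own, but only inside that envelope.
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    t_start: float   # temporal bounds, seconds
    t_end: float
    x_min: float     # physical bounds along one axis, km
    x_max: float

def may_engage(env: Envelope, t: float, x: float) -> bool:
    """Autonomous action is permitted only inside the pre-authorized envelope."""
    return env.t_start <= t <= env.t_end and env.x_min <= x <= env.x_max

env = Envelope(t_start=0.0, t_end=60.0, x_min=10.0, x_max=20.0)
# Inside both bounds the system acts without per-event approval;
# outside either bound it must not.
```

The envelope is the only point of human control: everything that happens within it is chosen by the machine, which is precisely the loss of per-decision oversight the researchers flag.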

While MIT has discovered a method for predicting some AI “errors” in advance and just initiated a 50-year AI accelerator program with the US Air Force, such preparation only mitigates the potential damage we can predict. Just as we failed to preempt the psychological toll of smartphones—a “normal accident” of a different variety—we will fail to predict future damage caused by artificial intelligence. That’s already happening.


Normal accidents do not have to originate from a weakness of human ethics, though it is often difficult to tell the difference. Regardless of their circumstances, these accidents are a harsh byproduct of technological growth. As the paper states, abstaining from the development of potentially dangerous technologies will not stop other governments and private citizens, malicious or otherwise, from pursuing them. While we cannot stop the growth of AI, and we cannot prevent all the damage it will do, we can follow the paper’s proposal and do our best to mitigate potential harm.

While the researchers suggest only prohibition and regulation of AI, an ethics-based legal foundation can at least provide a framework for managing problems as they arise. We can also offset the cost by investing our time and resources in machine learning projects that help save lives. Because the paper does not address other uses of artificial intelligence, it assesses the potential risks of weaponization in isolation. The human deficiencies that create today’s risks may not look the same tomorrow. With the concurrent market growth of AI, bio-implant technologies, and genetic modification, the people of tomorrow may be better equipped to handle the threats we can imagine today.

We can’t safely bet on the unknowns, whether positive or negative, but we can prepare for the evidence-based scenarios we can predict. Although this new research paints an incomplete picture, it encourages us to use the tools currently at our disposal to mitigate the potential harms of weaponized AI. With thoughtful effort, and the willingness to try, we retain hope of weathering the future storm of inevitable change.



https://www.extremetech.com/extreme/291762-artificial-intelligence-modern-warfare