CovOps
Subject: Why I’m turning my son into a cyborg (Thu Jul 18, 2019 9:04 pm)
Imagine if everyone spoke a language you don’t understand. People have been speaking it around you since the day you were born, but while everyone else picks it up immediately, for you it means nothing. Others become frustrated with you. Friendships and jobs are difficult. Just being “normal” becomes a battle. For many with autism, this is the language of emotion. For those on the spectrum, fluency in facial expressions doesn’t come for free as it does for “neurotypicals.” To them, reading facial expressions seems like a superpower.
So when my son was diagnosed, I reacted not just as a mom. I reacted as a mad scientist and built him a superpower. This isn’t the first time I’ve played mad scientist with my son’s biology. When he was diagnosed with type 1 diabetes, I hacked his insulin pump and built an AI that learned to match his insulin to his emotions and activities. I’ve also explored neurotechnologies to augment human sight, hearing, memory, creativity, and emotions. Tiger moms might obsess over the “right” prep schools and extracurriculars for their child, but I say why leave their intellect up to chance? I’ve chosen to turn my son into a cyborg and change the definition of what it means to be human. But do my son’s engineered superpowers make him more human, or less?
How the CIA taught me to smile

Life gave me an amazing and exhausting little boy. It also gave me unique tools to help him overcome his challenges. The first came in the form of a crazy CIA scheme to create an AI to catch liars. Years ago, on my very first machine-learning project as an undergrad, I helped build a real-time lie-detection system that could work off raw video. The AI we developed learned to recognize the facial expressions of people on camera and infer their emotions. It explored every frame of video, learning the facial muscle movements that indicated disgust (nose wrinkle + upper lip raise) or anger (eyebrows down and together + eyes glare + lips narrow). It even learned to distinguish “false” smiles from “true” ones, otherwise known as Duchenne smiles (a tightening of the superorbital muscles around the eyes).

Before this project, I assumed I’d spend a long neuroscience career sticking electrodes into brains. But watching our algorithms learn such a foundationally human task hooked me on studying how natural and artificial intelligence can work together. Fast-forward through the next decade of my academic career (neural coding and cyborgs) and my first few startups (AI for education and jobs), and I had built a reputation as the crazy lady seeking to “maximize human potential.”

When the ill-fated Google Glass, a wearable smartphone masquerading as a pair of glasses, was launched by throwing some guys out of a blimp, I was invited to explore ideas for what could be done beyond social posts and family videos. For a woman who wanted to build cyborgs, there was so much potential. Along with its computing power, Glass had a live camera, a heads-up display, and a combination of voice and head-motion controls. Drawing from that old CIA project and my years of machine-learning research, I began to build face- and expression-recognition systems for Glass.
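To make the idea concrete, here is a minimal sketch of how facial muscle movements (action units) might map to the emotion labels described above. The action-unit names and the rule-based classifier are purely illustrative assumptions; the actual system learned these patterns from video rather than using hand-written rules.

```python
# Illustrative sketch (not the author's system): mapping detected facial
# action units (AUs) to emotion labels, following the cues named in the text.
# An upstream detector is assumed to supply the set of active AUs per frame.

# Rules drawn from the text: disgust = nose wrinkle + upper lip raise;
# anger = brows down + glare + narrowed lips; a "true" (Duchenne) smile
# adds eye-area tightening to a plain lip-corner pull.
EMOTION_RULES = {
    "disgust": {"nose_wrinkler", "upper_lip_raiser"},
    "anger": {"brow_lowerer", "lid_tightener", "lip_tightener"},
    "true_smile": {"lip_corner_puller", "cheek_raiser"},
    "false_smile": {"lip_corner_puller"},
}

def classify_expression(active_aus: set) -> str:
    """Return the emotion whose required AUs are all active.

    More specific rules (more required AUs) win, so a Duchenne smile
    is preferred over a bare lip-corner pull.
    """
    matches = [
        (len(aus), emotion)
        for emotion, aus in EMOTION_RULES.items()
        if aus <= active_aus  # rule's AUs are a subset of what was detected
    ]
    if not matches:
        return "neutral"
    return max(matches)[1]

print(classify_expression({"lip_corner_puller", "cheek_raiser"}))  # true_smile
print(classify_expression({"nose_wrinkler", "upper_lip_raiser"}))  # disgust
print(classify_expression(set()))                                  # neutral
```

A learned model replaces the hand-written rule table with weights, but the input/output shape is the same: per-frame muscle activations in, an emotion label out.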
(In truth, the crappy little processor would heat up like a bomb, so the system required an extra computer strapped to the user’s back to work—not exactly Iron Man.) Using these augmented-reality glasses, I could read people’s faces—and do so many more terrible things. I imagined using them to scan a room, reading expressions and flagging false smiles (LA and DC, I’m looking at you). I saw a future where we could access credit scores, or pull up Facebook or Grindr accounts (or Ashley Madison for CFOs). The scene could play out like an episode of Black Mirror, with Glass cuing my actions to exploit the emotional vulnerabilities of others.

But I wasn’t interested in the questionable or downright terrifying applications. I just wanted to give kids like my son greater insight into the people around them. In 2013 I built a proof-of-concept system called SuperGlass. Based on research from one of my academic labs, our system could recognize the expression of a face and write the emotion on Glass’s little heads-up screen, allowing an individual with autism to more easily perceive whether the person in front of them was happy, sad, angry, or something else.

Simply wearing Glass while continuing everyday social interactions allowed these kids to learn that secret language of facial expressions; it’s the real-time version of the flashcard-based emotion-recognition training that uses cartoon faces on cardboard. But learning from a flashcard that a smile means happiness teaches kids nothing about why people are happy. Learning the same from natural social interaction actually helps build theory of mind, another secret language thought to be missing in autism. This research has continued over the years and overcome many of its original limitations. For many kids, these systems are more than a prosthetic—they actually advance their learning of this secret emotional language.
A team at Stanford has shown that such a system can improve children’s expression recognition, even when they are not wearing it. Our own pilot found that it even helped foster empathy.
Source: https://qz.com/1650393/transhumanist-parents-are-turning-their-children-into-cyborgs/