Could Intelligent Assistants Prevent School Shootings?

Can intelligent personal assistants (mobile virtual assistants) help stop senseless school shootings? Can the same technology that powers applications like Siri, Google Now, and EasilyDo sense when a young man has crossed the boundary from heartbreak into violent psychosis?

Troubled Youth

Envision an imaginary teenager named John Doe. John has had a difficult time adjusting to the social pressures of high school. The few friends he has are not part of the popular crowd, and John has had brushes with school bullies. John developed a crush on a girl from the popular crowd who happens to live in his neighborhood. Several months ago, he worked up the courage to ask her out, but she rejected him. In the process, she also insulted John and incited several bullies from her clique to mock him. These past few months have been a living hell for John, and he can’t go on any longer.

There are warning signs that John is under intense emotional pressure. He has stopped playing club soccer. He has posted disturbing images on Facebook and written tweets that dwell on betrayal and hint at a desire for revenge.

So what could John’s intelligent assistant do to help him in his troubled situation and prevent him from spiraling out of control? John’s IA has access to a lot of information about him. The IA needs this access in order to provide John with the timely information he relies on. The IA maintains John’s calendar. It scans John’s emails to find out who his contacts are and to alert him to things like the status of packages he’s ordered. John’s IA can keep an eye on the sentiment of his tweets. The IA can get a sense of when John is under stress. The IA may even be able to look at a combination of factors and connect the dots to realize that John might be planning something very bad.
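
To make this concrete, here is a minimal sketch, in Python, of how an assistant might track the sentiment of recent posts and flag a sustained negative trend. The keyword list, threshold, and function names are illustrative assumptions only; a real system would use a trained sentiment model and many more signals than text.

# Toy sketch: flag a sustained run of negative posts.
# The keyword list, window, threshold, and sample posts below are
# illustrative placeholders, not a real assistant API.

NEGATIVE_WORDS = {"betrayal", "revenge", "alone", "hate", "hopeless", "payback"}

def post_score(text):
    """Crude sentiment proxy: count negative keywords in one post."""
    words = text.lower().split()
    return -sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)

def should_flag(posts, window=10, threshold=-5):
    """Flag if the combined score of the last `window` posts falls at or below the threshold."""
    recent = posts[-window:]
    return sum(post_score(p) for p in recent) <= threshold

if __name__ == "__main__":
    sample_posts = [
        "nobody cares, feels like betrayal every single day",
        "they will get payback, I hate all of them",
        "so alone, hopeless about everything",
    ]
    # True here would mean "consider alerting a trusted contact"
    print(should_flag(sample_posts, window=3, threshold=-3))

The point is not this particular scoring rule but the pattern: accumulate small signals over time and escalate only when they form a clear, sustained trend rather than reacting to a single post.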

All of these technologies are available today. They might not be combined yet into one app, but they could be. The challenge probably lies in getting someone like John to agree to have his IA look out for him. If John were in agreement, who would the IA alert if signs pointed to a possible emotional crisis? Would John perhaps agree to have a peer alerted? If so, what’s the responsibility of the peer? What if the IA alerted a parent? In that case, John would most likely change his behavior to avoid detection. What if the IA were to alert a social worker or therapist? Again, many people would likely decline to use the app if they knew it was tracking them and could end up ‘ratting them out’ to some sort of professional or even to law enforcement.

What would convince people to have an IA keep a watchful eye over them to protect them from themselves? That’s a question that needs to be researched. There are obvious ethical questions associated with such an IA that go beyond the user’s agreement to use it. But with the technology available, this seems like an application of intelligent assistant features worth investigating. For someone like John, simply having a trusted peer or adult know about his emotional pain could make all the difference. If this confidant could express sympathy and lend an ear, it might keep John from making very bad decisions.

Will intelligent assistants of the future play the role of watchful, caring overseers that get us the help we need before we can cause harm to ourselves and others? It seems like a plausible scenario and one that should be explored.
