Preparing for the Artificial Intelligence Takeover

Ephraim Baron

Filmmakers and dystopian sci-fi novelists have long predicted a future where robots take control and humans are relegated to a servant class – or worse. This thought was in the back of my mind as I joined 20 Guidewire engineers and architects in an “Artificial Intelligence (AI)/Machine Learning (ML) Workshop” led by Amazon Web Services.

The field of AI has long captured our imagination in forms ranging from calculating machines to Frankenstein’s monster. Within modern computer science, the subject got its start at Dartmouth College in the 1950s. At the time, computing machines that could perform such quotidian tricks as playing checkers astonished audiences. The attendant attention and hype led to inflated expectations of the rapid arrival of a robot-run world. When that world failed to materialize, a decades-long period of AI disillusionment set in. It’s ironic, then, that AI’s more recent rebirth – in the public eye, at least – resulted largely from machines like IBM’s Deep Blue, which beat the reigning world chess champion in 1997, and IBM’s Watson, which beat the best human competitors on the game show Jeopardy! in 2010. We humans love our games, after all. Today, applications of AI and its related discipline, ML, are growing exponentially in fields ranging from healthcare and financial services to machine-assisted – and eventually driverless – automobiles.

Returning to our workshop, our mission was to collaborate with non-biological entities (NBEs) on a project called Skynet, described as “a philanthropic initiative to assist the BEs (Biological Entities) with their most pressing problems. Mostly, there are too many of them, they create too much waste, and they are consuming natural resources too quickly. Your task is to convince the BEs that it is in their best interest to report to the processing center nearest to them for ARP (Atom Re-assignment Processing).”

The workshop was divided into a series of five challenges:

  1. Use MXNet to learn the symbols of human language;

  2. Use the Amazon Machine Learning framework to separate humans from NBEs;

  3. Use Amazon’s Rekognition service to uniquely identify specific humans;

  4. Use Amazon Polly to speak with targeted BEs; and

  5. Use Amazon Lex to build a chatbot persuasive enough to convince humans to visit their nearest processing center for reassignment.

The equipment required was amazingly simple: little more than a Raspberry Pi.

Who knew that humanity’s demise could be so diabolically minimalist?

We began with an introduction to a number of AI and ML concepts before diving into hands-on lab exercises. In terms of the challenges laid out above, we saw how ML is used for classification tasks like character identification (e.g., optical character recognition of scanned documents). We then used a supervised ML model to analyze sets of features and distinguish humans from non-humans.
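
For readers curious what that first challenge looks like in practice, here is a minimal sketch of a character classifier built with MXNet’s Gluon API. To be clear, this is my own illustrative reconstruction rather than the workshop’s lab code: it trains a small network on the public MNIST handwritten-digit set, and the network shape and hyperparameters are arbitrary choices.

```python
# A minimal MXNet/Gluon character-classification sketch (illustrative, not the
# workshop's lab code): train a small MLP on MNIST handwritten digits.
import mxnet as mx
from mxnet import gluon, autograd

# Normalize the 28x28 grayscale images to [0, 1] floats.
def transform(data, label):
    return data.astype('float32') / 255.0, label.astype('float32')

train_data = gluon.data.DataLoader(
    gluon.data.vision.MNIST(train=True, transform=transform),
    batch_size=64, shuffle=True)

# A small multilayer perceptron: two hidden layers, ten output classes.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(128, activation='relu'))
    net.add(gluon.nn.Dense(64, activation='relu'))
    net.add(gluon.nn.Dense(10))

net.initialize(mx.init.Xavier())
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

for epoch in range(5):
    for data, label in train_data:
        data = data.reshape((-1, 784))  # flatten each image into a vector
        with autograd.record():
            loss = loss_fn(net(data), label)
        loss.backward()
        trainer.step(data.shape[0])
```

The same supervised-learning recipe – features in, labels out – is what we then applied to separate humans from NBEs, using the Amazon Machine Learning framework instead of a hand-built network.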

The final step was to identify a target human for subsequent “reassignment persuasion”. If learners were creeped out by this prospect, they hid their discomfort well.

Having established our targets, we went about locating them using information feeds from security cameras, drones, and selfie uploads. Such streaming data has become increasingly commonplace, legislation notwithstanding. In our case, we used Amazon Rekognition to search for our quarry across several locations. Once our targets were located, we were ready to make contact.
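
In code, that hunt reduces to a couple of Rekognition calls: index a reference photo of the target into a face collection, then search each incoming frame against that collection. The sketch below is a rough approximation of the lab workflow; the collection ID, file names, region, and match threshold are illustrative assumptions on my part.

```python
# Hedged sketch of locating a known face with Amazon Rekognition.
# Collection ID, file names, and threshold are illustrative stand-ins.
import boto3

rekognition = boto3.client('rekognition', region_name='us-east-1')

# One-time setup: create a face collection and index the target's reference photo.
rekognition.create_collection(CollectionId='target-humans')
with open('target-reference.jpg', 'rb') as f:
    rekognition.index_faces(CollectionId='target-humans',
                            Image={'Bytes': f.read()},
                            ExternalImageId='target-001')

# For each incoming frame (security camera, drone, selfie upload),
# search the collection for a match.
with open('camera-frame.jpg', 'rb') as f:
    response = rekognition.search_faces_by_image(
        CollectionId='target-humans',
        Image={'Bytes': f.read()},
        FaceMatchThreshold=90)

for match in response['FaceMatches']:
    print(match['Face']['ExternalImageId'], match['Similarity'])
```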

The idea of computers conversing convincingly with humans began with Alan Turing and his Turing test – interaction with a machine that is indistinguishable from interaction with a human. Modern-day chatbots share this aspiration. In our case, our AI needed to generate convincingly human-like speech and to parse and respond to natural language. The services used for these tasks were Amazon Polly and Amazon Lex, respectively. Happily for learners, though perhaps less so for our targets, these tools made human interactions persuasive enough to achieve our mission.
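
For the curious, here is roughly what driving that pairing from Python looks like, assuming the original (V1) Lex runtime API. The bot name, voice, and prompt text are stand-ins of mine, not the workshop’s actual bot.

```python
# Hedged sketch of the speak-and-listen loop: Polly for text-to-speech,
# Lex (V1 runtime) for natural-language understanding.
import boto3

polly = boto3.client('polly', region_name='us-east-1')
lex = boto3.client('lex-runtime', region_name='us-east-1')

# Text-to-speech: Polly returns an audio stream we can play to the target.
speech = polly.synthesize_speech(
    Text='Please report to your nearest processing center.',
    OutputFormat='mp3',
    VoiceId='Joanna')
with open('prompt.mp3', 'wb') as f:
    f.write(speech['AudioStream'].read())

# Natural language: Lex parses the human's reply and returns the bot's
# next response based on the conversation flow defined for the bot.
reply = lex.post_text(
    botName='ReassignmentBot',  # hypothetical bot name
    botAlias='prod',
    userId='target-001',
    inputText='Why should I go to the processing center?')
print(reply['message'])
```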

We were able to contact our target humans and convince them to register for reassignment with relative ease. Based on our work, a steady Soylent Green supply chain is ensured for future generations.

Our final exercise was a “super lab” that incorporated all the skills we had used so far. The interface was an observation bot complete with computer, camera, speaker, and motor that would traverse our game board, scanning for and identifying objects as it went (think Google Street View meets Big Brother).

And what kind of information were we looking for? Based on Amazon’s extensive research, the four most important classifications are the following (a rough code sketch follows the list):

  • Object in scene – classified by category and sub-type

  • Facial analysis – is it human, and is the person in our database?

  • Image moderation – is it safe, suggestive, or inappropriate?

  • Celebrity – because people are obsessed with fame
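
As promised, here is a rough sketch of how those four passes might map onto Rekognition API calls. The frame file and confidence threshold are my own illustrative choices.

```python
# Hedged sketch of the bot's four classification passes via Amazon Rekognition.
import boto3

rekognition = boto3.client('rekognition', region_name='us-east-1')
with open('board-frame.jpg', 'rb') as f:
    image = {'Bytes': f.read()}

# 1. Objects in scene, classified by category and sub-type.
labels = rekognition.detect_labels(Image=image, MinConfidence=80)
for label in labels['Labels']:
    print('label:', label['Name'], label['Confidence'])

# 2. Facial analysis: is it human, and what are the face's attributes?
faces = rekognition.detect_faces(Image=image, Attributes=['ALL'])
print('faces found:', len(faces['FaceDetails']))

# 3. Image moderation: safe, suggestive, or inappropriate?
moderation = rekognition.detect_moderation_labels(Image=image)
for label in moderation['ModerationLabels']:
    print('moderation:', label['Name'])

# 4. Celebrity recognition, because people are obsessed with fame.
celebs = rekognition.recognize_celebrities(Image=image)
for celeb in celebs['CelebrityFaces']:
    print('celebrity:', celeb['Name'])
```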

Our 20 participants split into teams, each determined to think like a robot and anticipate which features the bot would identify and classify. Teams earned points for successfully detecting attributes and lost points for missed or incorrectly detected attributes.

In the end, a software glitch left the bot unable to perform its task.

Humanity was spared. But for how long?

I will be hosting a Connections breakout session with AWS, “Data Security in the Public Cloud,” on Tuesday, November 14, 2017 at 3:45 p.m. in Lafite meeting room #8.
