Categories
Editorial Sixty Seconds of AI

Why yes, I would like a knuckle sandwich.

I caught up on Far Side cartoons from the page-a-day calendar my brother got me as a gift. It’s interesting learning how much something you enjoyed as a kid is not only no longer fun but can be quite upsetting once you’ve grown beyond the culture that produced it. Whee, lynching jokes! You know, for kids!

On the other hand, it’s interesting to interpret things later in ways you never would when you first saw them. Here’s a recent one that jumped out at me (pardon the hasty phone scan):

Far Side comic strip of an alien badly disguised as human, confronted by violent men. Caption: Why yes, I would like a knuckle sandwich.
© Andrews McMeel Universal

The punchline is that the alien apparently understands English well enough to recognize an offer but, not realizing that the “knuckle sandwich” offered is an idiom for a threat of violence, eagerly accepts it as an opportunity to make contact.

Why it’s interesting

As simple as this joke is, it’s actually much like the most advanced NLP systems we use today. I’d guess that the very best speech recognition available from any of the enterprise vendors or big tech companies could interpret a lot from the offer: like our alien friend, it can guess phrasal intents like OfferAccept, or that “why yes, thank you” is a synonym for “yes.”
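The alien’s mistake can be sketched as a toy intent matcher. This is a minimal sketch, not any vendor’s system; the phrase lists and intent names (OfferAccept, etc.) are my own invention. It recognizes the surface form of an offer and of an acceptance, but has no idiom knowledge, so a threat phrased as an offer parses as an ordinary offer:

```python
# Toy intent matcher: surface phrases map to intents, with no idiom
# knowledge. All intent names and phrase lists here are hypothetical.

ACCEPT_PHRASES = {"yes", "why yes", "why yes, thank you", "sure"}
OFFER_MARKERS = ("would you like", "can i offer", "how about")

def classify(utterance: str) -> str:
    text = utterance.lower().strip().rstrip("?!.")
    # Surface-level pattern match only: "knuckle sandwich" carries no
    # special meaning, so the threat looks like any other offer.
    if any(marker in text for marker in OFFER_MARKERS):
        return "Offer"
    if text in ACCEPT_PHRASES:
        return "OfferAccept"
    return "Unknown"

print(classify("Would you like a knuckle sandwich?"))  # Offer (idiom missed)
print(classify("Why yes, thank you"))                  # OfferAccept
```

The system gets the speech act right and the meaning entirely wrong, which is the joke.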

On the other hand, this kind of crawling toward the Uncanny Valley only gets more frustrating as our progress shows how much we still fail to get out of truly natural language. We can think of many examples of how our NLP, especially for speech, misses the forest for the trees.

Why we care

Ray Jackendoff, in Patterns in the Mind, uses the metaphor of a robot that understands only one language but speaks another to highlight two strange facts about language: our knowledge of language is individual yet requires world knowledge, and using language requires reciprocity as a matter of cognitive fact.

It’s easy to imagine that robot, and how different the robot is from us is what makes the image stick. What’s funny to me today is that reading this old comic strip from the 1980s makes me realize we have actually built that robot, again and again.

Our very best NLP systems don’t understand anything about language at all. As DNNs and other techniques get more complex and abstract, and as processing power continues to expand, we can expect the need for VUI design to expand with them: hand-crafting the platforms that turn what machines understand into something humans do.

Links

Patterns in the Mind on Google Books: https://books.google.com/books/about/Patterns_in_the_Mind.html?id=keMJX6BOppAC

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


Towards a Shitty AI

Today I’m working on a research project with potentially heavy and far-reaching implications. As I checked email this morning, I was lost in thought about how the AI this interaction would require might be the kind of hand-wavy system that requires suspending disbelief. You know the kind, the kind that’s pitched both in house and to VCs:

  1. Want a thing that exists in today’s world
  2. Data
  3. ??? (waves hands)
  4. AI reproduces thing in the world closely in the spirit and letter of what we want

A response to a thread in my email got me to thinking more about that.  Here’s a summary.

Dee VUI is a new company, so I’m sensitive to any information online that might be inaccurate or otherwise cast what we do in the wrong light. I found that certain public documents, accessible online by default, had been consolidated by a company that vends information, on a page generated to manipulate SEO and direct searches to its services.

They have an opt out policy.  I filled out a form, received an email confirmation, clicked the link in the email to confirm receipt, and shortly thereafter received confirmation that the page would be removed in 48-72 hours.  I checked, and it was gone.

A few days later, I received an email solicitation from the company, sent through one of the popular audience contact management platforms, which always include an unsubscribe link. I clicked it, and then, just to provide feedback in case the company would use it, wrote them a one-line message: I hadn’t signed up for their email list, and since I had asked them to remove my information from their system, it was a trust issue for me to believe they had removed it if they were now emailing me to promote their business.

This email was redirected to one of the popular customer service management platforms (unrelated to the contact management platform). That platform uses “ghost workers” (I’m borrowing the term Gray and Suri use in their book Ghost Work) to reply to email, SMS, and chat messages, and may power the chatbot you last used that claimed to be “artificial intelligence.” I know this because I received an automated response telling me my message had been received and I could expect a response within 24 hours.

The next message was from a person who read my message and, realizing that nothing in their toolkit allowed them to take any action on my behalf, confirmed they were escalating it to their privacy and security team. The next person sent a message with instructions detailing how to fill out their opt-out form to be removed from their system. I was now annoyed, if sympathetic to the second-level support worker, who was probably under more pressure than the first to send more and faster replies, and whose superpower was scanning messages very quickly for a single phrase like “opt out” to see if there was a script to use.

I replied saying that I was attempting to point out a trust issue with their service: I’d been added to the email list without my consent as a result of following those very instructions. The next reply, from a third person, treated my message as a request to unsubscribe, assuring me I would no longer receive promotional emails from the company, though I might continue to for up to 48 hours.

Finally, I received an automated email asking me to evaluate my customer service experience, with two links to click, according to whether my experience was good or bad.

Why it’s interesting

AI systems designed to break interactions down into steps and learn patterns in them require hand-correction to work. Usually these systems target the most expensive parts of an operation in order to automate them.

The way customer service policies are turned into manuals and scripts breaks multi-step processes into individual steps to be executed. The way ML systems are built is by ingesting large sets of interaction records in order to learn patterns in them, hopefully reproducing the right ones while avoiding the wrong ones. In this case, end-to-end email transcripts might be a likely source of training data.
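If end-to-end transcripts were the training data, a model optimizing for the most frequent historical reply would learn to reproduce whatever happened most often, errors included. A toy illustration (all transcripts invented, and frequency-counting stands in for real model training):

```python
from collections import Counter

# Toy illustration: if "the right reply" is learned from transcript
# frequency alone, the most common historical reply wins, even when
# it was the wrong handling. All data here is invented.

transcripts = [
    ("mentions opt out", "send opt-out instructions"),
    ("mentions opt out", "send opt-out instructions"),
    ("mentions opt out", "send opt-out instructions"),
    ("mentions opt out", "escalate to privacy team"),  # the one correct handling
]

replies = Counter(reply for pattern, reply in transcripts if pattern == "mentions opt out")
learned_reply = replies.most_common(1)[0][0]
print(learned_reply)  # send opt-out instructions
```

The minority case, which was the correct one, is exactly what the pattern-learner discards.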

Why I care

In today’s world, our systems are usually evaluated at the transactional level.  That is: identify a point of interaction, identify the right outcome, observe the actual outcome.  We also make products out of these things, by deciding what we want and then tweaking the systems to make outputs appropriate for our products.
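That transactional framing fits in a one-line evaluator, which is exactly why it’s attractive and exactly what it misses. A sketch (step names invented for illustration):

```python
# Transaction-level evaluation as described above: pick an interaction
# point, define the right outcome, compare it to the observed outcome.
# Every step below scores correct even though the end-to-end experience
# failed. Step names are hypothetical.

steps = [
    {"expected": "acknowledge receipt", "observed": "acknowledge receipt"},
    {"expected": "escalate unfamiliar message", "observed": "escalate unfamiliar message"},
    {"expected": "match keyword to script", "observed": "match keyword to script"},
]

accuracy = sum(s["expected"] == s["observed"] for s in steps) / len(steps)
print(accuracy)  # 1.0 at the transaction level; the journey still failed
```

Nothing in the metric can see that the sequence, taken as a whole, answered a question the customer never asked.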

Binary transactions shaped into a hierarchy of transactions designed only to make charts go up and to the right don’t lead to what we expect them to. In this particular case, the entire system’s design doesn’t consider error, so each error is treated as valid input to the next step. Only once there’s a successful intervention is the user asked whether or not it was a good experience.
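The failure mode reads like a pipeline with no error channel, where each tier consumes whatever the previous tier emitted as if it were valid. A minimal sketch of my support thread in that shape (the tier behaviors are my reconstruction, not the vendor’s actual workflow):

```python
# Sketch of a support pipeline with no error channel: each step treats
# the previous step's output as valid input. Tier behaviors are
# hypothetical reconstructions of the exchange described above.

def triage(msg: str) -> dict:
    # Tier 1: can't act on the complaint, so escalate it.
    return {"text": msg, "action": "escalate"}

def scan_for_script(ticket: dict) -> dict:
    # Tier 2: keyword-match a canned script; no check that the script
    # actually answers the complaint.
    if "opt out" in ticket["text"].lower():
        return {"text": ticket["text"], "action": "send_opt_out_instructions"}
    return {"text": ticket["text"], "action": "escalate"}

def close_out(ticket: dict) -> dict:
    # Tier 3: treat whatever remains as an unsubscribe request,
    # then trigger the satisfaction survey.
    return {"action": "unsubscribe", "followup": "rate_your_experience"}

msg = "I followed your opt out form, yet you emailed me anyway. That's a trust issue."
result = close_out(scan_for_script(triage(msg)))
print(result["action"])  # unsubscribe - the actual complaint never reached anyone
```

Each function succeeds on its own terms, so nothing in the chain ever registers that the input it received was itself an error.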

“System” here really means a system of systems: the system I used to contact customer service about the web content system, which itself monetized data scraped from a system that published information for free. That system connected to another system, which downstream was connected to another, final system.

My interaction with the system had a human in the loop at every step. That means it probably won’t get better by automating more of it. Don’t propose “machine learning will identify the right cases, so we can take our hands off the wheel” unless you’re okay with moving toward a shitty AI.

How do you evaluate an experience that shouldn’t ever have happened?

It’s easy.

Our hands are on the wheel, even when we take them off.

Let’s have a conversation

This blog is supposed to be brief, but from time to time we have editorial thoughts we’ll share. This is the first one. Let us know what you think! Write us at:

sixtyseconds@deevui.com