Today I’m working on a research project with potentially heavy and far-reaching implications. As I checked email this morning, I was lost in thought about how the AI this interaction would require might be the kind of hand-wavey system that requires suspending disbelief. You know the kind, pitched both in house and to VCs:
- Want a thing that exists in today’s world
- Data
- ??? (waves hands)
- AI reproduces the thing in the world, closely in the spirit and letter of what we want
A response to a thread in my email got me thinking more about that. Here’s a summary.
Dee VUI is a new company, so I’m sensitive to any information online that might be inaccurate or otherwise cast what we do in the wrong light. I found that certain public documents, accessible by default, had been consolidated online. A company that vends information had collected them on a page generated to manipulate SEO and direct searches to its services.
They have an opt-out policy. I filled out a form, received an email confirmation, clicked the link in the email to confirm receipt, and shortly thereafter received confirmation that the page would be removed in 48-72 hours. I checked, and it was gone.
A few days later, I received an email solicitation from the company, sent through one of the popular audience contact management platforms, which always have an unsubscribe link. I clicked it, and then, just to provide feedback the company might use, wrote them a one-line message: I hadn’t signed up for their email list, I had asked them to remove my information from their system, and it was a trust issue for me to believe they’d removed it if they were now emailing to promote their business.
This email was redirected to one of the popular customer service management platforms (unrelated to the contact management platform). It uses “ghost workers” (I’m borrowing the term Gray and Suri use in their book Ghost Work) to reply to email, SMS, and chat messages, and it may power the chatbot you last used while claiming to be “artificial intelligence.” I know this because I received an automated response email telling me my message had been received and I could expect a response within 24 hours.
The next message was from a person who read my message and, realizing that nothing in their toolkit allowed them to take any action on my behalf, confirmed they were escalating it to their privacy and security team. The next person sent a message with instructions detailing how to fill out their opt-out form to be removed from their system. I replied, now annoyed, if sympathetic to the second-level support worker, who probably had more pressure than the first to send more and faster replies, and whose superpower was scanning messages very quickly for a single phrase like “opt out” to see if there was a script to use.
I replied saying I was attempting to point out a trust issue with their service: I’d been added to the email list without my consent as a result of following those very instructions. The next reply, from a third person, sounded like they had treated my message as a request to unsubscribe, warning me that I would no longer receive promotional emails from the company, though I might continue to for 48 hours.
Finally, I received an automated email asking me to evaluate my customer service experience, with two links to click: one if my experience was good, one if it was bad.
Why it’s interesting
AI systems designed to break interactions down into steps and learn patterns in them require hand-correction to work. Usually these systems target the most expensive parts of a system in order to automate them.
Turning customer service policies into manuals and scripts breaks multi-step processes into individual steps to be executed. ML systems are built by ingesting large sets of records of interactions in order to learn patterns in them, and, hopefully, to reproduce the right ones while avoiding the wrong ones. In this case, end-to-end email transcripts might be a likely source of training data.
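To make that concrete, here’s a minimal sketch of the pattern, not any vendor’s actual pipeline; the messages, labels, and script names are all hypothetical. A classifier trained on which script agents actually sent learns mishandled cases just as readily as correct ones:

```python
# A toy script-picker trained on support transcripts (all data hypothetical).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record: (inbound message, the script an agent actually sent back).
# The last example is a mishandled case, but in the data it looks
# exactly like a correctly handled one.
transcripts = [
    ("how do i opt out of your database", "send_opt_out_form"),
    ("please remove my information from your system", "send_opt_out_form"),
    ("unsubscribe me from these emails", "send_unsubscribe_confirmation"),
    ("i followed your opt out form and you emailed me anyway", "send_opt_out_form"),
]

messages, scripts = zip(*transcripts)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, scripts)

# A complaint about the opt-out process gets the opt-out script again.
print(model.predict(["your opt out process added me to a mailing list"]))
```

Nothing in the transcript of my case marks it as an error, so automating on top of it reproduces the error instead of correcting it.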
Why I care
In today’s world, our systems are usually evaluated at the transactional level. That is: identify a point of interaction, identify the right outcome, and observe the actual outcome. We also make products out of these things by deciding what we want and then tweaking the systems until the outputs are appropriate for our products.
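In code, that style of evaluation reduces to something like the sketch below; the transaction names are hypothetical, but the shape is the point: a point of interaction, the right outcome, the observed one.

```python
# Transaction-level evaluation (hypothetical names): each tuple is
# (point of interaction, right outcome, actual outcome).
transactions = [
    ("opt_out_form_submitted", "record_removed", "record_removed"),
    ("complaint_email_received", "privacy_team_reply", "opt_out_script_sent"),
    ("unsubscribe_link_clicked", "unsubscribed", "unsubscribed"),
]

score = sum(right == actual for _, right, actual in transactions) / len(transactions)
print(f"transaction success rate: {score:.0%}")  # 67%, and the chart goes up
```

Each transaction scores in isolation; nothing in the metric asks whether a transaction should ever have happened.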
Binary transactions shaped into a hierarchy of transactions designed only to make charts go up and to the right don’t lead to the outcomes we expect. In this particular case, the entire system’s design doesn’t consider error, so each error is treated as a valid input to the next step. Once there’s a successful intervention, the user is asked for input on whether or not it was a good experience.
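Here’s a hedged sketch of that shape, with hypothetical step names. Each step takes whatever the previous step produced as a valid input, and the survey fires no matter how we got there:

```python
# A pipeline that never models error (all step names hypothetical).
def unsubscribe(message):
    # Matches a keyword; the actual complaint never registers.
    return "unsubscribed" if "opt out" in message else "escalated"

def survey(outcome):
    # Fires on any completed transaction, even one that shouldn't exist.
    return f"How was your experience with '{outcome}'? [good] [bad]"

result = "I followed your opt out form and you emailed me anyway"
for step in (unsubscribe, survey):
    result = step(result)  # no step asks whether its input was an error
print(result)
```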
“System” here really means a system of systems: the system I used to contact customer service about the web content system, which itself monetized data scraped from a system that published information for free. That system connected to another system, which downstream was connected to another, final system.
My interaction with the system had a human in the loop at every step. That means it probably won’t get better by automating more of it. Proposing, “Machine learning will identify the right cases, so we can take our hands off the wheel,” only works if you’re okay with moving towards a shitty AI.
How do you evaluate an experience that shouldn’t ever have happened?
It’s easy.
Our hands are on the wheel, even when we take them off.
Let’s have a conversation
This blog is supposed to be brief, but from time to time we have editorial thoughts we’ll share. This is the first one. Let us know what you think! Write us at:
sixtyseconds@deevui.com