
Opposites Attract

I had a conversation today about challenges in VUI design that are difficult and tedious to pay attention to. One of the biggest, interestingly, is polarity. I was chatting with my friend Ann about how colleagues had made a claim about good conversation design and, rather ironically, used a word that literally meant the opposite of what was intended.

(The word “preempt” was used to mean “thoughtfully anticipate” instead of “prevent” or “replace with”.)

Ann pointed out that we knew what they meant, but I had to disagree in spirit, since we were talking about speech and text recognition system design.

Why it’s interesting

In human language, polarity can be expressed at probably any level. Like positive and negative numbers, its effect on meaning within the systems we use is intuitive. This is how devices like irony can work without explanation.

Our interpretation of language, being more about intuitive meaning than literal meaning, allows us to “get” the meaning even when the words used don’t actually mean what’s intended. What’s more, polarity implies the existence of its opposite.

If I say “on” for “off,” it’s easier to forgive a detail when you get the gist. “Don’t forget to turn the lights on before you leave” makes sense pragmatically to a human without correction, but not “Don’t forget to turn the lights blue before you leave.”

Why it matters

Neither of these would be understood correctly by a machine. NLP systems interpret meanings from the literal words. They only have what you said to go on.

Quite unintentionally, a pretty good case was made for why conversation design is hard to do at all, let alone do well. Static models and intents are the source of meaning, derived from the literal text, and that makes things quite rigid.
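To make that concrete, here’s a minimal sketch of the kind of literal matching a static intent model does. This is my own toy illustration in Python, not any particular NLP framework; the intent names and example phrases are hypothetical.

```python
# A toy intent matcher: it only has the literal words to go on.
# Intent names and example phrases are hypothetical.

INTENTS = {
    "lights_on":  ["turn the lights on", "lights on please"],
    "lights_off": ["turn the lights off", "lights off please"],
}

def classify(utterance: str) -> str:
    """Score intents by naive word overlap with their example phrases."""
    tokens = set(utterance.lower().split())
    best_intent, best_score = "fallback", 0
    for intent, examples in INTENTS.items():
        for example in examples:
            score = len(tokens & set(example.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

# A human hears the polarity slip and repairs it from context ("before you
# leave" implies "off"). The matcher can't: the literal words win.
print(classify("don't forget to turn the lights on before you leave"))  # lights_on
```

A person would likely repair the slip from the gist; the model can only return whichever intent the literal words match best.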

It’s not fun, and even quite boring, for this kind of rigid attention to detail to be required in order for software to work as intended. That might be why you can’t remember the last time you invited a bunch of conversation designers to a party.

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


Towards a Shitty AI

Today I’m working on a research project with potentially heavy and far-reaching implications. As I checked email this morning, I was lost in thought about how the AI this interaction would require might be the kind of hand-wavey system that requires suspending disbelief. You know the kind, pitched both in-house and to VCs.

  1. Want a thing that exists in today’s world
  2. Data
  3. ??? (waves hands)
  4. AI reproduces the thing in the world, close in spirit and letter to what we want

A response to a thread in my email got me to thinking more about that.  Here’s a summary.

Dee VUI is a new company, so I’m sensitive to any information online that might be inaccurate or otherwise cast what we do in the wrong light. I found that certain public documents, accessible online by default, had been consolidated. A company that vends information had put them on a page generated to manipulate SEO and direct searches to its services.

They have an opt-out policy. I filled out a form, received an email confirmation, clicked the link in the email to confirm receipt, and shortly thereafter received confirmation that the page would be removed in 48-72 hours. I checked, and it was gone.

A few days later, I received an email solicitation from the company, sent through one of the popular audience contact management platforms, which always have an unsubscribe link. I clicked it, and then, just to provide feedback in case they’d use it, wrote them a one-line message saying I hadn’t signed up for their email list, that I had in fact asked them to remove my information from their system, and that it was a trust issue for me to believe they’d removed it if they were now emailing to promote their business.

This email was redirected to one of the popular customer service management platforms (unrelated to the contact management platform). That platform uses “ghost workers” (I’m borrowing the term Gray and Suri use in their book Ghost Work) to reply to email, SMS, and chat messages, and it may power the chatbot you last used while claiming to be “artificial intelligence.” I know this because I received an automated response email telling me my message had been received and I could expect a response within 24 hours.

The next message was from a person who read my message and, realizing that nothing in their toolkit allowed them to take any action on my behalf, confirmed they were escalating it to the privacy and security team. The next person sent a message with instructions detailing how to fill out their opt-out form to be removed from their system. Now I was annoyed, if sympathetic to the second-level support worker, who probably had even more pressure than the first to send more and faster replies, and whose superpower was scanning messages very quickly for a single phrase like “opt out” to see if there was a script to use.

I replied saying I was attempting to point out a trust issue with their service, and that I’d been added to the email list without my consent as a result of following those very instructions. The next reply, from a third person, sounded like they treated my message as a request to unsubscribe, warning me that I would no longer receive promotional emails from the company, though I might continue to for 48 hours.

Finally, I received an automated email asking me to evaluate my customer service experience, with two links to click, according to whether my experience was good or bad.

Why it’s interesting

AI systems designed to break interactions down into steps and learn patterns in them require hand-correction to work. Usually these systems target the most expensive parts of a system in order to automate them.

Turning customer service policies into manuals and scripts breaks multi-step processes into individual steps to be executed. ML systems are built by ingesting large sets of interaction records in order to learn patterns in them, hopefully reproducing the right ones while avoiding the wrong ones. In this case, end-to-end email transcripts would be a likely source of training data.
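To illustrate, here’s a minimal sketch of how such a pipeline might ingest that data. This is purely my own assumption of a transcript-to-training-pairs step, not the vendor’s actual system; the data structure, field names, and example text are hypothetical.

```python
# A sketch of turning end-to-end email transcripts into training pairs.
# The Turn structure and the example messages are hypothetical.

from dataclasses import dataclass

@dataclass
class Turn:
    customer: str  # what the customer wrote
    agent: str     # how the agent replied

# My exchange, condensed. Each agent reply closed the ticket, so nothing in
# the record marks any of them as an error.
transcript = [
    Turn("I never opted in to this list; please remove my information.",
         "Here are instructions for filling out our opt-out form."),
    Turn("Following those very instructions is how I ended up on your list.",
         "You have been unsubscribed from promotional email."),
]

# Ingest the records as (input, target) pairs. A model trained on this data
# learns to hand out the opt-out script whenever it sees "opt in" or
# "remove my information": the error becomes the pattern to reproduce.
training_pairs = [(turn.customer, turn.agent) for turn in transcript]
for prompt, response in training_pairs:
    print(f"INPUT:  {prompt}\nTARGET: {response}\n")
```

Nothing in the records distinguishes the mishandled replies from the right ones, which is exactly the problem I ran into.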

Why I care

In today’s world, our systems are usually evaluated at the transactional level.  That is: identify a point of interaction, identify the right outcome, observe the actual outcome.  We also make products out of these things, by deciding what we want and then tweaking the systems to make outputs appropriate for our products.

Binary transactions, shaped in and of themselves into a hierarchy of transactions designed only to make charts go up and to the right, don’t lead to what we expect them to. In this particular case, the entire system’s design doesn’t consider error, and as a result, each error is treated as valid input to the next step. Once there’s a successful intervention, the user is asked for input on whether or not it was a good experience.

System here really means system of systems: the system I used to contact customer service about the web content system, which itself monetized data scraped from a system that published information for free. That system connected to another system, which downstream was connected to another, final system.

My interaction with the system had a human in the loop at every step. That means it probably won’t get better by automating more of it, or by proposing, “Machine learning will identify the right cases, so we can take our hands off the wheel,” unless you’re okay with moving towards a shitty AI.

How do you evaluate an experience that shouldn’t ever have happened?

It’s easy.

Our hands are on the wheel, even when you take them off.

Let’s have a conversation

This blog is supposed to be brief, but from time to time we have editorial thoughts we’ll share. This is the first one. Let us know what you think! Write us at:

sixtyseconds@deevui.com


Transparency and Engagement and Links

For many sites, hyperlinks are about “engagement,” which often might be translated into plain language as any or all of the following, among others:

  • Getting you to consume more
  • Getting you to open another page with an ad on it to increase the linking site’s revenue
  • Getting a kickback from an affiliate site if you purchase something (or view or click their ads)
  • Identifying you as coming from the linking site
  • Suggesting credibility by citing a reference
  • Following current/common page design practices
  • Appealing to an aesthetic of tidiness

Why it’s interesting

This site isn’t doing any of those things. We don’t have affiliates. We don’t accept advertising, and we aren’t recommending anyone or anything except when we do so explicitly. This page isn’t about aesthetics.

It could be argued that the information density of linked text increases cognitive load. What if this makes the intention behind what’s linked less obvious?

Sixty Seconds of AI is intended to be our take on something we at Dee VUI are reading and thinking about, in about a minute of your time. We think simplifying our design to keep the technology out of our writing will make it easier and faster for you to do that.

Why you should care

You aren’t a product we’re selling to our partners. We try to provide something worth a little bit of our colleagues’ time. Since we appreciate a critical perspective from our peers, we do our best to apply one to ourselves as well.

From now on, when we talk about something we’ve read, or we’re referring to something for which we think backstory might be required, you can find the links at the bottom of the page. Our page may not look as cool, but we hope it increases transparency about who gets something when you click a link.

Most importantly, we don’t want you to click away while you’re reading a thing.  We believe AI researchers and designers desperately need a credible discussion that’s not part of a marketing cycle.  This is ours. We hope you find it useful.

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


AI in the New Yorker

Since I read the paper magazine, I’m glad my partner hipped me to this article in the online edition of the New Yorker. The general angle is that the author, Matthew Hutson, goes to a few AI conferences and summarizes what’s being presented and debated.

Why it’s interesting

Since a lot of research is published only as the proceedings, papers, and poster sessions at conferences, it’s an especially inside view of an industry that’s technically heavy and abstract enough that even people in the industry probably don’t understand a lot of the underlying concepts. I speak for myself, reader, but don’t worry, I won’t judge.

Why we care

I know that the computer vision (CV) people are into some questionable stuff, such as using models trained on small samples of photographs scraped from the internet to identify people in video streams, and much, much worse.

Since I’d expected that was likely to be business run amok, it hadn’t occurred to me that it’s the people doing the research who come up with the bonkers, idiotic ideas, not the business folks. One example mentioned in the article: using a sample of speech to generate a physical image of a person.

Even though we do this as people, we do it with a lifetime of data and all of our senses and cognitive ability. And we get it wrong quite a bit, maybe even most of the time. It’s our ability to do many things at once and reinterpret our perceptions that allows this to be useful, not the raw percepts.

Guess I’ll be keeping an eye on the computer vision people from now on.

Links

https://www.newyorker.com/tech/annals-of-technology/who-should-stop-unethical-ai

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


Algorithms of Oppression

More library books today. Freshly acquired from the Seattle Public Library I have in my hands Safiya U. Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism.

I’ve intended to read it for a long time, but, confession: I read a lot for work in news (especially evaluating claims), short pieces, and research. I rarely reserve any energy for book-length discussions of topics I consider myself conversant or expert in, at least as far as the core ideas.

Why it’s interesting

There aren’t a lot of books about the racism and bias encoded in technology, though, so this was a natural complement to the books by new-to-me authors Ijeoma Oluo and Ibram X. Kendi I was reading.

Why we care

This book has been around for going on three years, which is a long time to sit on my reading list. George Floyd’s murder by police last year caused me to reprioritize my reading list in favor of racial justice and history. I’m sure I’m not the only one.

In that same time, it’s come to my attention that lots of people who work on ML and AI have been outspoken about being “anti-woke.” Pretending there’s any such thing as culture-agnostic science and technology is about as fundamental a failure to understand the technologies as I can imagine, working as I do with NLP and language models at the core of my design and research work.

I’ve become a little anxious I’ll soon enough have to have a discussion on the topic within professional relationships, so I thought this might be useful. I’ll let you know once I read it!

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


Voices From The Valley

I forgot about this book I’d requested from the library months ago until it showed up in my holds today. The full title is Voices From The Valley: Tech Workers Talk About What They Do – And How They Do It, from an imprint run by editors from Logic Magazine.

It’s six interviews with people named only by their job title. It’s a humble little book and a quick read, but the breezy, straightforward format is very accessible, almost confessional, such as the discussion with a data scientist.

Why it’s interesting

Until I’d read it, I don’t think I’d ever read anything that represented the perspective of any working people in Silicon Valley side by side with so-called knowledge workers. A cook. A founder. A massage therapist. An engineer.

Why we care

Themes emerge when these perspectives are cheek-by-jowl and read one after the other. There is a sameness to all of the stories, despite each person’s work being quite different.

Not to say there is equality. The founder says, “I went from being basically broke […] to not having to worry about money anymore,” but the cook and the technical writer can’t say the same.

Usually, critical perspectives on the industry are alarmist, but this book captures the blasé, the polite fictions with ample convenient parking in Sunnyvale. It rings true to the work, and I’ve parked there.

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


Google’s Ethical AI in Harper’s Weekly Review

I was amused to find Margaret Mitchell’s firing in this week’s Harper’s Weekly Review when it landed in my inbox this morning.

Why it’s interesting

Having The Economist and Harper’s cover my industry in the same week makes the world of an AI designer and researcher feel very small indeed!

Why we care

When the reporting of a comparatively conservative and staid politico-economic outlet and a sardonic feature in a relatively progressive one are nearly identical, it says something to me about the nature of reality.

Large technology companies firing people they hired to maintain a critical perspective on the company and its technology is an interesting mirror to hold up to the industry. If the facts aren’t up for debate in the partisan press, it suggests a narrative arc, an intentional one.

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


Dr. Emily Bender on Rasa Chats Podcast

I enjoyed listening to NLP Linguist Dr. Emily Bender of the University of Washington on the latest Rasa Chats podcast.

Why it’s interesting

Dr. Bender recalls a discussion at a conference where what came to be called the Bender Rule was coined by her colleagues. The rule, named for Bender’s habit of asking the question in every session where it wasn’t explicitly stated, requires NLP practitioners to state the language of the data set used when making claims about their research.

Why we care

Hearing why specificity matters quite a bit, from the perspective of a technical linguist, is a welcome critical insight I don’t hear often in commercial natural language processing.

Dr. Bender is plain-spoken, funny, and firm in her discussion of why a seemingly simple comment can make commercial AI researchers and engineers uncomfortable, even if it’s a hard technical requirement.

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com


Margaret Mitchell in The Economist

Listening to The Economist Morning Briefing this past Saturday, I heard mention of Margaret Mitchell being fired from Google. Mitchell had led the Ethical AI team at Google.

Why it’s interesting

Artificial intelligence news is usually limited to hype or marketing claims about the latest technology. I’d seen Mitchell tweet about this the previous day, and to my knowledge there was no other media coverage or attention. This, however, is the first time I’ve seen our colleagues in our niche fields and topics, like AI, show up in The Economist, fittingly in their Morning Briefing, which is distributed to smart speakers.

Why we care

Conway’s Law observes that “organizations design systems which mirror their own communication structure.”

Mitchell, who Google claims was fired for code of conduct violations, was an outspoken advocate of Timnit Gebru’s, who Google claims was fired for policy violations. Gebru’s research has discussed how bias is encoded in large AI models for language and image recognition. A recent paper also identified the intensive energy use in building and running huge AI models.

It’s hard for me not to connect this with the other recent news of a person hired by the company to create a program addressing bias being fired after she was outspoken about bias. After Gebru’s firing, April Curley announced on Twitter that she’d been fired in September. Curley, a diversity recruiter, was hired to head Google’s initiatives to boost recruitment from Historically Black Colleges and Universities (HBCUs).

It’s more difficult for me, re: Conway, to avoid noting that they’re all women, two of them women of color hired to help address bias within Google’s technologies and internal policies. I’m an AI designer, and all of my professional work has been within and for large technology companies.

Let’s have a conversation

Let us know what you think! Write us at:

sixtyseconds@deevui.com