
Creating Ethical Chatbots

By Michael Castelluccio
December 1, 2019

We generally think of chatbots as digital sales or service personnel providing information to customers, users, or employees within a company’s local network, but the future of these AI-enabled assistants is open-ended.


The definition of a commercial chatbot, according to Chatbots Magazine, is “a service powered by rules and sometimes artificial intelligence that you interact with via a chat interface (texting or talking).” The global market for these assistants is growing at a compound annual growth rate of 34%, and the technology still hasn’t reached its full potential, in either numbers or applications.
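The “powered by rules” half of that definition can be made concrete with a short sketch. The patterns and canned replies below are hypothetical examples, not any vendor’s actual product; a rules-based bot simply matches an incoming message against its rules and returns the first canned response, falling back when nothing matches.

```python
import re

# Hypothetical rules: each pairs a pattern with a canned response.
RULES = [
    (re.compile(r"\b(hours|open)\b", re.I),
     "We're open 9 a.m. to 5 p.m., Monday through Friday."),
    (re.compile(r"\b(refund|return)\b", re.I),
     "You can start a return from the Orders page."),
    (re.compile(r"\b(human|agent)\b", re.I),
     "Connecting you with a human agent now."),
]

def reply(message: str) -> str:
    """Return the first matching canned response, or a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("What are your hours?"))
```

The “sometimes artificial intelligence” half is what replaces the hand-written rule table with a trained language model; the interface contract, message in and reply out, stays the same.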


In a recent Chatbots Magazine article, Daniel Thomas provides an example of how unexpected the new applications can be. There’s a bot called Woebot, developed by Alison Darcy, a Stanford clinical psychologist, that assists patients suffering from depression and anxiety. Darcy explains: “CBT [cognitive behavioral therapy] is a readily translatable treatment for digital therapies, because it’s really structured, it’s based on data, it’s time-limited, and evidence-based. The big realization lightbulb moment…was that actually some people don’t want to talk to a human [therapist]. That’s kind of the value proposition of Woebot—that he’s very much a robot.”


When launched, Woebot helped more than 50,000 patients suffering from depression and anxiety in its first week. Currently, it’s estimated to be handling up to two million conversations a week on Facebook Messenger. A study published by Darcy and two Stanford researchers in JMIR Mental Health compared results from 70 participants: half used Woebot, and half read the National Institute of Mental Health e-book Depression in College Students. The Woebot group achieved significantly reduced symptoms compared to the control group. (See “The Science Behind Woebot” at woebot.io/the-science.)


GETTING CHATBOTS RIGHT


We spoke to Carlos Meléndez, cofounder and COO of Wovenware (wovenware.com), a San Juan, Puerto Rico-based creator of chatbots and provider of AI software development services and solutions. Meléndez pointed out that AI, machine learning, ever-improving NLP (natural language processing), and very large databases all contribute to the technology used to create chatbots. But he also cautions that as bots become more capable and conversational, the disappearing boundaries of the services and tools impose new responsibilities on those designing the AI algorithms. Meléndez says, “Chatbots are a transformational technology. We have to be careful how we move ahead.”


The best practices that Meléndez and the Wovenware team have adopted continue to evolve, but they frequently center on the core issue of transparency—both for the purchaser and for end users.


As a member of the Forbes Technology Council, Meléndez frequently posts about AI development on the council’s blog. In June 2019, he wrote about the responsibilities of AI developers in “Expanding on Asimov’s Laws to Create Responsible Chatbots.”


You might recall Asimov’s Three Laws of Robotics, which attempt to cover all contingencies for safe human-robot interaction. Meléndez’s post offers five principles for designing responsible and ethical chatbots. They are:


  1. “Be transparent.” Always let users know they’re interacting with software, not a human.
  2. “Determine how you’ll use the chatbot.” Once you determine what your chatbot can and can’t do, make sure the user knows as well.
  3. “Know when it’s not appropriate to use a chatbot.” One example of inappropriate use would be a doctor’s office communicating diagnoses to patients via a bot.
  4. “Enable chatbots to appropriately communicate with diverse audiences.” A bot needs to be culturally sensitive, and using translation software can introduce linguistic-based misunderstandings. “Build your bot from the ground up in each language in which you’ll use it to communicate with customers.” Wovenware doesn’t use translation services.
  5. “Make sure it is not learning the wrong things.” AI software can continually learn on its own and is subject to a vulnerability called the “unsupervised child.” Wovenware continually monitors its installed chatbots and provides monthly reports on the activities of the software and its ongoing development.
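Two of the principles above, disclosure (No. 1) and guarding against the “unsupervised child” (No. 5), translate directly into design decisions. The sketch below is a hypothetical illustration, not Wovenware’s implementation: the bot identifies itself as software at the start of every session, and every exchange is logged so a human team can audit what the bot is actually saying over time.

```python
from datetime import datetime, timezone

DISCLOSURE = "Hi! I'm an automated assistant, not a human."

class AuditedBot:
    """Hypothetical chatbot wrapper enforcing disclosure and auditability."""

    def __init__(self):
        self.log = []  # reviewed periodically by a human team

    def start_session(self) -> str:
        # Principle 1: identify as a bot up front, every session.
        return DISCLOSURE

    def respond(self, message: str) -> str:
        answer = self._answer(message)
        # Principle 5: record every exchange so humans can catch
        # the bot drifting into responses it shouldn't be giving.
        self.log.append(
            (datetime.now(timezone.utc).isoformat(), message, answer)
        )
        return answer

    def _answer(self, message: str) -> str:
        # Placeholder for the real dialogue engine.
        return "Let me look into that for you."

bot = AuditedBot()
print(bot.start_session())
bot.respond("Where is my order?")
```

The key design choice is that logging happens in the wrapper, not the dialogue engine, so no response path can bypass the audit trail—mirroring the kind of continual monitoring and monthly reporting the article describes.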


No doubt, chatbots will become smarter and more diversified in the future. And, as Meléndez points out, the demands for responsible design should grow with them.


Michael Castelluccio has been the Technology Editor for Strategic Finance for 24 years. His SF TECHNOTES blog is in its 21st year. You can contact Mike at mcastelluccio@imanet.org.


