Transparency in Automated Decision-Making

May 6, 2014 Kris Hammond

A tidal wave of conversation is occurring about how we bring intelligent systems into the workforce, but one issue has fallen by the wayside.

Transparency.

We demand transparency of our coworkers. It is quite painful to work with people who are completely uncommunicative about what they are doing or why they are making the decisions they make. In fact, strong communication skills are a prerequisite for most jobs today.

To boil it down, communication plays three crucial roles for us.

  1. Action: It allows us to understand what is going on in the world so we can act or react appropriately.
  2. Trust: It provides us with the information we need to establish trust in a relationship.
  3. Decision-Making: It allows us to provide input and guidance in those instances where decisions are being made improperly. If we don’t understand why decisions are being made, then we cannot assist in making them.

This issue of transparency becomes even more important as we move toward a world of automated decision-making. More and more, we are surrounded by black boxes that make opaque decisions about logistics, security, finance, diagnosis, and an ever-growing variety of data-driven functions. If we are not careful, we will end up with many of these systems making decisions for us without our ever understanding why.

To be fair, if HAL had been able to explain his mission and get everyone on the same page, he wouldn't have had to kill most of the crew before being rolled back to presentience by Dave.

The more we understand about both the inner workings of the systems we deal with as well as how they make their decisions, the more we will be able to work with them, rather than against them. We need to build systems that are designed to explain their thinking to us.

Solutions that involve dumping traces or activity logs are not enough. That kind of output is intelligible only to those with expertise in the systems themselves, not to the people who actually use them. But what if a communication layer were built into the technology, one matched to the expertise of the user?

I believe the answer is machine-generated narratives – a communication layer and natural language output that not only provides the answer, but the reasoning that went into deriving it. This type of solution enables our systems to communicate with us, coordinate with us and genuinely work with us.
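To make the idea concrete, here is a minimal sketch of such a communication layer: a decision routine that records the reasons behind each step and can render them as plain English alongside the answer. The rule, threshold, and function names are hypothetical illustrations, not Narrative Science's actual technology.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    reasons: list[str]  # reasoning steps recorded as the decision is made

    def narrative(self) -> str:
        """Render the decision and its reasoning as plain English."""
        return f"I recommend {self.outcome} because " + "; ".join(self.reasons) + "."

def approve_loan(income: float, debt: float) -> Decision:
    """Hypothetical rule-based decision that explains itself as it runs."""
    ratio = debt / income
    if ratio < 0.3:
        return Decision("approval",
            [f"the debt-to-income ratio ({ratio:.0%}) is below the 30% threshold"])
    return Decision("denial",
        [f"the debt-to-income ratio ({ratio:.0%}) meets or exceeds the 30% threshold"])

# The user sees the answer *and* the reasoning, not an opaque verdict or a raw log.
print(approve_loan(100_000, 20_000).narrative())
```

The point of the design is that the explanation is not reconstructed after the fact from logs; the reasons are captured at the moment the decision is made, in terms the user already understands.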

If we continue to create a world in which our machines cannot explain themselves to us, we will have to blindly trust them. As the decisions they make become more integrated into our lives, we will become more dependent upon technologies that make decisions we do not understand.

If, however, we create a world in which our systems can communicate with us, we will be building a world in which we can collaborate with them, better guide them, and become smarter by working with them. In effect, we will be crafting partners and coworkers rather than a breed of passive-aggressive and painfully uncommunicative bosses. And isn't that better than having to shut them down after they go on a well-justified but badly communicated killing spree?

Kris Hammond is the Chief Scientist of Narrative Science. Connect with Kris on Twitter.

