A Framework For Understanding Artificial Intelligence and Data

February 2, 2016 Alice Robbins


“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

As artificial intelligence (AI) has reemerged, unrealistic expectations have been pinned on the technology. I’ve heard comments ranging from “AI is basically magic” to “AI is like a crystal ball: ask it any question and it’ll tell you the future.”

Yes, AI, or more specifically intelligent systems, can greatly improve, augment, or automate our knowledge processes, but the technology isn’t ‘magic’ per se. To fully understand what it can do for us, we need to first understand how it works.

A helpful and easy-to-understand framework can be found in “Practical Artificial Intelligence for Dummies” by our co-founder, Kristian Hammond. He believes that “to demystify the AI systems in use today...look at how AI systems reason, and in particular, how they look at the world (assess), draw conclusions (infer), and make guesses about what is going to happen next (predict).”

How Intelligent Systems Look At The World

An intelligent system that is familiar to most of us is Amazon’s recommendation engine. It is a prime example of how intelligent systems assess data. Amazon creates a personalized summary of its shoppers in order to match individuals against other similar customers and produce a source of predictions. The data for this assessment is transactional: what we touch and what we buy. Amazon’s recommendation engine then uses this information in its reasoning to build up profiles and share recommendations.

As Hammond shares in his book, “the sheer volume of the number of transactions often drives these systems. Even with very weak models of the world, systems can sift out relevant data from the noise because they have so much to work with. And the more we interact with these systems, the more they can learn about us.”
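To make the assessment step concrete, here is a minimal sketch of how a shopper profile might be built up from transactional events such as views and purchases. This is purely illustrative (not Amazon’s actual system); the categories, actions, and weights are invented.

```python
from collections import Counter

def build_profile(transactions):
    """Summarize a shopper as weighted category counts.

    Each transaction is a (category, action) pair; purchases are
    weighted more heavily than views. Illustrative weights only.
    """
    weights = {"view": 1, "purchase": 3}
    profile = Counter()
    for category, action in transactions:
        profile[category] += weights.get(action, 0)
    return profile

# Example: a shopper who browses sci-fi but buys cookware
history = [("sci-fi", "view"), ("sci-fi", "view"), ("cookware", "purchase")]
print(build_profile(history))  # Counter({'cookware': 3, 'sci-fi': 2})
```

A profile like this is what gets compared against other customers (or a generic category profile) in the inference step below.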

From the business use case perspective, systems aimed at predicting specific outcomes such as customer churn and equipment downtime require historical information and rules related to those issues to be successful. Keep in mind that incorporating a variety of data, such as interactional and environmental data, will make the learning algorithms more accurate than relying on transaction volume alone.
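As a toy illustration of why mixing data types helps, the sketch below scores churn risk from three kinds of signal. The feature names and weights are entirely hypothetical; a real system would learn them from historical data.

```python
def churn_score(customer):
    """Toy churn score combining several kinds of signal.

    Transactional, interactional, and environmental features all
    contribute; the weights are made up for illustration.
    """
    score = 0.0
    score += 0.4 * (customer["months_since_last_purchase"] / 12)        # transactional
    score += 0.3 * (customer["support_tickets_last_quarter"] / 5)       # interactional
    score += 0.3 * (1.0 if customer["competitor_opened_nearby"] else 0.0)  # environmental
    return min(score, 1.0)

print(churn_score({
    "months_since_last_purchase": 6,
    "support_tickets_last_quarter": 4,
    "competitor_opened_nearby": True,
}))  # 0.74 -- a higher score means a higher churn risk
```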

How Intelligent Systems Draw Conclusions

“Inference is perhaps the most misunderstood aspect of artificial intelligence because inference is usually thought of as consisting of ‘if-then’ rules. While this is a fine characterization of the basic layer of intelligent systems, it’s like describing human reasoning as ‘just a bunch of chemical reactions,’” Hammond says. “A more powerful approach is to start with the idea of relationships between things: objects and actions, profiles and categories, people and other people, and so on. Inference is the process of making the step from one thing to the other.”

The world is rarely so black and white, so pure if-then deduction on its own is uncommon, and very little can be done with it alone. Beyond purely deductive reasoning is the world of evidence-based reasoning, which includes assessing similarity, categorizing, and amassing points of evidence. Hammond’s brief explanations of these approaches are spot on.

Checking similarity - When Amazon’s engine recommends a book, it first considers who I am similar to and what category of reader I might fall into. The engine bases this consideration on how close my profile is to that of other customers or to a generic profile that defines a category.

The match is rarely perfect, so the system needs to judge how well my profile lines up with others. It has to come up with a score indicating how similar the profiles are. It considers which features in the profiles match (providing evidence for the inference) and which features don’t match (providing evidence against the inference). Each feature that matches — or doesn’t match — adds or subtracts support for the inference I want to make.    
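Here is a minimal sketch of that kind of feature-matching similarity score, where matching features add support and mismatches subtract it. The profiles and features are hypothetical.

```python
def similarity_score(profile_a, profile_b):
    """Compare two profiles feature by feature.

    Matching feature values add support for the inference, mismatches
    subtract it; the result is normalized to the range [-1, 1].
    """
    features = set(profile_a) & set(profile_b)
    if not features:
        return 0.0
    score = sum(1 if profile_a[f] == profile_b[f] else -1 for f in features)
    return score / len(features)

me = {"genre": "sci-fi", "format": "ebook", "price_band": "low"}
other = {"genre": "sci-fi", "format": "hardcover", "price_band": "low"}
print(similarity_score(me, other))  # 0.33 -- two matches, one mismatch
```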

Categorizing - For categorization, techniques such as Naïve Bayes (using the likelihood that each feature implies membership in a group) can be used to calculate the likelihood that an object with a particular set of features is a member of a particular group. This technique adds up “walks like a duck,” “quacks like a duck,” and “looks like a duck” to determine that a thing is, in fact, a duck.

The power of Naïve Bayes is that the systems that use it do not require any prior knowledge of how the features interact. Naïve Bayes takes advantage of the assumption that the features it uses as predictors are independent. This means systems making use of the technique can be implemented easily without having to first build a complex model of the world.
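A small hand-rolled sketch of the Naïve Bayes calculation for the duck example is below. The prior and likelihood numbers are invented for illustration; the point is that the per-feature likelihoods simply multiply because the features are assumed independent.

```python
def naive_bayes_posterior(features, priors, likelihoods):
    """Score each class as prior * product of per-feature likelihoods.

    Features are assumed independent given the class (the 'naive'
    assumption), so their likelihoods multiply. Returns normalized
    posterior probabilities.
    """
    scores = {}
    for cls, prior in priors.items():
        p = prior
        for f in features:
            p *= likelihoods[cls][f]
        scores[cls] = p
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}

# Invented numbers: how likely each observation is for ducks vs. non-ducks
priors = {"duck": 0.2, "not_duck": 0.8}
likelihoods = {
    "duck":     {"walks_like": 0.9, "quacks_like": 0.9, "looks_like": 0.8},
    "not_duck": {"walks_like": 0.2, "quacks_like": 0.1, "looks_like": 0.1},
}
print(naive_bayes_posterior(["walks_like", "quacks_like", "looks_like"],
                            priors, likelihoods))
# {'duck': ~0.99, 'not_duck': ~0.01} -- it is, in fact, a duck
```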

Amassing evidence - While rules drive inference for most AI systems, almost all the rules include some notion of evidence. These rules can both collaborate and compete with each other, making independent arguments for the truth of an inference that has to be mediated by a higher‐order process. At the core of these arguments are quantitative scores and thresholds, but the conclusions that are drawn using them are more qualitative in feel.

For example, an advanced natural language generation system can take in data and generate the following: “Joe’s Auto Garage has engaged in suspicious activity by making multiple deposits in amounts that are just below the federal reporting thresholds. There are ten deposits of $9,999 over a six-week period.”
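A sketch of how evidence rules with quantitative thresholds can feed a qualitative conclusion like the one above is shown below. The $10,000 figure is the real federal reporting threshold, but the rule, the 1% band, and the wording of the conclusion are purely illustrative.

```python
def flag_structuring(deposits, threshold=10_000, min_count=5):
    """Amass evidence that deposits are being kept just under a threshold.

    Each deposit within 1% of the threshold adds a point of evidence;
    enough points crosses into a qualitative conclusion.
    """
    near_threshold = [d for d in deposits if 0.99 * threshold <= d < threshold]
    evidence = len(near_threshold)
    if evidence >= min_count:
        return (f"Suspicious activity: {evidence} deposits just below the "
                f"${threshold:,} reporting threshold.")
    return "No structuring pattern detected."

print(flag_structuring([9_999] * 10 + [2_500, 7_200]))
# Suspicious activity: 10 deposits just below the $10,000 reporting threshold.
```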

How Intelligent Systems Guess What Will Happen Next

One focus of reasoning (for humans and machines) is particularly useful: prediction. Making guesses about what is going to happen next is important so that we can deal with predicted events and actions appropriately. Predictions about individuals or events can be made based on checking outcomes against other similar individuals or events and projecting those outcomes back onto the original individual or event.

Hammond states, “This combination of calculating similarity and projecting forward based on that similarity is called collaborative filtering and is at the center of most transactional recommendation systems. The goal of prediction is often not to recommend things to an individual but instead to anticipate a problem to be avoided. The target is the prediction itself, rather than the person who is receiving it.”        
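A minimal sketch of user-based collaborative filtering along these lines: find the customers most similar to me, then project their outcomes (here, ratings) back onto me. The data and the simple agreement-count similarity measure are illustrative, not any particular production algorithm.

```python
def predict_rating(target, others, item):
    """Predict the target user's rating for an unseen item.

    Similarity to each neighbor is the count of shared items on which
    the ratings agree; the prediction is a similarity-weighted average
    of the neighbors' ratings for the unseen item.
    """
    weighted, total_weight = 0.0, 0.0
    for neighbor in others:
        if item not in neighbor:
            continue
        shared = (set(target) & set(neighbor)) - {item}
        sim = sum(1 for i in shared if target[i] == neighbor[i])
        weighted += sim * neighbor[item]
        total_weight += sim
    return weighted / total_weight if total_weight else None

me = {"book_a": 5, "book_b": 3}
neighbors = [{"book_a": 5, "book_b": 3, "book_c": 4},
             {"book_a": 1, "book_b": 3, "book_c": 2}]
print(predict_rating(me, neighbors, "book_c"))  # ~3.3, pulled toward the closer neighbor
```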

A slightly different approach to prediction is using profiles to classify individuals (and their behaviors) into groups. In essence, the dynamic of assessing, inferring, and predicting comprises the core of many intelligent systems. Once you understand the basic drivers of these systems, you can start to understand how you might be able to use them to your own advantage (no magic required).


