Assessing the Effectiveness of Artificial Intelligence in Your Organization, Part 2

This is the second of three posts exploring a critical aspect of your AI maturity: understanding how effective your AI is, and what you can do as an organization to improve its effectiveness.

How well does AI work in your organization?

The first thing to do is to look at how well your organization both understands and uses its data science teams and their work in utilizing artificial intelligence and machine learning in business applications. We will divide this into three aspects, each of which will be explored below.

What data is used & how are you using it?

Let’s start with understanding the sources of data you’re utilizing. If you run into a lot of roadblocks even accessing what feel like basic data sources, your data science maturity is relatively low. Even more mature organizations can often run into bottlenecks when busy data science teams are unable to get to internal requests in a timely manner, but understanding where the issues are will help you assess effectiveness much more easily.

Then, you need to get an understanding of how the data is being used. For more advanced organizations, access to data is less of an issue than the questions you are trying to answer or the problems you are trying to solve with the data.

One rule of thumb is to ensure that the value of the outcomes you’re able to measure is worth the effort being put into accessing and processing the data. That being said, the first time is often the hardest, and strategic use of artificial intelligence and machine learning can make subsequent times much easier.

How is artificial intelligence used?

After you have an understanding of how your data is being used, it’s time to understand exactly how you are applying AI and machine learning: to get quicker answers, to get more accurate or repeatable answers, or simply to process your data in ways that would take humans too long, or be impossible for them to achieve.

Is it explainable?

In future posts, we’ll get into even more details beyond explainability, such as bias, repeatability, and other key measures of success. But explainability is a key factor. Let’s explore that now.

Great results from your AI tools and their application are always desirable. If you are unable to understand how those results were achieved, however, what you have is a black box that can be risky. For instance, if you don’t know why you are getting the results you are getting, any number of decisions could be made by your artificial intelligence without your knowledge.

Explainable AI solves this by offering not only the benefits that machine learning brings to an organization, but also the ability for technical (and non-technical) users to understand how results are achieved, so that they can prevent potential issues or make more meaningful improvements.

In the next article, we will talk about measuring the results of your work with AI in your organization.


Greg Kihlstrom