Improve Your Chances of AI Success with Decision Modeling

Increasing Use of Narrow AI in Business Automation

In this series of articles I explore how decision management and modeling can help increase the success of AI deployments.

As part of their digital transformation initiatives, companies are going beyond the established use of predictive analytics by embedding ‘narrow’ artificial intelligence models within their automated systems. The outcomes of these models directly control the system’s actions. Not to be confused with Strong AI, which attempts to approximate general human intelligence, this ‘Narrow’ (or ‘Weak’) AI uses machine learning to supplement (or even replace) human judgement in very specific areas. Typically, these systems automate the extraction of business insights from data, a task that previously required human experience, and then act on those insights with or without human supervision.

AI models provide observations that companies can use to: segment and target higher-value customers; personalize service and product offerings to better anticipate customer requirements; and predict and avoid customer churn. By these means companies hope to acquire, satisfy and retain higher-value customers. Other uses of narrow AI include: optimizing transport logistics by anticipating demand for products; detecting fraud in real time; and automating market sentiment analysis to understand the public mood and anticipate market changes.

Contrary to the hype, many are discovering that this use of narrow AI and machine learning does not guarantee success and has its own drawbacks. Frequently, initial attempts to use AI models can be expensive and of surprisingly limited value. Highly trained and expensive personnel and sophisticated, high-performance hardware can balloon budgets while the promised business benefits remain elusive. Why is this?

Narrow AI Informs Business Decisions

Some projects lose sight of the fact that narrow AI models are used, first and foremost, to inform a business decision: either directly, by predicting something a business can act on to make the decision more profitable, or indirectly, by providing insight into hidden relationships in business data that might be exploited. This is often a commercial decision such as:

  • Should we target this customer for sales and services? What’s their risk/reward profile?
  • What kind of customer is this: which products and pitches are they eligible for and most likely to respond to?
  • What specific actions should we take to pre-emptively please customers and prevent churn?
  • How can we reduce costs and minimize delays by predicting demand for products and services?
  • Does the pattern of customer behaviour suggest fraud or some other danger (e.g., incipient insolvency)?

It may cover automation of decisions which previously required human guidance:

  • Are recent changes in blood chemistry indicative of a medical condition?
  • Given past behaviour, is this employee likely to be unreliable?

When AI projects lose sight of this business decision, they deliver less benefit.

This is because an accurate definition of this business decision and its business context are valuable assets in selecting, training and using machine learning techniques. Decisions help to focus the application of each AI model, define its business value and guide its evolution.

How Decision Models Help

In this series of posts we’ll be looking specifically at how decision models help to:

  • Define a Business Context: to show how the use of AI models fits into the big picture: how they collaborate to generate business insight, what their requirements are and exactly how they impact company behaviour.
  • Improve the Transparency of AI models: to make even the most opaque AI models yield an explanation for their outcomes—essential to meet the increasing public and regulatory demand for transparent decision-making in all areas of business.
  • Define Business Goals: to focus AI projects and provide both a business case and a means to measure their success.
  • Help SMEs to Steer AI: providing the best fusion of machine learning and human expertise by depicting how existing expertise informs and constrains AI.

In this article, let’s look at the first of these…

How Decisions Clarify the Business Context of AI

Business decisions provide a defining context for the use of AI models, and this is important because…

Each AI Model Should Be Applied to a Well-Defined Task

Narrow AI models are best developed for a specific purpose, to be used at a specific time and under specified circumstances. They rely on training and test data that are well understood and of adequate quality for the task at hand. In many circumstances they collaborate with business rules and other analytic models to achieve a business outcome. They perform poorly if their purpose is vaguely specified or appropriate data is not provided. They also behave poorly if they are unfocused and try to address more than one issue simultaneously (the ‘jack of all trades’ problem).

Defining a decision model is an excellent way of expressing both the ‘big picture’ and the specific details of the context in which AI models are used. A DMN decision requirements diagram (DRD) shows how AI models collaborate with other sources of business information and knowledge to make a specific contribution to a business decision. It drives out the data and knowledge requirements of models and establishes a clearly defined relationship between them and the business decisions that use them. Such a context defines the inputs and outputs of the AI models and how these align with the key business actors, processes (using process-oriented metadata in the DRD) and rules.

Example Decision Model

Consider the example DMN decision requirements diagram (DRD) below. This diagram shows how two AI models collaborate to determine the details of a client mortgage offer. For clarity, we’ve added some colour to better illustrate the integration of narrow AI (note: this isn’t part of the DMN standard). Real DMN DRDs are, at this level, agnostic about the technology used to implement decisions.

 

A Decision Model DRD Illustrating the Context for Two AI Models (in Green)

Be aware that there is much more to a DRD than just a box-and-line diagram (see below) and much more to a decision model than just a DRD. However, even this single diagram tells us a lot about the use of narrow AI in this example.

  • The yellow rectangles represent conventional decisions based on business rules or human decision making.
  • The green decisions (rectangles) in this example are AI models. The Determine Property Risks decision uses Bayesian inference to classify the key physical perils within the vicinity of the property, and Determine Likely Property Value uses a neural network to estimate the value of the property given its survey details. This highlights the fact that each decision in a DRD can be underpinned by business rules, machine learning models or any other executable representation; the DRD is agnostic to this and just shows the dependencies between them all.
  • The yellow knowledge sources (with wavy bases) are sources of knowledge that inform or constrain their respective decisions. They represent guidelines, policies, regulations, legal mandates, etc.
  • The blue knowledge sources, for decisions using weak AI, represent training and test data sets used to build the model. This representation holds for batch and continuously trained machine learning models.
  • The rounded rectangles represent data sources.
  • The lines represent information (or knowledge) dependencies. In each case, the item at the ‘arrow’ or ‘circle’ end of the line depends on the information or knowledge made available by the item at the ‘plain’ end.
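To make these dependencies concrete, here is a minimal Python sketch of how the DRD's structure might map onto code: two stand-in functions play the roles of the green AI decisions, and a rule-based decision depends on both of their outputs. All function names, field names and thresholds here are hypothetical illustrations, not part of the example or of DMN.

```python
def determine_property_risks(property_details):
    """Stand-in for the Bayesian 'Determine Property Risks' decision.
    A real implementation would run Bayesian inference over peril data."""
    return "low" if property_details.get("flood_zone") is False else "high"

def determine_likely_property_value(survey_details):
    """Stand-in for the neural-network 'Determine Likely Property Value'
    decision; a real implementation would score the survey details."""
    return survey_details["floor_area_sqm"] * survey_details["price_per_sqm"]

def determine_mortgage_offer(property_details, survey_details, requested_loan):
    """Rule-based decision that depends on both AI model outputs,
    mirroring the dependency arrows in the DRD."""
    risk = determine_property_risks(property_details)
    value = determine_likely_property_value(survey_details)
    # Illustrative rule: decline high-risk properties or loans above 90% LTV.
    if risk == "high" or requested_loan > 0.9 * value:
        return {"approved": False}
    return {"approved": True, "loan": requested_loan}

offer = determine_mortgage_offer(
    {"flood_zone": False},
    {"floor_area_sqm": 100, "price_per_sqm": 3000},
    requested_loan=250000,
)
```

The point of the DRD is that this dependency structure is made explicit and technology-agnostic: either stand-in function could be replaced by a trained model without changing the shape of the decision.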

How This Helps

Notice how the DRD:

  • Allows us to be explicit about the combined use of AI and business rules in business decision-making, so that all stakeholders can see the extent to which AI is being used and how.
  • Clearly defines all the dependencies in our decision making, helping us to understand and manage the collaboration and handle change in individual parts as well as informing all stakeholders precisely how AI is contributing.
  • Captures all the data requirements of the AI models, both in training and in use, in addition to the rest of the decision-making elements. The process of completing a DRD drives out the information required to support models and their relationships with data used elsewhere in the same decision. It can also identify missing data early.
  • Allows us to document the collaboration of AI models using a recognized international standard for decision modeling: DMN. This means we can share our models more easily with others and take advantage of the many DMN tools on the market.

The DRD is just the diagrammatic representation of the dependencies between the elements of decision-making. The DMN standard also defines many properties for each of the symbols on the DRD that provide a wealth of additional information (such as its business goal and key performance indicators). In addition, the decision logic level provides detail on the business behaviour of each decision-making element. In the case of a decision using narrow AI, this could be a reference to a specific machine learning model and a definition of its interface.
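One way to picture "a reference to a specific machine learning model and a definition of its interface" is as an explicit interface that any implementing model must satisfy. The sketch below uses Python's `typing.Protocol` for this; the interface name and method are hypothetical, not prescribed by DMN.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ValuationModel(Protocol):
    """Hypothetical interface a 'Determine Likely Property Value' decision
    might declare for whatever model implements it."""
    def predict(self, survey_details: dict) -> float: ...

class DummyValuationModel:
    """Trivial stand-in for a trained neural network."""
    def predict(self, survey_details: dict) -> float:
        return 250000.0

# The decision logic holds only the interface; the concrete model can be
# swapped (rules, neural network, ...) without changing the decision model.
model: ValuationModel = DummyValuationModel()
estimate = model.predict({"floor_area_sqm": 100})
```

Keeping the decision's contract separate from its implementation is exactly what lets the DRD stay agnostic about technology.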

Conclusion

The process of decision modelling adds much needed rigour to the design of AI model deployments, forcing AI engineers to be explicit about their models’ business contribution, data requirements and use scenarios early.

Currently, the AI and machine learning community has no notation to express how models interact with input data, training data and each other to achieve business outcomes, or how their use fits into a business process. DMN (and, by association, BPMN) is well placed to fulfil this role. DMN could be used, for example, to illustrate the architecture of AI model ensembles and to show, using conventional decision tables, how ensemble voting works.
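The ensemble-voting logic mentioned above is simple enough to sketch in a few lines; this is the kind of combination rule a DMN decision table would make explicit. The class labels here are hypothetical.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Return the class label predicted by the most ensemble members
    (ties resolved in favour of the label seen first)."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. three fraud classifiers voting on one transaction:
verdict = ensemble_vote(["fraud", "ok", "fraud"])  # -> "fraud"
```

Expressed as a decision table, each model's output would be an input column and the vote would be the table's output, making the ensemble's behaviour auditable by non-specialists.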

In our next article we consider how decision modelling can improve the transparency of notoriously inscrutable AI models like neural networks and support vector machines…

A Practitioner’s Review of DMN 1.2


In April 2016, James Taylor and I summarized our experiences of decision modelling in DMN by publishing two wish lists of the features we’d like to see in the next version of the standard. Specifically, I listed new feature suggestions for the way the standard represents decision requirements and decision logic elements.

Now the new version, DMN 1.2, has been released (the OMG committee ratified the proposed changes on June 21). We look at the major new features that made it into this release, how these impact users of the standard and what we think of these changes.

If you are new to decision modeling, find out why it’s important and see our overview of DMN 1.1.
