Breaking through the FUD (Fear, Uncertainty, Doubt) in AI

AI has grown in popularity in recent years, but Fear, Uncertainty and Doubt over the technology remains. Panelists at the Data Futurology event investigate these fears, share their own stories of implementing AI, and outline the process and culture changes needed to make it a success.

Australia has come a long way since the term ‘artificial intelligence’ (AI) first cropped up in public discourse. Initially a nebulous concept, AI has since become a tangible and popular fixture of the domestic economy.

Nationally, the technology is tipped to be worth $315 billion by 2028, with more than 285 applications across the business, government and research sectors. But despite its value and ubiquity, concerns remain over AI’s potential to inadvertently cause harm, in the form of algorithmic bias, misuse of data, and existential risk.

In fact, fear, uncertainty and doubt over the technology are still so commonplace that the acronym - FUD - is practically a household term. With use cases for AI growing each day, what can be done about this FUD?

Panelists at the Data Futurology Advancing AI Breakfast Series shared some of their expert tips.

Moderator: Dr Stephen Hardy, CEO, Ambiata
Panelists:
- Dave Abrahams, EGM Data, Insurance Australia Group
- Peter Worthington-Eyre, Chief Data Officer, Department of the Premier and Cabinet, Government of South Australia
- Dr Catherine Lopes, Head of Data Strategy & Analytics, ME Bank

Defining AI

The definition of AI has evolved over the years - from a variant of ‘cybernetics’ in the 1950s, to ‘expert systems’ in the 1980s, to the more recent ‘deep-’ and ‘reinforcement learning’ applications.
For the purpose of this discussion, we focus on the most modern use of AI: automated decision making and its ability to mimic human reasoning.

Before you begin using AI

Draw a line between decision making and decision support

The extent to which AI tools influence business decision making should always be considered upfront, said Peter Worthington-Eyre from the Government of South Australia.

His department is using AI to detect illegal rubbish dumping and prioritise inspections.

“There is a big difference between a completely automated decision that ends up with a process being finalised and executed – and an automated decision that gives a right of review,” he said.

In other words, using AI to augment, not replace, human decision making may be a more ethical option.

Interrogate your reasons for using AI

Before AI is put to this use, though, organisations should consider when it is - and isn’t - acceptable to use AI for decision making. Start by asking why you are using the technology, said Dr Catherine Lopes of ME Bank, whose firm is using AI for predictive modelling.

For Peter, it comes down to consequence and risk.

“If you are looking at a media feed and you give someone the wrong news article and they don’t click on it, it’s a pretty low consequence,” he said.

“[Conversely], if you are deciding through the model whether someone either does or doesn’t get a service - or is or is not prioritised - then the consequences are different.

“[It’s all about] the level of risk that you are willing to accept based on the consequence of that model being wrong,” he added.
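
One way to picture that distinction in code: the minimal Python sketch below (all names are hypothetical) executes low-consequence decisions automatically, while high-consequence ones are only queued as recommendations for a human with a right of review.

```python
from dataclasses import dataclass
from enum import Enum


class Consequence(Enum):
    LOW = 1   # e.g. a mis-ranked news article
    HIGH = 2  # e.g. granting or denying a service


@dataclass
class Decision:
    subject_id: str
    outcome: str
    consequence: Consequence


def execute(decision: Decision) -> str:
    # Fully automated: the process is finalised and executed.
    return f"auto-executed: {decision.outcome} for {decision.subject_id}"


def queue_for_review(decision: Decision) -> str:
    # Decision support: a human reviews before anything is finalised.
    return f"queued for review: {decision.outcome} for {decision.subject_id}"


def route_decision(decision: Decision) -> str:
    if decision.consequence is Consequence.LOW:
        return execute(decision)
    return queue_for_review(decision)
```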

When implementing AI

Once you have qualified the use of AI within your organisation, there are three key steps to follow.

Data collection, processing and preparation

“What are the factors or bottlenecks that you have overcome to create your data assets?”

To reduce FUD, the data fed into AI systems - which informs any decisions they produce - should be reviewed for quality and cleansed where appropriate. Clean data minimises the risk of bias and avoids a ‘garbage in, garbage out’ scenario.

“A lot of the data we are looking at over a long time series wasn’t collected for AI,” said Worthington-Eyre.

“We really need to understand the people that collected that data […]; what legislation [it] was collected under; how did those business processes change over time. What was the variability in the collection of that information across a large number of people; what does it actually mean; and what are the biases introduced into that data?”

For example, bias can creep in when data quality declines as the researcher’s workload grows (‘pressure of demand’ bias). Peter says collecting a smaller amount of higher-quality data is often more effective than the reverse.
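
A first pass at that review can be scripted. Here is a minimal sketch in pandas - the file name and column names (`reported_at`, `severity`) are hypothetical - that surfaces missing values, duplicates, and shifts in how a field was recorded over time:

```python
import pandas as pd

# Hypothetical long-time-series dataset of rubbish-dumping reports.
df = pd.read_csv("dumping_reports.csv", parse_dates=["reported_at"])

# Basic quality checks before any modelling.
print(df.isna().mean().sort_values(ascending=False))  # share of missing values per column
print(df.duplicated().sum())                          # exact duplicate records

# Look for changes in how a field was collected over time: a sudden jump
# in the share of a category can signal a process or legislative change
# rather than a real trend in the underlying phenomenon.
by_year = (
    df.assign(year=df["reported_at"].dt.year)
      .groupby("year")["severity"]
      .value_counts(normalize=True)
      .unstack(fill_value=0)
)
print(by_year)
```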

Model building and development

“What are the big blockers to building models well, and what are some ways we can overcome these?”

Building models that accurately represent and predict real-world data is the next step in overcoming FUD on your AI journey. Here, Catherine recommends that Data Scientists start by building a simple baseline and avoid getting swept up in the latest machine learning “toys”.

“I see people jumping in to use a fancy, technology-oriented approach too easily and too quickly,” she said.

To start with, develop trust by building basic things and making sure they work. Only after proving that they work - and you have control over the algorithms - should you optimise and build more complex solutions, she said.
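
As a rough illustration of that baseline-first workflow (using scikit-learn on placeholder data, not ME Bank’s actual setup): fit a trivial baseline and a simple, interpretable model first, and let anything fancier earn its place by beating them.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data; swap in your own features and labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: a trivial baseline -- any real model must beat this.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Step 2: a simple, interpretable model you can understand and control.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("baseline:", accuracy_score(y_test, baseline.predict(X_test)))
print("logistic:", accuracy_score(y_test, simple.predict(X_test)))
# Only once the simple model is trusted is it worth trying
# more complex architectures.
```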

Dave Abrahams says processes need to be followed in the development phase.

These processes can help bridge any cultural differences between Data Scientists (the model builders) and Engineers (the deployers) - making sure models designed to solve a problem actually do so in a real-world context.

“[At IAG we have brought in] well-known and more modern engineering practices that have been used in software deployments and engineering for a long time. [For example, processes] around version control, the testability of models or code, [and] the codification of […] changes.”
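
As one concrete example of that testability, a model’s basic contract can be pinned down with ordinary unit tests that run in CI alongside the rest of the codebase. A minimal sketch, assuming a scikit-learn-style classifier (illustrative only, not IAG’s actual test suite):

```python
import numpy as np


def test_model_contract(model, X_sample):
    """Basic checks a deployed classifier should satisfy,
    runnable in CI like any other software test."""
    proba = model.predict_proba(X_sample)
    # Outputs are valid probabilities...
    assert np.all((proba >= 0) & (proba <= 1))
    # ...that sum to 1 for each record...
    assert np.allclose(proba.sum(axis=1), 1.0)
    # ...and the same input always yields the same output.
    assert np.array_equal(proba, model.predict_proba(X_sample))
```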

Model deployment and operation

“What has been your biggest challenge when deploying AI at scale?”

When it comes to deployment, Catherine notes that while machine learning is more complex than traditional IT deployments, the philosophy and knowledge built up in the engineering disciplines are still relevant. She goes on to mention the nascent field of MLOps, which is rapidly gaining momentum amongst Data Scientists, ML Engineers and AI enthusiasts.

According to the Continuous Delivery Foundation (CDF), MLOps is defined as “the extension of the DevOps methodology to include Machine Learning and Data Science assets as first-class citizens within the DevOps ecology”.

Dave Abrahams endorsed the MLOps approach.

“We have done a lot of investment in building out some of those [deployment] platforms ourselves and using a lot of open-source capability, to really uplift the ‘devops’ world of model deployment. [This gives] us confidence in the output being […] used by customers and staff members,” he added.
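
One small building block of that uplift is versioning each model artifact with enough metadata to trace it back to its training data. The sketch below is a hypothetical illustration (the file layout and choice of joblib are assumptions, not a description of IAG’s platform):

```python
import hashlib
import json
import os
from datetime import datetime, timezone

import joblib


def save_versioned_model(model, train_data_path, out_dir="models"):
    """Persist a model together with metadata tying it back to the
    exact training data and build time, so any deployed version
    can be traced and reproduced."""
    os.makedirs(out_dir, exist_ok=True)
    with open(train_data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    version = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    joblib.dump(model, os.path.join(out_dir, f"model-{version}.joblib"))
    meta = {"version": version, "train_data_sha256": data_hash}
    with open(os.path.join(out_dir, f"model-{version}.json"), "w") as f:
        json.dump(meta, f, indent=2)
    return version
```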

Scaling Up Your Use of AI

Start small and grow large

While these tips can help safeguard the integrity of AI-backed operations for anyone starting out, Dave offers one more tip for scaling up safely.

“Start small and grow large,” said Abrahams, whose firm is using AI to predict the severity of road incidents from customers’ descriptions.

“[We start by using AI] with a small percentage of web traffic […] say 10%.

“We monitor and measure […] the accuracy of decisions that are being made, and whether a manual assessor would have made the same decision.

“As you get more confidence [in the technology’s efficacy], and build that accuracy, then grow and expand it,” he said.
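
What Dave describes is essentially a canary rollout. A minimal Python sketch of the pattern - `model_severity` and `manual_assessment` are hypothetical placeholders - routes a configurable share of traffic to the model and tracks how often it agrees with the manual assessor:

```python
import random


# Placeholder implementations so the sketch runs end to end.
def model_severity(description: str) -> str:
    return "severe" if "injury" in description.lower() else "minor"


def manual_assessment(description: str) -> str:
    return "severe" if "injury" in description.lower() else "minor"


class CanaryRouter:
    """Serve a small share of traffic with the model while logging
    agreement with the manual assessment, so the share can be
    grown as confidence builds."""

    def __init__(self, model_share=0.10):  # start at ~10%, as in Dave's example
        self.model_share = model_share
        self.agreements = 0
        self.total = 0

    def assess(self, description: str) -> str:
        manual = manual_assessment(description)      # existing process
        if random.random() < self.model_share:
            predicted = model_severity(description)  # model handles this request
            self.total += 1
            self.agreements += int(predicted == manual)
            return predicted
        return manual

    def agreement_rate(self) -> float:
        return self.agreements / self.total if self.total else 0.0
```

Growing `model_share` beyond 10% then becomes a deliberate decision backed by the measured agreement rate, rather than a leap of faith.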

Closing Thoughts

While this advice can help firms tightly control risk, AI, like any new technology, will always carry some possibility of adverse consequences. That said, with the right intentions and processes, there is no reason for FUD to stunt your use of AI.

“If we come from a fundamental place that we are trying to improve things; we are going to be very transparent, ethical, and have a right of review [then] that’s a good place to start,” Peter concluded.

Meanwhile, Ambiata Chief Executive Stephen Hardy reminded us all to “Understand what you are trying to optimise, because the machine will optimise it, so make sure you pick the right thing […].”

Get More Tips on Overcoming FUD In AI

Overcoming FUD in AI is a complex, multifaceted challenge. If you need assistance, please contact us at info@ambiata.com or via the web form on our main page.

You can also hear more from the panelists by listening to the full panel session recording here.