AI demands a contract of trust, says KPMG

(Photo by Gerd Altmann from Pixabay)

The UK’s AI summer is upon us. But only if we avoid the shadow of misinformation, and the cold winds of poor design, short-term goals, and the mistaken belief in quick cost savings that I explored in my previous post.

But what strategic steps can be taken to accentuate the positives and minimize the negatives? And more importantly, why should organizations bother?

One answer is that it’s not only about ethics at the macro level – though that is important in a global, social economy.

Leanne Allen is a partner in Financial Services at consulting and services giant KPMG UK. Speaking this week at the Westminster eForum on the next steps for AI, she explained that all organizations should use AI responsibly to make their businesses smarter and more responsive. In turn, other benefits will follow – both financial and social. She said:

Consumers and the investor community are demanding more from organizations, and that’s across all industries. Whether that’s better, less disruptive customer service, the expectation of a more personalized business experience, or the notion that sectors like Financial Services must do more to help address inequality and promote financial stability.

The expectations on companies to innovate and drive real value from data and new technologies continue to accelerate. And adopting advanced techniques, including machine learning [ML] and AI, gives organizations the leverage to respond to those needs.

In this sense, developing what Allen calls the “user experience” is as important a consideration as ethics, because long-term business success rests on more stable foundations. She says:

That means better and faster decisions that are more accurate, which means understanding customers better, which leads to improved products and services. That’s things like pricing risk, or pricing the product correctly, and making improvements in performance. And that, of course, benefits organizations by driving down internal costs.

So, in Allen’s opinion, there is “no debate” about the benefits to organizations and customers from the use of big data analytics and AI. But users should not find themselves excluded from all these new possibilities. Therein lies the rub, she warned:

All that potential comes with new risks and challenges. The fact is that, without proper care and control over both the design and use of advanced technology, harms begin to emerge – and we have already started to see them.

Poor decision-making models cause financial harm to consumers, and can cause reputational damage to organizations. Improper pricing can lead to whole social cohorts being locked out of insurance, removing their access to risk cover.

Poor sales and customer service outcomes are other examples, as are misjudged advertising campaigns, weak pricing, and ‘target creep’ in terms of data usage, which has resulted in non-compliance with existing data protection laws.

These are just a few examples of the harms and problems that the industry is facing.

That is quite a list of downsides. And the cumulative impact is a loss of trust between consumers, the public, and whoever is trawling their data. Such failings can have a significant impact on people’s credit histories and financial prospects, for example.


This is why decision-makers should not – intentionally or otherwise – put consumers at risk in pursuit of easy wins, Allen says:

Trust is key to an organization’s success or failure. So, as companies transform their businesses to become more data- and insight-driven, they need to focus on building and maintaining trust.

We are seeing many organizations now starting to take their own initiatives to develop governance around the use of big data and AI. But the pace of progress varies.

Often, we see Financial Services leading the way, with organizations making their own ethical decisions. They are assessing and managing risks based on principles such as fairness, transparency, ‘explainability’, and accountability. Taken together, those principles actively promote confidence.

The ‘true north’ of corporate justice

However, in a deepening economic crisis, struggling consumers may take the idea of Financial Services as a standard-bearer for a fairer society with a grain of salt. But hopefully the companies are sincere.


For Ian West, a partner in a different part of KPMG’s UK practice, as Head of Telecoms, Media and Technology, trust is a “hot topic” in business. He added:

We need to make sure that businesses are ready to deploy AI responsibly. KPMG distils the actions necessary to point an organization towards the ‘true north’ of engagement and social leadership by setting out five pillars for trustworthy AI leadership.

Talk about mixing up your metaphors! But West (or North?) continues:

First, it is important to start preparing employees now. The immediate concern for businesses adopting AI is the impact on the workforce. But organizations can prepare for that by helping employees adjust to the role of technology early in their careers.

There are many ways to do this. But it is worth considering partnerships with schools to create programs that address the skills gap. This will help educate, train, and manage the new AI workforce, and will also help support mental health.

Second, we recommend improving oversight and governance. There should be an industry-wide policy on the deployment of AI, especially on the use of data and standards of privacy. And this comes back to the challenge of trust: AI stakeholders need to trust the business, so it is important that organizations understand the data and processes that feed their AI in the first place.

Third, self-learning algorithms raise cybersecurity concerns, which is one of the reasons why machine-learning governance is so important. Strong security needs to be built into the design of algorithms, and into data management. And of course, there is a bigger discussion to be had about quantum technologies in the medium term.

Fourth, there is the bias that can arise in AI without proper controls in place to mitigate it. Managers must understand the operation of sophisticated systems that can help detect and eliminate that bias.

The data used to train algorithms must be relevant, fit for purpose, and permissible. It is arguably necessary to have a dedicated team for this, as well as to set up an independent review function. Unfairness can be detrimental to customer relationships.

And fifth, businesses need to be more transparent. Transparency underlies all the previous steps. Be transparent not only with your employees – that is, of course, very important – but also give customers the clarity and information they want and need.

Think of this as a contract of trust.

My take

Well said. The important advice here is not to sacrifice user trust in your quest for competitive advantage. Take your customers with you on the journey. Help them see that your smarter business makes their lives better.


