KPMG says AI requires a trust contract

(Photo by Gerd Altmann from Pixabay)

The UK may be basking in a summer of artificial intelligence – but only if we avoid the shadows of data bias, the cold winds of poorly designed algorithms, myopic goals, and the misplaced belief in quick cost savings described in my previous report.

But what practical steps can decision makers actually take to highlight the positives and avoid the negatives? And just as importantly, why should they bother?

One answer is that it’s not just about ethical behavior at the macro level – although that is vital in socially connected global markets.

Lianne Allen is a Partner in Financial Services Data at advisory and services giant KPMG UK. Speaking this week at the Westminster eForum on the next steps for AI, she made the point that all organizations must use AI responsibly to make their businesses smarter and more responsive. And in return, other benefits will accrue – economic and social. She said:

Consumers and investors are demanding more of organizations, and this applies across all industries. Whether that is the upside of better, frictionless services for consumers, an end to industry experimentation in favour of greater personalization, or a desire that industries such as financial services should do more to help tackle inequality and promote sustainable finance.

Expectations for companies to innovate and realize real value from new data and technologies continue to grow rapidly. Adopting advanced technologies, including machine learning [ML] and AI, gives organizations an advantage in responding to those demands.

In this sense, improving what Allen calls the “customer experience journey” is just as important as the general desire for ethical behavior, because in the long run the business will become more considerate and sustainable, as she suggested:

This means better and faster decisions. It means increased accuracy, which means a better understanding of customers and leads to better products and services. These are things like pricing risk or pricing products more accurately, and enabling incremental improvements in operational efficiency. And that, of course, has been really helpful for organizations in reducing internal costs.

So, in Allen’s view, there is “no disagreement” about the benefits to organizations and customers from using big data analytics and AI. But adopters should avoid getting carried away by all these new possibilities. This is where the real danger lies, she warned:

All of this potential comes with new and heightened risks. The truth is that without proper controls and governance over both the design and use of advanced technologies, we are already starting to see unintended harms.

Unfair bias in decision-model outcomes causes financial harm to consumers and can cause reputational damage to organizations. Unfair pricing practices can lead to whole groups in the community being locked out of insurance, removing their access to risk pooling.

Selling products and services that are inappropriate or of poor value to customers is another example, as are targeted advertising, dynamic pricing, and the use of data beyond its stated purpose, resulting in non-compliance with existing data protection laws.

These are just a few examples of the damages and challenges facing the industry.

That is quite a list of downsides. The common thread is the loss of trust between consumers/citizens and those who handle their data. Such fallout could have far-reaching effects on individuals’ credit histories and financial inclusion, for example.

Allen said this is why decision makers should never – intentionally or otherwise – jeopardize consumer confidence in their pursuit of easy wins:

Trust is the determining factor in the success or failure of an organization. So, as companies move at such a rapid pace, transforming their business to become more data- and insight-driven, they need to focus on building and maintaining that trust.

We are now seeing many organizations embark on their own initiatives to build governance and controls around the use of big data and AI. But the pace of progress varies.

Usually, we see financial services leading the way, and these organizations set their own ethical principles. They put them to work and take a risk-based approach, aligned with core principles such as fairness, transparency, “interpretability” and accountability. Collectively, these measures actively build trust.

“True North” corporate ethos

However, in a deepening recession, struggling consumers may take the idea of financial services leading the way towards a fairer society with a pinch of salt. But let’s hope the companies are sincere.

For Ian West, a Partner in a different division of KPMG’s UK operations – head of communications, media and technology – trust is the “golden thread” of business. He added:

We need to ensure that companies are ready to deploy AI responsibly. KPMG summarizes the actions needed to steer any organization toward the “True North” of corporate and civic ethics by laying out five guiding pillars for ethical AI.

Talk about mixing your metaphors! But West (or North?) continued:

First, it is necessary to start preparing personnel now. The most pressing business challenge when implementing AI is workplace disruption. But organizations can prepare for this by helping employees adapt to the role of machines in their jobs much earlier in the process.

There are many ways to do this, but it is worth considering partnering with academic institutions to create programs that meet those skills needs. This will help educate, train, and manage the new AI-powered workforce, as well as support employee wellbeing.

Second, we recommend developing strong oversight and governance. There should be enterprise-wide policies around AI deployment, specifically around data use and privacy standards. Again, this comes back to the challenge of trust: AI stakeholders need to trust the business, so it is imperative that organizations fully understand the data and frameworks behind their AI in the first place.

Third, autonomous algorithms raise concerns about cybersecurity, which is one reason why managing machine learning systems is an urgent priority. Strong security must be built into algorithm creation and data management. And of course, we can have a bigger conversation about quantum technologies in the medium term.

Fourth, there is the unfair bias that can occur in AI without the right governance or controls to mitigate it. Leaders must make the effort to understand the workings of complex algorithms, which can help eliminate this bias over time.

The attributes used to train the algorithms must be relevant, appropriate to the goal, and permissible. It is arguably worth having a team dedicated to this, as well as setting up independent reviews of important models. Bias can lead to harmful social impacts.

Fifth, companies need to increase transparency, which is the basis of all the previous steps. Not only should you be transparent with your workforce – which is, of course, very important – but you should also give customers clarity and the information they want and need.

Think of this as a contract of trust.

My take

So, the key lesson is not to sacrifice user trust in your quest for competitive advantage. Take your customers with you on a shared journey. Help them make their lives better, and your business smarter.
