Ethics and AI – Why Business Needs To Get A Grip
- Responsible AI has high potential for market opportunities
- Firms that get ahead of regulatory pressure on AI will be better prepared for the future
- Irresponsible AI destroys value for investors and damages firms' reputations
Stephen Hawking once said that artificial intelligence could spell the destruction of humanity. Whilst the words might be more akin to something you’d hear in a sci-fi movie trailer, with visions of a menacing Terminator-like machine alongside, his warning is, frighteningly, not too far-fetched when we consider just how capable AI has become in recent years, and how little attention seems to be paid to where the line should be drawn.
With creations such as ChatGPT now on the market, capable of almost anything – from debugging code to crafting articles and essays for opportunistic students, and even creating music and providing interior decorating ideas – we might do well to heed Hawking’s words and proceed with caution.
Sure, we may not be at the point of a Terminator-style takeover, but we can’t forget about the darker developments of the moment: deep fakes of politicians, TikTok algorithms sharing harmful content or propaganda, bots swarming Twitter armed with cryptocurrency scams. In other words, there is work to be done.
That’s not to say, of course, that AI cannot be a force for good: used in an effective and ethical way, it could provide significant advantages to the future of humanity. From self-driving cars to smartphone personal assistants and healthcare-focused chatbots and services, it would seem the tech of the future could make day-to-day life easier in a lot of ways.
To help bring this idea to light, Claudia Zeisberger, Senior Affiliate Professor of Entrepreneurship and Family Enterprise at INSEAD, and Anik Bose, Managing General Partner at BGV, have created ‘The Business Case For Responsible AI’.
Their research paper points to the clear parallels between responsible AI and the ESG movement, with both being, as the researchers suggest, not just a means for good, but a means for good business. They are not alone in their thinking – as Manoj Saxena, chairman of the Responsible Artificial Intelligence Institute, said recently, “responsible AI is profitable AI.”
“The term ‘responsible AI’ speaks to the bottom-line reality of business: investors have an obligation to ensure the companies they invest in are honest and accountable. They should create rather than destroy value, with a careful eye not only on reputational risk, but also their impact on society,” says Professor Zeisberger.
And creating value and attracting investment for AI has never been more on trend.
Additionally, McKinsey & Company has reported that AI could increase global GDP by roughly 1.2 percent per year, adding a total of US$13 trillion by 2030. Considering many firms’ ambitious promises of sustainability and net zero targets by this date, making ethics and responsibility a key feature of AI could very well enable global corporations to step up to the plate and make the possibility of delivering a greener economy all the more reachable.
But what is the roadmap for getting there and where have others gone wrong? The researchers posit three reasons why investors need to embrace and prioritise responsible AI, as well as realise the dire consequences of not doing so.
Firstly, they outline that AI requires guardrails. “One only has to look at social media, where digital platforms have become vehicles that enable everything from the dissemination of fake news and privacy violations to cyberbullying and grooming, for a taste of what happens when companies seemingly lose control over their own inventions”, says Professor Zeisberger.
“With AI, there’s still an opportunity to set rules and principles for its ethical use. But once the genie is out of the bottle, we can’t put it back in, and the repercussions will be sizeable”.
In truth, the responsibilities of AI and of the businesses that create it are intertwined, as scholarly analysis has previously discussed at length. For instance, American author Jack Balkin has argued that the use of AI as an arbitrary censoring tool on platforms such as Facebook and Twitter cannot be a “get out of jail free card” for businesses simply because the AI has developed autonomy, or on the grounds that it somehow holds First Amendment rights of its own to block users.
Secondly, the researchers warn of mounting regulatory pressure, which can carry serious consequences for the unprepared. In particular, they point to government legislation tightening digital regulations on online safety, cybersecurity, data privacy and AI.
“The European Union has passed the Digital Services Act and the Digital Markets Act (DMA). The former aims to establish a safe online space where the fundamental rights of all users are protected,” says Professor Zeisberger. “In a recent study on C-suite attitudes towards AI regulation and readiness, 95 percent of respondents from 17 geographies believed that at least one part of their business would be impacted by EU regulations, and 77 percent identified regulation as a company-wide priority.”
Thirdly, the researchers say that responsible AI has high potential for market opportunities. According to their study, an estimated 80 percent of firms will commit at least 10 percent of their AI budgets to regulatory compliance by 2024, with 45 percent pledging to set aside a minimum of 20 percent. The researchers suggest this regulatory pressure generates a huge market opportunity for PE and VC investors to fund start-ups that will make life easier for corporates facing intense pressure to comply.
Professor Zeisberger said: “Investors wondering about AI’s total addressable market should be optimistic. In 2021, the global AI economy was valued at approximately US$59.7 billion, and the figure is forecast to reach some US$422 billion by 2028. The EU anticipates that AI legislation will catalyse growth by increasing consumer trust and usage, and making it easier for AI suppliers to develop new and attractive products.”
Whilst the business case seems solid, there are practical implications to consider. Truly harnessing and progressing ethical AI will require, the paper suggests, the development of specialised talent, new processes and ongoing monitoring of portfolio company performance. However, despite the challenges involved in implementing ethical AI practices, the researchers say it’s worth the effort.
And, like all things connected to new technologies, the pace of change is fast. AI’s impending regulation, and the market opportunities it presents will transform how PE and VC firms operate. Some will exit, shifting resources to sectors with less regulation. Others, fortifying themselves against reputational risk while balancing internal capabilities, will add screening tools for AI dangers. Those who can position themselves as early adopters might find themselves becoming industry leaders.
As we have mentioned in our top 10 ways to do business better in 2023, robots and automation can provide not just a positive impact but endless opportunity – some of which we cannot yet imagine. But to make this a reality, as Professor Zeisberger’s work shows, we cannot afford to let go of the reins entirely and simply stand in awe of AI’s brilliance.
If we do, we may well be facing the rise of HAL 9000. We cannot say we were not forewarned!