Rishi won’t rush to regulate, giving industry an even bigger role in building trust in AI
Written by Associate Director, Mike Norris.
AI regulation is the hot technology topic on both sides of the Atlantic this Autumn. Setting the scene ahead of this week’s AI safety summit, Rishi Sunak declared last week that he was in ‘no rush’ to regulate the sector. By comparison, the White House Deputy Chief of Staff described the Executive Order launched by the Biden Administration today as “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.” At the same time, vociferous debate continues within the EU about its regulatory approach.
We are definitely seeing the leading Western nations go up a gear in their engagement with AI: its opportunities and its risks. At the core of this intensification is a technology that is evolving more rapidly than anything we have seen before. Facing that reality, leaders across the world are attempting to match that blistering pace, working hard to put their countries at the front of the queue for investment while simultaneously developing their thinking on how regulation can protect society without stifling innovation.
In the UK, despite comprehensively documenting the plethora of risks posed by AI misuse, from more advanced cyber attacks and misinformation to disruption of the labour market, child sexual abuse, fraud and terrorism, the Prime Minister has doubled down on the message that there will be no rush to regulate the sector. This approach opens the door for innovation, but it equally places a far greater onus on companies to take the lead in building and maintaining trust, both in the technology itself and in industry’s ability to manage its development responsibly.
Trust in AI is vital in protecting the sector’s licence to operate and innovate, creating an environment where AI businesses can continue to thrive. This will be a particularly important consideration for industry in markets like the UK, if governments there continue to pursue less intensive regulatory regimes. As well as having a robust governance infrastructure to enable good decision making, it is key that companies effectively communicate a clear narrative evidencing a responsible approach to technology development and implementation.
While stakeholders so far might have been primarily enthused by the upsides of AI, they will become increasingly nuanced in their thinking as the sector matures and, as a result, will quickly lose confidence in businesses that don’t proactively demonstrate a comprehensive approach to both opportunities and risks. This means being bold in tackling some of the more challenging questions around AI head on, not just engaging with its ability to improve commercial performance.
Technology will continue to develop at light speed and an inevitable consequence will be more high-profile cases of AI misuse. If industry as a whole hasn’t laid positive foundations, these challenges to society’s confidence in AI will be that much harder to combat when they hit. Instinctif is trying to support industry in its efforts. For example, last year, we launched our CyberOptic tool to help companies benchmark their preparedness for communicating during cyber attacks. These types of attacks are already becoming more commonplace, and, with AI’s ability to facilitate increasingly sophisticated forms of phishing and data extraction, preparing effectively is essential to support operational continuity and safeguard reputation.
With the UK’s AI Summit fast approaching, the focus of the international community right now is AI safety. However, while having these important discussions, we can’t lose sight of the incredible economic and social opportunity of AI. If our aim is to build trust, a positive narrative is generally more advantageous long term than a negative one. AI has the power to help humanity thrive in more ways than anything that has come before and it is already revolutionising traditional sectors. While we’re still only scratching the surface of what AI could do, businesses must be proactive now in communicating the wider social value of their AI applications, not just seeing the technology as something that can deliver quick commercial wins. If businesses can do this, it will be a great foundation for a sector that can thrive based on public confidence in AI as a force for good.
Looking at the next 12 months, the Prime Minister’s speech last week reinforces why engaging with policy makers about AI at this formative moment in the UK political cycle has to be a priority. Labour is likely to want to champion AI just as much as the Conservatives, particularly its ability to drive economic growth, and will also want to demonstrate its pro-business credentials ahead of the election. That being said, while there may not be a rush from either major party to legislate, it is inevitable that regulation in some form will come. Policy makers from all parties are likely to form their views about how this is done over the coming months as they develop their platforms ahead of the election. That means businesses have a critical window for engagement with politicians right now, before opinions become more firmly rooted.
So, when businesses reflect on this latest series of major AI announcements and look ahead to how the discussion might develop at the AI Safety Summit this week, three important considerations for corporate communications should be:
- Through good governance and communications, businesses need to be bold and engage with the big issues and challenges around AI, not just the upsides. This will demonstrate that the technology can be trusted, and that companies can be trusted to manage its development responsibly.
- AI businesses should work even harder to highlight the positive social impact of their technology, beyond the commercial benefits that come from AI-driven cost savings and efficiencies.
- ‘No rush to regulate’ doesn’t mean no regulation. There is a critical window right now to engage with politicians before policy direction becomes more set.