AI and Risk Culture in Focus at GRC Conference in Stockholm

As GRC professionals gathered at the annual GRC conference in Stockholm last week, it was clear that AI would dominate the agenda. Experts and practitioners discussed the future of internal audit and risk management, circling around the hot topics of governance and data quality. At the same time, given the current geopolitical environment and escalating financial and organised crime, companies are facing a range of new and more aggressive risks.

Risk managers are now in the driving seat of risk culture, and organisations are moving from risk administration to risk management. The core issue at stake is the need to know what to protect and what to protect it from. Cyber security remains an absolute top priority, in combination with financial crime risk, against a threat landscape that evolves at unprecedented speed by leveraging AI and social engineering. In light of recent large-scale cyber security cases in Sweden, board- and CEO-level conversations need further reinforcement to build a real understanding of actual vulnerabilities.

Rasmus Forssblad, Director at Advisense, who delivered a lecture on AI and internal auditing at the conference, says that a general theme is that the risk landscape has shifted rightwards and upwards within the risk matrix.

In the past, risks were mainly associated with business potential, something with an upside, for example in the context of assessing new products, market entry or regulatory risks. Now the whole risk dialogue and discussion about exposure is increasingly negative. If risks are not addressed, the company can be severely damaged. This might of course reflect the general outlook in terms of geopolitical instability and the rise of financial and organised crime here in Sweden.

Rasmus Forssblad, Director at Advisense

The new risk landscape 

According to cyber security expert Anne-Marie Eklund Löwinder, also speaking at the conference, four out of five data breaches are caused by human error. Her message to the audience was "trust no one, check everything". The frequency and cost of ransomware are escalating, and cyber attacks are orchestrated by organised crime in franchise-like models. Examples were given of recent advanced financial crime cases, including one where a company was defrauded of 25 million HKD through a sophisticated AI deepfake live Teams meeting. The repeated advice is that interaction across sectors is key (something which was highlighted by experts in the Advisense Talks episode on the symbiotic interplay between cyber and financial crime): organisations and personnel need to turn on their mental firewall.

The general message is that everyone should assume that they have been hacked or that they will be hacked. Investment levels are too low in relation to the risk levels, prevailing short-term thinking causes risk exposure, and there is still too little recognition of vulnerabilities at board and CEO levels.

On the topic of social engineering and the human factor, experts discussed incentives for users not only to follow security rules but also to maintain an effective risk management culture within the organisation. Is it enough to understand the magnitude of the consequences of a cyber attack, or of working with a supplier that engages in money laundering? What incentives are needed to ensure employee buy-in?

Governance is key, and organisations need to consider people, processes and technology. The conference confirmed that reinforcing this is more important than ever. Senior management should make sure that there are designated resources and regular reviews, validate results (including whether there are any at all), clarify who reports to whom, and ensure that a continuity plan is drawn up, implemented and, moreover, tested to validate that it is properly and regularly maintained. Furthermore, cyber security should be included in third-party due diligence, regardless of a vendor's size and track record, as proven by recent experiences in Sweden. Vendor evaluation is critical to ensure trustworthiness: step one is to write the right to audit into the contract, and then follow up on it.

The value of AI and future uses 

When ChatGPT was banned in Italy, the Italian stock market lost 20 per cent, according to Marc Eulerich, Professor at Universität Duisburg-Essen, during his lecture on Generative AI: Transforming Internal Auditing & Governance. The country's data protection watchdog said its developers did not have a legal basis to justify the storage and collection of users' personal data in order to train the site's algorithms. The ban has since been lifted, but the temporary turmoil says something about how the market values the potential that AI holds.

Although the agenda is dominated by the topic of AI, most organisations are only just starting to work on their AI strategy. Data quality is key, as is the step towards automation and the use of AI in processes involving large amounts of manual work. Experts report seeing, on a weekly basis, output from AI applications going wrong. It is obvious that if you train AI on human data, human errors will follow into the model.

A lot of organisations have not come as far as one would like to think and are struggling with legacy systems. Until old systems are updated or replaced, it is questionable whether AI can be usefully added. Progress is being made in the areas of document and data analytics, but also in planning, forecasting and report writing. AI should be regarded as a complementary functionality: an enabler to substantially improve processes that should have been much more efficient anyway, and to improve quality. It will not be long before AI can support the internal audit function in finding evidence both internally and externally, and in analysing if and how various guidelines are effectively complied with in an organisation.

Companies need to carefully consider not only the risks and threats but also the possibilities of AI, and those that do not actively address this potential will fall behind.

On the topic of AI and internal auditing, we at Advisense advise first addressing a few fundamental issues:

  • Establish proper governance for the use of AI in your organisation, with clear guidelines on how AI should be used internally and externally.
  • Address privacy and data leakage, with specific routines and processes in place to protect user data and prevent leakage.
  • Review purpose and results, to ensure that the use of AI is in line with expectations and goals, and to safeguard that no bias or prejudice influences outputs.

To continue the conversation, please reach out to Charlotte Eklund, Director and Head of Internal Audit, or Markus Persson, Managing Director, Cyber & Digital Risk Sweden.

