Consultation: AI Cyber Security Code of Practice

ACCA welcomes the opportunity to respond to the open call for evidence issued by DSIT, UK. We support the need for a trusted ecosystem for AI and commend this initiative, particularly given the rapid pace of AI development globally and the heightened cybersecurity considerations that accompany it.

In the section that follows, we provide responses to the specific survey questions posed.

Our perspective on those responses is influenced by the following factors:

  1. ACCA is a professional member body training accountancy and finance professionals. While some of our members develop AI systems as part of their work, they are most likely to be involved as system operators, data controllers, end-users, or assurance providers. The last of these refers to third-party, independent certification or verification of AI systems, particularly in relation to their deployment within organisations rather than their development. The ACCA Qualification gives ACCA students and members the opportunity to upskill in technological advancements, including AI, to enhance their professional skill set. A future integrated AI-driven learning and exam experience will enable ACCA to deliver personalised, tailored education support to help every learner through the ACCA journey. ACCA’s current and planned use of AI across learning and assessment will have a profound impact on our partner network: improving the ease of doing business with ACCA, improving partner learner outcomes through closer collaboration with ACCA, and delivering finance professionals with the optimal experience and skill set for the modern workplace.
  2. We support a principles-based approach because there are too many as-yet-unseen scenarios with AI, and consequently with the cybersecurity of AI. We are therefore supportive of the prominence given to ‘principles’ in this Code and in this call for evidence.
  3. We are a UK-headquartered and truly global body, with offices in over 55 countries and members in more than 180. In general, and across policy areas, we support global standards, and we actively leverage our policy staff based around the world to advocate for consistent global standards and to draw attention to best practices advocated by the UK. We are therefore supportive of the government’s stated approach of starting with the voluntary Code as a step towards a global standard.
  4. We believe that, given the very fast pace of change in AI, industry participants at the frontline of the latest developments are best placed to manage constantly changing and newly emerging cyber risks, while the government is best placed to set up an overarching regulatory structure and principles, giving industry experts space to work within that framework. In that sense, we see the philosophical value of the pro-innovation approach explained in the government’s AI white paper, provided it comes with appropriate safeguards and the ability to revisit requirements if needed, which is consistent with the government’s proposed approach in its response to the views received on the white paper.
  5. As an education body that trains accountancy and finance professionals, we think deeply about skills, and we are acutely aware of the urgent need for upskilling in the AI space, particularly for members like ours who are not technology experts. This consideration will apply to the vast majority of staff in organisations across the country. We see opportunities to expand the Apprenticeship Levy into a more flexible ‘Growth and Skills Levy’ that can be used to fund shorter-term accredited training programmes to upskill and reskill workers on the cybersecurity of AI. Companies should also be able to transfer a greater proportion of their unspent levy funds to their supply chains – we’d suggest increasing this from 25% to 40% – which could unlock millions of pounds to develop AI skills. Ultimately, cybersecurity issues linked to AI require staff to be trained on current and emerging risks; absent a focus on this aspect, the standards and frameworks will fail to achieve impact.

To read the response in full, please download the consultation document on this page.