Of late, I have been receiving a number of requests from directors and clients on how they should be managing 'this new AI risk' in their organisations.
In this post I'd like to discuss the risks, threats and compliance frameworks relevant to the use and development of AI technologies.
AI Risks & Threats
Let's start by talking about the types of threats and risks associated with AI in organisations.
As AI technologies continue to advance and adoption continues to rise, so too do the potential risks and threats they pose. Here are some of the primary concerns facing most organisations:
1. Lack of Control
At the moment, AI usage is the wild west. It's a new and emerging risk that most directors and organisations are not across. Employees can use LLMs (Large Language Models), for example, and this technology is now built into most phones and devices; this unsanctioned usage is what we call 'Shadow AI'. Nearly every website uses chatbots, and nearly every organisation is using products 'containing AI' or considering building AI into its products and services. Most organisations are still unsure where AI is actually being used within the organisation, how it's being used, and where AI, specifically LLMs, falls within the organisation's risk tolerance.
2. Data Privacy and Security Risks
Data privacy is one of the biggest AI-related risks facing organisations at the moment. Some of these risks include:
Privacy Violations and Shadow AI: Employees paste sensitive information and IP into LLMs and other AI-based tools to improve their productivity. However, many LLM services store requests by default, and those requests are periodically reviewed by actual humans for quality and development purposes. This can inadvertently expose sensitive personal information, leading to privacy breaches.
Data Breaches: AI systems often rely on large datasets, making them attractive targets for cyberattacks. Some datasets are held in house, others in cloud repositories, and some in both, and we expect to see a large increase in breaches targeting AI datasets in the coming years.
Data Poisoning: Most AI technologies are built on what we call training sets: the data used to train the AI and shape its responses. Malicious actors can introduce biased or incorrect data into these training sets, compromising the model's accuracy (see the sketch after this list).
Vulnerabilities: Like any system, an AI system is prone to vulnerabilities and security failures, which can lead to inadvertent exposure of stored data and training datasets.
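To make the poisoning risk concrete, here is a minimal sketch (hypothetical data and thresholds, using scikit-learn) showing how flipping a small fraction of training labels degrades a model, plus a simple guardrail that compares performance against a trusted holdout set:

```python
# Minimal data-poisoning illustration (hypothetical data and thresholds).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, random_state=0)

# An attacker flips 15% of training labels ("label-flipping" poisoning).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_holdout, y_holdout)
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned).score(X_holdout, y_holdout)
print(f"clean: {clean_acc:.2f}  poisoned: {poisoned_acc:.2f}")

# Simple guardrail: fail the training pipeline if accuracy on a trusted,
# separately curated holdout set drops below an agreed baseline.
BASELINE = 0.80  # assumption: set from your own historical runs
if poisoned_acc < BASELINE:
    raise RuntimeError("Possible data poisoning - review the latest training batch")
```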
3. Algorithmic Bias and Discrimination
These datasets, and the training process itself, can embed algorithmic bias and discrimination directly into the AI's responses, and this often goes undetected. For example:
Biased Outputs: AI models trained on biased data can perpetuate harmful stereotypes and discrimination. This can in turn lead to reputational damage and loss of trust, due to how individuals are treated in responses. Some real-world examples of bias include:
Racial Biases
Gender Biases (AI struggles particularly with non-binary gender identification)
Hiring Algorithms (for example, Amazon's AI-powered hiring tool was found to be biased against female candidates; the algorithm was trained on historical hiring data, which was predominantly male, leading the tool to favour male applicants)
Loan Approval Algorithms
Stereotyping
Unfair Decision-Making: Biased AI can lead to unfair decisions in areas like hiring, lending, and criminal justice.
4. Job Displacement and Economic Disruption
This is another massive concern facing employees today. I don't know of any tech worker who hasn't thought about the future security of their job given the consistent growth of AI. The main risks and threats include:
Automation of Tasks: AI can automate routine tasks, leading to job displacement and economic disruption.
Skill Gap: The rapid evolution of AI may outpace the ability of the workforce to adapt, creating a skills gap.
5. Malicious Use of AI
Every day we see more attackers leveraging AI. AI allows attackers to develop and deploy attacks (for example, phishing campaigns) quickly and at scale, capitalising on current events around the world. AI is, of course, also being used to help defenders detect and prevent cyber-attacks, but right now these defensive capabilities are lagging slightly behind attackers and their offensive operations. The main risks and threats include:
Deepfakes and Misinformation: AI can be used to create deepfakes, spread misinformation and orchestrate scams (very common), undermining trust and causing social harm.
Autonomous Weapons: The development of autonomous weapons raises ethical concerns about the potential for misuse.
Cyberattacks: AI can be used to launch sophisticated cyberattacks, such as targeted phishing attacks or automated hacking tools, which are now seeing wide adoption.
Financial Market Manipulation: This point is not often considered, yet it is one of the biggest risks facing organisations (especially listed ones) and specific industries, and in my opinion we're not far away from seeing this kind of manipulation in the wild. A malicious actor could develop a sophisticated AI system capable of rapidly analysing news, social media sentiment and market trends (it sounds like Hollywood, I know). This AI could be used to:
Identify Vulnerable Stocks: The AI could pinpoint stocks with high volatility or those susceptible to manipulation tactics such as insider trading.
Coordinate False Information Campaigns: The AI could then generate and disseminate false news or rumours on social media platforms to artificially inflate the price of a target stock. This could involve creating deepfake videos or highly convincing text-based content.
Algorithmic Trading: The AI could then trigger automated trading bots to buy large quantities of the target stock, further driving up the price.
Profiting from the Surge: Once the price reaches a peak, the malicious actor could sell their holdings, profiting from the artificial inflation.
Covering Tracks: The AI could then coordinate a campaign to spread negative information or rumours to drive the price back down, minimising the impact of the manipulation.
6. Lack of Transparency/Explainability and Performance Issues
The last category covers a general lack of understanding of AI systems, along with performance issues. The main risks and threats include:
Black-Box Models: Many AI models are complex and difficult to understand, making it challenging to explain their decision-making processes.
Accountability: Lack of transparency can hinder accountability and make it difficult to identify and address issues.
Suboptimal Decision-Making: Performance issues in AI systems, such as inaccuracies or delays, can lead to suboptimal decisions in critical areas like healthcare, finance and autonomous systems, erode user trust, and potentially cause harm or significant financial losses.
Mitigating AI Risks and Threats
We've now explored the various risks and threats associated with AI, so let's talk about implementing robust strategies to mitigate these challenges.
1. AI Inventory, Accountability & Governance
Firstly, you want to take an inventory of AI technology usage within the organisation. What AI technologies are we using? How are they being used, and for what purpose? What are our employees' habits? What are the data and exposure risks from using these technologies? Most importantly, how does all of this align with our organisational strategy, our risk frameworks and our risk appetite? Does our strategy even reference AI and AI opportunities?
The first step is to gain this understanding across the business, including the why: why is it being used?
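As a starting point, the inventory itself can be a simple structured register. Here is a minimal sketch; the field names are my own suggestion rather than any formal schema:

```python
# A minimal AI-usage register (illustrative field names, not a formal schema).
from dataclasses import dataclass, asdict
import csv

@dataclass
class AIUsageRecord:
    system: str        # e.g. "ChatGPT", vendor chatbot, embedded AI feature
    owner: str         # accountable business owner
    purpose: str       # why it is being used
    data_shared: str   # classes of data sent to the system (PII, IP, none)
    sanctioned: bool   # approved under policy, or shadow AI?
    risk_rating: str   # low / medium / high, per your risk framework

records = [
    AIUsageRecord("ChatGPT", "Marketing", "copy drafting", "none", True, "low"),
    AIUsageRecord("Code assistant", "Engineering", "code completion",
                  "source code (IP)", False, "high"),
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(records[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```

Even a spreadsheet maintained along these lines gives the board something concrete to assess against the organisation's risk appetite.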
Once an inventory is complete, the organisation should identify the management and board members (and internal staff) who will be accountable for AI decisions, oversight and enforcement of AI policies. AI should also be added as an item of oversight for any applicable sub-committees, for example the Audit & Risk Committee (ARC), to provide management reporting and oversight in alignment with the organisation's risk appetite and reporting frameworks.
Do the ARC and management have the necessary skills to manage AI usage and adoption across the organisation? Do we need to arrange additional training?
Organisations will often adopt what we call an AI governance framework, which we discuss later. It's also important to note any expectations and/or feedback from various stakeholders and the potential impact (an impact assessment). For example, if you are a listed company and you changed your core software product suite to be AI-driven, and that AI technology presented issues (for example misleading information or biases) or exposed customer data, that would affect the company's reputation and financials, and subsequently impact the shareholders.
2. Ethical AI Development and Deployment
There are a number of factors to consider when developing AI solutions, and of course when deploying them within an organisation or making them accessible to customers (for example, a chatbot on the website). To ensure that AI is developed and used responsibly and ethically, organisations should do the following.
Adhere to Ethical Guidelines / Standards
Fairness: Develop AI systems that treat all individuals fairly, without discrimination.
Transparency: Make AI systems transparent and explainable, allowing for understanding of their decision-making processes.
Accountability: Establish clear accountability for the development, deployment, and use of AI systems.
Privacy: Protect user privacy and data security when developing and using AI systems, including storage and training data.
Beneficence: It sounds simple, but use AI to benefit society and minimise harm. Don't just utilise AI for the sake of it.
Ensure Bias Mitigation
Diverse and Representative Data: Train AI models on diverse and representative datasets to reduce bias.
Regular Bias Audits: Conduct regular audits to identify and address bias in AI systems.
Fairness Metrics: Use fairness metrics to evaluate the fairness of AI decision-making (a minimal example follows this list).
Human-in-the-Loop: Incorporate human oversight to correct biases and ensure ethical outcomes.
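To show what a fairness metric looks like in practice, here is a minimal sketch computing the demographic parity difference (the gap in positive-outcome rates between two groups) on hypothetical model decisions; the tolerance value is an assumption you would set in your own fairness policy:

```python
# Demographic parity difference on hypothetical model decisions.
import numpy as np

# Model decisions (1 = approved) and a protected attribute (0/1 = group A/B).
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_a = decisions[group == 0].mean()  # positive rate for group A -> 0.8
rate_b = decisions[group == 1].mean()  # positive rate for group B -> 0.4
parity_gap = abs(rate_a - rate_b)
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, gap: {parity_gap:.2f}")

TOLERANCE = 0.1  # assumption: agreed in your fairness policy
if parity_gap > TOLERANCE:
    print("Potential disparate impact - trigger a bias audit")
```

Related metrics such as equalised odds extend this idea by comparing true-positive and false-positive rates across groups rather than raw approval rates.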
Some good resources: the OECD AI Principles and the Partnership on AI.
Human Oversight
Continuous Monitoring: Organisations should monitor their AI systems to identify and address potential issues, and should have a risk-reporting system in place that aligns with their current risk-reporting frameworks and their risk appetite.
Human Intervention: This is most often seen with LLMs; organisations should implement mechanisms for human intervention to override AI decisions when necessary.
Ethical Review Boards: These boards are not too common yet, but organisations can establish ethical review boards to oversee AI development and deployment. Often, though, this is handled at board level instead, or by in-house risk or development teams.
3. Robust Security Measures
Protect AI systems from cyberattacks by implementing robust cybersecurity measures, such as:
Data Privacy and Security: Implement strong data protection measures, including encryption, access controls and regular security audits. Data should be encrypted both at rest and in transit, and access controls should restrict access to AI components such as datasets and stored data (see the sketch after this list).
Model Security: For example, model protection (guarding AI models from theft and unauthorised access), model-poisoning protection (techniques to detect and mitigate poisoning attacks), and defences against adversarial attacks that can manipulate AI models.
Regular Security Assessments: Conduct regular security assessments to identify and address vulnerabilities in the AI and other systems.
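As a concrete illustration of protecting a dataset at rest, here is a minimal sketch using the Python cryptography library. The file names are hypothetical, and in production the key would live in a key management service rather than alongside the data:

```python
# Encrypting a training dataset at rest (sketch using the "cryptography"
# library; in production, keep the key in a KMS/HSM, never next to the data).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # symmetric key - protect and rotate this
fernet = Fernet(key)

with open("training_data.csv", "rb") as f:      # hypothetical dataset file
    ciphertext = fernet.encrypt(f.read())

with open("training_data.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt only inside the trusted training environment.
plaintext = fernet.decrypt(ciphertext)
```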
4. Transparency and Explainability
Model Interpretability: Develop techniques to make AI models more interpretable, enabling understanding of their decision-making processes (a simple example follows this list).
Transparency in AI Usage: Be transparent about the use of AI in products and services, especially when it impacts individuals/stakeholders.
Accountability: Establish clear accountability frameworks for AI-related decisions and outcomes within the organisation and with the board.
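One accessible interpretability technique is permutation importance: shuffle one feature at a time and measure how much the model's performance drops. A minimal sketch with a hypothetical model and dataset, using scikit-learn:

```python
# Permutation importance: a simple, model-agnostic interpretability check.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature 10 times and record the average accuracy drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Features whose shuffling barely moves the score contribute little to the model's decisions, which gives non-specialists a tangible way to interrogate a 'black box'.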
5. Responsible AI Governance
AI Governance Framework: As mentioned previously, organisations should establish a comprehensive AI governance framework that complements their existing risk management framework(s) and aligns with the overall risk appetite. The purpose of the AI framework is to oversee AI development, deployment and use. There are a tonne of AI frameworks out there that an organisation can adopt or align to; see the resources below.
Risk Assessment: Conduct regular risk assessments to identify and mitigate potential risks. It's crucial to consider a wide range of factors, including technical, ethical, and societal implications. At a high level your risk assessment should encompass:
Technical Risks
Model Reliability: Ensuring the AI model's accuracy, precision, and robustness.
Data Quality: Assessing the quality and representativeness of training data.
Adversarial Attacks: Identifying vulnerabilities to adversarial attacks that can manipulate the model's output.
Model Decay: Monitoring the model's performance over time and addressing potential degradation.
Ethical Risks
Bias and Fairness: Identifying and mitigating biases in the model's decision-making.
Privacy: Ensuring the privacy of personal data used to train and operate the AI system.
Transparency and Explainability: Making the model's decision-making process understandable and accountable.
Job Displacement: Assessing the potential impact of the AI system on employment and the economy.
Societal Risks
Misuse and Malintent: Considering the potential for malicious actors to misuse the AI system.
Unintended Consequences: Identifying and mitigating unintended negative consequences of the AI system.
Ethical Implications: Evaluating the ethical implications of the AI system's decisions and actions.
Specific Risk Assessment Techniques
Threat Modeling: Identifying potential threats to the AI system and assessing their likelihood and impact (a minimal scoring sketch follows this list).
Vulnerability Scanning: Scanning the AI system for vulnerabilities, such as security flaws or data privacy issues.
Red Teaming / Penetration Testing: Simulating attacks to identify weaknesses in the system's defenses or to test inputs, functionality and responses.
Ethical Impact Assessment: Evaluating the ethical implications of the AI system's design and use.
Compliance with Regulations: Ensure compliance with relevant regulations and standards, such as GDPR and CCPA; we will cover this in more detail in a later section.
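To ground the threat-modelling step above, here is a minimal likelihood-times-impact scoring sketch; the threats, scores and rating bands are illustrative, not a real assessment:

```python
# Likelihood x impact risk scoring (illustrative threats and scores).
threats = [
    # (threat, likelihood 1-5, impact 1-5)
    ("Prompt injection against customer chatbot", 4, 3),
    ("Training-data poisoning via public sources", 2, 5),
    ("Model theft through exposed inference API", 3, 4),
    ("Biased outputs in loan-approval model", 3, 5),
]

def rating(score: int) -> str:
    if score >= 15:
        return "HIGH"
    if score >= 8:
        return "MEDIUM"
    return "LOW"

for threat, likelihood, impact in sorted(threats, key=lambda t: -(t[1] * t[2])):
    score = likelihood * impact
    print(f"{rating(score):6} ({score:2})  {threat}")
```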
Some good resources:
OECD AI Principles (more a set of principles than a framework, but I'll add it here anyway)
6. Continuous Learning and Adaptation
This means staying abreast of the latest AI news and continually developing skills:
Stay Updated: Stay informed about the latest AI advancements and potential risks.
Continuous Learning: Encourage continuous learning and skill development to adapt to the evolving AI landscape.
Iterative Improvement: Regularly evaluate and refine AI systems to improve their performance and address emerging challenges.
There are a myriad of learning resources and news sources available online; a great introductory course is offered by AttackIQ.
7. Vendor Risk Assessment
This is typically your first step if you are looking to procure an already-built AI solution from a vendor. You should do due diligence on the vendor to assess their capabilities, reliability and alignment with your organisation's goals and objectives. Putting aside the standard due diligence practices (for example cyber-security practices, financial stability, customer references, data management etc.) and looking at it purely from an AI perspective, you should be evaluating the following (a simple scoring sketch follows this list):
AI Expertise: Evaluate the vendor's technical expertise, including their team's experience, research capabilities and track record in AI development.
Model Quality: Assess the quality and performance of the AI models, including their accuracy, precision, and robustness.
Data Quality and Privacy: Evaluate the vendor's data collection, storage and usage practices to ensure compliance with data privacy regulations (e.g. GDPR, CCPA), and evaluate how their solution 'learns' and on which datasets.
Security and Compliance: Assess the vendor's security measures, including data protection, access controls, the security employed for the solution itself, and incident response plans. A detailed vendor questionnaire can be found here.
Scalability and Performance: Evaluate the vendor's ability to scale their AI solutions to meet your organization's growing needs.
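To compare vendors across these criteria consistently, a simple weighted-scoring model works well. A minimal sketch; the weights, criteria names and scores are illustrative and should be adjusted to your own due-diligence process:

```python
# Weighted vendor scoring across the AI-specific criteria above
# (weights and scores are illustrative only).
WEIGHTS = {
    "ai_expertise": 0.20,
    "model_quality": 0.25,
    "data_privacy": 0.25,
    "security_compliance": 0.20,
    "scalability": 0.10,
}

vendors = {  # scores out of 5, taken from questionnaire responses
    "Vendor A": {"ai_expertise": 4, "model_quality": 3, "data_privacy": 5,
                 "security_compliance": 4, "scalability": 3},
    "Vendor B": {"ai_expertise": 5, "model_quality": 4, "data_privacy": 2,
                 "security_compliance": 3, "scalability": 5},
}

for name, scores in vendors.items():
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    print(f"{name}: {total:.2f} / 5.00")
```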
8. Implementing User Policies
Your organisation should ensure that it has policies in place for all employees, providing guidelines on the usage of technologies such as LLMs and ensuring adherence to other security policies. A sample policy can be found here. To complement your policies, it is recommended that you run awareness training for employees focused on the risks associated with AI, for example accidental exposure of information via LLMs, and attacks such as deepfakes and AI-assisted phishing.
AI Standards, Legislation and Compliance
We've now covered the risks and threats from the use of AI technologies, along with recommended mitigation measures. Let's cover the applicable standards, legislation and compliance.
AI is still in its early days, and as a result there isn't a whole lot of legislation just yet, but it is definitely on the way; several countries and regions are actively working on regulatory frameworks to govern AI development and deployment. Here's what exists so far.
Australia
We don't have a defined standard as yet, but DLA Piper have a good write-up on how things are progressing: AI Regulation in Australia: What we know and what we don't | DLA Piper
At this stage, what we have are:
The Voluntary AI Safety Standard, which provides practical guidance for businesses using high-risk AI systems. It sets out best practices for AI development, deployment and use.
This sits alongside the AI Ethics Principles, which guide the development and use of AI. The principles cover areas such as human-centred values, fairness and transparency.
Proposed Mandatory Guardrails for High-Risk AI
The Australian government is also considering introducing mandatory regulations for high-risk AI systems. These regulations would likely cover areas such as data privacy, transparency, and accountability.
European Union
AI Act: This comprehensive regulation aims to classify AI systems based on their risk level and impose specific obligations on developers and deployers. It covers a wide range of AI applications, from high-risk to low-risk.
Key provisions of the EU AI Act:
Risk-Based Approach: The Act categorises AI systems into four risk levels: unacceptable risk, high-risk, limited-risk, and minimal-risk.
Prohibited AI Practices: Certain AI practices, such as real-time biometric identification in public spaces, are outright banned.
Strict Requirements for High-Risk AI: High-risk AI systems, such as those used in critical infrastructure or healthcare, will be subject to rigorous requirements, including:
Robust risk assessments
Data quality and quantity
Human oversight
Transparency and explainability
Cybersecurity measures
Record-keeping
General Purpose AI (GPAI) Models: The Act introduces specific requirements for GPAI models, such as transparency and robustness.
Market Surveillance: The EU will establish a robust market surveillance system to monitor compliance with the AI Act.
For more detailed information and the official text of the EU AI Act, please refer to the following link: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
United States
While the US has not yet enacted comprehensive federal AI legislation, the AI Bill of Rights provides a significant framework for ethical AI development and deployment. This blueprint outlines five key principles:
Safe and Effective Systems: AI systems should be safe, effective, and accountable.
Algorithmic Discrimination Protections: AI systems should not discriminate on the basis of race, color, religion, sex, national origin, age, disability, or other protected characteristics.
Data Privacy: AI systems should be designed to protect people's privacy.
Notice and Explanation: People should be informed when an automated system is used to make an important decision that affects them, and they should be able to understand the decision.
Human Alternatives, Consideration, and Fallback: People should have the option to opt out of automated systems and have access to human intervention.
More information can be found here:
What is the AI Bill of Rights? | IBM
Blueprint for an AI Bill of Rights | OSTP | The White House
Sector-Specific Regulations
While not directly targeting AI, various sector-specific regulations in the US indirectly impact AI development and use. For example, in healthcare, the Health Insurance Portability and Accountability Act (HIPAA) imposes strict data privacy and security standards, which must be considered when developing and deploying AI-powered healthcare solutions.
China
China doesn't have an AI-specific standard as yet, but the following laws and regulations apply:
Algorithmic Recommendation Regulations: These regulations aim to regulate algorithms used in recommendation systems, including those powered by AI.
Personal Information Protection Law: This law imposes strict data privacy requirements, which can impact the development and use of AI systems.
Other Notable Regions
Canada: While there isn't specific AI legislation, Canada has implemented regulations related to privacy, cybersecurity, and intellectual property, which have implications for AI development.
Singapore: Singapore has also taken a proactive approach to AI regulation, focusing on ethical guidelines and industry standards.
Japan: Japan has implemented guidelines for AI development and use, emphasizing safety, security, and ethics.
What about Frameworks, Certifications and Compliance?
At this stage, only one standard is generally talked about (and audited) when it comes to AI: ISO 42001, which provides a framework for organisations to manage AI risks and ensure compliance.
Understanding ISO 42001
ISO/IEC 42001 is an international standard for AI management systems, outlining how organisations should manage the risks associated with AI. It covers a wide range of AI applications, from autonomous vehicles to healthcare diagnostics. Key aspects of ISO 42001 include:
Risk Assessment: Identifying and assessing potential risks associated with AI systems.
Ethical Considerations: Ensuring AI systems are developed and used ethically.
Data Privacy and Security: Protecting sensitive data used and generated by AI systems.
Transparency and Explainability: Making AI systems transparent and understandable.
Human Oversight: Maintaining human oversight and control over AI systems.
Information on the standard can be found here: https://www.iso.org/standard/81230.html
Other Relevant Regulations
While ISO 42001 provides a comprehensive framework, it's important to call out that there are other regulations and acts that organisations need to consider, depending on their industry and geographic location. Some key regulations include:
GDPR (General Data Protection Regulation): Ensuring compliance with EU data privacy laws, especially when AI systems process personal data.
CCPA (California Consumer Privacy Act): Adhering to California's privacy laws, particularly for organizations operating in the state.
Australian Privacy Principles (APP): Ensuring compliance with Australian data privacy laws, particularly when AI systems process personal information.
Other Industry-Specific Regulations: Industries like healthcare, finance, and autonomous vehicles have specific regulations governing AI usage.
Key Compliance Strategies
To navigate the regulatory landscape when it comes to leveraging AI and to address compliance requirements (while mitigating risks and building trust with stakeholders), organisations should adopt the following strategies:
Risk Assessment and Management: Conduct regular risk assessments to identify potential risks and implement mitigation strategies.
Data Privacy and Security: Implement robust data protection measures, including data encryption, access controls, and regular security audits.
Ethical AI Development: Adhere to ethical principles, such as fairness, accountability, and transparency.
Transparency and Explainability: Make AI systems transparent and understandable, especially for high-risk applications.
Human Oversight: Maintain human oversight to ensure AI systems are used responsibly.
Regular Compliance Audits: Conduct regular audits to assess compliance with relevant regulations and standards.
Stay Updated on Regulatory Changes: Monitor regulatory developments and adapt compliance strategies accordingly.
Leveraging AI Opportunities
It's not all bad! We also need to talk about the opportunities that come with AI. AI obviously carries a lot of risk, but it also has the potential to revolutionise industries and solve complex global challenges. Some of the standout benefits include:
Enhanced Decision-Making: AI can analyse vast datasets to identify patterns and trends that humans may miss, leading to more informed and effective decision-making.
Increased Efficiency and Productivity: Automation of routine tasks can streamline operations, reduce costs and free up human workers to focus on more strategic initiatives.
Innovation and New Business Models: AI can fuel innovation by enabling the development of new products, services, and business models.
Improved Customer Experience: AI-powered personalisation can enhance customer satisfaction and loyalty.
Scientific Breakthroughs: AI can accelerate scientific research and discovery, leading to breakthroughs in fields like medicine, materials science and climate change. We're already seeing AI technologies used extensively in medicine, for example in identifying cancers and other serious health issues.
Competitive Advantage: AI can be a business enabler, giving your organisation a competitive advantage in its market.
Wrapping up
As AI continues to evolve, it presents both immense opportunities and significant risks. To harness its potential while mitigating its downsides, organisations must adopt a proactive and strategic approach. Board members and leaders play a critical role in steering their organisations through this complex landscape.
By prioritising AI governance, ethical considerations and robust security measures, organisations can ensure that AI is used responsibly and for the benefit of society. It's imperative to invest in AI talent, foster a culture of innovation and stay abreast of the latest advancements. By doing so, organisations can position themselves for long-term success in the age of AI.
If you are a leader, here are your key takeaways:
Understand the AI Landscape: Gain a comprehensive understanding of AI technologies, their potential applications, and associated risks.
Establish Strong Governance: Implement robust AI governance frameworks to ensure ethical and responsible AI use.
Prioritise Data Privacy and Security: Protect sensitive data and mitigate cyber threats.
Foster a Culture of Innovation: Encourage experimentation and innovation while maintaining ethical considerations.
Invest in Talent and Training: Build a skilled workforce capable of developing, deploying, and managing AI systems.
Monitor and Adapt: Continuously monitor the AI landscape and adapt strategies to emerging trends and challenges.