
Juha Saarinen speaks with Kordia Group's Aura Infosec about AI security for businesses

Alastair Miller, Aura Infosec. /supplied

Artificial intelligence and machine learning are becoming commercialised and integrated into business applications, but what are the information security implications for organisations wanting to implement the technology? 

To get an understanding of the issue, we spoke to Kordia Group's Aura Infosec, and its principal advisory consultant Alastair Miller.

Miller said AI tools can provide time-saving and other benefits to the business, but they also come with significant risks that need to be understood and managed.

AI capabilities are evolving rapidly, which means the issues and risks of AI are changing and may not be fully understood, Miller explained.

Here are some things Miller suggests watching out for when using AI.

Quality of responses - can you rely on what the AI tells you?

AI systems vary in the quality of their responses.

They can provide unreliable responses as a result of:

  • Limited relevant training data
  • Biases or knowledge gaps of the people who trained the algorithm
  • Deliberate manipulation by other users (adversarial training)
  • Users ‘tuning’ queries to obtain a favourable or desired response, thus reinforcing the user’s bias

The current generation of AI chatbots tends to present opinions, predictions and contentious information in a way that can be interpreted as uncontested fact.

They are also prone to confabulation, where responses contain plausible but fictitious information (e.g. providing made-up statistics or citing research articles that don’t exist), especially when poorly chosen queries are submitted. This is also called hallucinating.

Therefore the information provided by AI tools should be used with caution. AI tools should be assessed to make sure they’re suitable before they are used for business purposes. Users of AI tools should be aware of their limitations and benefits when querying or interacting with these tools.

Responses from AI systems should be reviewed by a suitably qualified person before relying on them for important decisions. 
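
One practical mitigation, sketched minimally below, is to check whether the references an AI cites actually exist. The sketch assumes Python with the `requests` package and uses the public Crossref API to look up DOIs; it can only catch fabricated identifiers, not fabricated findings, so it supplements rather than replaces human review.

```python
# Minimal sketch: verify that DOIs cited in an AI response resolve in Crossref.
# Assumptions: the `requests` package is installed; the public Crossref REST API
# (api.crossref.org) is reachable and needs no API key.
import re
import requests

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

def check_cited_dois(ai_response: str) -> dict:
    """Map each DOI found in the text to whether Crossref knows about it."""
    results = {}
    for doi in set(DOI_PATTERN.findall(ai_response)):
        doi = doi.rstrip(".,;)")  # strip trailing punctuation picked up by the regex
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        results[doi] = resp.status_code == 200
    return results

if __name__ == "__main__":
    sample = "See Smith et al. (2021), doi:10.1000/made-up.citation.12345."
    for doi, exists in check_cited_dois(sample).items():
        print(f"{doi}: {'found' if exists else 'NOT FOUND - possibly fabricated'}")
```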

Data protection - be careful what you share

Only private AI/ML systems should be used for uploading any data, and any private AI/ML system must have adequate technical and business sign-off.

Data uploaded to Public AI/ML systems may be shared with other people without the user’s knowledge:

  • Service providers may allow their staff or third parties to access user data in order to improve the model or provide support
  • Some models may include elements of user data in responses to other users
  • Copyrighted or proprietary company data may be exposed to other users outside the organisation 

Confidential or personal information should only be uploaded to approved Private AI systems that minimise the risk of data exposure.
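
As an illustration of the kind of guardrail that can sit in front of any AI tool, here is a minimal sketch that redacts obvious personal identifiers before a prompt leaves the organisation. The patterns are illustrative assumptions only; real deployments would rely on proper data-loss-prevention tooling and an approved private endpoint rather than a handful of regular expressions.

```python
# Minimal sketch: crude redaction of obvious identifiers before a prompt is
# sent to an AI service. The patterns below are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),                               # email addresses
    (re.compile(r"\b(?:\+?64|0)[\s-]?\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}\b"), "[PHONE]"),  # NZ-style phone numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),                          # card-like digit runs
]

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Customer jane.doe@example.com, ph 021 555 1234, card 4111 1111 1111 1111"))
```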

Be mindful of intellectual property

Public AI systems may keep and use user-submitted data to improve their algorithms. It’s important to understand the system’s terms of use before uploading any proprietary information. You should get legal advice before uploading third-party content to an AI system to confirm whether it is permitted by the licence or terms of use for that content.

There are uncertainties about who owns the intellectual property rights of content generated by AI. You should get legal advice before using AI to create text, images or audio for publication.

Public perception of AI isn't always positive

Creating content with AI instead of human artists or photographic models can lead to controversy and negative public response. Care should be taken when using AI to generate content, especially when the content:

  • Appears to show real people who were not involved (“deep fakes”)
  • Appears to show people from marginalised and underrepresented groups
  • Includes culturally significant designs and motifs
  • Emulates or could be mistaken for the work of a particular artist or creator

What to consider for staff use of AI

It is important that staff follow these guidelines when interacting with any AI/ML system.

Do not:

  • Paste or upload any company or partner source code into a public or unapproved system.
  • Paste or upload any confidential, sensitive or personally identifiable information (PII) into a public or unapproved system.
  • Paste or upload any company business processes or strategic documents into a public or unapproved system.
  • Paste or upload any material owned by or licensed from third parties into public or unapproved systems.
  • Use public or unapproved systems to create content for any project that will be published or shared outside the organisation.
  • Rely on AI/ML-generated responses and content without first sanity-checking or peer-reviewing the response for reasonableness and accuracy.

Do:

  • Review any licence agreement or terms of use before using any public or unapproved AI system and only use systems in a way that abides by these agreements.
  • Get advice from your manager, or from Information Security, Data, Legal or other domain owners in your organisation, if you aren’t sure about the risks involved in using AI systems at work.
  • Thoroughly check any response for accuracy, bias and the possible presence of copyrighted material.
  • Treat AI-generated predictions (eg. financial forecast data or behavioural risk scores) with scepticism – they’re indicative, not absolute fact.
  • For any code suggested by an AI, ensure it goes through all the review processes and scanning tools that any developer’s code would have to pass (see the sketch after this list).
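
As a rough illustration of that last point, the sketch below runs an AI-suggested snippet through the `bandit` security linter before it is accepted. The choice of tool is an assumption for the example; substitute whatever review processes and scanners your organisation already mandates for human-written code.

```python
# Minimal sketch: put AI-suggested code through the same static checks as any
# other code. Assumes the `bandit` security linter is installed (pip install bandit).
import subprocess
import tempfile
from pathlib import Path

def scan_ai_snippet(code: str) -> bool:
    """Write the snippet to a temporary file, run bandit, and report pass/fail."""
    with tempfile.TemporaryDirectory() as tmp:
        snippet = Path(tmp) / "ai_snippet.py"
        snippet.write_text(code)
        result = subprocess.run(
            ["bandit", "-q", str(snippet)], capture_output=True, text=True
        )
        if result.returncode != 0:
            print(result.stdout)  # bandit prints the findings it flagged
        return result.returncode == 0

if __name__ == "__main__":
    suggested = "import pickle\npickle.loads(open('data.bin', 'rb').read())\n"
    print("Passed checks" if scan_ai_snippet(suggested) else "Needs human review")
```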

What to consider for organisational use of AI

It is important that the organisation and staff follow these guidelines when purchasing or developing an AI/ML system.

Do not:

  • Accept any marketing material about an AI/ML system as accurate; it is usually inflated.
  • Integrate an AI system with Company systems without express sign-off from the data owners and security teams.
  • Upload PII or commercially sensitive data to a system that is being trialled or evaluated, unless there is a non-disclosure agreement in place with the vendor and you understand how the uploaded data will be stored, used, and whether it can be removed at the end of the trial.

Do:

  • Consider whether AI solutions are the best way to solve your business need
  • Follow standard business case criteria and ensure sign-off from all relevant parties, especially data governance
  • Conduct a Privacy Impact Assessment (PIA) if the system will process personal data
  • Ensure as much understanding of the model as possible so good explanations can be given for results
  • Select systems that allow the organisation to retain ownership and control of its data and intellectual property if the system will have access to sensitive or proprietary information
  • Rigorously test the AI before and during implementation to understand its limitations and biases (see the testing sketch after this list).
  • Understand that the output of an AI reflects the developers and training data sets that created it, and the queries submitted to it. Be aware of the system’s limitations and biases and put counter-measures in place for them if necessary, eg. before enacting commercially significant changes.
  • When developing, ensure sensitive data in training sets is anonymised and that the data is as representative of the company’s customer base, and as unbiased, as possible.
  • Verify the terms of any copyright or third party intellectual property before including such content in training data sets
  • Ensure any client or staff member interacting with an AI system is aware of that fact and has an option to contact a human being.
  • Provide onboarding and training to staff using AI systems that covers the system’s limitations and appropriate use, including the type of data that can and cannot be uploaded to the system, and the impact of unconscious or intentional bias in queries submitted to such systems.
  • Plan for updates or revisions to the AI engine, rules and syntax – specialised development and testing skills may be necessary, and the AI system may need re-training once changes have been applied
  • Cater for the need to apply software patches to the underlying components of the platform, and for their potential impact on the underlying AI engine
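
To illustrate the kind of pre-implementation testing mentioned above, here is a minimal sketch that compares a model's accuracy and positive-prediction rate across groups in an evaluation set. The data, group labels and metrics are illustrative assumptions; large gaps between groups are a prompt for investigation, not an automatic verdict.

```python
# Minimal sketch: compare a model's behaviour across groups before go-live.
# Predictions, labels and group tags are assumed to come from your own
# evaluation set; the metrics shown are illustrative, not a complete audit.
from collections import defaultdict

def per_group_rates(predictions, labels, groups):
    """Return accuracy and positive-prediction rate for each group."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "positive": 0})
    for pred, label, group in zip(predictions, labels, groups):
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(pred == label)
        s["positive"] += int(pred == 1)
    return {
        g: {"accuracy": s["correct"] / s["n"], "positive_rate": s["positive"] / s["n"]}
        for g, s in stats.items()
    }

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    labels = [1, 0, 0, 1, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    for group, rates in per_group_rates(preds, labels, groups).items():
        print(group, rates)  # large gaps between groups warrant investigation
```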

Recognise that future legislation, regulation or industry codes of practice may require changes to AI systems in use, or the outright prohibition of AI for certain purposes.

Hackers use AI as well. Here's what to look out for

AI is a very powerful tool for threat actors to use, Miller noted.

It can be used at various stages of an attack, such as the reconnaissance phase. Threat actors can use AI to scan the external facing systems of an organisation looking for weaknesses.

They can also scan social media to find all the employees of the organisation and then see if any of them are disgruntled or looking to leave. These people can then be targeted to see if they would sell their login credentials.

This information can also be used to craft future attacks.

AI can be used to deliver numerous forms of attack. It can automate ‘credential stuffing’ attacks, where already-compromised email addresses and passwords are tried against other services to see if they have been reused. Sadly, this attack is quite effective.
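
One common defence against credential stuffing, shown here as a minimal sketch, is to reject passwords that already appear in known breach corpora. The example assumes Python with the `requests` package and uses the public Pwned Passwords range API, which only ever sees the first five characters of the password's SHA-1 hash.

```python
# Minimal sketch: check whether a password appears in known breach data using
# the Pwned Passwords range API (k-anonymity: only a 5-character hash prefix
# is sent). Assumes the `requests` package is installed.
import hashlib
import requests

def is_breached(password: str) -> bool:
    """Return True if the password's hash suffix appears in the breach corpus."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if __name__ == "__main__":
    print(is_breached("password123"))  # True: widely reused, expect it in stuffing lists
```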

Phishing emails are also crafted by AI trained on all the leaked emails out there. If threat actors have gained access to an organisation, they can sit there and train the AI on the emails that flow about, so the AI learns everyone’s communication style, making phishing even harder to spot.

Threat actors can also use AI to create phone or video communications, which can be hard to spot and which people are not quite as wary of as phishing emails.

Where a business is using AI/ML tools to protect itself, well-funded threat actors may have purchased versions of the same tools for themselves, so they can use AI to model how to avoid those protective AI/ML tools.

The threat actors are always at the forefront as they have a much bigger appetite for risk than businesses or companies selling security solutions.

They will be using AI tools before anyone else, and state-sponsored threat actors will have access to AIs with massive computing power and trained with more aggressive capabilities.


8 Comments

Good summary.

Agree, we need a like button for articles.

As a bit of an add-on, there's this from the MIT Technology Review, and given development tools can create AI systems capable of hacking others' systems without the need for human intervention (in New Scientist), I think we should be extremely cautious about security, and maybe start to hedge against an unreliable web with backup analogue systems.

AI. Things are already getting nightmarish.

Google swears it’s not trying to kill journalism, but many of its latest projects seem geared toward that end. Google is paying five-figure sums to small publishers asking them to test out a generative AI platform geared toward newsrooms. News outlets are asked to publish three of these AI-assisted articles a day, in exchange for sending analytics and feedback to Google, according to a report from Adweek on Tuesday.

Google, and the rest of the internet, is slowly becoming filled with AI-generated slop. Researchers found that a “shocking” amount of the web, 57.1%, is already AI-translated garbage. Beloved blogs like “The Hairpin” are being turned into AI clickbait farms under the guise of reputable brands. It’s a side effect of AI being injected into everything, and Google is leading the effort.

https://gizmodo.com/google-paying-news-outlets-publish-ai-generated-pie…

Can you imagine being in charge of the unsolicited submissions at a publishing house at the moment, having to wade through a torrent of AI generated manuscripts? Doesn't bear thinking about - although I do wonder if anyone is working on an AI manuscript reader for that unfortunate task.

A balanced article, but this is not the whole picture.  We need to be seriously thinking about securing AI models, and understanding how they are developed, and where the data is coming from and whether it is clean.

There are many threats to AI Models, which need to be dealt with at the same time, as this link indicates: https://www.ibm.com/blog/announcement/ibm-framework-for-securing-genera…

Governance and AI Ethics are paramount as to whether an organisation can actually trust the output, and whether it remains secure in terms of the data used, generated and protected, especially as it certainly may contain company information.

Do your own research; some technology leaders have even gone as far as providing intellectual property protection for their AI models, including IBM, Microsoft and Adobe for their clients. It could be some time before legislation catches up with reality, so ensure your organisation has good AI Principles, including guidance for employees, and make sure they are aware of the risks, benefits and how they can use AI safely.

Also, companies can be held legally responsible for AI logic faults; the link below is a fun example. When it comes to share market trading, cases of AI crashing the market in short spikes are notorious and highly damaging. More deadly are cases where health insurance companies now use AI to deny legally and medically necessary care, causing deaths. No humans involved, apparently.

https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refun…

It's going to be a fun future when your initial claim is assessed by an AI claims-bot. And then the next level beyond them is a super-AI bot.
