Alex Leslie

Artificial Intelligence? AI's answers aren't as reliable as they seem

AI platforms offer an increasingly plausible alternative to the painstaking process of due diligence research, writes Alexander Leslie. But are their answers reliable, or are they just telling us what they think we want to hear?


I was recently researching an obstinately anonymous individual. Google and various subscription media sources had not turned up any obvious or interesting hits. Clever search bots had also drawn a blank. So I had an idea: why not see what AI could do with this?


Imagine my surprise when a well-known AI platform (let’s call it ChatGPT) told me my subject was part of a notorious cybercriminal gang which had extorted millions from hapless corporates before being spectacularly busted by the US Department of Justice.

  "[XXX] along with other members of [YYY] has been implicated in various cybercrimes, including hacking into computer networks, deploying malware, and conducting phishing attacks to steal financial information. The activities of [YYY] have resulted in significant financial losses for businesses and individuals.
 Law enforcement agencies, including the FBI, have been actively pursuing members of [YYY] leading to the arrests and charges against several individuals associated with the group. The group’s operations have highlighted the growing threat of cybercrime and the importance of international cooperation in combating these types of criminal activities."

Probably not what my client wanted to hear, but progress of a sort, and definitely worth following up.


‘Alright ChatGPT, tell me what your sources for this are.’ The machine replied that its sources were: ‘Law enforcement reports and press releases, Cybersecurity Firms’ reports, News articles and media outlets, and Court documents’.


After some more prodding, it was persuaded to provide actual links. But when it did, I could not find any references to my subject in the source material.


After several further challenges, my interlocutor conceded that:

 "It appears there may have been a misunderstanding or confusion regarding [XXX]’s involvement with [YYY]. My initial response may have incorrectly associated him with the group due to an error. Based on the detailed review of credible sources, including FBI and DOJ reports, and various cybersecurity analyses, there is no documented evidence or official mention of [XXX] being part of [YYY]."

I am intrigued by this. The response was flat wrong, which is worrying in itself. But in the galaxy of wrong answers, why did this platform have to land on my subject being a cybercriminal? Is it because that sounds more exciting than ‘is a renowned market gardener from South Molton in Devon’, which would have been equally wrong? Is it because ChatGPT knows that cybercrime is an area of interest to my industry and adjacent ones? Is ChatGPT telling me something that it thinks I want to hear?


There are undoubted applications for large language models and other forms of artificial intelligence in accelerating the analysis and collation of vast, messy datasets. But this experience was a reminder that we have to proceed with great caution when seeking answers from AI. For the time being, at least, it is still necessary to apply Actual Intelligence to the information regurgitated by platforms such as ChatGPT.
