Some people dismiss artificial intelligence as something that will forever be sub-human. We are sometimes befuddled by the imagery: awkward, clunky robots that clearly aren’t human; computing machines that definitely aren’t human; synthetic life forms with an abundance of logical thought that can be moulded to look human, yet we can still tell the difference. They’re not human!
But maybe we’ve been looking in the wrong places! Artificial intelligence may take more time to develop human-like characteristics, but outperforming human thought is something machine-learning models can already achieve. Sceptics should not dwell on artificial intelligence straining with concepts like free will and consciousness; they should instead entertain the thought of profound astuteness.
Computers can already assimilate data far more quickly and accurately than humans. They can do so without making mistakes, without getting tired and without the need for sustenance. Once again, we struggle with this because we compare it to humanoid thought and cognitive ability. But once we overcome that barrier, we can see machines for what they truly are: they offer profound astuteness.
Machine learning pervades all industries, but it is beginning to dominate financial services. Tasks as diverse as assessing credit scores and identifying elderly financial abuse are now routinely carried out by computer models with little human intervention. Of more interest, perhaps, is the use of machine-learning-informed alert models that display predictive capabilities. Cynics should not imagine computers trying to forecast the future; rather, they should picture a profoundly astute algorithm spotting patterns and frequencies in data that suggest a likely outcome.
The technology is now being deployed right across the board: identifying fraud, money laundering, credit-card defaults, phishing and cyber-attacks. The pace of change is staggering. Not only have these models been fully deployed and shown to be more effective than humans, but second- and third-generation versions are already in place.
In tackling attempted money laundering, for example, algorithms have tended to spot more potential events but have also produced a higher number of false positives (suspicions that proved unfounded). Newer versions of the models, however, can examine the flagged events separately and re-classify the likely false positives before any final examination by humans.
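The two-stage pattern described above can be sketched in a few lines. This is a minimal, illustrative toy, not any production AML system: the transaction fields, thresholds and scoring weights are all hypothetical, and the second stage stands in for what would in practice be a trained classifier.

```python
# Sketch of two-stage alert triage: a broad first-stage screen flags many
# events (including false positives); a second stage re-scores the flagged
# events so only the strongest suspicions reach a human analyst.
# All fields, thresholds and weights are hypothetical.

from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float        # transaction value
    country_risk: float  # 0.0 (low) to 1.0 (high) jurisdiction risk
    prior_alerts: int    # past alerts on this account


def first_stage_alert(tx: Transaction) -> bool:
    """Broad screening rule: deliberately over-flags suspicious events."""
    return tx.amount > 10_000 or tx.country_risk > 0.7


def second_stage_score(tx: Transaction) -> float:
    """Re-scores flagged events; a stand-in for a learned classifier."""
    score = 0.5 if tx.amount > 10_000 else 0.0
    score += 0.3 * tx.country_risk
    score += 0.2 * min(tx.prior_alerts, 5) / 5
    return score


def triage(transactions, threshold=0.6):
    """Only alerts that survive re-scoring go to human review."""
    flagged = [tx for tx in transactions if first_stage_alert(tx)]
    return [tx for tx in flagged if second_stage_score(tx) >= threshold]


txs = [
    Transaction(15_000, 0.9, 3),  # large, high-risk, repeat offender
    Transaction(500, 0.8, 0),     # small amount, risky jurisdiction only
    Transaction(12_000, 0.1, 0),  # large amount, otherwise benign
]
for_review = triage(txs)  # all three are flagged; only the first survives
```

The design choice mirrors the article's point: the first stage is tuned for recall (miss nothing), while the second stage restores precision by filtering the false positives, so human effort is spent only on the strongest cases.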
Computers have also invaded stock markets. Some 40% of trades on major platforms are now executed exclusively by machines, and a multitude of algorithms advise on stock picks. Intriguingly, the latest development is for machines to write up share recommendations (in any language, of course), highlighting the key positives and downplaying the negatives, all with absolutely no human intervention.
This, naturally, brings us to some serious shortcomings in all of this pacey programming. Machines can pick up biases and prejudices while gleaning patterns from the source data. Some of these biases can be comical (mistaking phishing events for fishing trips), but others can be damagingly unethical (sexism, racism, and so on). A final thought: computer algorithms have just helped identify Covid-19 vaccines in double-quick time. Should we really care how they did it?