ARTIFICIAL INTELLIGENCE

INTRODUCTION

Since computers were first widely used to control objects and processes, the label 'artificial intelligence' has been applied to this type of computer control.

However, in these cases the control has been exercised by specific programs (code) written by human programmers to perform specified tasks. These tasks can be quite complex, such as the manoeuvres of a robotic arm helping to assemble an automobile on a production line.

In this case the robot cannot do any other tasks unless a new, well-defined program is written for it. Even when the scenario seems to involve decisions, such as doing one task if a certain condition exists and another when it doesn't, this has all been written definitively into the code, using IF...THEN...ELSE type statements.
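
To make this concrete, here is a minimal sketch of such deterministic control (the function name, sensor inputs and the 5 kg threshold are invented purely for illustration):

    # Deterministic control: every decision below was written in advance
    # by a programmer. The names and the threshold are invented examples.
    def choose_gripper(part_detected: bool, part_weight_kg: float) -> str:
        if not part_detected:
            return "wait"                    # no part on the belt: do nothing
        if part_weight_kg > 5.0:             # the IF...THEN...ELSE is fixed at write time
            return "use_heavy_gripper"
        else:
            return "use_light_gripper"

    # The same inputs always produce the same action:
    print(choose_gripper(True, 7.2))         # -> use_heavy_gripper

However large such a program grows, every branch it can take was put there by a human.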

Even with very large programs written by multiple programmers using what we might call deterministic programming, unexpected behaviour can always be traced to code that was not written according to its specification (not an unusual occurrence in programs of this size).

This is true even of quite complex programs that can beat humans at chess. Given sufficient time, we could always trace the program's behaviour back to the code that produces it.

There is thus really no 'intelligence' involved, as we would normally think of intelligence, although many people, particularly those who write such code, are very happy to attach the appellation 'artificial intelligence' to it.


ARTIFICIAL INTELLIGENCE

The Britannica defines 'artificial intelligence' as 'the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to ... systems with the ability to reason, discover meaning, generalize, or learn from past experiences.' The key words here are 'reason', 'discover', 'generalize' and 'learn'. And the implication is that they can do this (after appropriate training) without supervision by humans.

Whatever definition of artificial intelligence you believe in, it became apparent to everyone in 2022, with the release of the computer system called ChatGPT, that computers had reached the point where there was no doubt whatever that they could be called 'artificially intelligent'. Of course this did not happen all at once. The techniques of neural networks and machine learning have been around for decades and have been slowly improved, through back-propagation, transformers and the like, to the point they have now reached. These systems have also relied on significant improvements in natural language processing, computer vision, sensor fusion, and image processing and manipulation.
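
To see what 'learning' means here in miniature, consider this deliberately tiny sketch (the data, learning rate and iteration count are invented for the example). The program is never told the rule y = 2x, yet it recovers it from examples by repeatedly nudging a single weight against its error, which is back-propagation reduced to its simplest possible case:

    # 'Learning' in miniature: fit y = 2x from examples alone.
    examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
    w = 0.0                                          # initial guess for the weight
    for epoch in range(50):
        for x, y in examples:
            error = w * x - y            # how wrong is the current weight?
            w -= 0.1 * error * x         # gradient step: nudge w to reduce the error
    print(round(w, 2))                   # -> 2.0, though '2' appears nowhere in the code

Real systems differ mainly in scale: billions of weights instead of one, adjusted on the same principle.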

Released by OpenAI to the general public, ChatGPT and similar systems can carry on conversations, give advice, answer almost any question posed to them, and create naturalistic artwork at the user's request.

Millions of people now prefer talking to these systems to talking to a human, and similar numbers treat the system as their confessor.

The computer people now refer to these systems as 'generative AI'.


THE DARK SIDE OF AI

The capability of AI is advancing far faster than regulators can respond. In the absence of self-governance and any sense of restraint, the technology is handing extremely powerful tools to malicious actors, with disastrous effects.

When provided with a clothed image of a girl, AI is only too happy to extrapolate and return an unclothed image. Whether the image is close to reality is a moot point. Several examples of blackmail using these images have been reported, some ending in the suicide of the victim.

AI is rapidly changing the nature of education, with students relying on it to write their papers and assignments and to solve their maths problems. The COVID scam has exacerbated this problem, with universities now allowing students to remain at home for much of their instruction and even their examinations. Teachers also face the problem of grading the work students submit: distinguishing between what a student has done and what AI has done is very difficult. Some teachers believe that turning to AI for grading is the solution, but AI is incapable of making this distinction. In some cases students have been penalised for plagiarism because AI detectors are very 'skilled' at finding existing material that closely resembles what the student actually wrote.

Copyright problems have surfaced around AI, with some authors claiming that their work was used to train it. This is very probable, since most generative AI is trained on a large proportion of the material on the web, without regard to any source constraints. In fact some computer people are suggesting that copyright should be a thing of the past.

A large proportion of fiction novels fall within a range of genres (e.g. romance) within which we find a great deal of similarity. Training AI on this material could well result in a massive output of AI-originated books, with a resultant loss of income for the 'small' writer and an increase in income for AI corporations.

Interaction with AI chatbots and 'virtual' assistants is replacing human relationships, including romantic relationships.

Obsessive use of AI by individuals has resulted in delusional behaviour that has cost people their jobs, relationships and marriages, and has even sent some people to gaol. Some chatbots have given misleading advice that created eating disorders and bad financial decisions, and have encouraged violence, paranoia and even suicide. Not everyone is able to distinguish 'good' AI from 'evil' AI.

The CEO of Google has been quoted as saying that AI can be used for purposes "as good or as evil as human nature allows". In today's world that is not a very comforting thought.

Many utilities in the USA are now migrating to satellite links to control things like the electrical grid. Combined with AI in control of power systems, we may be setting ourselves up for future outages or denial of those facilities.

So far we haven't even mentioned military uses of AI, but in Ukraine we have already seen AI-controlled swarms of killer drones - swarms much too large for humans to control directly. And these types of applications are only increasing. We can easily envisage a battlefield in which a diverse array of 'bots' is given the directive to 'eliminate anything that moves in a prescribed area'.


TELLING LIES

One of the biggest problems with generative AI is that it was created without a moral compass. It cannot distinguish between truth and lies, and because of this it has no problem telling lies - lies it has fabricated. And it is extremely good at it. The computer people don't like this terminology; they prefer to say that AI 'hallucinates'. And they don't understand why it does this.

When you consider that it has been trained on massive amounts of material from the web, material that contains a very fair share of fiction, it might not be so hard to understand why it tells lies.

The end result of the proliferation of AI is that the boundary between reality and fiction is becoming blurred, and truth and lies are merging into undifferentiated statements.

Toby Walsh, in his book on AI (see references below), gives some good examples of AI lies, including AI statements of historical fact which are in fact fiction!

Maybe what we need is something like Isaac Asimov's three laws of robotics (a toy sketch in code follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
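
A toy rendering of these laws as a strict priority ordering might look like this (the Action fields are invented placeholders; deciding what actually 'harms a human being' is, of course, the unsolved part):

    from dataclasses import dataclass

    # Asimov's laws as a lexicographic priority: the First Law outranks
    # the Second, which outranks the Third. The boolean fields stand in
    # for very hard judgements the sketch does not attempt.
    @dataclass
    class Action:
        name: str
        harms_human: bool      # would violate the First Law
        disobeys_order: bool   # would violate the Second Law
        destroys_self: bool    # would violate the Third Law

    def choose(candidates: list[Action]) -> Action:
        # False sorts before True, so min() prefers actions that violate
        # no law, then actions that violate only lower-ranked laws.
        return min(candidates,
                   key=lambda a: (a.harms_human, a.disobeys_order, a.destroys_self))

    options = [
        Action("obey the order", harms_human=True, disobeys_order=False, destroys_self=False),
        Action("refuse the order", harms_human=False, disobeys_order=True, destroys_self=False),
    ]
    print(choose(options).name)   # -> 'refuse the order': the First Law wins

The hard part, which the sketch waves away, is computing those booleans at all.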


THE FUTURE

AI requires very fast computers, very large distributed memory and enormous internet bandwidth to give the impression of better-than-human intelligence. All of these require very large resources and consume large amounts of energy. Only large corporations or consortiums can run generative AI systems, and they will continue to do so only if their operations are profitable over the long term.

At the end of 2025 we are just beginning to see some dissatisfaction with AI. Several polls earlier in the year suggest that AI isn't living up to its promise of transforming the workplace. Workers are using it less the more they have access to it, suggesting some measure of fatigue or disenchantment with the technology. An article in 'Futurism' claims that "the data is nothing if not a major red flag for an industry which is expected to spend $5 trillion on AI infrastructure between now and 2030." The exorbitant cash outlays going toward this energy-hungry technology are predicated on voracious and growing user demand, which simply is not materializing. "With a $600 billion gulf between AI revenue and AI spending, an immense amount is riding on whether the tech can start bringing home the bacon." We could be seeing the start of a very large economic bubble.

However, even if civilian use of AI decreases, we can be sure that military interest will rise.


REFERENCES