AI technology can create convincing images – including sexual ones – that even experts find difficult to prove are fake.
This article in The Daily Telegraph reports on how school children are using AI to create pornographic images and films of other students. Even though these images are fake, experts can have difficulty proving that they are manufactured. While this is a genuine cause for concern, it should not fuel a case against AI technology itself. Any system can be abused, but abuse should not lead legislators to attack the technology or try to put the brakes on it.
While society must become aware of how technology can be abused, that awareness should not turn into arguments intended to hinder the use of the technology. The argument in this article reminded me of the statements Dom Mintoff made against computers in the 1980s. He used to scare people into believing that computers would take jobs away from workers. On the contrary, computers ended up creating new jobs.
AI technology is enhancing our children’s intelligence and creativity.

School children in the UK are using AI to generate indecent images of other pupils, internet safety groups have warned.
The UK Safer Internet Centre (UKSIC) said it has begun receiving reports from schools that children are making, or attempting to make, indecent images of each other using AI image generators.
The child protection organisation said such images – which legally constitute child sexual abuse material – could have a harmful effect on children, and warned that they could also be used to abuse or blackmail them.
UKSIC has urged schools to ensure that their filtering and monitoring systems are able to block illegal material across school devices, to combat the rise of such activity.
David Wright, UKSIC director, said: “We are now getting reports from schools of children using this technology to make, and attempt to make, indecent images of other children.
“This technology has enormous potential for good, but the reports we are seeing should not come as a surprise.
“Young people are not always aware of the seriousness of what they are doing, yet these types of harmful behaviours should be anticipated when new technologies, like AI generators, become more accessible to the public.
“We clearly saw how prevalent sexual harassment and online sexual abuse was from the Ofsted review in 2021, and this was a time before generative AI technologies.
“Although the case numbers are currently small, we are in the foothills and need to see steps being taken now, before schools become overwhelmed and the problem grows.
“An increase in criminal content being made in schools is something we never want to see, and interventions must be made urgently to prevent this from spreading further.
“We encourage schools to review their filtering and monitoring systems and reach out for support when dealing with incidents and safeguarding matters.”
AI pictures indistinguishable from real imagery
In October, the Internet Watch Foundation (IWF), which forms part of UKSIC, warned that AI-generated images of child sexual abuse are so realistic that many would be indistinguishable from real imagery, even to trained analysts.
The IWF said it had discovered thousands of such images online.
Artificial intelligence has become a focal point in the online safety debate over the past year, in particular since the launch of the generative AI chatbot ChatGPT last November. Many online safety groups, governments and industry experts are calling for greater regulation of the sector, fearing that it is developing faster than authorities are able to respond to it.

Watch out: AI has every potential to become the biblical Beast, number included.
If technology does not progress in parallel with wisdom, on the same binary, then I'm afraid we risk falling victim to our own unwise desires.
Collectively.