Congleton
Wednesday, February 25, 2026

‘Sensible people should be worried by rise of AI’


Photo: a sensible AI image of a solicitor.


It is a worry to “any sensible-thinking person” that companies are unleashing technologies “that appear to be able to self-replicate and do other things, and we are incorporating them into military hardware without a full understanding of how they work”.
That was the stark warning from Congleton MP Sarah Russell, speaking in Parliament on 10th December during a debate on artificial intelligence.
Ms Russell told colleagues that while she was “generally a very optimistic person”, that optimism must be grounded in knowledge and regulation.
“We do not have to be catastrophists or conspiracy theorists to be concerned,” she said. “It is important to be optimistic on the basis of understanding the technology that we use and then regulating it appropriately. That does not mean stifling innovation, but it does mean making sure we know what we are doing.” She began her remarks by thanking Iqbal Mohamed for securing the debate and acknowledging the importance of the subject.
“There are two problems — maybe three — with AI,” she explained.
“The first is that we do not distinguish very well between what is and is not AI. Although AI and tech are obviously related, they are not the same thing.
“It is important that when we talk about AI we distinguish it from tech. There is a need to regulate a lot of tech much better than we currently do, but AI poses very specific problems.
“The first one is the fact that we do not fully understand the models.”
Ms Russell went on to highlight the dangers of bias and poor-quality data feeding into AI systems.
“One problem is rubbish in, rubbish out, and there is a lot of rubbish going into AI at the moment,” she said.
“We have a huge amount of in-built gender bias in our society.
“That means that, for instance, if we ask for AI to generate a picture of a female solicitor, as I am, we will get a picture of a woman who is barely clothed, but has a library of books behind her.
“That is not how female solicitors that I know go to work, but that is how AI thinks we are, and that has real-world impacts.”
She cited further examples of algorithmic bias, including freelance pay rates and professional networking platforms.
“If we ask AI to suggest an hourly rate as a freelancer, it is on average suggesting significantly lower rates for women than it is for men.
“Questions have been raised recently about LinkedIn. I and a lot of women I know are finding that we have significantly less interaction via LinkedIn than we used to.
“Various women have now changed their gender on their bios to male and suddenly find that their engagement levels go straight back up. LinkedIn appears to think we are not interesting and people will not want to read our content, so it is stopping showing female content at the same rate, it would appear.
“I caveat that I have not been able to speak to LinkedIn directly, but certainly a lot of women I know are reporting these problems.”
She warned that the problem began with the data itself.
“Huge amounts of the image training data is based on what is publicly available on the internet, and that image training data of women on the internet is largely pornographic, which influences what comes out the other end of these models,” she said.
“When we look at that in terms of children, we have real problems. Nudification apps are huge and need to be dealt with.”

Insight

Ms Russell said she would have liked to go further into issues around health data, gender, and the lack of adequate training material in those areas, but time was limited.
“I would like to get into how I am worried about that and deal with health and how we do not have good enough training data on the interaction between gender and health and various other matters, but I will stop now,” she concluded.
“I thank everyone for their time today. I know colleagues will pick up important points.”
Opening the debate, Mr Mohamed said that artificial intelligence was the “new frontier” of humanity.
“It has become the most talked about and invested in technology on our planet.
“It is developing at a pace we have never seen before; it is already changing how we solve problems in science, medicine and industry; and it has delivered breakthroughs that were simply out of reach a few years ago,” he said.
“The potential benefits are real, and we are already seeing them. However, so are the risks and the threats, which is why we are here for this debate.”
He said that one example of the good was AI systems in the NHS, which can analyse scans and test results in seconds, help clinicians to spot serious conditions earlier and with greater accuracy, ease administrative loads and improve how hospitals plan resources.
But he also listed the downsides. In November 2025, Anthropic revealed the first documented large-scale cyber-attack driven almost entirely by AI, with minimal human involvement: a Chinese state-sponsored group exploited Anthropic’s Claude AI to conduct cyber-espionage on about 30 global targets, including major tech firms, financial institutions and government agencies.
He added: “Mental health professionals are now treating AI psychosis, a phenomenon where individuals develop or experience worsening psychotic symptoms in connection with AI chatbot use.”