Regtech firm Sumsub has acknowledged the rise of deepfakes, as well as AI being used in deceptive ways to target unsuspecting UK consumers. The firm noted in a blog post that, more than a month after the Online Safety Act came into force, websites hosting mature content are compelled to verify users’ age to protect minors, and many are “seriously concerned about its ethical and practical implications.”
Early data from a survey of 2,000 UK consumers shows people “are worried that the Government and OFCOM are incapable of properly enforcing the Act.”
Respondents were also concerned about “censorship through overreach – flagging that online content they’d view as ‘safe for all’ has been impacted, as well as harmful content being left unrestricted.”
Despite this, though, there is clear “support for the Act and its aims, and furthermore, many of those who disagree with the Act do so because it’s inaccurately or insufficiently enforced.”
Early support for new online regime – parents of young children especially
Findings from global verification and anti-fraud Regtech firm Sumsub have revealed broad support for these new online processes.
- 64% of people agree verification checks protect children from potentially harmful content online.
- This figure rises to 78% among parents with children under 18 living at home, compared with 57% among ‘empty nest’ parents with adult children
Uneven application causing problems
Despite broad support, many are finding the Online Safety Act “either excessively or insufficiently enforced.”
Some 35% report seeing age restrictions “for ‘safe for work’ (non-harmful and not just for adults) content.”
Other key findings:
- Almost half (48%) worry that these new rules, and how they’re being adopted, increase the risk of censorship
- Conversely, 32% report regularly viewing adult content on the internet that should be restricted but isn’t.
One in four people don’t trust AI to correctly estimate age
Interestingly, 26% report not trusting AI-augmented “facial recognition scans to accurately estimate age.”
Significantly, 69% of those aged 25-34 – those “typically with the most digital literacy and exposure to AI – trust it the most.”
Conversely, those aged 55 and above were “most suspicious of facial scan-led verification methods – with a third not trusting the technology.”
Among those who disagreed that these new checks were helping to protect children:
- 57% believed that “they are already too easily bypassed” and “aren’t fit for purpose”.
- 50% don’t think the Government and OFCOM can properly enforce the Act, leaving harmful content online and viewable by children
The growing sophistication and availability of AI-generated images, videos and documents presents “a new and growing threat to traditional methods of identity verification – including age estimation.”
From Q1 2024 to Q1 2025 in the UK, Sumsub’s data shows that:
- Deepfakes have increased by 900%
- Synthetic document forgery, where generative AI is used to create artificial identity documents, increased by 275%
Pavel Goldman-Kalaydin, Head of AI/ML at Sumsub:
“We’ve seen countless examples of people getting past age verification checks – from sophisticated 3D models to simple screenshots from low resolution games. Clearly more has to be done to plug the gaps and ultimately protect children from harmful digital content. Online firms have a role to play too. Adopting multiple layers of verification – such as a scan of an ID document like a passport alongside a live facial scan – is vital. Facial recognition technology isn’t foolproof. This will make it that much harder to bypass checks while minimising unnecessary user friction to encourage compliance.”
Kat Cloud, Head of Government Relations at Sumsub:
“Our survey clearly shows that the Online Safety Act has laudable aims, and that the Government is right in its attempts to protect children from harmful digital content. It has fallen short however. Whether it’s due to sites’ being reluctant to limit traffic, a fear of falling foul of the regulator, or a lack of clarity in exactly what counts as ‘adult content’ and therefore what should be restricted, the Act is being applied unevenly. OFCOM needs the will and resources to tackle websites that don’t comply, and websites need to know exactly what does or doesn’t need age verification.”
This representative survey shared by Sumsub of 2,000 UK respondents was reportedly “conducted 18-20th August, c3.5 weeks after the UK’s Online Safety Act was implemented on Friday 25 July.”