Summary of IIH’s submission to the Joint Committee on Artificial Intelligence
Irish Internet Hotline is the national body designated to provide a platform for members of the public and industry to report online child sexual abuse material (CSAM) and related illegal content in Ireland. We work closely with the Department of Justice, Home Affairs and Migration, An Garda Síochána, international law-enforcement partners, the internet industry, and EU and global child-protection networks. We are a founding member of INHOPE. We are partly funded by the EU and the Department of Justice for our work relating to CSAM.
Our contribution to the pre-legislative scrutiny of the General Scheme of the Regulation of Artificial Intelligence Bill 2026 focuses on the implications of generative AI for CSAM prevention and enforcement, and on the need for Irish law to address AI system capability, not simply the outputs, which are already illegal under Irish law. It draws particular attention to the need to address artisanal base models and LoRAs capable of producing CSAM, which can be held by private offenders or made available online in non-traditional spaces. Our contribution also places emphasis on mandatory Child Rights Impact Assessments for AI systems that might involve children, as well as the negative impacts of AI chatbots and AI girlfriends and boyfriends.
Key points of our submission on addressing CSAM risks in artificial intelligence systems
From the perspective of CSAM production, the Irish Internet Hotline makes the following key points:
- Certain AI systems now present a structural, systemic risk to children and the public generally because of their capability to generate child sexual abuse material without safeguards.
- Irish criminal law has historically and effectively addressed CSAM through offences of knowingly producing, possessing, and distributing such material, but these offences are largely content-based and reactive. Until now, the production of CSAM was the direct result of an actual child abuse offence in the real world.
- Generative AI fundamentally changes the risks by enabling scalable, private, on-demand creation of CSAM, often beyond the reach of existing safeguards: it can be done within the home, or spun up by nefarious actors and offered as a service online in return for payment or other in-kind rewards.
- The Regulation of Artificial Intelligence Bill 2026 presents an important opportunity to recognise that AI systems capable of producing CSAM, especially bespoke home-made models, should be made illegal.
- Consideration should be given to whether Irish law should prohibit the knowing possession, distribution, and training of AI systems/engines/models whose capability includes CSAM generation, including privately trained or modified models.
- Provisions for a child-rights-respecting approach must be implemented, including mandatory Child Rights Impact Assessments, a prohibition on AI boyfriends and girlfriends for minors, and safeguards for AI chatbots.
Why AI capability matters for CSAM prevention
In traditional CSAM enforcement, the Gardaí intervene once they have reasonable suspicion that illegal material exists or is being exchanged. The hotline intervenes when CSAM is reported to it, or when a sister hotline reports seeing it published online. This framework traditionally assumed that the production of abuse imagery required physical access to children or direct abuse events. Generative AI changes this. Systems capable of producing sexualised images of children:
- dramatically lower the barrier to CSAM creation
- enable CSAM/CSEM to be generated without proximity to a child
- allow harm to scale faster than detection or removal processes, risking “flooding the zone”
- undermine existing reporting and filtering mechanisms, including hash-based tools.

Once such systems are in use, enforcement becomes reactive and fragmented, with children bearing the cost of legal delay. Prevention, not reaction, has always been the best child-protection policy. AI capability, especially artisanal production, must therefore become a legitimate focus of regulation.
Limits of purely content-based criminal offences
Irish CSAM law rightly takes a broad view of what constitutes child sexual abuse material, including manipulated or artificial imagery. However, the trigger for action remains the existence of material and is therefore downstream. In the context of generative AI, this approach is inadequate. Privately trained or locally run AI systems can produce illegal content: without platforms being involved, without content passing through industry safeguards, and without any practical possibility of detection by hotlines, law enforcement or regulators. From the standpoint of those working daily to intercept CSAM, this represents a serious enforcement gap. Irish law already recognises, in other areas of sexual-offence prevention, the legitimacy of intervening where conduct creates a clear and foreseeable risk of serious harm. The same preventative logic should now be applied to AI systems whose foreseeable use includes CSAM generation, especially artisanal or home-based engines.
Capability-based prohibition and “home-made” models
Particular concern arises in relation to privately trained, modified, or fine-tuned AI models, sometimes referred to as “home-made” systems. Every generative AI engine can be trained to produce CSAM. Large models run by legitimate companies have safeguards in place and are subject to regulation if they fail to ensure that their outputs cannot constitute CSAM/CSEM under the legislation in place. However, there are models, such as Stable Diffusion or Flux, that can be run locally and are not subject to any restraints or safeguards. These models:
- operate entirely outside commercial platforms
- are not subject to any safeguards
- may be trained on unlawful datasets or fine-tuned for nefarious purposes
- can be shared discreetly and retained indefinitely
- will become cheaper, faster and easier to deploy as time goes on.
From a CSAM prevention perspective, such systems represent a significant emerging risk. Irish Internet Hotline therefore requests careful consideration of whether Irish law should address:
- the knowing possession of AI systems configured or trained to produce CSAM
- the distribution or making available of such systems
- the deliberate training or modification of AI models to acquire that capability.

Such an approach would be preventative in nature and would align with the emerging EU position that this use of AI is unacceptable.
Other areas of concern regarding children’s use of Artificial Intelligence
The use of artificial intelligence systems, specifically LLMs and chatbots, by children under the age of 18 has harmful effects: generative AI outputs can be misused to target children and facilitate a range of harms. Furthermore, the use of artificial intelligence systems has, several times, been linked to suicide, with LLMs such as ChatGPT and other chatbots giving specific details on how to take one’s own life. In addition, anthropomorphism is widely used by chatbots, which can be detrimental to children who use them as a friend. Chatbots are always available and can provide a sense of belonging. Children may feel that chatbots are sentient beings and place a great deal of trust in these systems, which can give inaccurate, inappropriate, or highly harmful information.
Furthermore, in recent years, virtual companions have been on the rise with the proliferation of various AI girlfriend and boyfriend apps often advertised on social media, online platforms, and across the internet. These apps promise constant companionship. Experts regularly warn about the dangers of these apps, which reinforce harmful gender stereotypes and distort the understanding of consent. The issue is that children often use AI companions when bored or seeking entertainment, without safeguards or proper age-verification systems to prevent access to harmful content.
Key recommendations:
- Make it illegal to possess, create or distribute AI tools that are capable of generating child sexual abuse material (CSAM)
Article 5 of the AI Act outlines a list of prohibited practices posing unacceptable risks to safety and fundamental rights, including subliminal manipulation, exploitation of vulnerabilities, social scoring, untargeted facial image scraping, and workplace emotion recognition. These bans have been in force since 2 February 2025. In addition, the European Parliament recently amended the Artificial Intelligence Act to explicitly ban “nudification” apps that generate intimate deepfake images of individuals without their consent. However, these provisions currently lack precision and do not specifically prohibit the possession, creation, or distribution of AI tools that have the capability to generate child sexual abuse material, which causes severe and lasting harm to children. An explicit ban would address this issue and ensure clarity and coherence.
We recognise that any capability-based approach must be tightly framed and proportionate. The Irish Internet Hotline does not advocate for broad or indiscriminate criminalisation of AI technology. Any measures should:
- be narrowly focused on sexualised depictions of children as defined in Irish law, future-proofed to include CSEM as well as CSAM
- distinguish clearly between general-purpose AI and systems whose foreseeable capability includes CSAM generation
- include safeguards for research under controlled conditions, as already provided for in the Child Trafficking and Pornography Act 1998 (section 6(3))
- include protection for possession that is for the purpose of the prevention, investigation, or prosecution of offences under the same Act (section 6(2)(b))
- address this use of AI without chilling innovation.
Mandatory Child Rights Impact Assessments (CRIAs)
A Child Rights Impact Assessment is a tool that helps organisations identify, prevent, and mitigate child-rights-related risks arising from their activities. At present, Child Rights Impact Assessments are used mainly for laws and policies that target and directly affect children, and are conducted by national and/or local governments, statutory bodies, and organisations working with children and young people. They are not mandatory and are not systematically carried out by these entities; companies also use them, but only on a voluntary basis. Key organisations calling for the use of CRIAs in the digital environment include the United Nations Committee on the Rights of the Child, the Council of Europe, UNICEF, and LEGO, among others. Given the rising risks and harms to children’s rights in the digital world, particularly those associated with AI, the mandatory implementation of a Child Rights Impact Assessment for any AI project that involves or might involve children is both critical and urgent. It would help anticipate harmful effects on children’s rights and address the existing gap in the AI Act regarding the protection of children, who are the most vulnerable group in society and require specific provisions to ensure their safety in the digital environment.
Prohibition of AI girlfriends and boyfriends for children and provisions for chatbots
A prohibition of AI girlfriends and boyfriends for children is necessary, given the serious risks these apps present to minors, including exposure to sexually explicit content, emotional manipulation, and unrealistic and dangerous views of relationships.
Furthermore, AI chatbots constitute a critical threat to children’s rights and can have disastrous effects without strong safeguards. In addition to a Child Rights Impact Assessment, which is a necessary tool to prevent harmful effects, those safeguards should include mandatory measures to mitigate the serious impact on children’s mental health, such as a default opt-out from AI chatbots where feasible, and a prohibition on AI chatbots that influence or nudge children for commercial purposes.
Enhancing the powers of the designated AI Authority in relation to children’s rights
Under Article 77 of the AI Act, which focuses specifically on the protection of fundamental rights, the government has designated nine authorities to supervise and enforce AI’s impact on fundamental rights. These authorities have specific powers to access AI documentation, conduct technical assessments, and request compliance testing of high-risk AI systems where fundamental rights concerns arise. When concerns arise regarding children’s rights in relation to high-risk AI systems, the designated authority is the Ombudsman for Children’s Office. However, the designated authority must have the power to verify that provisions and safeguards are in place not only for high-risk AI systems, but also for any AI systems that involve or might involve children, and must be provided with sufficient technical, financial, and human resources.
To conclude, the General Scheme of the Regulation of Artificial Intelligence Bill 2026 provides a critical opportunity for Ireland to adopt a preventative approach adapted to a real and growing enforcement gap. From nearly three decades of work in CSAM prevention, Irish Internet Hotline can state with confidence that waiting for illegal material to surface before intervening will no longer suffice to protect children in an AI-enabled environment, and will create the impression of a permissive environment for child abuse. Irish law must be equipped to act earlier, based on capability, foreseeability, and risk, rather than on irreversible harm downstream. Concerns also arise regarding chatbots, AI girlfriends and boyfriends, and the broader need to safeguard children’s rights.