Generative AI tools like ChatGPT have become part of everyday life. Parents use them for everything from writing emails to getting quick legal explanations. But in family court proceedings, especially care cases, using an AI assistant isn’t as simple—or as safe—as it might seem.
In fact, a parent who pastes details of their case into a public AI tool can inadvertently commit contempt of court or a criminal offence, or cause serious problems with the evidence in their case.
The judiciary has published refreshed guidance on AI use. That guidance emphasises that private court information must never be entered into a public AI tool — exactly the kind of mistake parents might make if they treat AI like an ordinary word-processor.
The updated guidance is blunt: AI “hallucinations” — made-up cases, misquotations, misleading summaries — are real and recurring. Judges are warned not to paste confidential documents into public AI tools. Anything entered should be treated as if “published to the world.”
This isn’t just a heads-up for judges: it signals that courts take AI misuse seriously. If even judges are cautioned to treat AI output as public and unverified, it’s a strong indicator for parents and litigants-in-person that the risks are more than theoretical.
Here’s what every parent (and practitioner) needs to know.
1. The family court is a private space—and AI tools are “third parties”
Most children cases are held in private. The law tightly restricts who can see or receive information about the case. In England and Wales, the two big rules are:
- Section 12 of the Administration of Justice Act 1960 – which makes it a contempt of court to publish “information relating to” private children proceedings; and
- Section 97 of the Children Act 1989 – which makes it a criminal offence to publish anything likely to identify a child while proceedings are ongoing.
“Publish” doesn’t just mean posting on Facebook. It includes sharing information with anyone who isn’t legally allowed to receive it. An AI platform, even one you use privately, counts as a third party.
So when a parent copies a social worker’s statement or a court order into ChatGPT to “summarise” it, they may well have communicated prohibited information. Anonymising the text doesn’t solve the issue—if the content relates to the case, the restriction usually still applies.
The Family Procedure Rules reinforce this: under rule 12.73, parents may share information only with a short list of people (their lawyers, instructed experts and certain other professionals). AI tools are not on that list.
2. The transparency reforms don’t give parents new freedoms
From January 2025, accredited journalists and legal bloggers can report more about family cases under a Transparency Order. This has caused understandable confusion.
But these reforms only change what reporters may publish—not what parents can share.
Parents remain under the same strict confidentiality duties unless the judge gives explicit permission.
3. Data protection matters too
Many court documents in care proceedings contain sensitive personal data: medical records, police material, education and safeguarding information. Sharing that data with an AI provider can amount to disclosing personal data without the data controller’s consent, which is a criminal offence under section 170 of the Data Protection Act 2018.
Even if a parent is acting for personal reasons, the law still restricts what they can share from documents controlled by the court, local authority or Cafcass.
4. AI makes mistakes—and sometimes makes things up
Judges are increasingly warning parents about “hallucinations,” invented legal authorities, and the ease with which AI can produce fake messages, altered images or false transcripts.
Submitting AI-generated material to the court can lead to:
- contempt of court for a false statement of truth;
- findings that a parent has attempted to mislead the court; or
- in extreme cases, criminal investigation.
And AI output is not expert evidence. Expert evidence in children proceedings requires the court’s permission under Part 25 of the Family Procedure Rules; without it, AI-generated analysis carries no weight.
5. Deepfakes and harassment: the criminal cross-over
AI tools that generate sexualised images or impersonate someone’s voice are now firmly on the radar of the criminal courts. Sharing or threatening to share intimate deepfake images is already an offence. Further offences covering the creation of deepfakes are expected to come into force soon.
If one parent uses AI to harass, impersonate or monitor the other, the family court can make non-molestation orders banning that behaviour. Breach is a criminal offence.
6. What should parents do?
The safest rule is simple:
Do not upload anything from your family case into a public AI system.
If a parent genuinely needs help understanding documents, they should speak to their solicitor or ask the court for appropriate directions. Judges are alive to the pressures on unrepresented parents and would much rather answer questions than deal with accidental contempt.
In short, AI can be a helpful everyday tool—but within family proceedings, it carries serious legal risks. A moment of convenience can have far-reaching consequences. Parents should tread carefully, seek proper advice, and keep their case where it legally belongs: in the privacy of the family court, not the cloud.


