AI, Care and Social Justice: Why the Early Years Must Shape the Debate

March 10th 2026

Why read this article?

  • Understand why artificial intelligence is becoming an important issue for the Early Years sector

  • Reflect on the relationship between AI, care and human connection in early childhood education

  • Explore the ethical, environmental and social justice questions AI raises

  • Learn why AI literacy and responsible use matter for early years organisations

  • Consider how the Early Years sector can shape how AI develops in education


    About this article

    Artificial intelligence is no longer something happening somewhere else. It is already shaping how we work, communicate and make decisions across education and public services. For the Early Years sector the real question is this: are we simply going to react to AI as it arrives, or are we going to deliberately shape how it is used?

    I was invited to speak about AI at the Nursery World Summit on 5 March, probably because I have been writing about it in Nursery World for some time. The conversations I have had with educators are often mixed. Some are curious about the possibilities. Others are understandably anxious about what it might mean for their jobs and for the children they care for.

    In early childhood education the discussion quickly brings us back to something fundamental: care. We often talk more about education than care, yet we know the two are inseparable. Care depends on human connection and relationships. It sits at the heart of child development and draws on deep traditions in psychology, philosophy and ethics. Relationships, empathy and attunement are the foundations of early learning.

    AI might support educators, but it cannot replace the human relationships that young children need to thrive.


    AI should support educators, not replace relationships

    This does not mean we should reject AI. Instead, we must shape how it is used. AI should be public, person-centred and responsive to individual needs, talents and abilities. Used well, it could reduce mundane administrative work so educators can spend more time teaching, observing and building relationships with children and families.

    The reality is that AI is already embedded in much of our daily lives. Email systems, search engines, social media platforms and shopping apps all use AI in different ways. Many of us are using it whether we realise it or not.


    Concerns about AI: trust, misinformation and environmental impact

    There are, of course, legitimate concerns. Copyright issues around creative work, particularly imagery, are one example. The energy consumption of AI systems is another. There is also the growing problem of misinformation and poor-quality AI-generated content flooding the internet, now commonly known as "slop".

    In a sector built on trust this matters enormously.

    If trust is the foundation of early years practice, then responsible use of AI must become part of our professional responsibility.


    How early years organisations can prepare for AI

    Organisations therefore need to start thinking about how AI is used safely. A simple AI action plan can make a big difference. This might include risk assessments, data protection checks and cybersecurity safeguards. Existing policies such as safeguarding, social media and online safety should be reviewed to ensure they remain relevant in an AI-influenced world.

    Skills are also essential. Everyone in an organisation needs some level of AI awareness.

      • Reception teams need to recognise phishing emails.

      • Finance teams need to be alert to deepfake scams.

      • Communications teams need to identify misinformation.

    In short, the whole workforce needs a basic level of AI literacy.


    Why data quality and bias matter in AI systems

    Data quality is another critical issue. Government estimates suggest that around 80 per cent of time spent on AI projects goes into cleaning and preparing data. If that data is biased or incomplete, the results will be unreliable.

    AI systems learn from historical data and without careful oversight they can reinforce existing inequalities rather than reduce them.

    The Early Years workforce therefore needs to become AI-informed and confident. Educators need the critical thinking skills to understand what AI can and cannot do. AI does not think like humans and it certainly does not care. It generates responses based on patterns in data and sometimes produces inaccurate information.

    Professional judgement will always remain essential.


    Environmental questions about AI

    There are also environmental considerations. AI has the potential to contribute to solutions in areas such as climate science and sustainable resource management. Yet the infrastructure behind AI requires large data centres, significant water use and the extraction of raw materials.

    Any discussion about AI for the public good must consider these environmental costs.


    Protecting human connection in early childhood education

    For the Early Years sector, the question is not whether AI will change our work. It already is. The real challenge is ensuring that it strengthens rather than weakens the values at the heart of early childhood education.

    Used thoughtfully, AI could help with note taking, communication and analysing patterns in data that support early help and prevention services. It could reduce repetitive administrative tasks and allow educators to spend more time supporting children and families.

    But we must guard against a future where automated systems replace human connection. Anyone who has struggled to navigate a digital system or speak to a real person understands how quickly technology can make services feel distant and uncaring.

    Early childhood education must never become one of those systems.


    Why the Early Years sector must shape the AI debate

    AI is not separate from society. It reflects the values and priorities of the people who design and use it. The Early Years sector therefore has an important voice in shaping how these technologies develop.

    The question is not whether AI will shape the future of education. It already is. The real question is whether we will shape AI so that it reflects the values we believe children deserve.


    Frequently asked questions (FAQ)

    What does AI mean for early years education?

    AI refers to technologies that can analyse data and generate responses. In early years settings it may support tasks such as administration, communication and analysing patterns in data.

    Can AI replace early years educators?

    No. While AI may support certain tasks, it cannot replace the relationships, empathy and human connection that are central to early childhood development.

    Why is AI literacy important in the early years workforce?

    Staff need basic awareness of AI to identify risks such as phishing emails, deepfake scams and misinformation, and to use technology responsibly.

    What risks does AI pose for early years organisations?

    Risks include data bias, misinformation, privacy concerns, environmental impact and over-reliance on automated systems.

    Why should the Early Years sector be involved in shaping AI?

    Early childhood education is rooted in care, relationships and social justice. These values should help shape how AI technologies are developed and used in education.
