16 Aug 2023

Guest blog: Safeguarding our future – the need for ‘human’ soft skills to protect children from AI harms

Manjit Sareen, CEO and Co-Founder of Natterhub, explores the key skills that today's children need to navigate the digital world safely.

Whilst the world continues to grasp the rapid rise of artificial intelligence (AI), it’s critical to remain acutely aware of the potential risk it presents to children.

Children are born into this digital age, and AI platforms and apps are an intrinsic part of their lives. AI has huge capability for positive transformation, but it also comes with significant risks, making impactful online safety measures essential to protect children from potential harm.

Positive digital citizenship refers to the conscientious and ethical use of technology, internet and online platforms. It encompasses a set of skills, knowledge and values that children (and adults) need to develop in order to navigate a digital world safely and responsibly. If children are to protect themselves from online risk, they’d benefit from accumulating the soft skills necessary to be accomplished critical thinkers.

The influence of AI on children

From social media algorithms to educational platforms and virtual assistants, children are continually exposed to AI in many forms. Because AI can gather vast amounts of data, analyse behaviours and shape personalised experiences, children also face its (often unintentional) consequences, such as echo chambers, harmful content and even addictive behaviour.

Sound media literacy skills give children an awareness of how AI algorithms curate content for them. They begin to understand AI’s role in personalised ads, search results and social media feeds, and how algorithms may influence their perceptions and behaviours. Teaching children to be mindful of their online behaviour can protect them from potential negative consequences.

The issue of AI-driven content

With the right intentions, AI algorithms can generate and distribute positive, educational and informative content, as well as hugely beneficial personalised content for learning. However, AI also opens doors for malicious parties to create harmful material that specifically targets children, who, by their nature and stage of development, are vulnerable online. Without robust online safety measures, children may become victims of misinformation, propaganda and inappropriate content.

Critical thinking helps children engage in metacognition and differentiate between accurate information and misinformation. Because AI can so effectively produce misleading or biased content, critical thinking empowers children to question and verify information before accepting it as true or as aligned with their values and beliefs.

Protecting children's privacy

Concerns around children’s online privacy are well documented, and AI’s heavy reliance on data collection to improve its capabilities only heightens them. Without proper safeguards, children’s personal information can be collected, stored and exploited without their knowledge or consent.

When children are aware of the data they share online and the potential consequences of oversharing, they can make safer choices in certain situations. Online privacy awareness teaches children the importance of strong passwords, avoiding suspicious links or downloads, and recognising phishing attempts, particularly where AI is being exploited for malicious purposes.

Cyberbullying and harassment

AI’s capability to intensify cyberbullying is significant: it can enable anonymous harassment, create fake accounts and amplify harmful messages, all of which stand to impact children’s wellbeing. Effective online safety measures can mitigate some of the risks of cyberbullying and provide children with the support and tools needed to navigate such challenges.

Teaching children about the ethical considerations surrounding AI use helps them to make responsible choices. This includes understanding data privacy, consent and the potential consequences of AI-driven actions. By developing an ethical mindset, they can not only recognise harmful or malicious activities that involve AI but also avoid engaging in them.

Age-inappropriate content, bias and discrimination in AI

AI's capacity to personalise content could expose children to inappropriate material. With online predators exploiting AI algorithms to identify and target vulnerable children, it’s crucial to develop protective measures to prevent such risks, and systemic change is needed to protect children from online predation. 

Unfortunately, amidst the positives of advanced technology, AI algorithms can perpetuate bias and discrimination, which can negatively impact children from diverse backgrounds. If not addressed, these biases can influence the content children consume, the opportunities they receive and even their sense of self-worth. Implementing robust online safety measures requires addressing and rectifying these biases.

Digital addiction and mental health concerns

We are all at the mercy of the device! We know that AI-powered platforms often employ persuasive design techniques to keep users engaged for longer periods. For children, however, this may lead to digital addiction, impacting their mental health, sleep patterns and general wellbeing. Online safety measures need to focus on behavioural understanding and change to foster healthy digital habits and promote responsible AI use.

Values and soft skills play a significant role in digital citizenship. Children should be encouraged to treat others online with empathy and respect, understanding that digital interactions significantly impact people’s emotions and wellbeing. An accomplished understanding of the nuance of digital communication helps children to engage positively and effectively with others online. This reduces the chances of misunderstandings or conflicts arising due to misinterpreted AI-generated content.

Collaboration between stakeholders

Protecting children from AI harms necessitates collaboration among parents, educators, policymakers, technology companies, and advocacy groups. Together, they can establish comprehensive online safety guidelines, create age-appropriate content filters, teach soft skills and online behaviour models, and develop AI systems that prioritise child protection.

Regulation and ethical AI development

Governments and technology companies must work together to establish robust regulations and ethical standards for AI development and usage. This includes enforcing strict data privacy laws, ensuring transparent AI algorithms and implementing mechanisms for accountability in the event of AI-related harms to children.

Whilst children should not be responsible for their own online safety, encouraging self-regulation helps children to manage their online behaviour and screen experiences – and awareness of the potential risks builds digital resilience and protects wellbeing. Teaching the importance of self-regulation helps children to maintain a healthy balance between their digital and offline lives.

Conclusion

As AI continues to revolutionise the way we live, its potential to benefit all of society is vast. However, the risks it poses to children and other vulnerable people demand urgent and comprehensive action. Implementing impactful online safety measures is not only necessary to protect our children from AI harms but also to empower them to become responsible digital citizens.

By nurturing soft skills, children can become more resilient and better equipped to identify and mitigate potential AI harms, ensuring a safer and more positive digital experience. Additionally, parents, educators and caregivers play a vital role in guiding and supporting children as they develop these skills and become responsible digital citizens, whilst having rightful access to their childhood.

By working together and prioritising child safety, we can ensure that the potential benefits of AI are harnessed responsibly, securing a brighter, safer future for our children in the digital age.

 


 
