In the span of just a few years, AI has advanced rapidly and become an all-pervasive presence in online interactions, social and otherwise. While the potential for this technology to improve productivity and labour efficiency is undeniable, these gains are accompanied by significant, and often unaddressed, risks of misuse, misinformation, and manipulation. At a time when children are given access to the internet at a very young age, it is all the more important for adults to remain aware of these risks and to exercise vigilance over children’s online activity and safety. Preparing children to navigate this fast-evolving space means informing them of these risks and training them to recognize situations in which they might be vulnerable.
Here are some things children must keep in mind about AI-generated content:
- AI outputs suffer from bias: Consider AI a helper, not a truth-teller. AI-generated content cannot be trusted blindly, as it is not neutral. These models are trained on large quantities of data, and the nature of that data shapes the responses they produce and the biases they inherit. Human and systemic biases inevitably find their way into AI systems, and they may be amplified by algorithmic biases rooted in flawed data or in preferences built in by developers. Cognitive biases in how data is selected and applied, quite apart from social context, can also shape the outcomes AI produces.
- AI outputs can be false: Not only can biases find their way into outputs; AI also sometimes “hallucinates”, providing incorrect or unsupported information in response to queries. To ascertain the veracity of information provided by AI, question its outputs and conduct your own research; fact-check before coming to a conclusion. Fact-checking resources such as factcheck.org and Google’s Fact Check tools can aid in this endeavour.
- AI biases are recognizable: With some expertise, it is possible to recognize patterns and biases in AI models through a series of exercises and tests in which fairness is assessed, correlations are measured, and patterns are identified. For laypeople, outlandish claims in responses are sometimes recognizable without any additional assessment. In other, less obvious cases, biases can be identified by examining the sources relied upon and their credibility.
- AI outputs may be sourced from copyrighted material: Much of the data consumed by large language models is scraped from the internet, with scant regard for copyright or artists’ craftsmanship. In effect, your AI-generated content may be derived from copyrighted material. To many, this is akin to stealing: they believe it could lead to an overall devaluation of the human labour behind these works, with AI competing against the very copyrighted material it learnt from. The recent trend of AI-generated, Ghibli-inspired renditions of images is a prime example of flagrant disregard for artists’ say in how their work is used. Hayao Miyazaki, a co-founder of Studio Ghibli, has called AI-generated animation “an insult to life itself.” Today, Ghibli-inspired images have flooded the internet, created at the click of a button, without the consent of the original artists. Whether these are seen as homages or theft will depend on whom one asks. AI companies have themselves acknowledged that strict enforcement of copyright law would hamper their ability to develop this software, and have argued for diluting these regulations. Courts are now beginning to determine whether training on openly accessible copyrighted material amounts to fair use.
To mitigate risk to themselves and those around them, children must be advised on the following:
- Erring on the side of caution: It is best not to provide personal information to websites or to take part in any data collection for AI training.
- Opting out of AI training: On Meta platforms such as Facebook and Instagram, one must specifically navigate to the settings and expressly opt out of Meta’s AI training; inclusion is the default.
- Dealing with inappropriate content:
- Deepfakes: AI can generate convincing but fake videos, images, or audio that can be used to bully, harass, or deceive children. It is imperative that children be cautious of content that seems too good (or bad) to be true, and that they verify information with trusted adults or fact-checking websites. There are a number of ways to spot AI-generated content, including checking whether video and audio are in sync, looking for mismatches in colour and texture, and using deepfake-detection software.
- Hateful or violent content: AI can generate content that promotes hate speech, violence, or discrimination, which can have harmful consequences. Early exposure to such content can significantly affect the psyche of young people and can set off vicious cycles of mutual harm between peers. If children see something that makes them uncomfortable or scared, they must know to block the content or have safe spaces in which to report it to an adult.
- Inappropriate imagery: AI can generate explicit or suggestive images that are not suitable for children. Children must be taught not to search for or share explicit content, and if they come across something inappropriate by accident, they should know to tell a trusted adult.
- Being wary of chatbot manipulation:
- Grooming: AI-powered chatbots can be used to build trust with children and manipulate them into sharing personal information or engaging in inappropriate conversations. Children must be trained to be wary of such interactions: they should know not to share images or personal information, and never to meet in person someone they met online. Safe spaces must be created so that children can approach an adult if someone online is asking invasive questions or making them uncomfortable.
- Psychological manipulation: Chatbots can use persuasive techniques to influence children’s thoughts or behaviours, potentially leading to emotional distress or harm. Children must know to look out for language from strangers that tries to make them feel scared, anxious, or guilty; there may well be a chatbot at the other end of the conversation. Unchecked exposure to such content can have long-term impacts on mental health and emotional development.
- Scams and phishing: Chatbots are increasingly being used to trick children and elderly people into revealing sensitive information or clicking on malicious links, which may then be used to steal their identity or to access their financial accounts and other personal data. Children must be warned not to click on links or provide personal information to anyone, or on any website, they do not know and trust.
- Avoiding personal data collection:
- Location tracking: AI systems can collect location data from devices, potentially allowing predators or cybercriminals to track a user’s movements. Location services should be kept off unless absolutely necessary, and adults should periodically help children manage the privacy settings on their devices.
- Contact information harvesting: AI can collect contact information such as email addresses and phone numbers. Unauthorised access to personal data can have serious consequences, including online stalking and identity theft. Contact information must never be shared with strangers.
- Behavioural profiling: AI systems can collect data on one’s online behaviour, interests, and preferences, which can be used to create targeted and potentially harmful content. Caution must be exercised when participating in online quizzes or surveys that ask for personal information. Remind children that they don’t have to share every aspect of their lives online.
In sum, all family members must be aware of the risks associated with the proliferation of AI and remain vigilant. It is advisable to keep all devices in the household updated, as updates include security patches and bug fixes. Supervising children’s internet use and giving them a safe space to discuss what they encounter on the web will go a long way towards ensuring a safer and more enjoyable experience online.
Article by: Gauri Anand. With inputs from Rakesh Unnikrishnan (Australia), Lucas Johannes Fellner (Denmark), Team Bodhini