What is artificial intelligence (AI)?
Artificial intelligence refers to the simulation of human intelligence – learning, reasoning, and self-correction – by machines. While AI now dominates discussions on tech, it is not a new concept. First envisioned by scientists in the 1950s, and developed over the decades through approaches such as machine learning, it has more recently matured into a tool sophisticated enough to perform a wide range of tasks, reducing the need for human labour and intervention. AI is technology that allows computers to comb through and process large datasets using algorithms, and to use what they find to solve problems with greater speed and at a larger scale than human capability permits.
The most commonly encountered form of AI today is arguably the chatbot – think Google Assistant, Siri, or ChatGPT. However, AI capabilities have been integrated into almost all the technology we use in our daily lives, such as GPS (rerouting based on traffic and other conditions), ride-sharing apps (Uber, Ola), social media (targeted ads, facial recognition, selfie filters, etc.), and email (smart compose, tabbed inboxes, itinerary summaries, etc.).
What are the risks?
One of the greatest benefits of AI is its ability to process large amounts of information and provide quick, tailored responses to specific prompts. But this is a double-edged sword – given its capabilities, AI technology can exacerbate the risks that already exist with internet use, while also creating new ones.
For instance, AI responses to prompts can take the form of text, images, audio, and other media formats. This means that AI can create images and audio that seem real, based on prompts entered by the user. These features can also be used to create deepfakes, i.e., images of a real person can be superimposed on another person's body, and sounds can be manipulated to generate realistic human voices.
Some of the potential risks that AI poses include:
- Misinformation and disinformation – AI processes all the information available to it, but in its current form it is unable to determine which of that information is true and which is false. The responses it provides to questions or prompts may therefore be based on incorrect or manipulated information, and so be incorrect themselves. As a result, AI can potentially magnify false information available on the internet.
More worryingly, it is actively being used in disinformation campaigns. For instance, in the US state of New Hampshire, voters received phone calls featuring an AI-cloned voice of US President Joe Biden urging them not to vote in the primary election. Similarly, in the UK, deepfake audio of London Mayor Sadiq Khan making inflammatory remarks came close to having serious consequences.
Such disinformation can move stock prices, sway election outcomes, and even incite violence.
- Fraud – Using AI to clone voices, scammers call individuals claiming to be a loved one in trouble, a key person at a financial institution, or an officer at a government body, and ask for money or payments supposedly due. Sometimes these calls are used to obtain key personal information that could be used to gain access to bank accounts. AI is also used to send official-looking emails or texts that demand personal information or login data, or that contain links delivering viruses which harvest such information. Chatbots are likewise used to befriend lonely individuals looking for platonic or romantic relationships online and to trick them into parting with their money.
- Pornography – While AI's media-generation features are being used to create art and enhance digital content, they also pose several risks. For instance, the risk of AI-generated pornographic images increases manifold. Recently, fake, sexually explicit AI-generated images of Taylor Swift flooded social media. The non-consensual nature of such images makes them a serious violation of the privacy of the targeted individuals; they can lead to serious repercussions and may even be used for blackmail.
Lack of regulation
Until recently, with the exception of Tamil Nadu's AI Procurement Policy, India had not taken any steps to regulate AI or to mitigate risks. Despite the government's use of its capabilities to improve exam prep, agricultural productivity, information dissemination, etc., there was no system in place to prevent rights violations, facilitate grievance redressal or affix criminal liability for the misuse of AI.
However, in December 2023, the Union Ministry of Electronics and Information Technology (MeitY) issued an advisory to social media platforms, directing them to follow the existing IT Rules to deal with deepfakes. Further, on 1st March 2024, it issued a second advisory requiring platforms in India that are testing or training an AI tool to seek permission from the government before launching the product. It also mandates that platforms ensure their resources do not permit bias, discrimination, or threats to the integrity of the electoral process through the use of AI technology and algorithms. The Minister of State for Skill Development and Entrepreneurship later clarified that this requirement applies only to "significant platforms" and not to start-ups. The advisory also requires that disclaimers and consent-based disclosures regarding the fallibility or unreliability of generated outputs be included before public access is permitted.
At present, this is the extent of regulation that exists. The government has expressed its intention to establish an AI regulation framework later this year.
The European Union's recently enacted AI Act is a significant step towards ensuring the trustworthiness of AI. Its key provisions include risk-based classification of AI systems, a ban on systems that threaten people's safety, livelihoods, and rights, mandatory labelling of AI-generated content classified as risky, and flexibility to adapt the rules to technological change.
Ensuring online safety
Children's online safety has three facets: the appropriateness of the content they access, who they interact with online, and how they conduct themselves while on the internet. So, what can you do to ensure your safety and that of your child while using the internet? Short answer – become AI smart.
- Verify the source of messages or emails before providing personal information or clicking on unknown links.
- Verify the origin of calls from unknown numbers – call back on the official numbers of government offices or financial institutions.
- Do not provide sensitive data to AI financial tools.
- Establish a safe word with your loved ones, especially children, so that they can ascertain it is you on the phone and not voice-cloning software.
- Be wary of who has access to your personal information, images, and other media.
- Double-check information or advice provided by chatbots.
- Check permissions and privacy policies when downloading an app or accepting terms and conditions on websites, and check for policy updates regularly. Yes, AI is always collecting data to improve your experience, but this data is also collected to support companies' marketing and development strategies. Know why your data is being collected and what your information is used for.
- Use strong, unique passwords for your online accounts.
- Update your software and applications regularly so you benefit from the latest security improvements.
- Be wary of online video games, where other users may use AI-powered chat, cameras, hacking, and voice-masking to breach existing protections on your devices.
- Be aware of the security challenges associated with AI-enabled toys, as with all other AI-aided gadgets. Toy companies now offer devices capable of interactive communication and of learning from experience. For example, the CogniToys Dino, powered by IBM's Watson, can hold intelligent conversations with children and tell them stories. Because these devices are connected to the internet, they can be used to access data and other devices on the same network.
- Make children custodians of their own data and online security. You can also use AI to your benefit – AI-powered online safety apps like FamilyKeeper, which monitor without snooping and alert parents when their children face issues involving bullying, explicit content, or mental health risks, among others, are one avenue. Parental control apps like Google Family Link are another option, allowing parents to exercise more control.
- Since AI is frequently built by tech companies with profit objectives, be wary of what the technology may prioritise over well-being.
To sum it all up – be proactive. Skills once thought to be possessed only by humans are now being imitated by AI. With increased dependence on AI-powered technology, parents will also have to keep an eye on their children's ability to retain agency without becoming over-dependent. It is important to disconnect from gadgets and technology every now and then to bond with nature and interact with family and community. Human-to-nature and human-to-human contact is essential for your mental health and a healthy social life.
AI use will change the world around us. Like any other technological development, there will be negative outcomes alongside some amazing advancements. While we enjoy these benefits, we must also remain informed about the possible dangers and stay safe.
Article by: Gauri Anand
With inputs from Team Bodhini