Legal challenges in the age of AI

Across India, artificial intelligence is rapidly changing the way decisions are made. From banks approving loans to companies shortlisting candidates for jobs and even police departments deploying resources, algorithms are increasingly in control. These AI-powered systems promise speed, efficiency and objectivity. They can process vast amounts of data in seconds, spotting patterns that humans might miss. For a country as large and diverse as India, AI offers the hope of bridging gaps in access, reducing human error and delivering services at scale.

Yet, as these systems become more influential, they also raise important questions. Who is accountable when an algorithm makes a mistake? Can we trust decisions that we do not fully understand? And most importantly, how do we ensure that technology respects the fundamental rights of every citizen?

The Black Box Challenge

One of the most pressing concerns with AI is what experts call the “black box” problem. Many advanced AI models, especially those using deep learning, are so complex that even their creators cannot always explain how they reach a particular decision. For example, when an AI system denies a loan application or flags someone as a security risk, the reasoning behind that outcome may be hidden behind layers of mathematical calculations.

This opacity is not just a technical issue; it is a matter of justice and trust. In a society where opportunities and resources are already unevenly distributed, the risk is that AI could reinforce existing biases or create new forms of discrimination. If the data used to train these systems reflects past prejudices, the algorithms may unwittingly perpetuate them. Worse, if individuals cannot understand or challenge the decisions affecting them, they may be left powerless in the face of technology.
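To make the opacity concrete, here is a minimal illustrative sketch of a “black box” risk scorer. Everything in it (the feature names, the synthetic data and the decision threshold) is invented for demonstration; it does not reproduce any real insurer’s or bank’s system, only the shape of the problem: the model produces a score and a decision, but no reason.

```python
# Purely illustrative "black box" scorer; the features, data and threshold
# are invented for demonstration and do not represent any real system.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic applicants: [average daily steps, hours of sleep, resting heart rate]
X = rng.normal(loc=[7000, 7.0, 70], scale=[2500, 1.2, 10], size=(500, 3))
# Synthetic past decisions produced by an arbitrary hidden rule (1 = "high risk")
y = ((X[:, 0] < 5500) | (X[:, 2] > 78)).astype(int)

# A small neural network: thousands of learned weights, no reason codes
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16),
                                    max_iter=2000, random_state=0))
model.fit(X, y)

# A new applicant after a few low-activity weeks spent recovering from illness
applicant = np.array([[4800, 6.1, 82]])
risk = model.predict_proba(applicant)[0, 1]

print(f"Risk score: {risk:.2f}")
print("Decision:", "REJECT" if risk > 0.5 else "ACCEPT")
# The only outputs are a number and a decision. Nothing above says *why*
# the applicant was flagged - that gap is the black box problem.
```

An affected applicant, or even the company’s own staff, could read every line of such a system and still be unable to point to the reason for a rejection; that is the gap the law has to address.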

Imagine this:

You decide to apply for life insurance, confident that your medical reports are in order and your lifestyle is healthy. You fill out the forms, answer all the questions honestly and submit your application. Weeks later, you receive a curt rejection letter. No clear reason is given.

Puzzled, you call the insurance company. After much back and forth, you learn almost by accident that the decision was based on data from your fitness tracker and health apps. You do not remember giving explicit permission for this, but somewhere in the fine print you agreed to let your wearable’s app share your daily step count, sleep patterns and even your heart rate trends with third parties, including insurers.

The AI underwriting system flagged you as “high risk.” Maybe it was a few weeks of low activity when you were recovering from a minor illness, or a spike in your heart rate after a stressful month at work. Perhaps the system noticed you occasionally skipped workouts, or your sleep data showed a few restless nights. The algorithm did not care about context; it just crunched the numbers and made its decision. There was no human to explain or reconsider your case.

You try to contest the denial, but the insurer points to its data-driven process. This lack of transparency and explainability, commonly referred to as the “black box” problem, means that even the insurer’s representatives may not fully understand, or be able to access, the rationale behind the automated decision. They will not share the specifics, citing proprietary algorithms and the confidentiality of their risk models. You feel exposed, powerless and angry that your own private data, collected for your benefit, has been used against you without your clear consent.

Now, imagine this is not just about insurance. What if your wearable data is quietly shaping your access to loans, jobs, or even public services? What if a single bad week, a missed workout, or a social media post about feeling unwell becomes the invisible reason you are denied opportunities? You never see the decision being made, but you live with its consequences.

This is the chilling reality of black box AI:

  1. Decisions are made about you, using data you did not know was being shared.
  2. You cannot see, let alone explain, the patterns the AI found.
  3. There is no one to appeal to and no way to clear your name.
  4. Your digital shadow (steps, sleep, likes, posts) becomes your fate.

Without strong privacy protections, transparency and the right to explanation, anyone could find themselves at the mercy of invisible algorithms, judged by data they never agreed to share or that tells only part of their story. Scenarios like this are a wake-up call. They remind us that while AI can be a force for good, it can also cause real harm if not used responsibly.

Towards Human-Centric AI

India stands at a crossroads. The promise of AI-driven efficiency must be balanced against the constitutional values of fairness, dignity and justice. As the Supreme Court affirmed in the landmark Puttaswamy judgment, dignity is non-negotiable.

No one should lose their rights because a machine got it wrong. To ensure that technology serves the people, not the other way around, India must bring in clear regulation. This means:

  • Transparency: People should know when AI is making decisions about them, and they should be able to understand the reasons behind those decisions.
  • Accountability: There must be clear lines of responsibility when automated systems go wrong. Companies and government agencies must be answerable for the outcomes of their AI tools.
  • Right to Explanation: Individuals should have the right to seek an explanation and human review of any significant decision made by AI, especially in areas like finance, employment, healthcare and law enforcement.
  • Bias Audits: Regular checks must be in place to ensure that AI systems are not perpetuating unfairness or discrimination (a minimal sketch of one such check follows this list).
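What such an audit looks like in practice can vary widely. The sketch below shows one simple form of check, a comparison of approval rates across groups; the group labels, the data and the 80 per cent threshold are hypothetical examples for illustration, not a prescribed legal standard.

```python
# A minimal sketch of one kind of bias audit: comparing approval rates
# across groups (a "four-fifths"-style disparate-impact check).
# Group labels, data and the threshold are hypothetical examples only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` x the best group's."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical decision log from an automated loan screener
log = ([("urban", True)] * 80 + [("urban", False)] * 20
       + [("rural", True)] * 55 + [("rural", False)] * 45)

rates = approval_rates(log)
print(rates)                          # {'urban': 0.8, 'rural': 0.55}
print(disparate_impact_flags(rates))  # {'urban': False, 'rural': True}
```

Even a basic check like this makes disparities visible and reviewable; more rigorous audits would also look at error rates, feature influence and outcomes over time.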

The recently enacted Digital Personal Data Protection Act, 2023 is a welcome move, giving Indians more say over how their personal data is collected and used. But as technology races ahead, this is only the beginning. The challenge now is keeping pace with the fast-evolving world of artificial intelligence, where decisions made by machines can have real and lasting impacts on our lives. The EU has set a high bar with its approach to AI governance through its Artificial Intelligence Act. Its rules are not just about ticking boxes; they focus on real-world risks, demand openness about how AI systems work and insist that humans remain in control, especially in sensitive areas like health, safety and fundamental rights.

For AI systems that could seriously affect people’s lives, the EU law requires regular checks, human oversight and thorough risk assessments. This means that before a powerful AI tool is let loose in hospitals, banks or public services, it must pass strict tests to make sure it is fair, safe and accountable.

By learning from these global standards, India can ensure that as we embrace the benefits of AI, we also protect our rights and dignity. The goal is simple: build a digital India where technology empowers people, not the other way around.

Building Trust in the Algorithmic Age

The algorithmic age is here to stay. AI will continue to shape the future of India, from smart cities to digital governance and beyond. The challenge before us is to ensure that every Indian can trust, rather than fear, the invisible systems influencing their lives. This is not just a technical or legal issue; it is a question of values. By putting people at the center of AI policy, ensuring transparency and upholding the right to explanation, India can build a digital future that is truly inclusive, just and human-centric. Only then can we fully harness the transformative power of AI, confident that technology will serve society and not the other way around.

Khushbu Jain is a practicing advocate in the Supreme Court and founding partner of the law firm Ark Legal. She can be contacted on X: @advocatekhushbu.