The Indian technological landscape is evolving at breakneck speed, and within it, artificial intelligence (AI) has become the buzzword everyone is looking toward. However, like any leading technology, AI must be used judiciously to create a harmonious blend of innovation and responsibility.
Almost every brand and organization is implementing generative AI in some form, both to ease the load on their workforce and to offer such features as a USP that sets them apart from the crowd.
Morning Consult and IBM surveyed IT professionals and found that 59% of Indian enterprises are actively deploying AI for organizational use cases, placing India at the top among the surveyed countries for AI adoption.
Despite this progress, challenges persist. A KPMG study revealed that 63% of citizens are concerned about the misuse of AI and personal data. This echoes the firm's research from three years earlier, which found that 76% of consumers were unaware of how companies handle sensitive data in AI systems and 36% did not trust companies to use their credentials ethically. (Source: KPMG)
Data breaches through unauthorized access are a growing concern; you have likely encountered spam calls or emails yourself, since most of us are bound to use a phone number or email address as the unique ID for everyday transactions.
These privacy concerns grow as data-driven technology becomes more pervasive, making responsible development and deployment paramount. A few steps have already been taken in the right direction for the future of AI technology in India.
India’s Ministry of Electronics and Information Technology has issued a new, revised AI advisory, which highlights the importance of transparency and user empowerment.
The major points in the revised advisory are as follows:
- Lifting the requirement for government permission to deploy AI models.
- Requiring transparency in the labeling of AI models.
- Introducing measures to combat misinformation and deepfakes.
The new advisory promotes industry-specific self-regulation, accountability, and transparency while fostering innovation and protecting societal interests. It marks a new chapter in India’s AI regulation, one aimed at positive and inclusive change.
However, companies and individuals must adhere to the norms set out. At the moment, about 46% of companies have begun training and upskilling employees to work with new automation and AI tools, both to stay competitive in the market and to keep pace with the ecosystem. This movement is critical: according to the report, 30% of employees say they have limited AI skills and expertise.
The Indian AI ecosystem is growing stronger by the day, with torch-bearing companies such as Tata Consultancy Services, Infosys, and Wipro leading the way. Startups like Haloocom, ZestMoney, SigTuple, and Niramai are focusing on diverse domains, including healthcare, finance, agriculture, and customer service.
To build a responsible technological future, companies should focus on:
- Ethical Training: Models should be trained on diverse and unbiased data to ensure fair outcomes.
- Guardrails: Clear guidelines for deployment should address fairness, transparency, and accountability.
- Regular Audits: Continuous assessment of systems is necessary to ensure compliance with regulations and ethical standards (a minimal sketch of one such audit check follows below).
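To make the audit point concrete, here is a minimal sketch of one such check: it measures the gap in approval rates between groups in a hypothetical decision log. The scenario, names, and the 0.2 threshold are purely illustrative assumptions, not drawn from any specific regulation or company practice.

```python
# Minimal sketch of a periodic fairness audit for a hypothetical
# loan-approval model whose decisions are logged as (group, approved).
# All names and thresholds below are illustrative only.

from collections import defaultdict

def selection_rates(decisions):
    """Return the approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, is_approved in decisions:
        totals[group] += 1
        if is_approved:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical decision log collected during a review window.
    log = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
    gap = demographic_parity_gap(log)
    print(f"Approval-rate gap between groups: {gap:.2f}")
    # An audit might flag the model for human review if the gap exceeds
    # a threshold agreed with compliance teams (0.2 is an assumption here).
    if gap > 0.2:
        print("Flag for review: possible disparate impact.")
```

A check like this is deliberately simple; in practice, teams would pair it with broader metrics, documentation, and human review as part of the regular audit cycle described above.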
As we move forward, it’s crucial to remember that any technological advancement is a tool, not a cure-all, and choosing the right tool still rests with humans. The power lies not only in its capabilities but in how we choose to wield it. By fostering transparency, promoting education, and prioritizing ethical considerations, we can ensure that technology becomes a force for positive change in India.
The future of advanced technology in India is bright, but it requires careful cultivation. Like a well-balanced curry, it needs the right mix of innovation, regulation, and a dash of responsibility. As we continue to stir this technological pot, let’s ensure that the flavor we create benefits all of society.