A central government panel has recommended that an inter-ministerial committee be set up to frame AI regulations and develop AI guidelines. The IndiaAI Mission, headed by the Principal Scientific Advisor, is seeking public feedback on the report released by its AI Guidelines Sub-Committee. The report proposes a coordinated, whole-of-government approach to enforce compliance and ensure effective governance as India’s AI ecosystem grows.
Key highlights of the report
- The report outlines the following principles for AI governance:
  - transparency of AI systems, with “meaningful information on their development and capabilities”;
  - accountability from developers and deployers of AI systems;
  - safety, reliability, and robustness of AI systems by design;
  - privacy and security of AI systems;
  - fairness and non-discrimination;
  - human-centred values and “do no harm”;
  - inclusive and sustainable innovation to “equitably distribute the benefits of innovation”;
  - and “digital by design” governance to “leverage digital technologies” to implement these principles.
- Life cycle approach: To implement these principles, policymakers should adopt a life cycle approach.
- They must examine AI systems at different stages of their development, deployment, and dissemination, since different risks may arise at each stage.
- AI actors must also take an “ecosystem view.”
- The report proposes a technology-enabled digital governance system.
Artificial Intelligence
- Artificial intelligence (AI) is a broad branch of computer science concerned with building smart machines that can perform tasks that normally require human intelligence.
- Artificial intelligence allows machines to model or improve upon the capabilities of the human brain.
- From the development of self-driving cars to the proliferation of generative AI tools like ChatGPT and Google’s Bard, AI is fast becoming a part of everyday life, and it is an area every industry is investing in.
Why do we need regulations on AI?
- Ethical concerns: AI systems can make decisions and take actions that affect individuals and society. Establishing rules helps address ethical concerns related to the use of AI, ensuring that it is consistent with human values and respects fundamental rights.
- Privacy: AI often involves processing large amounts of data. Rules can help protect individual privacy by specifying how data should be collected, stored, and used.
- Security: Rules can help secure AI systems, including protecting against potential vulnerabilities and preventing malicious use of AI technology.
- Transparency: Regulations could mandate transparency in AI systems, requiring developers to disclose how their algorithms work.
- Competition and innovation: Establishing a regulatory framework will provide a level playing field for businesses, prevent abuse of market dominance, and encourage responsible innovation.
- Public safety: In cases where AI is used in critical sectors such as healthcare, transportation, or public infrastructure, regulations are necessary to ensure the safety of individuals and the general public.
Regulation of AI in India
- Digital Personal Data Protection Act, 2023: The government enacted the Digital Personal Data Protection Act in 2023, which may address some of the privacy concerns related to AI platforms.
- Global Partnership on Artificial Intelligence: India is a member of GPAI. The 2023 GPAI Summit was held in New Delhi, where GPAI experts presented their work on responsible AI, data governance and the future of work, innovation and commercialisation.
- National Strategy for Artificial Intelligence #AIForAll by NITI Aayog: It included AI R&D guidelines focused on healthcare, agriculture, education, “smart” cities and infrastructure, and smart mobility and transportation.
- Principles for Responsible AI: In February 2021, NITI Aayog released the Principles for Responsible AI, a vision paper that examines various ethical considerations of implementing AI solutions in India.
