Post Date – 09:30 PM, Thursday – July 20
New Delhi: The Telecom Regulatory Authority of India (TRAI) on Thursday proposed the “immediate” establishment of an independent statutory body to ensure the responsible development of artificial intelligence (AI) and the regulation of AI use cases in India.
TRAI said the body, designated the “Artificial Intelligence and Data Authority of India (AIDAI)”, should be responsible for framing regulations on all aspects of AI, including its responsible use.
These views are part of a comprehensive proposal on “Using Artificial Intelligence and Big Data in Telecommunications”.
The regulator’s latest move is significant given the intense debate around the world over the benefits and risks of AI and generative AI. Indeed, many countries are rushing to develop regulations to govern the use of this new-age technology, which promises to bring about sweeping changes while redrawing the contours of many industries.
TRAI recommends that the regulatory framework should consist of an independent statutory body and a multi-stakeholder body (MSB) which would act as an advisory body to the proposed statutory body.
Other suggested principles of the framework include “categorizing AI use cases according to risk and regulating them according to the broad principles of responsible AI.” The report reiterated that the proposed statutory body, AIDAI, should be established immediately.
TRAI said: “To ensure the responsible development of artificial intelligence (AI) in India, there is an urgent need for the government to adopt a regulatory framework applicable to various industries.”
Functions that could be assigned to AIDAI include defining the principles of responsible AI and determining how they apply to use cases on the basis of risk assessment.
Its regulatory role would also include ensuring that the principles of responsible AI are applied at every stage of the AI framework lifecycle: design, development, validation, deployment, monitoring and refinement.
Other aspects include the development of a model AI governance framework to guide organizations in deploying AI in a responsible manner, and the development of a model code of ethics for adoption by public and private entities across different sectors.
