

Author: Kuldeep Chauhan, Editor-in-Chief, HimbuMail
Panel discussions at IIT Madras on responsible AI in the real world

IIT Madras has established a Centre for Responsible Artificial Intelligence (CeRAI), funded by Google. It is an “interdisciplinary research centre to ensure ethical and responsible development of AI-based solutions in the real world”. Is Mr. Elon Musk listening?

The CeRAI has come up at a time when artificial intelligence (AI) is breaking new ground, challenging and threatening “human creativity and imagination in the real world”.

For one thing, ChatGPT, an AI app, has crossed a bewildering 1.2 billion users in the real world in a very short time. It has become a major cause of concern for the world’s top academicians and scientists, who are debating to what level or degree they can allow AI to take command of human intelligence.

ChatGPT is being used irresponsibly by social media platforms, web portals, students and professionals, and perhaps by all, creating an “absurd, perhaps dangerous, virtual world” that has started staring at the “real world”.

Its users are creating virtual ‘niches’ for themselves which they otherwise do not deserve or cannot achieve in the real world. There are also fears that indiscriminate use of AI will result in job cuts, creating joblessness in the real world.

It is against this searching concern that IIT Madras’ CeRAI promises “new hope for human intelligence, creativity and imagination to flourish, subjugating AI to the needs and aspirations of human creativity in the real world”.

According to the IIT Madras spokesperson, CeRAI aims to become a premier research centre at the national and international levels for fundamental and applied research in Responsible AI, with an immediate impact on the deployment of AI systems in the Indian ecosystem.

Google is the first platinum consortium member and has contributed a sum of US$1 million for this Centre.

Google has reasons to feel “uneasy about ChatGPT, considering the latter’s popularity in the virtual world and billionaire Elon Musk’s investment in ChatGPT and AI”, say the experts.

The Centre for Responsible AI conducted its first workshop, on ‘Responsible AI for India’, on 15 May 2023.

It was formally inaugurated on 27 April 2023 by Shri Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, and Skill Development and Entrepreneurship, Government of India.

Addressing the inaugural session of this workshop, Abhishek Singh, Managing Director and Chief Executive Officer, Digital India Corporation, said, “This workshop and the various panel discussions will go a long way in helping us evolve our framework, our guidelines and our policies for responsible AI.”

Mr. Abhishek Singh said AI is playing a major role in all our lives. “Whether we know it or not, every day we are using AI-based technologies in some part of our life.”

It is very important that those at the policymaking level, and those who are working at the cutting edge of developing technologies, are aware of the risks and challenges that AI poses.

“The risks remain while we are using the same technologies for solving societal problems, ensuring access to healthcare, making healthcare more affordable, making education more inclusive and making agriculture more productive,” he added.

“There is a need for a non-biased and non-discriminatory AI framework,” he said. “We have unique requirements that require customization.”

One of the primary objectives of CeRAI will be to produce high-quality research outputs, such as research articles in high-impact journals and conferences, white papers and patents, among others, creating a solid pitch for responsible AI.

IIT Madras says it will work towards creating technical resources such as curated datasets (universal as well as India-specific), software, toolkits, etc., with respect to the domain of Responsible AI.

Mr. Sanjay Gupta, Google’s Country Head and Vice President, India, said, “As India’s digital ecosystems increasingly adopt and leverage AI, we are committed to sharing the best practices we have been developing since 2018 when we began championing responsible AI”.

“To help build a foundation of fairness, interpretability, privacy, and security, we are supporting the establishment of a first-of-its-kind multidisciplinary Center for Responsible AI with a grant of $1 million to the Indian Institute of Technology, Madras,” he added during the launch.

A panel discussion on ‘Responsible AI for India’ was also held during the workshop.

The Centre aims to foster various partnerships and collaborations with government organizations, academic institutions and industries. 

For instance, it is partnering with NASSCOM’s Responsible AI initiative to build course material, skilling programs and toolkits for Responsible AI.

It is also working with Vidhi Legal to develop a Participative AI framework, and with CMC Vellore to explore areas of mutual interest in the domain of responsible AI.

With SICCI, the Centre will help members better understand the implications of Responsible AI; TiE will help mentor startups in this space; and there is also RIS, a think tank of the Ministry of External Affairs, Government of India.

Highlighting the need for such centres, Prof. V. Kamakoti, Director, IIT Madras, said, “We have now reached a stage where we have to assign responsibility to AI tools”.

“We need to interpret the reasons for the output the AI gives”.

Aspects of human augmentation, biased data sets, the risk of leakage of collected data and the introduction of new policies, besides substantial research, must be addressed, he said.

“There is a growing need for trust to be built around AI,” Prof. Kamakoti said. “It is crucial to bring about the notion of privacy. AI will not take away jobs as long as domain interpretation exists.”

Speaking about the work that would be taken up in this centre, Prof. Balaraman Ravindran, Head, Centre for Responsible AI (CeRAI), IIT Madras, said, “It is important for the AI model and its predictions to be explainable and interpretable”.

This is especially so when the models are to be deployed in critical sectors and domains such as healthcare, manufacturing and banking/finance, among other areas, he added.

Ravindran added, “AI models need to provide performance guarantees appropriate to the applications they are deployed in”.

This covers data integrity, privacy and robustness of decision-making. “We need research into developing assurance and risk models for AI systems in different sectors.”

With the achieved research outputs, CeRAI will also help formulate sector-specific recommendations and guidelines for policymakers.

CeRAI will provide all stakeholders with the necessary toolkits for ensuring ethical and responsible management and monitoring of AI systems that are being developed and deployed.

The Centre also plans to create opportunities for conducting specialized sensitization and training programs for all stakeholders, so they can better appreciate the issues of Ethical and Responsible AI.

This will enable them to contribute meaningfully towards solving problems in their respective domains.

It will hold a series of technical events in the form of workshops and conferences on specialized themes of deployable AI systems with a strong focus on ethics and responsibility principles that need to be followed.

