After Britain, South Korea, and France, India is now stepping into the spotlight as the next host in the global AI summit series, this time under the banner of the AI Impact Summit. The Government of India has opened the floor for public comments until June 30, inviting people to help shape a summit that could set the global tone for AI governance. For India, this is not just another international conference; it is a chance to bring the voices of the global majority into mainstream AI discussions and to offer a distinct approach to the difficult balancing acts that come with regulating such a powerful technology.
Another central question is whether AI models should be open or closed. The debate often pits those who want strict control over AI systems against those who want complete, unrestricted openness. India does not want to find itself in a situation where a few large tech companies, based primarily in the U.S. or China, control advanced AI and decide who gets access and who does not. Openness, however, does not mean leaving everything unregulated.
Instead, India can lead the way in advocating for a truly open and transparent model: one where foundation AI systems can be independently tested and understood, not locked away or blindly copied. This is where India's new AI Safety Institute (ASI) comes in. The ASI should push for openness not just in the name of innovation, but to ensure that India is not importing political biases or restrictive practices from elsewhere. Independent testing and accountability must be central to this vision.
More broadly, the ASI must also ensure that AI systems, particularly in areas like health care, education, and public services, are reliable, safe, and work for society. The IndiaAI Mission's focus on "Safe and Trusted AI" is already supporting projects that reduce bias, strengthen privacy protections, and test governance models. These ideas need to find their way into the summit's discussions and reinforce similar efforts happening around the world, such as the EU's push for "Trustworthy AI."
For India, building trust in AI means creating governance that is open, transparent, and genuinely protective of people's rights. This does not always require new laws; it can come from strengthening institutions like the ASI and making clever use of existing legal frameworks. When people know an AI system has been independently tested, is safe, and respects their privacy, they are far more likely to accept it and benefit from it. That is good news not just for human rights but for innovation and business as well.
This kind of approach is especially important for countries in the global majority. Without strong oversight, they risk becoming testing grounds for unproven or half-baked technologies developed elsewhere. This kind of "innovation arbitrage", where companies exploit weak regulation to push risky products, has already led to well-documented harms, from biased hiring tools to AI systems that skew healthcare and education outcomes.
India’s leadership at the AI Impact Summit could serve as a pivotal moment. By pushing for openness, transparency, and regional autonomy, India has the chance to unite other global majority nations behind a shared vision—one that puts people first and doesn’t just follow the lead of tech superpowers. The goal should be clear: to build a future where AI is accessible, safe, and beneficial for all, not just the powerful few. This is India’s moment to bring that vision to life.