AI Horizons: Experts Call for Dynamic Regulation and Shared Accountability in Global AI Governance


The future of artificial intelligence governance, safety, and accountability took center stage during a high-level panel discussion on “Building Safe and Trusted Intelligence Systems,” where policymakers, cybersecurity leaders, legal experts, and technology executives convened under the broader theme of AI Horizons.

The discussion featured prominent voices including David Wroe, Resident Senior Fellow and Convenor of the Sydney Dialogue at the Australian Strategic Policy Institute; Dr Sanjay Bahl, Director General of the Indian Computer Emergency Response Team (CERT-In); Dr M M Oberoi, Director of Strategic Engagements for APAC at Google Cloud; Kanishk Gaur, Founder of India Future Foundation; Natasha Crampton, VP AI at Microsoft; Mandar Kulkarni, National Security Officer, Microsoft India & South Asia; and cyberlaw expert Dr Pawan Duggal.

Bridging the Global Divide in AI Preparedness

Opening the session, panelists acknowledged the widening divide between the Global North and Global South in terms of AI safety preparedness, compute access, regulatory maturity, and infrastructure.

Dr Sanjay Bahl highlighted three foundational pillars necessary for technological competitiveness: capital, talent, and energy. While India has abundant talent, he cautioned that nations with stronger capital and compute resources may continue to attract skilled professionals, potentially widening capability gaps.

He also pointed to concerns around transparency of AI models, behavioral unpredictability, linguistic representation, and the dominance of a few countries in advanced AI development. These structural imbalances, he noted, pose long-term risks for countries seeking technological sovereignty.


AI Horizons: Geopolitics and Strategic Risk

David Wroe framed artificial intelligence from a geopolitical perspective, outlining a spectrum of risks beyond sensational notions of “rogue AI.” These include:

  • Strategic dominance by rival nations
  • Misaligned systems producing undesirable societal outcomes
  • AI misuse by extremist actors
  • Economic and labor market disruptions

He emphasized that many of these risks are universal, affecting both advanced economies and emerging nations. Cooperation, he argued, should not be seen as a compromise of sovereignty but as a mechanism for collective resilience.

Safety by Design and the Role of Hyperscalers

Dr M M Oberoi underscored the responsibility of hyperscalers in embedding security across the AI lifecycle. Referring to Google's Secure AI Framework, he outlined key components such as:

  • Infrastructure security by design
  • Automated guardrails to prevent prompt injection and harmful outputs
  • Shared responsibility models across stakeholders

He stressed that policy intent alone is insufficient. Enforcement must be technologically enabled — particularly in areas like deepfake detection and rapid takedown compliance.

AI Horizons: Shared Responsibility Across the AI Stack

Mandar Kulkarni described AI security as a multi-layered challenge requiring coordination across:

  • Technology layers: Data, models, infrastructure, applications, identity, and content
  • Stakeholders: Developers, deployers, enterprises, and end users

He emphasized that accountability cannot rest solely on model creators. From shadow AI usage in enterprises to user-level decision-making, responsibility must be distributed across the ecosystem.

Drawing parallels to India’s Digital Public Infrastructure (DPI) success, he highlighted the importance of diffusion — ensuring AI benefits reach the last mile rather than remaining confined to large enterprises.


AI Horizons: Accountability, Liability and Legal Evolution

Dr Pawan Duggal argued that artificial intelligence governance requires a fundamental legal reset. Existing global AI laws, he observed, are fragmented and insufficient in addressing real-world harm.

He proposed:

  • A graded liability model assigning proportionate responsibility
  • Algorithmic transparency frameworks
  • Recognition of emerging AI-related rights
  • Reimagining constitutional principles for the AI age

He warned that relying on outdated legislation to regulate emerging technologies would create systemic vulnerabilities.

Misinformation, Deepfakes and Citizen Resilience

The panel also addressed AI-generated misinformation and its impact on elections and public discourse.

David Wroe stressed that deepfakes are unlikely to be fully eliminated and that long-term resilience depends on fostering critical thinking from early education.

Dr Duggal countered that awareness alone is insufficient in large democracies and emphasized the need for enforceable regulation.

Dr Bahl concluded that trust forms the foundation of secure AI systems. Without trust between regulators, industry, and citizens, frameworks for safety and accountability cannot function effectively. He advocated for dynamic regulatory mechanisms capable of evolving alongside technological advancements.

AI Horizons: Striking the Balance

A recurring theme throughout the discussion was balancing innovation with regulation.

Panelists agreed that:

  • Excessive regulation may hinder innovation
  • Weak regulation risks harm and loss of sovereignty
  • Technology-enabled enforcement is essential
  • Principle-based, adaptive regulatory models may be more future-ready

As artificial intelligence continues to reshape economies and societies, the path forward will require collaboration, transparency, and sustained dialogue between governments, industry, and civil society.

Author

  • Salil Urunkar

    Salil Urunkar is a senior journalist and the editorial mind behind Sahyadri Startups. With years of experience covering Pune’s entrepreneurial rise, he’s passionate about telling the real stories of founders, disruptors, and game-changers.
