
Jithin VG, Founder and CEO, Bud Ecosystem
India should seek to build AI sovereignty on a vision that leans not on massive hardware outlays but on software innovation, cultural relevance, and the ingenuity that powered the Chandrayaan missions, says Jithin VG, Founder and CEO of Bud Ecosystem, based in Thiruvananthapuram.
“We need not copy the US or China here,” Jithin told businessline in an interview. A few wealthy nations and large corporations dominate the global AI space. Their strategies are hardware-heavy, riding on hyper-scale data centres and billion-dollar investments. “We don’t need to mimic them. Our strength lies elsewhere: doing more with less.”
A Chandrayaan-type strategy is India’s best shot at building sovereign AI capabilities. “We must rally scientific rigour and economic efficiency to create world-class AI with minimal infrastructure and investment,” he argues. Excerpts:
Could you elaborate on the challenges India faces in building AI at scale?
The gap is wide. The US hosts over 51 per cent of the world’s hyper-scale data centres, with 21 GW of capacity, while India has barely 1 GW. In 2024 alone, private AI investment in the US exceeded $109 billion, compared with just $1.16 billion here. We face foundational hurdles: unreliable power supply, a digital divide, dependence on foreign capital, and restrictive export controls on advanced chips. Add our linguistic diversity and the cultural irrelevance of most global models, and one can see why the traditional playbook just won’t work for us.
So, if hardware is not the path forward, what is the alternative?
A software-first strategy. Instead of brute-forcing performance through more GPUs, we focus on algorithmic efficiency, architectural ingenuity, and smarter design. Recent innovations like FlashAttention-3 and DeepSeek R1 demonstrate that code-level optimisations can double performance or cut training costs dramatically.
What are some practical strategies under this software-first vision?
Leverage open models. Start with open LLMs like LLaMA or Falcon instead of training from scratch, and use adapters for localisation, which is critical for our multilingual population. Instead of one big general-purpose model, use a collection of smaller, task-specific models. Make the most of existing hardware through hybrid deployment, splitting inference between edge and cloud.
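The adapter approach he refers to can be illustrated numerically. The sketch below (a generic LoRA-style adapter, not Bud Ecosystem’s implementation; the hidden size and rank are illustrative) shows why localising an open model this way trains only a tiny fraction of the weights:

```python
import numpy as np

# Adapter (LoRA-style) idea: keep the large pretrained weight W frozen
# and train only a small low-rank update A @ B alongside it.
d, r = 4096, 8                     # hidden size and adapter rank (illustrative)
W = np.random.randn(d, d)          # frozen pretrained weight
A = np.zeros((d, r))               # trainable adapter factor (zero-initialised,
B = np.random.randn(r, d)          # so the adapted layer starts identical to the base)

def adapted_forward(x):
    # Base projection plus the low-rank correction learned for the new language.
    return x @ W + x @ (A @ B)

full_params = d * d                # parameters a full fine-tune would update
adapter_params = d * r + r * d     # parameters the adapter actually trains
print(f"trainable fraction: {adapter_params / full_params:.4%}")  # ~0.39% per layer
```

Because only A and B are updated, a separate lightweight adapter can be trained per Indian language on modest hardware while the base model is shared.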
How does this strategy ensure inclusivity in AI access?
Initiatives such as Bhashini are building open language datasets, and we can build on that by training culturally aware, multilingual models using adapter-based fine-tuning. AI must work for a farmer in Bihar, a teacher in Tamil Nadu, and a start-up founder in Bengaluru. Frugal innovation makes this possible, with models running on CPUs, mobile phones, and even edge devices like the Raspberry Pi.
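One standard technique behind running models on CPUs and edge devices is post-training quantisation. A minimal sketch of symmetric int8 quantisation (the matrix size is illustrative; this is a generic technique, not a description of any specific deployment):

```python
import numpy as np

# Post-training int8 quantisation: store weights as 8-bit integers plus
# one float scale, cutting memory 4x versus float32 at a small accuracy cost.
w = np.random.randn(256, 256).astype(np.float32)   # a float32 weight block
scale = np.abs(w).max() / 127.0                    # symmetric per-tensor scale
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale               # dequantised approximation

print(f"memory: {w.nbytes} -> {q.nbytes} bytes")   # 4x smaller
print(f"max abs error: {np.abs(w - w_hat).max():.4f}")
```

The 4x memory reduction (and faster integer arithmetic on many CPUs and NPUs) is what makes inference feasible on phones and boards like the Raspberry Pi.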
Do you think this holds a mirror to the Global South?
What we build can be a blueprint for other developing nations. Our framework outlines how to deploy AI using hybrid infrastructure across CPUs, GPUs, NPUs, and edge environments. Hardware scarcity is a design constraint, not a limitation. By optimising everything from tokenisation to pre-training, we can turn fragmented data and low resources into a strength. Software is a great equaliser; history shows as much. It will be no different with AI.
So you’re saying India can lead here, but differently?
Exactly. We do it on our own terms – scalable frugality, cultural alignment, and software-driven innovation. If we do this right, India won’t just achieve AI sovereignty, but also offer a roadmap to others.
Published on July 30, 2025