IIIT Hyderabad researcher wins grant to develop TVARAK-AI, a scalable heterogeneous AI accelerator chip.
The TVARAK-AI proposal highlights India's strategic push toward sovereign silicon in a compute-constrained global market. By focusing on a heterogeneous architecture optimized for sustainable AI inference, this project targets power efficiency over brute-force training performance. If successfully taped out, it represents a critical technical stepping stone for India's nascent domestic semiconductor ecosystem.
What Happened
Priyesh Shukla, a researcher at IIIT Hyderabad’s Center for VLSI and Embedded Systems Technology (CVEST), has secured a grant to develop an indigenous AI chip. The project, titled "TVARAK-AI©" (derived from the Sanskrit word for 'accelerate'), aims to build a scalable, heterogeneous accelerator chip specifically designed for the sustainable inference of next-generation AI models.

Technical Details
While specific fabrication nodes and exact architectural block diagrams are yet to be detailed, the classification of TVARAK-AI as a "scalable and heterogeneous" accelerator provides strong engineering clues. Heterogeneous computing architectures combine different types of processing elements, such as custom neural processing units (NPUs), scalar cores, and vector engines, on a single System-on-Chip (SoC) to handle varying tensor operations efficiently.

The explicit focus on "sustainable inference" indicates an architectural priority on performance per watt (TOPS/W) rather than absolute peak TOPS. This likely involves optimizing dataflow to minimize SRAM/DRAM data movement, which is typically the largest power draw in AI inference. The "scalable" aspect suggests a tile-based or chiplet-ready architecture that can be scaled down for edge devices or scaled up for data-center inference racks.
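To see why dataflow optimization dominates inference power, consider a back-of-envelope energy model. The sketch below is purely illustrative: the per-operation energy figures are assumed, order-of-magnitude values of the kind commonly cited for planar CMOS nodes (off-chip DRAM access costing roughly 100x more than an on-chip SRAM access, which in turn costs more than a single multiply-accumulate). None of these numbers describe TVARAK-AI itself, whose specifications are not public.

```python
# Illustrative energy model for one neural-network layer.
# All constants are assumed, order-of-magnitude values in picojoules (pJ);
# they do not reflect any published TVARAK-AI specification.

PJ_PER_MAC = 1.0          # assumed energy of one 8-bit multiply-accumulate
PJ_PER_SRAM_BYTE = 5.0    # assumed on-chip SRAM access energy per byte
PJ_PER_DRAM_BYTE = 640.0  # assumed off-chip DRAM access energy per byte

def layer_energy_pj(macs: int, bytes_moved: int, from_dram: bool) -> float:
    """Total layer energy: arithmetic plus data movement."""
    mem_cost = PJ_PER_DRAM_BYTE if from_dram else PJ_PER_SRAM_BYTE
    return macs * PJ_PER_MAC + bytes_moved * mem_cost

# Same hypothetical layer (1M MACs, 100 KB of weights/activations),
# first streaming tensors from DRAM, then keeping them SRAM-resident
# via a tiled dataflow:
naive = layer_energy_pj(1_000_000, 100_000, from_dram=True)
tiled = layer_energy_pj(1_000_000, 100_000, from_dram=False)
print(f"DRAM-bound: {naive / 1e6:.1f} uJ")   # → 65.0 uJ
print(f"SRAM-resident: {tiled / 1e6:.1f} uJ")  # → 1.5 uJ
```

Under these assumed constants, identical arithmetic costs over 40x more energy when operands stream from DRAM, which is the intuition behind tiling and weight-stationary dataflows: the compute is cheap, the movement is not.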