15-17 December 2025, Jaipur, India
| Speaker and Designation | Title | Date | Time |
|---|---|---|---|
| Jatin Chakravarti (Senior DFT Engineer at eInfochips) & Chintan Panchal (Fellow at eInfochips) | DFT implementation at RTL stage: Methodology to enhance SoC Testability and reduce Time-to-Market | 13-Dec-2025 | Morning |
| Prof. Subir Kumar Roy (IIIT Bangalore) and Prof. Kusum Lata (LNMIIT Jaipur) | Model-Based Hardware-Software Co-Synthesis of Embedded Systems | 13-Dec-2025 | Afternoon |
| Prof. Virendra Singh (Indian Institute of Technology Bombay (IITB)) | Secure and High-Performance Designs/Architectures | 14-Dec-2025 | Morning |
| Prof. Sneh Saurabh (Indraprastha Institute of Information Technology, Delhi) | Demystifying Static Timing Analysis: From Fundamentals to Advanced Practices | 14-Dec-2025 | Afternoon |
| Bibhay Ranjan (Senior System Software Engineer, Nvidia) | Stack Optimization for High-Speed Networks | 14-Dec-2025 | Afternoon |
Abstract: In the VLSI design industry, Design-for-Testability (DFT) is still predominantly implemented at the gate-level stage in most projects. The DFT design cycle has not changed much over the years: DFT hardware is inserted at the gate level, followed by synthesis, test-pattern generation, and simulation. This conventional method requires engineers to complete multiple design steps before any design rule check (DRC) violations are detected, late in the development cycle. When such errors arise, the entire flow must be re-executed, delaying the overall design schedule.
The increasing complexity of modern integrated circuits demands that DFT be implemented at higher abstraction levels within the IC development cycle. Extensive research and publications already support the feasibility of implementing DFT at the RTL stage, and major EDA vendors have developed comprehensive tool flows to enable and streamline this approach. Implementing DFT hardware, i.e., IJTAG, boundary scan (BScan), MBIST, OCC, compression, LBIST, wrappers, and test point insertion, at the RTL stage allows the hardware to be validated early in the development cycle. In this tutorial, we will demonstrate how early-stage DFT planning can reduce test development time, improve test quality, and address challenges such as test vector volume and power optimization.
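To make the idea of early testability analysis concrete, here is a minimal, hypothetical sketch (not part of the tutorial material or any vendor flow): it computes SCOAP-style controllability values for a toy gate-level netlist and flags hard-to-control nodes as candidates for test point insertion. The netlist, gate types, and the flagging threshold are arbitrary illustrative choices.

```python
# Simplified SCOAP-style controllability sketch (illustrative only; real
# RTL/gate-level testability tools are far more sophisticated).

# Toy netlist: node -> (gate type, list of driving nodes).
NETLIST = {
    "n1": ("INPUT", []),
    "n2": ("INPUT", []),
    "n3": ("AND", ["n1", "n2"]),
    "n4": ("OR",  ["n3", "n2"]),
    "n5": ("AND", ["n3", "n4"]),
}

def controllability(netlist):
    """Return (CC0, CC1) per node: the effort to drive the node to 0 or 1."""
    cc = {}

    def solve(node):
        if node in cc:
            return cc[node]
        gate, ins = netlist[node]
        if gate == "INPUT":
            cc[node] = (1, 1)
        else:
            c = [solve(i) for i in ins]
            if gate == "AND":
                # 0 needs any one input at 0; 1 needs all inputs at 1.
                cc[node] = (min(c0 for c0, _ in c) + 1,
                            sum(c1 for _, c1 in c) + 1)
            elif gate == "OR":
                # 1 needs any one input at 1; 0 needs all inputs at 0.
                cc[node] = (sum(c0 for c0, _ in c) + 1,
                            min(c1 for _, c1 in c) + 1)
        return cc[node]

    for node in netlist:
        solve(node)
    return cc

if __name__ == "__main__":
    for node, (cc0, cc1) in controllability(NETLIST).items():
        flag = "  <- possible test point" if max(cc0, cc1) > 3 else ""
        print(f"{node}: CC0={cc0}, CC1={cc1}{flag}")
```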
Senior DFT Engineer, eInfochips, India
Fellow, eInfochips, India
Jatin Chakravarti: Jatin Chakravarti is currently working as a DFT Senior Engineer (L1) at eInfochips (An Arrow Company), Ahmedabad.
His technical skills include RTL testability analysis, scan insertion, compression, MBIST, synthesis, ATPG, block- and top-level simulations (timing: SDF), and pattern retargeting. He has experience with CAD tools such as TestMAX Advisor (SpyGlass), Design Compiler, DFTMAX, TetraMAX, Fusion Compiler, Formality, Tessent TestKompress, VCS, Verdi, and QuestaSim.
Chintan Panchal: Chintan Panchal is currently a Fellow at eInfochips, with 23 years of experience in the VLSI/semiconductor ASIC industry.
He has hands-on experience across the full ASIC development life cycle, including RTL design, design verification (DV), and DFT, and has worked on projects at 3 nm, 5 nm, 7 nm, 16 nm, 28 nm, and other technology nodes for consumer, networking, processor, IP, and server applications.
He has extensive experience in the ASIC DFT domain and has worked on multiple DFT projects. His responsibilities have mainly included defining project schedules and manpower estimates, DFT planning and implementation, test vector generation (TVG) and validation, IP DFT insertion and validation, silicon bring-up support, silicon failure diagnosis, and yield analysis and improvement. He has worked with multiple clients on several ASIC/SoC/IP DFT projects across different geographies, ensuring successful delivery and meeting customer expectations.
He has also executed several projects on verification environment architecture for ASICs and ARM-based SoCs, and has been responsible for verification, RTL and gate-level simulation, and CPF-based simulation for multimillion-gate ARM-based SoCs in the telecom and networking domains. He has successfully delivered projects both on-site at client locations and from ODC/offshore centres.
Abstract: Embedded systems for reactive real-time applications are typically implemented as hybrid hardware-software systems, based on microcontrollers/microprocessors, digital signal processors, and specialized hardware accelerator blocks. Software is used to impart flexibility and implement different features, while hardware is used to meet stringent performance requirements. Many different types of constraints, such as performance, power, robustness & reliability, weight, and cost, are major driving factors in the design and implementation of embedded systems.
While some effort is currently being carried out in various academic research communities to formalize the design process of embedded systems, much of the design, implementation, and verification in the industrial context is driven by ad-hoc approaches, where the overall embedded system specification is often captured in natural language, rendering it ambiguous and, many times, incomplete. This can lead to several problems, such as sub-optimal design, difficulty in re-design due to any specification revision, and an expensive verification effort.
In this tutorial, we highlight the need to adopt a methodology based on formal specification, automatic synthesis, and validation of embedded systems, carrying out the design in a unified framework that supports a unified hardware-software representation, unbiased towards either a hardware or a software implementation. As an example, we show how formal approaches can ease the specification, synthesis, verification, and validation of applications running on such embedded systems. The tutorial aims to provide an understanding of system-level design approaches that use system abstractions and modeling, through system-level design languages and design automation tools, to arrive at functionally correct and cost-optimal implementations of complex embedded systems.
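To illustrate what an implementation-neutral specification can look like (a deliberately tiny, hypothetical example, not the specific formalism or tools covered in the tutorial), the sketch below captures a reactive controller as a declarative state-machine table; the same table could drive a software interpreter, as shown, or be translated mechanically to RTL.

```python
# A toy, implementation-neutral specification of a reactive controller,
# captured as a declarative Mealy-machine table (hypothetical example).
# The table itself is unbiased towards hardware or software: it could be
# interpreted in software, as below, or translated mechanically to RTL.

TRAFFIC_LIGHT = {
    # (state, event)             -> (next state, output)
    ("GREEN",  "timer_expired"): ("YELLOW", "yellow_on"),
    ("YELLOW", "timer_expired"): ("RED",    "red_on"),
    ("RED",    "timer_expired"): ("GREEN",  "green_on"),
}

def step(spec, state, event):
    """One synchronous reaction: unknown events leave the state unchanged."""
    return spec.get((state, event), (state, None))

if __name__ == "__main__":
    state = "RED"
    for _ in range(4):
        state, output = step(TRAFFIC_LIGHT, state, "timer_expired")
        print(state, output)
```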
IIIT Bangalore
LNMIIT Jaipur
Subir Kumar Roy: Subir Kumar Roy received his B.E., M.Tech., and Ph.D. degrees from the University of Pune, IIT Madras, and IIT Bombay in 1982, 1984, and 1993, respectively. Prior to 1993 he worked at Semiconductor Complex Limited, Chandigarh, and at the VLSI Design Centre, Department of Computer Science and Engineering, IIT Bombay. From 1993 to 2001 he was on the faculty of Electrical Engineering at IIT Kanpur. From 2001 to 2003 he was with Synplicity Inc., Sunnyvale, USA, and Bangalore. From April 2004 to January 2013 he was with the Center of Excellence, System on Chip, Texas Instruments India, Bangalore. He spent two years, from 1998 to 2000, carrying out research on formal verification at Fujitsu Laboratories Limited, Kawasaki, Japan, while on sabbatical from IIT Kanpur. His research interests are in hardware formal verification, power estimation, performance analysis, CAD for VLSI, and embedded systems.
Kusum Lata: Kusum Lata, a Senior Member of IEEE and ACM, holds a master’s degree from IIT Roorkee (2003) and a Ph.D. from IISc Bangalore (2010). During her Ph.D., she interned at Intel India Pvt Ltd, receiving the "Spontaneous Recognition Award." She served as a Lecturer at IIIT-A for three years and currently works as a Professor at LNMIIT, Jaipur. Previously, she was an Associate and Assistant Professor at LNMIIT. She received the "Outstanding Research Paper Award" at ASQED-2009. Her research focuses on Digital System Design using FPGAs, Design for Testability, Hardware Security, Hardware Implementation of Cryptographic Algorithms, Hardware Implementation of Deep Learning Algorithms, and hardware accelerators for AI.
Abstract: This tutorial provides a comprehensive overview of the challenges of designing modern computing systems to be simultaneously high-performance (low latency, high throughput) and secure (confidentiality, integrity, availability). The fundamental conflict, known as the Performance-Security Trade-Off, arises because traditional security measures, such as encryption and integrity checks, inevitably introduce significant overhead in terms of latency, power consumption, and area. We begin by defining the demands of High-Performance Computing (HPC), which relies on massive parallelism, heterogeneous resources (CPUs, GPUs), and ultra-fast interconnects, and contrast them with the critical requirements of system security (the CIA Triad).
The necessity for specialized secure architectures is driven by the unique and expanded attack surface presented by large-scale HPC systems. These systems are vulnerable not just at the software layer but also at the hardware level, facing threats such as Side-Channel Attacks (SCA), where timing or power consumption leaks sensitive data (e.g., Spectre/Meltdown), and risks within the supply chain, such as hardware Trojans. Furthermore, the sheer scale introduces challenges in maintaining data integrity and confidentiality for petabyte-scale parallel storage and high-speed data transmission over interconnects, where performance often dictates the disabling of traditional security features.
The core of the tutorial focuses on Co-Design Solutions that architecturally embed security to minimize performance impact. Key among these are Hardware-Assisted Security (HwAS) mechanisms like Trusted Execution Environments (TEE), such as Intel SGX and ARM TrustZone. These create isolated enclaves to protect code and data from compromised operating systems or hypervisors, offering strong security guarantees while striving to manage associated performance costs. A high-performance strategy involves using dedicated hardware accelerators like SmartNICs or DPUs (Data Processing Units) to offload security tasks (like encryption and intrusion detection) from the main CPUs, thereby preserving the computational throughput for the primary application workload.
Looking toward the future, the tutorial explores advanced topics essential for modern deployments, including securing multi-tenant Cloud HPC environments using Zero Trust models and fine-grained Micro-segmentation. We also discuss the increasingly vital role of Artificial Intelligence and Machine Learning in system security, where AI is used to power real-time Intrusion Detection Systems (IDS) by analyzing system logs and performance telemetry for subtle anomalies that could signify an attack or a security breach. Finally, we address the challenge of future-proofing systems against emerging threats, specifically the architectural implications and performance burdens of supporting computationally intensive Post-Quantum Cryptography (PQC) algorithms.
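As a small, self-contained illustration of the kind of timing side channel mentioned above, and of the modest cost of defending against it (a generic Python sketch, not tied to any platform or tool discussed in the tutorial; the secret value and function names are made up for the example), the code below contrasts an early-exit secret comparison, whose running time reveals how many leading bytes of a guess are correct, with a constant-time comparison.

```python
# Illustrative timing side channel (generic example): an early-exit
# comparison leaks how many leading bytes of the secret match, while a
# constant-time comparison does not (at a small, fixed cost).
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    # Returns as soon as a mismatch is found: running time depends on the
    # length of the matching prefix, which an attacker can measure.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # Always examines every byte, so timing is independent of the data.
    return hmac.compare_digest(secret, guess)

if __name__ == "__main__":
    secret = b"correct horse battery staple"
    print(naive_compare(secret, b"correct horse battery staple"))
    print(constant_time_compare(secret, b"wrong guess entirely........"))
```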
Indian Institute of Technology Bombay (IITB), India
Virendra Singh is currently a professor of Electrical and Computer Science at IIT Bombay. He received the B.E. and M.E. degrees in electronics and communication engineering from the Malaviya National Institute of Technology (MNIT), Jaipur, India, in 1994 and 1996, respectively, and the Ph.D. degree in computer science and engineering from the Nara Institute of Science and Technology (NAIST), Nara, Japan, in 2005.
Prior to joining IIT Bombay, he was a Faculty Member at the Supercomputer Education and Research Centre (SERC), Indian Institute of Science (IISc), Bengaluru, India, from 2007 to 2011. His research interests include high-performance computer architecture, testing and verification of high-performance processors, fault-tolerant computing, VLSI testing, design for test, formal verification of hardware designs, embedded system design, design for reliability, and CAD of VLSI Systems.
His broader interests also span cyber security, cyber-physical systems, computer architecture, formal methods, VLSI testing and verification, reinforcement learning, adversarial learning, security of AI-based systems, blockchain technology, and hybrid quantum-HPC systems. He is an adjunct professor at various IITs and NITs. He is associated with the following roles:
- Research Lab: Computer Architecture and Dependable Systems Lab (CADSL)
- Coordinator: Indo-Japanese Joint Laboratory for Intelligent Dependable Cyber Physical Systems (IDCPS)
- Coordinator: Information Security Research and Development Centre (ISRDC)
- Associated Lab: Centre of Excellence for Blockchain Research
- Affiliation: Centre for Machine Intelligence and Data Science
Abstract: Static Timing Analysis (STA) is a cornerstone of modern digital VLSI design, ensuring that chips meet performance and reliability requirements before fabrication. This tutorial, delivered by Prof. Sneh Saurabh (IIIT Delhi), aims to demystify STA by bridging fundamental principles with advanced industry practices. The session begins with an intuitive exploration of timing concepts—setup, hold, clock skew, jitter, and timing windows—followed by a practical walkthrough of timing paths, constraints, and delay modeling. Participants will learn how STA tools analyze large-scale designs without requiring simulation, and how timing reports are interpreted for closure. The tutorial will further highlight real-world challenges such as multi-clock domains, PVT variations, false paths, multi-cycle paths, and optimization strategies. This comprehensive session is designed to benefit students, researchers, and engineers seeking a strong conceptual and practical grasp of STA.
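As a back-of-the-envelope companion to the setup/hold discussion (a simplified single-path model with hypothetical delay numbers, not how an STA tool is implemented), the sketch below computes setup and hold slack for one register-to-register path and shows where the clock period, skew, and library setup/hold times enter the checks.

```python
# Simplified single-path setup/hold slack model (hypothetical numbers;
# real STA tools handle derating, OCV, CRPR, and far more detail).

def setup_slack(clock_period, launch_clk, capture_clk,
                clk_to_q, comb_delay_max, setup_time):
    # Data must arrive before the next capture edge minus the setup window.
    data_arrival  = launch_clk + clk_to_q + comb_delay_max
    data_required = capture_clk + clock_period - setup_time
    return data_required - data_arrival

def hold_slack(launch_clk, capture_clk, clk_to_q, comb_delay_min, hold_time):
    # Data must stay stable past the same-cycle capture edge plus hold time.
    data_arrival  = launch_clk + clk_to_q + comb_delay_min
    data_required = capture_clk + hold_time
    return data_arrival - data_required

if __name__ == "__main__":
    # Example: 1 GHz clock, 0.1 ns skew between launch and capture (in ns).
    print("setup slack:", setup_slack(1.0, 0.0, 0.1, 0.15, 0.55, 0.05))
    print("hold slack: ", hold_slack(0.0, 0.1, 0.15, 0.20, 0.03))
```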
Indraprastha Institute of Information Technology, Delhi
Sneh Saurabh obtained his Ph.D. from IIT Delhi in 2012 and his B.Tech. (EE) from IIT Kharagpur in 2000. He has rich experience in the semiconductor industry, having spent 16 years working for industry leaders such as Cadence Design Systems, Synopsys India, and Magma Design Automation. He has been involved in developing some of the well-established industry-standard EDA tools for clock synchronization, constraints management, STA, formal verification, and physical design.
He has been teaching semiconductor-specific courses at IIIT Delhi since 2016. His teaching has consistently been rated excellent by students, and he has received the Teaching Excellence Award for seven consecutive semesters at IIITD. His current research interests are in the areas of VLSI Design and Automation, Energy-Efficient Systems, and Stochastic Computational Frameworks.
He is the author of the books “Introduction to VLSI Design Flow” and “Fundamentals of Tunnel Field-Effect Transistors” and holds three US patents. He is an Editor (IETE Technical Review), an Associate Editor (IEEE Access), a Review Editor (Frontiers in Electronics Integrated Circuits and VLSI), and a Senior Member of IEEE.
Abstract: The growing demand for high-speed networks in domains such as AI data processing, cloud computing, and data centers has amplified the need for optimized software stacks. Traditional network stacks often struggle to meet low-latency and high-throughput requirements. This lecture explores design principles and optimization strategies for achieving efficient data flow across network layers. Key focus areas include protocol tuning, zero-copy mechanisms, hardware offloading, and concurrency management. By addressing these bottlenecks, students will gain insight into how system-level optimization directly influences end-to-end network performance in next-generation communication infrastructures.
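To make one of the techniques named above, zero-copy data movement, concrete (a generic, standalone sketch with a hypothetical file name and port, not a description of any specific vendor stack), the example below contrasts a user-space read/send loop, which copies every chunk through a user buffer, with socket.sendfile(), which lets the kernel move file data to the socket directly.

```python
# Zero-copy illustration (generic sketch): serving a file over TCP with a
# user-space copy loop vs. socket.sendfile(), which uses the kernel's
# sendfile path and avoids copying data through user space.
import socket

def send_with_copies(conn: socket.socket, path: str, chunk: int = 64 * 1024):
    # Each chunk is copied kernel -> user buffer -> kernel socket buffer.
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            conn.sendall(data)

def send_zero_copy(conn: socket.socket, path: str):
    # socket.sendfile() offloads the transfer to the kernel where possible.
    with open(path, "rb") as f:
        conn.sendfile(f)

if __name__ == "__main__":
    # Blocks until a client connects; "payload.bin" is a hypothetical file.
    srv = socket.create_server(("127.0.0.1", 9000))
    conn, _ = srv.accept()
    send_zero_copy(conn, "payload.bin")
    conn.close()
    srv.close()
```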
Senior System Software Engineer, Nvidia, Bengaluru, India
Bibhay Ranjan is continuously seeking technical growth and its applications. He currently works at Nvidia India on Wi-Fi and GPS software development for the Android OS, covering development and debugging across the Android Wi-Fi stack: sniffer capture, the 802.11 protocol, vendor-supplied closed-source firmware, drivers, the network kernel stack, wpa_supplicant, frameworks, and Android apps.