Building AI Models and Explainable AI in Vision Applications
Nationwide Capacity Building for Educators and Engineers – NITTTR, Chandigarh
In the rapidly evolving field of computer vision, artificial intelligence (AI) is revolutionizing how machines interpret visual data. It was a privilege to deliver two interactive sessions on practical AI tools and techniques for vision applications as part of the Faculty Development Programme (FDP) on 'Computer Vision Applications using OpenCV', held at the Department of CSE, NITTTR Chandigarh, from May 26 to 30, 2025.
Session 1: Building AI Models with Low-Code Platforms for Vision Applications
AI development is often seen as complex and code-intensive. However, low-code platforms such as Google Teachable Machine and Roboflow are changing the landscape by enabling educators and researchers to build AI models quickly with minimal coding. In this session, we explored these platforms through a compelling real-world example: fall detection in elderly patients, a critical healthcare application.
Participants learned how these platforms simplify data collection, model training, and deployment for vision tasks, making AI accessible and accelerating innovation in healthcare, security, and beyond.
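For those who want to take an exported model beyond the browser, here is a minimal sketch of running local inference with OpenCV. It assumes a two-class fall/no-fall image model exported from Teachable Machine in its TensorFlow/Keras format; keras_model.h5 and labels.txt are the file names that export convention uses, and the 224x224, [-1, 1] preprocessing follows Teachable Machine's own sample code. Treat it as an illustration, not session material.

```python
# Minimal sketch: classify a webcam frame with a Teachable Machine-exported
# Keras model. Assumes keras_model.h5 and labels.txt from the export
# (labels.txt lines look like "0 Fall").
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
class_names = [line.strip() for line in open("labels.txt")]

cap = cv2.VideoCapture(0)   # default webcam; pass a video path for recordings
ret, frame = cap.read()
cap.release()

if ret:
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)       # OpenCV frames are BGR
    img = cv2.resize(img, (224, 224)).astype(np.float32)
    img = (img / 127.5) - 1.0                          # scale to [-1, 1]
    probs = model.predict(img[np.newaxis, ...])[0]
    print(class_names[int(np.argmax(probs))], float(probs.max()))
```

Roboflow offers a comparable path, letting trained models be exported or called through its hosted inference API.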
Session 2: Explainable AI (XAI) in Vision Applications
Understanding how AI models make decisions is crucial for trust and transparency. The second session introduced the what, why, and how of Explainable AI (XAI), focusing on interpretability in vision systems. Using the fall detection model from the first session as a case study, we demonstrated popular XAI tools such as LIME and Grad-CAM, which explain model predictions visually and intuitively.
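To make Grad-CAM concrete, here is a minimal sketch for a Keras CNN classifier such as the model above. It computes the gradient of the class score with respect to the last convolutional feature maps, averages those gradients into channel weights, and sums the weighted maps into a heatmap. The layer name "conv_last" is a placeholder; inspect model.summary() to find your model's final convolutional layer.

```python
# Minimal Grad-CAM sketch for a Keras CNN. `image` must be preprocessed
# exactly as for prediction (e.g., 224x224, scaled to [-1, 1]).
import cv2
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    # Model mapping the input to (last conv activations, predictions).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    # Average gradients over spatial positions to weight each channel.
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam).numpy()          # keep only positive evidence
    cam = cam / (cam.max() + 1e-8)         # normalize to [0, 1]
    return cv2.resize(cam, (image.shape[1], image.shape[0]))

# heatmap = grad_cam(model, img, "conv_last")
# overlay = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET)
```

LIME takes a complementary, perturbation-based route: it hides groups of superpixels and fits a local surrogate model to see which regions drive the prediction. A minimal sketch with the lime package (pip install lime), again assuming the preprocessed 224x224 image and model from the sketches above:

```python
# Minimal LIME sketch for the same classifier. LIME's segmenter expects
# pixel values in [0, 1], so we rescale and map back for the model.
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    (img + 1.0) / 2.0,                        # rescale [-1, 1] -> [0, 1]
    lambda x: model.predict(x * 2.0 - 1.0),   # map back to the model's range
    top_labels=1,
    num_samples=1000,
)
lime_img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
# mark_boundaries(lime_img, mask) highlights the influential superpixels.
```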
We also discussed which low-code platforms support XAI features, empowering users to not only build AI models but also ensure their decisions are interpretable — an essential factor in sensitive domains such as healthcare.
These sessions offered hands-on experience and conceptual insights, enabling participants to leverage low-code AI tools and interpretability techniques confidently. The keen interest and thoughtful questions from attendees were truly inspiring.
I sincerely thank NITTTR Chandigarh, the Director, faculty, and especially Dr. Amit Doegar, along with all participants, for their enthusiastic engagement and support.