Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
Deep Learning (DL) has evolved rapidly over the last few years and has completely changed the landscape of machine learning research. This post traces the history of the development of DL architectures.
Published in IEEE Xplore, 2018
This paper uses a data-driven approach to forecast very short-term, short-term, and mid-term electrical load in academic buildings.
Published in Springer, Singapore, 2018
This paper uses machine learning to mitigate the effect of communication link failures by predicting the missing parameters with various ML algorithms.
Published in AAAI-22, 2021
This paper delves into the theory of knowledge distillation in a teacher-student network pair and discusses ways to perform better distillation to the student network.
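For context, here is a minimal sketch of the standard teacher-student distillation loss (a temperature-softened KL term blended with cross-entropy, after Hinton et al.) that work in this area typically builds on. The temperature `T` and weight `alpha` below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation loss: KL between temperature-softened
    teacher and student distributions, blended with ordinary cross-entropy
    on the ground-truth labels. T and alpha are illustrative choices."""
    # Soft targets: KL(teacher || student) at temperature T, scaled by T^2
    # to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```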
Published in ICCBR-23, 2023
This paper explores the integration of CBR systems with DL models.
Published in WACV-24 (accepted), 2023
This paper explores the explicit modeling of geometric constraints in multi-view stereo systems.
Published in ICCBR-24, 2024
This paper explores the integration of CBR systems with pre-trained DL models.
Published in Elsevier, 2024
A survey paper on multi-view 3D reconstruction methods.
Published in International Conference on Case-Based Reasoning (ICCBR), 2025
This paper explores various proxy functions that can be integrated with deep learning frameworks to learn image features better suited to case-based reasoning (CBR) systems.
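To give a flavor of what a proxy function looks like, below is one common proxy-based embedding loss (ProxyNCA-style), where each class is represented by a learnable proxy vector. This is offered only as an illustration of the general idea under my own assumptions; it is not necessarily the formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def proxy_nca_loss(embeddings, labels, proxies):
    """ProxyNCA-style loss: samples are pulled toward their class proxy and
    pushed away from all other proxies. Illustrative sketch only."""
    # L2-normalize embeddings and proxies, a common design choice.
    x = F.normalize(embeddings, dim=1)   # (B, D)
    p = F.normalize(proxies, dim=1)      # (C, D)
    # Squared Euclidean distance from each sample to every proxy.
    dists = torch.cdist(x, p) ** 2       # (B, C)
    # Treat negative distances as logits: maximize the probability
    # assigned to each sample's own class proxy.
    return F.cross_entropy(-dists, labels)
```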
Published in ACM Surveys (accepted), 2025
This paper explores the explicit modeling of geometric constraints in deep learning frameworks, centered on depth estimation and stereo problems.
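As background for what an explicit geometric constraint means here, the sketch below shows the basic reprojection relation that depth-estimation and multi-view stereo methods typically enforce: a pixel with a hypothesized depth in one view must land at a geometrically consistent location in another view. The function name and arguments are illustrative, not from the paper.

```python
import numpy as np

def reproject(u, v, d, K_src, K_tgt, R, t):
    """Warp pixel (u, v) with hypothesized depth d from a source view into a
    target view. K_src, K_tgt are 3x3 intrinsics; (R, t) maps source-camera
    coordinates to target-camera coordinates."""
    # Back-project the pixel to a 3D point at depth d in the source camera.
    p_src = d * (np.linalg.inv(K_src) @ np.array([u, v, 1.0]))
    # Rigidly transform the point into the target camera frame.
    p_tgt = R @ p_src + t
    # Project into the target image and dehomogenize.
    uvw = K_tgt @ p_tgt
    return uvw[:2] / uvw[2]
```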
Graduate Level Course, Indiana University Bloomington, Computer Science Department, 2022
In spring 2022, I designed and taught the first iteration of the ‘Computer Vision - paper discussion section’ at Indiana University. The course was designed to explore the seminal papers on different architectures in deep learning and computer vision. It traced the history of the development of convolutional neural networks (LeNet, AlexNet, GoogLeNet, ResNets, Wide ResNet, Stochastic ResNet, ResNeXt, DenseNet, ConvMixer, Xception Net, etc.), multi-layer perceptron-based networks (MLP Mixer, ResMLP, CycleMLP, and the S^2 MLP model), transformer-based networks for vision (Vision Transformer, DeiT, Swin Transformer, LocalViT, CvT, etc.), and other important architectures such as SAN (Self-Attention Network) and shift-based networks. The class met weekly, with more than 130 students taking part in the discussion. Check the discussion schedule and slides for more details.
Graduate Level Course, Indiana University Bloomington, Computer Science Department, 2023
In spring 2023, I taught the second iteration of the ‘Computer Vision - discussion section’ at Indiana University, covering a list of seminal papers on deep learning (DL) architectures. I designed the first iteration of this discussion section in Spring 2022 to develop an intuitive as well as mathematical understanding of the major concepts in DL. Over 16 weeks, the course covers the major architectures in three broad sections: convolution-based networks (CNNs), multi-layer perceptron-based networks (MLPs), and transformer-based networks (ViTs). In CNNs, we cover models like LeNet, AlexNet, GoogLeNet, ResNets, Wide ResNet, Stochastic ResNet, ResNeXt, DenseNet, ConvMixer, and Xception Net. In MLPs, we cover models like MLP Mixer, ResMLP, CycleMLP, and the S^2 MLP model; in ViTs, we cover models like Vision Transformer, DeiT, Swin Transformer, LocalViT, and Convolutional ViT. We also explore shift-operation-based networks. The class meets weekly, with more than 150 students participating in the discussion. The list of papers covered in the discussion is below.
Graduate Level Course (B657), Indiana University Bloomington, Computer Science Department -- Luddy SICE, 2025
In spring 2024 and 2025, I taught the third and fourth iterations of the ‘Computer Vision - discussion section’ at Indiana University, covering a list of seminal papers on deep learning (DL) architectures. The course follows the same structure as the Spring 2023 iteration described above: over 16 weeks, it covers convolution-based networks (CNNs), multi-layer perceptron-based networks (MLPs), and transformer-based networks (ViTs), along with shift-operation-based networks. The class meets weekly, with more than 150 students participating in the discussion. The list of papers covered in the discussion is below.