Hi there! I'm Amarnath

I am a Research Associate at the IMXD Lab, IIT Bombay, and an intern at the Empathic Computing Lab, University of South Australia. My research currently focuses on narratives, education and novel interaction techniques in XR. Moving forward, I hope to focus on digital humans and graphics in immersive environments. Previously, I was a student ambassador for Unity Technologies and a team lead for the Extended Reality research group at Next Tech Lab.

Featured Projects

  • Graphics Programming

    Mar 2020 - ongoing

    Raytracing, Shaders and Interactive graphics projects

  • Cinévoqué

    June 2018 - ongoing

    A passively responsive real-time framework for live-action Cinematic VR

  • ScholAR

    June 2018 - ongoing

    Research project that aims to make affordable AR-based educational solutions for underserved kids in rural India


Cinévoqué

Background

This is an ongoing project that has been in the works since the summer of 2018. It started as part of my internship under the guidance of Prof. Jayesh Pillai, in collaboration with my colleague and filmmaker, Amal Dev. We were initially exploring the idea of dynamically altering the visuals of a VR film in areas beyond the viewer's field of view. This approach led to the current framework, Cinévoqué, whose name is a portmanteau of Cinema and Evoke, reflecting the framework's ability to evoke a narrative that corresponds to the viewer's gaze behavior over the course of the movie.

Introduction

Virtual Reality as a medium of storytelling is less developed than traditional film. The viewer is empowered to change the framing in VR, and they may not follow all the points of interest intended by the storyteller. As a result, the filmmaking and storytelling practices of traditional films do not directly translate. Researchers and filmmakers have studied how people watch VR films and have suggested guidelines to nudge the viewer to follow along with the story. However, the viewers are still in control, and they can consciously choose to rebel against such nudges. Accounting for this, and taking advantage of the affordances of VR, Cinévoqué alters the narrative shown to viewers based on the events of the movie they have followed or missed. Furthermore, it estimates their interest in particular events by measuring the time they spend gazing at them, and shows them an appropriate storyline. Consequently, the experience doesn't have to be interrupted for the viewer to make conscious choices about storyline branching, unlike existing interactive VR films.
This project is being built as a plugin for Unity 3D that filmmakers could use to create responsive live-action immersive films. We chose to focus on live-action film over real-time 3D movies because passively responsive narratives have previously been explored in the context of games and interactive experiences. Additionally, the technical implementation is more novel for live-action films, as the content (videos) cannot be changed dynamically like real-time rendered scenes. Using a game engine such as Unity to power live-action Cinematic VR brings forth features that weren't implementable before; for example, we could add a virtual body that orients to the viewer's physical body using rotational data from 6DOF controllers, allowing the viewer to be a more integrated character in the narrative.
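The core idea of passively responsive branching can be illustrated with a minimal sketch. All names and structure below are invented for illustration; this is not the actual Cinévoqué code, which runs inside Unity. The sketch accumulates the time spent gazing at each story event, then branches to the storyline whose event held the viewer's gaze the longest.

```cpp
#include <map>
#include <string>

// Hypothetical sketch of gaze-driven storyline selection: accumulate
// gaze dwell time per story event, then pick the event that held the
// viewer's attention the longest at a branch point.
struct GazeTracker {
    std::map<std::string, double> dwell;  // event name -> seconds gazed

    // Called once per frame with the event currently under the gaze.
    void update(const std::string& gazedEvent, double deltaSeconds) {
        dwell[gazedEvent] += deltaSeconds;
    }

    // At a branch point, pick the event that accumulated the most gaze.
    std::string selectBranch() const {
        std::string best;
        double bestTime = -1.0;
        for (const auto& entry : dwell)
            if (entry.second > bestTime) {
                bestTime = entry.second;
                best = entry.first;
            }
        return best;
    }
};
```

Because the tracker only reads the viewer's natural gaze, the branch decision never interrupts the film with an explicit choice.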

To learn more about the design and implementation of the framework, please refer to my publications. We had the opportunity to present our work at reputed international conferences such as VRST, VRCAI and INTERACT. We have also been invited speakers at national and international events such as SIGCHI Asian Symposium, UNITE India and IndiaHCI.

ScholAR

ScholAR is a research project that is exploring how AR-based educational content can be democratized and deployed in schools. The project is being undertaken as part of Pratiti Sarkar's Ph.D. and is funded by the Tata Center for Technology and Design. I have been developing the AR applications used in the experiments while also assisting in conducting them.

ScholAR for Classrooms

This part of the project focused on scaffolding classroom sessions with AR-based content moderated by the teacher, which we propose provides a more engaging and interactive learning experience than solutions like smartboards. Over the last three years, we have focused on building and testing AR content for maths, specifically geometry. Our experiments explored learning efficacy, collaboration and interactions in rural schools where classes were held using our applications. The apps were also built on pedagogical models to help the students better grasp the concepts. I was responsible for building the apps and further helped with content creation and experiments.

ScholAR for Remote Learning

Due to the COVID-19 pandemic, we started working on a remote learning solution that could extend the work we had done for physical classrooms. I created a prototype centred on a virtual classroom where the teacher controls a shared AR artefact and teaches concepts to students who are spatially present in the virtual space. Visual cues and spatial audio were added to give a better sense of other users' relative positions. Depending on the active artefact, users could place a marker or draw on top of it, and these interactions are reflected for everyone in the session. We conducted preliminary tests with students from our department to better understand the opportunities and challenges that arise as a consequence.

Beyond its educational use case, this project also brought out broader challenges, for example, finding the best avatar representation for mobile AR. With just the positional information of the handheld device, it may not be possible to create avatars with the same accuracy as those in VR or HMD-based systems. I have taken the lead in this work, where we study avatars that vary in both visual and behavioural fidelity. The following image shows the avatar space in our study.
The following video demonstrates both direct and procedural networked avatars in our current prototype.


Graphics Programming

Raytracing

This is my first graphics project. Following the Ray Tracing in One Weekend ebook series, I implemented a ray tracer from scratch in C++. The project served as a useful refresher for the maths used in graphics and contextualized the concepts, and it also helped me learn some new and advanced features of C++. The first book covered implementing vector math functions, rays, ray-sphere intersections, shading, anti-aliasing, a positionable camera and lens blur. The following images show scenes that use all of the features mentioned above.
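As a small illustration of the math the first book covers, here is a standalone sketch of a ray-sphere intersection test (a rewrite for illustration, not code from my project): substituting the ray P(t) = origin + t·dir into the sphere equation yields a quadratic in t, and a positive discriminant means the ray pierces the sphere.

```cpp
#include <cmath>

// Minimal 3D vector for the illustration.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// A ray P(t) = origin + t*dir hits a sphere of radius r centred at c
// when |P(t) - c|^2 = r^2. Expanding gives a quadratic in t; a positive
// discriminant means two real roots, i.e. the ray pierces the sphere.
bool hitSphere(const Vec3& center, double radius,
               const Vec3& origin, const Vec3& dir) {
    Vec3 oc = origin - center;
    double a = dir.dot(dir);
    double b = 2.0 * oc.dot(dir);
    double c = oc.dot(oc) - radius * radius;
    return b * b - 4.0 * a * c > 0.0;
}
```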

The second book covered more advanced topics such as motion blur, bounding volume hierarchies (BVHs), procedural textures, image texture mapping, lights and volumes. Beyond the contents of the book, I implemented multi-threading and non-uniform volumes. The code for the project can be found here.
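Multi-threading suits a ray tracer well because every pixel is independent; a common pattern is to give each thread its own horizontal band of rows. The sketch below shows that pattern in simplified form (illustrative only, not my project code; the "pixels" just record which thread wrote them).

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

// Render rows [rowBegin, rowEnd). Here each "pixel" records the id of
// the thread that wrote it; a real tracer would trace rays per pixel.
void renderBand(std::vector<int>& image, int width,
                int rowBegin, int rowEnd, int threadId) {
    for (int y = rowBegin; y < rowEnd; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = threadId;
}

// Split the image into horizontal bands, one per hardware thread.
void renderParallel(std::vector<int>& image, int width, int height) {
    unsigned hw = std::thread::hardware_concurrency();
    int n = hw ? static_cast<int>(hw) : 1;   // fall back to one thread
    int rowsPerBand = (height + n - 1) / n;  // ceil(height / n)
    std::vector<std::thread> workers;
    for (int t = 0; t < n; ++t) {
        int begin = t * rowsPerBand;
        int end = std::min(height, begin + rowsPerBand);
        if (begin >= end) break;             // no rows left to assign
        workers.emplace_back(renderBand, std::ref(image),
                             width, begin, end, t);
    }
    for (auto& w : workers) w.join();        // wait for all bands
}
```

Since no two bands overlap, the threads never write to the same element and no locking is needed.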

Shaders

After the raytracing projects, I implemented a couple of shaders in Shadertoy. The first was an interactive Mandelbrot set, which served as a refresher for complex math and its uses in graphics. The second shader is an interactive Mandelbulb, which helped me understand raymarching and signed distance functions (SDFs).
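At the heart of a Mandelbrot shader is the escape-time iteration z ← z² + c evaluated per pixel; here is that loop sketched in plain C++ for illustration (the Shadertoy version does the same in GLSL, mapping the iteration count to a colour).

```cpp
#include <complex>

// Escape-time iteration for the Mandelbrot set: starting from z = 0,
// iterate z = z^2 + c and count the steps until |z| exceeds 2. Points
// that never escape within maxIter are treated as inside the set; a
// fragment shader maps this count to a colour for each pixel's c.
int mandelbrotIterations(std::complex<double> c, int maxIter = 100) {
    std::complex<double> z(0.0, 0.0);
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (std::abs(z) > 2.0)
            return i;   // escaped after i iterations
    }
    return maxIter;     // assumed inside the set
}
```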
Additionally, I have implemented custom shaders in Unity as part of my work at the IMXD Lab; these will be added here once they are complete.

Next Steps

Recently I've come across several publicly available graphics courses and have been going through them while completing the assignments. Currently I'm following Prof. Keenan Crane's Computer Graphics course (CMU 15-462/662) and Prof. Cem Yuksel's Interactive Computer Graphics course (CS 5610/6610). The completed assignments of the Interactive CG course are being updated here.

Publications

  • Murugan, Amarnath, Rishi Vanukuru, and Jayesh Pillai. "Towards Avatars for Remote Communication using Mobile Augmented Reality." IEEE VR 2021 Workshop: Virtual Humans and Crowds in Immersive Environments. 2021.

  • Vanukuru, Rishi, Amarnath Murugan, and Jayesh Pillai. "Dual Phone AR: Exploring the use of Phones as Controllers for Mobile Augmented Reality." 26th ACM Symposium on Virtual Reality Software and Technology. 2020.

  • Vanukuru, Rishi, Amarnath Murugan, and Jayesh Pillai. "Dual Phone AR: Using a Second Phone as a Controller for Mobile Augmented Reality." Adjunct Publication of the 33rd Annual ACM Symposium on User Interface Software and Technology. 2020.

  • Sakhardande, Prabodh, Amarnath Murugan, and Jayesh S. Pillai. "Exploring Effect Of Different External Stimuli On Body Association In VR." 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2020.

  • Murugan, Amarnath, Jayesh S. Pillai, and Amal Dev. "Cinévoqué: Development of a Passively Responsive Framework for Seamless Evolution of Experiences in Immersive Live-Action Movies." 25th ACM Symposium on Virtual Reality Software and Technology. ACM, 2019.

  • Pillai, Jayesh S., Amal Dev, and Amarnath Murugan. "Till We Meet Again: A Cinévoqué Experience." The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry. ACM, 2019.

  • Murugan, Amarnath, Ganesh A. Balaji, and R. Rajkumar. "AnatomyMR: A Multi-User Mixed Reality Platform for Medical Education." Journal of Physics: Conference Series. Vol. 1362. No. 1. IOP Publishing, 2019.

  • Pillai, Jayesh S., Amarnath Murugan, and Amal Dev. "Cinévoqué: Design of a Passively Responsive Framework for Seamless Evolution of Experiences in Immersive Live-Action Movies." IFIP Conference on Human-Computer Interaction. Springer, Cham, 2019.

CV

Thank you for your interest in my profile; you can find my CV here.