Individual Reflection - Radoslav

 

Ownership list: 

AR Project:

  • Letters from B to H.
  • Prefab generation script.
  • Flying bee script (with Vatsal), which ended up being broken. :(
  • Basic UI.

VR Project:

  • Initial interactables and later tuning of said interactables.
  • Interactable door.
  • Poke-interactable keypad.
  • Connected the keypad to the door, so it can only be opened after entering the correct combination.
  • Basic UI components.
 
 

Reflection

 
 As a whole, this course has been a fun and engaging experience. I chose it primarily because of Half-Life: Alyx, which I played in 2021 and which sparked my interest in the world of VR. The course provided a good balance of theoretical knowledge and hands-on practice, allowing me to explore both AR and VR development. While I feel it embodies the saying, "It's not about the destination; it's about the journey," I'm not particularly proud of the projects we created. However, I've gained a great deal of insight into Unity, XR technologies, and the vast potential for these fields to grow and evolve.
 
 

AR Project.

 When we were deciding on the topic for the AR project, we frankly had no clue what we were getting ourselves into. The four of us had no previous experience in Unity and no idea how to plan a project. We were clueless about the scope and what was possible in the given time frame. We wanted to make a game, but the requirements prevented us. So we ended up with the next best thing—a children's book. Initially, the idea was to make something 'story related.' Imagine The Little Prince, but with little animations and narrations on each page to make it more engaging for the child. In the end, though, we made a simple book that helps children learn the alphabet. We opted for the simpler book because we felt we couldn't manage everything at once: we had to learn Unity from scratch (none of us had any game development experience), work with the XR Toolkit, handle 3D modeling and animations, and still balance our other courses. So we settled on the alphabet learning experience.
 
 The start of the project was pretty rocky. We spent a solid six hours just trying to set up Unity 6 and version control, and a lot more time reading documentation and forums to get everything working. Then we moved on to setting up image tracking. The idea was pretty simple: a one-image, one-model setup so we could learn the basics and hopefully scale up from there. Vatsal did a pretty good job; he found a tutorial and created a simple dragon that could be moved with a joystick. However, he built it in an old version of Unity, so we had to spend time fixing everything to make it work on Unity 6. In the end, we had to scrap it and remake it from scratch, as there were plenty of issues with Unity itself and the code, which contained many deprecated and even obsolete segments. After we got it working, we split the letters between ourselves.
 
I got the letters B–H. I didn't have any major issues; the job was simple, after all—find a model and link it to the image. I did, however, run into a weird bug: if a bee was showing and I moved the camera to the image of a cloud, the bee wouldn't go inactive. This was fixed by the prefab generation script. The script is very basic: it keeps a hash map of all the prefabs. The prefabs and the images share the same names, so when the image 'bee' is detected, the script finds the matching prefab in the map and sets it active. When you move to the next image—'cloud'—it deactivates the previous prefab and activates the next one. Later on, I added functionality to update the prefab's position in real time as you move the picture; however, this caused some controllable models, such as the bee, to break, so it couldn't fly anymore.
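A minimal sketch of how such a name-lookup script could look, assuming AR Foundation's `ARTrackedImageManager` and its `trackedImagesChanged` event; the class, field, and method names here are my own illustration, not the project's actual code:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class PrefabActivator : MonoBehaviour
{
    [SerializeField] private ARTrackedImageManager trackedImageManager;
    [SerializeField] private List<GameObject> prefabs; // each prefab named like its reference image

    private readonly Dictionary<string, GameObject> prefabByName = new();
    private GameObject current;

    private void Awake()
    {
        // Build the name -> prefab map once, up front.
        foreach (var prefab in prefabs)
            prefabByName[prefab.name] = prefab;
    }

    private void OnEnable() => trackedImageManager.trackedImagesChanged += OnChanged;
    private void OnDisable() => trackedImageManager.trackedImagesChanged -= OnChanged;

    private void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var image in args.added)
            Activate(image.referenceImage.name);
    }

    private void Activate(string imageName)
    {
        if (!prefabByName.TryGetValue(imageName, out var next)) return;
        if (current != null) current.SetActive(false); // e.g. hide the bee
        next.SetActive(true);                          // e.g. show the cloud
        current = next;
    }
}
```

This is the "deactivate previous, activate next" behavior described above; in newer AR Foundation versions the event has been renamed, so the exact subscription may differ.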
 
 After we combined our work, everything seemed to work; however, we ended up getting a very weird OpenGL error. The error didn't break anything, but it was annoying to look at. We did some digging and found forum posts mentioning the error as early as 2018; it was related to the render pipeline. Since it didn't break the application, we just left it.
 
 However, the next roadblock was something we didn't expect. The pictures in our book were very simple—I am talking about a colored background with a letter and a little drawing. So the image tracking was horrible, dare I say nonexistent. The issue lay in the simplicity of the pictures, which made them unsuitable for image tracking: there just weren't enough distinctive features, such as edges, contrasts, patterns, and textures, to identify and track them reliably. So we ended up using AI to extract the letter and the little drawing and swapped the backgrounds for something more 'complex'—unique enough to be recognized and tracked. We probably would have discovered this earlier if we had actually tried using the picture book instead of placeholder images.
 
In the end, despite the challenges and setbacks, we created a working project. It pushed us to learn new tools, troubleshoot unexpected issues, and adapt our ideas to fit within our abilities and timeframe. While what we made was way simpler than our original vision, it was still a valuable and fun learning experience.
 

VR Project.

 
 What I was most excited about was the VR project. I didn't really care what we made; I just wanted to experience and understand how it actually works and what the development of a VR application looks like. We decided to do an escape room. I feel like the escape room is to VR what the to-do application is to the novice developer, but I think that's fine, as it lets you experience what VR is designed for—interacting with a virtual environment.
 
 
In contrast to the AR project, we didn't have many technical difficulties. I think using a stable version of Unity made things easier, plus I had already been through it once.
 
 I worked mainly on the door and the keypad and didn't face many difficulties. The door was a simple hinge joint, with a handle attached via a fixed joint and an XR Grab Interactable. The tricky part was the keypad. From the start, I knew that locking the door was just a matter of toggling the XR Grab Interactable script on and off, so the player cannot interact with it until the code is entered. The hardest part was probably figuring out how to use poke interaction to make each individual button pressable. Luckily, that didn't take long, as it was just a matter of setting up the XR Poke Interactor. I originally tried to model my own keypad, as I wanted to learn a bit of 3D modeling, but I ended up using a premade one from the asset store, which also saved me a lot of hassle handling the button-press animations. Then I wired the door up to the keypad, so when the code is correct, an event is fired at the door that enables the XR Grab Interactable and lets the player open it.
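The keypad-to-door wiring described above could be sketched roughly like this in XR Interaction Toolkit terms. The class names, the `UnityEvent` hookup, and the `correctCode` value are all illustrative assumptions, not the project's actual code:

```csharp
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.XR.Interaction.Toolkit; // XRGrabInteractable (namespace may differ by toolkit version)

public class Keypad : MonoBehaviour
{
    [SerializeField] private string correctCode = "1234"; // placeholder combination
    public UnityEvent onCorrectCode;                      // wire the door's Unlock() here in the Inspector

    private string entered = "";

    // Hooked up to each poke-interactable button's "Select Entered" event.
    public void PressDigit(string digit)
    {
        entered += digit;
        if (entered.Length < correctCode.Length) return;
        if (entered == correctCode) onCorrectCode.Invoke();
        entered = ""; // reset after every full-length attempt
    }
}

public class DoorLock : MonoBehaviour
{
    [SerializeField] private XRGrabInteractable handleInteractable;

    private void Awake() => handleInteractable.enabled = false; // locked by default

    // Listener for the keypad's onCorrectCode event.
    public void Unlock() => handleInteractable.enabled = true;
}
```

Disabling the XR Grab Interactable component keeps the door's physics joints intact while making the handle ungrabbable, which matches the on/off toggling approach described above.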
 
 
Overall, I was satisfied with my contribution; our escape room didn't make much sense setting-wise, but it was functional. I did end up a bit disappointed when my teammates reused some of the puzzles instead of making something different, as we originally planned. Still, I learned a hell of a lot about VR, and the developer experience was much more enjoyable than the AR one, mainly because VR had many more resources online.
