Skills Acquired
The XPR Platform holds a special place in my career history as my first professional deployment. It was a trial by fire that turned me from a hobbyist into a professional, industry-ready Game Developer.
Beyond just writing code, this role taught me the intricate dance of debugging hardware-specific issues across different VR headsets and delivering robust solutions for actual clients. The project scope was massive, featuring networked multiplayer, character customization, virtual commerce, and adaptive AI.
Key Technical Contributions
Networked Audio Occlusion
Audio culling is standard practice, but in VR every millisecond of per-frame CPU time matters. The standard physics-based approach to audio occlusion in Unity was too expensive for our target hardware.
I engineered a custom Audio Portal System that allowed us to place volume triggers in the world. These triggers didn't just play/stop sounds; they intelligently shifted volume levels to simulate audio occlusion and "bleed" through thin walls without the heavy physics calculation overhead.
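A minimal sketch of what one of these portal triggers might look like in Unity. The component name, the `Player` tag check, and the fade values are illustrative assumptions, not the production implementation:

```csharp
using UnityEngine;

// Illustrative sketch of an audio "portal" volume trigger (not the shipped code).
// When the listener is inside the volume, the linked sources fade to full volume;
// outside, they fall back to a quieter "bleed" level that fakes occlusion through walls.
public class AudioPortalVolume : MonoBehaviour
{
    [SerializeField] private AudioSource[] linkedSources;   // sources audible through this portal
    [Range(0f, 1f)] public float insideVolume = 1f;         // volume while the listener is inside
    [Range(0f, 1f)] public float bleedVolume = 0.15f;       // faked "through the wall" volume
    public float fadeSpeed = 3f;                             // volume units per second

    private bool listenerInside;

    private void OnTriggerEnter(Collider other)
    {
        // Assumes the object carrying the AudioListener is tagged "Player".
        if (other.CompareTag("Player")) listenerInside = true;
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) listenerInside = false;
    }

    private void Update()
    {
        float target = listenerInside ? insideVolume : bleedVolume;
        foreach (AudioSource source in linkedSources)
        {
            // A cheap per-frame lerp replaces raycast-based occlusion checks entirely.
            source.volume = Mathf.MoveTowards(source.volume, target, fadeSpeed * Time.deltaTime);
        }
    }
}
```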
Crucially, this system was fully networked. Players joining a room late would sync to the correct timestamp of an audio track instantly, regardless of when the event started.
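The key idea behind the late-join sync is seeking the local AudioSource to an offset derived from a shared network clock. The helper below is a hypothetical sketch; the actual clock source depends on whichever synchronized time the networking layer provides:

```csharp
using UnityEngine;

// Sketch of late-join audio sync: given the networked timestamp at which a track started,
// a joining client seeks its local AudioSource to the matching position instead of
// starting the clip from zero.
public static class NetworkedAudioSync
{
    public static void PlaySynced(AudioSource source, double trackStartNetworkTime, double currentNetworkTime)
    {
        double elapsed = currentNetworkTime - trackStartNetworkTime;
        if (elapsed < 0d) elapsed = 0d;

        float clipLength = source.clip.length;

        // Wrap around for looping tracks; clamp for one-shots that have already finished.
        float offset = source.loop
            ? (float)(elapsed % clipLength)
            : Mathf.Min((float)elapsed, clipLength);

        source.time = offset;   // seek to the synced position
        source.Play();
    }
}
```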
Waypoint System
One of the core requirements was a guided virtual tour around a massive cruise ship. We initially attempted to use a standard NavMesh to move the guide entity, but the movement felt robotic and lacked the cinematic control we needed.
To solve this, I wrote a system that let designers place waypoints exactly where they wanted them (a simplified sketch follows the list below).
This tool decoupled the tour logic from the physics engine, giving us:
- Precise control over rotation and speed at every point of the curve.
- An easy-to-use editor interface that allowed non-programmers to adjust tour paths in seconds.
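A simplified sketch of the core idea; the class name, speeds, and looping behaviour are illustrative, and the real tool layered a custom editor interface on top of this:

```csharp
using UnityEngine;

// Simplified waypoint tour sketch (illustrative names and values, not the shipped tool).
// Designers drop empty Transforms into the scene and order them in the list; the guide
// moves between them at a controlled speed and eases its rotation toward each target.
public class WaypointTour : MonoBehaviour
{
    [SerializeField] private Transform[] waypoints;   // placed by designers in the editor
    [SerializeField] private float moveSpeed = 1.5f;  // metres per second
    [SerializeField] private float turnSpeed = 2f;    // slerp factor per second

    private int currentIndex;

    private void Update()
    {
        if (waypoints == null || waypoints.Length == 0) return;

        Transform target = waypoints[currentIndex];

        // Translate toward the waypoint with no physics or NavMesh involvement.
        transform.position = Vector3.MoveTowards(transform.position, target.position, moveSpeed * Time.deltaTime);

        // Ease the rotation so turns feel cinematic rather than snapping into place.
        Vector3 toTarget = target.position - transform.position;
        if (toTarget.sqrMagnitude > 0.0001f)
        {
            Quaternion look = Quaternion.LookRotation(toTarget);
            transform.rotation = Quaternion.Slerp(transform.rotation, look, turnSpeed * Time.deltaTime);
        }

        // Advance to the next waypoint once close enough, looping the tour.
        if (Vector3.Distance(transform.position, target.position) < 0.05f)
            currentIndex = (currentIndex + 1) % waypoints.Length;
    }

    // Draw the path in the Scene view so non-programmers can see and tweak it quickly.
    private void OnDrawGizmos()
    {
        if (waypoints == null) return;
        Gizmos.color = Color.cyan;
        for (int i = 0; i < waypoints.Length - 1; i++)
        {
            if (waypoints[i] != null && waypoints[i + 1] != null)
                Gizmos.DrawLine(waypoints[i].position, waypoints[i + 1].position);
        }
    }
}
```

Because the guide is driven purely by transforms, the tour stays deterministic and independent of physics timing, which is what made the frame-by-frame cinematic control possible.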
FMOD & Native Plugins
While "Output Device Switching" sounds trivial on the surface, implementing it required breaking out of the standard Unity sandbox. This task was my gateway into FMOD and low-level audio programming.
To achieve seamless switching between VR headsets and external speakers at runtime, I had to develop custom C++ DLLs to bridge the gap between the engine and the Windows Audio Session API (WASAPI).
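The snippet below sketches only the managed side of that setup, assuming the FMOD for Unity integration; `AudioDeviceBridge` and `GetDefaultWasapiDeviceIndex` are hypothetical stand-ins for the custom native DLL rather than real APIs:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Managed-side sketch of runtime output switching. The real work lived in a custom C++
// DLL talking to WASAPI; the native entry point referenced here is a hypothetical
// placeholder for that bridge, not an existing library.
public static class AudioOutputSwitcher
{
    // Hypothetical native helper: maps the current default WASAPI endpoint
    // to the matching FMOD driver index.
    [DllImport("AudioDeviceBridge")]
    private static extern int GetDefaultWasapiDeviceIndex();

    public static void SwitchToDriver(int driverIndex)
    {
        // FMOD for Unity exposes the low-level Core System; setDriver changes the
        // output device without tearing down the rest of the audio setup.
        FMOD.System coreSystem = FMODUnity.RuntimeManager.CoreSystem;

        coreSystem.getNumDrivers(out int driverCount);
        if (driverIndex < 0 || driverIndex >= driverCount)
        {
            Debug.LogWarning($"Driver index {driverIndex} out of range (0..{driverCount - 1}).");
            return;
        }

        FMOD.RESULT result = coreSystem.setDriver(driverIndex);
        if (result != FMOD.RESULT.OK)
            Debug.LogError($"Failed to switch FMOD output driver: {result}");
    }

    public static void SwitchToSystemDefault()
    {
        // Ask the (hypothetical) native bridge which driver matches the Windows default device.
        SwitchToDriver(GetDefaultWasapiDeviceIndex());
    }
}
```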