I built a real-time 360 volumetric environment generator running entirely locally. Uses SD.cpp, Depth Anything V2, and LaMa, all within Unity Engine.

I wanted to create a "Holodeck"-style experience where I could generate environments while inside VR, but I didn't want the flat effect of a standard 360 sphere. I needed actual depth and parallax so I could lean around and inspect the scene.

**Unity Implementation:**

1. **Text-to-Image:**
   * I'm using **stable-diffusion.cpp** (C# bindings) to generate an equirectangular 360 image.
   * I enabled **Circular Padding** (tiling) at the inference level. This ensures the left and right edges connect perfectly during generation, so no post-processing blending is required to hide the seam (padding sketch below).
   * I'm using **Z-Image-Turbo** with a **360° LoRA**.
2. **Depth Estimation:**
   * The generated image is passed to **Depth Anything V2** to create a depth map (inference sketch below).
3. **Layer Segmentation:**
   * I use a histogram-based approach to slice the scene into **5 distinct depth layers** (slicing sketch below).
   * This creates the "2.5D" geometry, but peeling these layers apart leaves "holes" behind the foreground objects.
4. **Inpainting:**
   * I use **LaMa** to fill in the occluded areas on the background layers. I inpaint both the color and the depth (mask sketch below).
5. **Rendering:**
   * The final result is rendered using a custom **raymarching shader**. Each layer has its own depth map. This creates the parallax effect, allowing 6DOF head movement without the geometry tearing or stretching you usually see with simple displacement maps (raymarch sketch below).

Both Depth Anything and LaMa were exported to ONNX and run through Unity's built-in inference engine. Happy to answer any questions about the implementation!
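
To clarify what circular padding means here: every convolution wraps horizontally instead of zero-padding, so the model effectively generates on a cylinder and the left/right edges agree by construction. A simplified conceptual sketch of the wrapping (standalone C#, not the actual SD.cpp internals):

```csharp
using System;

static class CircularPadding
{
    // A 3x3 convolution tap that wraps horizontally instead of zero-padding.
    // Reading one pixel past the right edge returns the left edge, which is
    // why the generated equirect image needs no seam blending afterwards.
    public static float Convolve3x3(float[,] img, float[,] kernel, int x, int y)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        float sum = 0f;
        for (int ky = -1; ky <= 1; ky++)
        for (int kx = -1; kx <= 1; kx++)
        {
            int sy = Math.Clamp(y + ky, 0, h - 1);   // clamp vertically (poles)
            int sx = (((x + kx) % w) + w) % w;       // wrap horizontally (360°)
            sum += img[sy, sx] * kernel[ky + 1, kx + 1];
        }
        return sum;
    }
}
```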
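
For the ONNX models, the setup looks roughly like this. This is a simplified sketch assuming the Sentis 1.x API (Unity's inference engine); `depthModelAsset` is a placeholder, and the 518×518 input size is just the common Depth Anything export resolution:

```csharp
using UnityEngine;
using Unity.Sentis;

public class DepthEstimator : MonoBehaviour
{
    public ModelAsset depthModelAsset;   // Depth Anything V2 exported to ONNX
    IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(depthModelAsset);
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    public void Estimate(Texture2D equirect, RenderTexture depthOut)
    {
        // NCHW float tensor in [0,1]; the real model also expects ImageNet
        // mean/std normalization, omitted here for brevity.
        using TensorFloat input = TextureConverter.ToTensor(equirect, 518, 518, 3);
        worker.Execute(input);
        TensorFloat depth = worker.PeekOutput() as TensorFloat;
        // Depending on the export, the output may need a reshape to NCHW
        // before it can be blitted into a texture.
        TextureConverter.RenderToTexture(depth, depthOut);
    }

    void OnDestroy() => worker?.Dispose();
}
```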
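
The slicing step, roughly. This sketch is one simple way to do it (quantile cuts over the depth histogram, so each layer gets similar pixel mass; picking valleys in the histogram works too):

```csharp
using System;

static class DepthSlicer
{
    // Slice a normalized depth map (values in [0,1]) into `layers` bins using
    // histogram quantiles, so each layer holds roughly equal pixel mass.
    // Returns a per-pixel layer index: 0 = smallest depth values.
    public static int[] Slice(float[] depth, int layers = 5, int bins = 256)
    {
        var hist = new long[bins];
        foreach (float d in depth)
            hist[Math.Clamp((int)(d * (bins - 1)), 0, bins - 1)]++;

        // Cut thresholds where the cumulative histogram crosses each quantile.
        var cuts = new float[layers - 1];
        long total = depth.Length, acc = 0;
        for (int b = 0, c = 0; b < bins; b++)
        {
            acc += hist[b];
            while (c < layers - 1 && acc >= (c + 1L) * total / layers)
                cuts[c++] = (float)b / (bins - 1);
        }

        var labels = new int[depth.Length];
        for (int i = 0; i < depth.Length; i++)
        {
            int l = 0;
            while (l < layers - 1 && depth[i] > cuts[l]) l++;
            labels[i] = l;
        }
        return labels;
    }
}
```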
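
Building the inpainting masks from those labels is then straightforward: for each background layer, every pixel owned by a nearer layer is a hole that gets revealed when the layers peel apart. Simplified sketch (`HolesForLayer` is just an illustrative helper):

```csharp
static class InpaintMask
{
    // For layer k, mask every pixel owned by a nearer layer (< k). LaMa fills
    // these regions in both the color and the depth of layer k.
    public static bool[] HolesForLayer(int[] labels, int k)
    {
        var mask = new bool[labels.Length];
        for (int i = 0; i < labels.Length; i++)
            mask[i] = labels[i] < k;
        return mask;
    }
}
```

In practice you'd also dilate the mask a few pixels so the fill overlaps the layer boundary instead of leaving a hard edge.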
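
And the rendering idea as a CPU-side reference (the real thing is a shader; this sketch assumes each layer's depth map stores radial distance from the original capture point at the origin):

```csharp
using UnityEngine;

static class LayerRaymarch
{
    // Direction -> equirectangular UV (u wraps 360°, v covers the poles).
    static Vector2 DirToUV(Vector3 d)
    {
        float u = 0.5f + Mathf.Atan2(d.x, d.z) / (2f * Mathf.PI);
        float v = 0.5f - Mathf.Asin(Mathf.Clamp(d.y, -1f, 1f)) / Mathf.PI;
        return new Vector2(u, v);
    }

    // Step along the view ray until its distance from the capture point
    // exceeds the depth sampled in that direction. Because `eye` drifts away
    // from the origin with head motion, the hit point shifts differently per
    // layer: that offset is the parallax. `dir` is assumed normalized.
    public static bool March(System.Func<Vector2, float> sampleDepth,
                             Vector3 eye, Vector3 dir,
                             out Vector3 hit, int steps = 64, float maxDist = 20f)
    {
        float dt = maxDist / steps;
        for (int i = 1; i <= steps; i++)
        {
            Vector3 p = eye + dir * (dt * i);
            float sceneR = sampleDepth(DirToUV(p.normalized));
            if (p.magnitude >= sceneR) { hit = p; return true; }
        }
        hit = Vector3.zero;
        return false;
    }
}
```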
