Sunday, August 4, 2013

Absolute positioning with Oculus Rift development kit

After incorporating support for the Oculus Rift into our Outerra tech demo (see the Outerra + Oculus Rift Test 2 - MiG 29 flight video), I've been pondering the possibility of adding absolute positional tracking to the Oculus Rift devkit. During the forum discussions about support for FreeTrack/FaceTrackNoIR/opentrack in Outerra, Stanisław Halik, one of the opentrack developers, pointed me to ArUco.

ArUco is a library built on top of OpenCV that can analyze a video image and detect coded markers along with their positional information. It supports up to 1024 different markers, each encoded in a 5x5 grid. The ArUco tracker is supported in recent builds of opentrack, so I could use it with our recently added support for the FreeTrack protocol.
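
To give an idea of what the library provides, here's a minimal detection sketch in C++ against the classic ArUco API; the calibration file name and marker size are placeholders, and the exact method signatures may differ between ArUco versions:

    #include <cstdio>
    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <aruco/aruco.h>   // classic ArUco library header

    int main()
    {
        cv::VideoCapture cap(0);                  // webcam, e.g. the Logitech C270
        aruco::CameraParameters camParams;
        camParams.readFromXMLFile("camera.yml");  // hypothetical calibration file
        aruco::MarkerDetector detector;
        const float markerSize = 0.04f;           // printed marker size in meters

        cv::Mat frame;
        while (cap.read(frame)) {
            std::vector<aruco::Marker> markers;
            detector.detect(frame, markers, camParams, markerSize);
            for (const auto& m : markers) {
                if (m.id != 787) continue;        // the marker used in this post
                // Tvec is the marker translation relative to the camera -
                // the absolute positional information we're after
                printf("x=%.3f y=%.3f z=%.3f\n",
                       m.Tvec.at<float>(0), m.Tvec.at<float>(1), m.Tvec.at<float>(2));
            }
        }
        return 0;
    }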

For my tests I picked the marker code 787, printed it and stuck it on the Rift.



Using a recent build of opentrack (≥ opentrack-20130803) and a video camera (tested with a Logitech C270 and an older Sonix SN9C201 camera) with FreeTrack protocol output, I managed to get positional tracking working in addition to the Rift's own tracker, with only a few minor changes required in our FreeTrack client implementation.

Here's a short video showing the augmented Rift tracking in action:




A couple of notes:
  • the added positional tracking definitely helps with the dizziness; the brain no longer gets confused by the missing degree of motion
  • currently the positional tracking does not use any filtering, resulting in occasional oscillation
  • ArUco in opentrack is currently quite sensitive to lighting conditions; bright parts of the captured image outside of the marker can break the detection
  • when opentrack loses track of the marker, the camera stays at the last position; once the position info is regained, the camera may suddenly jump to the new position
In Outerra, positional tracking is automatically used whenever a FreeTrack source is detected, which means that any tracker plugin that provides positional info will work with the Rift. I have also tried it with a 3-LED cap, but in combination with the bulky Rift it's a bit impractical.

With an ArUco marker the setup is very simple, but it has to be made more robust. Some ideas for enhancement:
  • using multiple markers - ArUco can detect an array of markers, so additional ones could cover the sides as well, preventing loss of tracking when you turn your head
  • direct integration of ArUco - simplifying the setup even more
  • ultimately it would be best to use the info from the Rift's accelerometer to capture the positional movement dynamics, and use the absolute positional info from the auxiliary tracker just to correct the drift (see the sketch below)
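
As a rough illustration of that last idea, a simple complementary filter could blend the fast but drifting position integrated from the accelerometer with the slow but absolute marker position. The names and the gain value here are hypothetical, just to show the principle:

    // Rough sketch: integrate IMU acceleration for responsiveness, pull the
    // result toward the absolute tracker position to cancel the drift.
    struct PositionFilter {
        float pos[3] = {0, 0, 0};
        float vel[3] = {0, 0, 0};

        // accel: linear acceleration from the Rift IMU, gravity removed
        // markerPos: absolute position from ArUco/opentrack, null if lost
        void update(const float accel[3], const float* markerPos, float dt)
        {
            const float k = 0.05f;               // correction gain, hand-tuned
            for (int i = 0; i < 3; ++i) {
                vel[i] += accel[i] * dt;         // dead reckoning from the IMU
                pos[i] += vel[i] * dt;
                if (markerPos) {
                    pos[i] += k * (markerPos[i] - pos[i]);  // drift correction
                    vel[i] *= 1.0f - k;          // bleed off accumulated error
                }
            }
        }
    };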

Since only the positional information is needed from the tracker, the simplest and probably most robust implementation would be to add just two wide-angle LEDs to the Rift and use the point tracker plugin with them. The point light from the LEDs should be strong enough to overcome the detection problems in various lighting conditions. However, it would require a small modification of the Rift.

Steps to set it up with Outerra:
  • print one of the ArUco markers (or print the 787 marker used here)
  • download a recent opentrack build
  • in opentrack, select the aruco tracker and configure your camera in the settings
  • select FreeTrack 2.0 for the game protocol, and optionally one of the filters
  • start opentrack and verify that it can see the marker and interpret its position and orientation correctly
  • once this works, leave opentrack running and launch outerra.exe
  • Outerra will automatically use the Rift tracker together with any FreeTrack tracker it finds running (the protocol data itself is sketched below)
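
For the curious, the FreeTrack protocol itself is just a small shared-memory block that the tracker keeps updated and the game reads. A client reads it roughly like this; the struct below is abbreviated and its exact field layout is an assumption to verify against the FreeTrack/opentrack headers:

    #include <windows.h>
    #include <cstdio>

    // Abbreviated FreeTrack shared-memory layout - check the exact field
    // order and types against the official headers before relying on it.
    struct FTData {
        unsigned DataID;
        int      CamWidth, CamHeight;
        float    Yaw, Pitch, Roll;   // orientation, radians
        float    X, Y, Z;            // position, millimeters
        // ...raw/smoothed fields and point data follow in the full struct
    };

    int main()
    {
        HANDLE h = OpenFileMappingA(FILE_MAP_READ, FALSE, "FT_SharedMem");
        if (!h) return 1;            // no FreeTrack-compatible server running
        auto* ft = static_cast<const FTData*>(
            MapViewOfFile(h, FILE_MAP_READ, 0, 0, sizeof(FTData)));
        if (ft) {
            printf("pos: %.1f %.1f %.1f mm\n", ft->X, ft->Y, ft->Z);
            UnmapViewOfFile(ft);
        }
        CloseHandle(h);
        return 0;
    }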

Friday, July 19, 2013

Hacking AMD OpenGL drivers

Even though the logarithmic depth buffer technique works pretty nicely, it has several issues that complicate its use in some cases. If you use just the vertex shader modification, you can get depth buffer artifacts on longer triangles close to the camera, since the depth values aren't correctly interpolated in perspective. This can be alleviated by finer tessellation, or by writing the correct values in the fragment shader (possibly just for geometry that isn't tessellated sufficiently). However, writing the fragment depth in the shader disables certain hardware depth buffer optimizations, such as the early depth test, and adds to the bandwidth. That can pose a problem in scenarios with higher overdraw.

On Direct3D there's a technique that can provide sufficient depth buffer precision: reverse mapping of the far/near planes in a normal floating-point depth buffer. In OpenGL it can't be used directly because of a design flaw that causes a huge loss of precision in the depth value computation (not just for floating-point depth buffers, see here). More detail can be found in the maximizing depth buffer range and precision blog post.

There's a way to work around it on Nvidia hardware thanks to support for an unclamped glDepthRange extension (glDepthRangedNV). On AMD, however, it's not supported, and there were indications that it may not even be possible. But here's what I found: a glDepthRange(-1, 1) call would solve the problem, but the arguments are clamped to (0, 1) as per the specification. However, if we step into the disassembly of the call and make it skip the instruction that clamps the lower bound:



... the reverse FP buffer technique suddenly starts working - with precision good enough to handle the range needed to cover the whole universe. The projection matrix to use with it looks like this:

$$M_{proj} = \begin{pmatrix} X & 0 & 0 & 0 \\ 0 & Y & 0 & 0 \\ 0 & 0 & 0 & \mathrm{near} \\ 0 & 0 & 1 & 0 \end{pmatrix}$$

There's no far term; the zero depth value is projected to infinity. The precision is very high: for near = 0.01m, the precision measured on the GPU is around 0.03mm at 100m, 0.003m at 10km, 0.3m at 1000km, and so on.
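
Put together, the host-side setup might look like the following sketch. The function name is illustrative; glDepthRangedNV comes from the NV_depth_buffer_float extension (on AMD only the hack above makes the equivalent work, since plain glDepthRange clamps):

    // Infinite reversed-depth projection matching the matrix above, in
    // OpenGL column-major layout; X and Y are the usual focal-length terms.
    void makeInfiniteReversedProj(float m[16], float X, float Y, float nearp)
    {
        for (int i = 0; i < 16; ++i) m[i] = 0.0f;
        m[0]  = X;
        m[5]  = Y;
        m[14] = nearp;   // z_clip = near    (row 2, column 3)
        m[11] = 1.0f;    // w_clip = z_eye   (row 3, column 2)
    }

    // Depth state for the reversed FP depth buffer:
    glDepthRangedNV(-1.0, 1.0);   // unclamped - plain glDepthRange would clamp
    glClearDepth(0.0);            // the far side clears to 0
    glDepthFunc(GL_GREATER);      // reversed depth comparison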

Of course, hacking the driver this way for normal use would be highly impractical; it was done just to show that nothing actually prevents AMD from supporting an unclamped depth range, giving a depth buffer technique that works with great precision without sacrificing the depth optimizations.

Hoping they will be listening.

Thursday, July 18, 2013

Logarithmic depth buffer optimizations & fixes


An updated logarithmic depth equation (vertex shader):

    //assuming gl_Position was already computed
    gl_Position.z = log2(max(1e-6, 1.0 + gl_Position.w)) * Fcoef - 1.0;


Where Fcoef is a constant or uniform value computed as Fcoef = 2.0 / log2(farplane + 1.0).
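
For instance, the uniform can be computed and uploaded on the host side like this (a minimal sketch; the program handle and the 1e9m far plane are illustrative):

    #include <cmath>

    // Fcoef for the log-depth equation; a far plane of 1e9 m comfortably
    // covers planetary-scale scenes.
    const float farplane = 1.0e9f;
    const float Fcoef = 2.0f / std::log2(farplane + 1.0f);
    glUniform1f(glGetUniformLocation(program, "Fcoef"), Fcoef);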


Changes (compared to the initial version):
  • using log2 instead of log: in shaders, the log function is implemented via the log2 instruction, so it's better to use log2 directly and avoid an extra multiply
  • clipping issues: the log function is undefined for values less than or equal to 0. When one vertex of a triangle lies far enough behind the camera (w ≤ -1), the whole triangle gets rejected even before it is clipped.
    Clamping the value via max(1e-6, 1.0 + gl_Position.w) solves the problem of disappearing long triangles crossing the camera plane.
  • no need to compute depth in camera space: after multiplying by the modelview-projection matrix, the gl_Position.w component contains the positive depth into the scene, so the above equation is the only thing that has to be added after your normal modelview-projection multiply
  • the previously used "C" constant for changing the precision distribution was removed, since the precision is normally much higher than necessary and C=1 works well

To address the issue of the depth not being interpolated in a perspective-correct way, output the following interpolant from the vertex shader:

    //out float flogz;
    flogz = 1.0 + gl_Position.w;

and then in the fragment shader add:

    gl_FragDepth = log2(flogz) * Fcoef_half;

where Fcoef_half = 0.5 * Fcoef

Note that writing fragment depth disables several depth buffer optimizations, which may pose problems in scenes with high overdraw. The non-perspective interpolation usually isn't a problem when the geometry is tessellated finely enough; in Outerra we use fragment depth writing only for objects, since the terrain is tessellated quite well.

Wednesday, March 27, 2013

Craters

The terrain generator in Outerra contains a vector stage that can be used to overlay procedural geometry on the generated terrain. It's used, for example, to create the spline-based roads that seamlessly blend with the underlying terrain, and it allows generating fine road geometry where even the road paint can have thickness (a few millimeters).

Dynamic craters are the latest addition to the vector overlay processor.



Craters are created dynamically by specifying their diameter and depth. The algorithm recognizes the type of surface and generates a different shape for asphalt/concrete than for dirt. Asphalt is just bent outward a bit, whereas dirt is strewn around a lot more.

A crater is generally created in under half a second, which is quick enough with some margin, given that the creation will be hidden by the explosion's particle effects. The crater shape is also immediately reflected in the collision data.



The shape of the crater also depends on the specified explosion depth; deeper epicenters tend to create steeper edges.



The number of craters is practically unlimited; a single crater definition takes only 64 bits. For now the created craters are kept in a buffer indefinitely, but they are not yet persisted between sessions. Like the roads, craters only affect dynamic performance, i.e. when the observer is moving and new terrain tiles have to be generated.
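
Sixty-four bits per crater is plausible with aggressive quantization. Purely as an illustration - this is not Outerra's actual layout - a crater definition could pack like this:

    #include <cstdint>

    // Hypothetical 64-bit crater record - just an illustration of how
    // position, diameter and depth parameters can be quantized to fit.
    struct CraterDef {            // 22 + 22 + 12 + 8 = 64 bits
        uint64_t x        : 22;   // quantized position within the tile
        uint64_t y        : 22;
        uint64_t diameter : 12;   // log-quantized diameter, up to ~1 km
        uint64_t depth    : 8;    // quantized explosion depth / profile
    };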

The largest crater that can currently be created is around 1km in diameter. Here are also some older screenshots showing the evolution of the crater rendering algorithm.



Edit: a video showcasing the craters:


@cameni