Horizontal displacement

Published May 29, 2009
As always, I did not keep strictly to the plan, and decided to try one of the things I wanted to do someday - horizontal displacement of terrain.

The fractal map computed for a quadtree node already contains 3 independent fractal noise channels. The first one computes elevation and is seeded from heightmap data. The other two are used for detail material mixing and other things. There is also a fourth channel containing the global slope.
I modified the shader that computes vertex positions to displace them in horizontal directions as well, using one of the two independent fractal channels. The amount of displacement also varies with the global slope - flat regions are shifted minimally, but sloped parts that are also treated as rock are displaced a lot. This makes the rocky parts much more interesting. For the record, the actual equation used for displacing a point on a sphere in tangent space is this:

[equation image]

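The equation image itself did not survive here, but the idea described above can be sketched in a few lines. This is my own reconstruction under stated assumptions, not the exact formula from the post: the tangent-space position puts elevation along the normal axis (x), and the two fractal noise channels push the point along the tangent (y) and binormal (z), weighted by the global slope so that flat terrain barely moves. The names `displace`, `noise_uv`, and `k` are all hypothetical.

```python
def displace(u, v, height, noise_uv, slope, k=1.0):
    """Slope-weighted tangent-space displacement (hypothetical reconstruction).

    u, v      - surface parameters of the point on the tile
    height    - vertical elevation from the first fractal channel
    noise_uv  - pair of values from the two independent fractal channels
    slope     - global slope channel in [0, 1]; 0 = flat, 1 = steep rock
    k         - maximum horizontal displacement amplitude (assumed tuning knob)

    Returns the displaced position (x, y, z) in tangent space, where x is
    the normal direction and y, z lie along the tangent and binormal.
    """
    du, dv = noise_uv
    w = k * slope  # flat regions are shifted minimally, rocky slopes a lot
    return (height, u + w * du, v + w * dv)
```

With `slope = 0` the point only gets the usual vertical displacement; with `slope = 1` the full fractal offset is applied in both horizontal directions.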
The next thing that had to be done was to compute the normals of the deformed surface. The article about deformers in GPU Gems provides nice info about the Jacobian matrix that can be used for the job. After some pounding on my math circuits I managed to produce the following Jacobian of the above equation (in tangent space):

[Jacobian matrix image]

The normal is then computed as the cross product of the second and third columns, since the tangent and binormal are {0,1,0} and {0,0,1} respectively.
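Since the Jacobian image is missing, here is a small sketch of the normal-from-Jacobian step using finite differences instead of the analytic derivatives (a simplification, not the shader code): the columns of the Jacobian are the partial derivatives of the displaced position, and their cross product gives the surface normal. For the undisplaced surface the derivatives degenerate to the tangent {0,1,0} and binormal {0,0,1}, whose cross product recovers the flat normal {1,0,0}, matching the statement above.

```python
def jacobian_columns(f, u, v, eps=1e-4):
    """Numerically approximate the Jacobian columns df/du and df/dv
    of a surface f(u, v) -> (x, y, z) via central differences."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def scale(a, s):
        return tuple(x * s for x in a)
    fu = scale(sub(f(u + eps, v), f(u - eps, v)), 1.0 / (2 * eps))
    fv = scale(sub(f(u, v + eps), f(u, v - eps)), 1.0 / (2 * eps))
    return fu, fv

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
```

For example, with no displacement, `f = lambda u, v: (0.0, u, v)` yields columns (0,1,0) and (0,0,1), and `cross` of those gives the undisplaced normal (1,0,0). In the real shader the analytic Jacobian would be used instead of finite differences, which is exactly why deriving it was worth the effort.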

Finally, here is the result - on the left the original with only vertical displacement, on the right vertical & horizontal displacement:

[comparison screenshots]

There are still some issues, but the overall effect is quite nice. Of course, collisions with sloped parts are no longer accurate, and I'll have to do something about that later.
Previous Entry Progressive download
Next Entry Roads
0 likes 8 comments

Comments

Grecco
Visually very impressive.

But indeed, how you get these displacements "back" into your simulation model of the terrain for collision purposes is the real question. I'm curious to see your solution to this problem.
May 29, 2009 05:13 AM
cameni
Quote:Original post by El Greco
Visually very impressive.
Thanks!
Quote:
But indeed, how do you get these displacements "back" into your simulation model of the terrain for collision purposes is the real question. I'm curious to see your solution to this problem.

Shouldn't be such a problem, really. Currently, after the fractal map for a particular quadtree tile is computed and the textures and mesh are generated from it, I read back the values of the first channel to CPU memory. Collisions are then checked against this heightfield map. With the horizontal displacement in play I will simply read back the whole mesh and do the collisions against it. And since the displacement acts only on sloped terrain, I can probably optimize the routine to check for that and fall back to the simpler collision check against the heightfield.
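The slope-based fallback described in this reply could be dispatched roughly like this. This is a hypothetical sketch, not the engine's code; `Tile`, its methods, and `slope_threshold` are all invented names for illustration.

```python
def collide(px, py, pz, tile, slope_threshold=0.4):
    """Hypothetical collision dispatch for a quadtree tile.

    px        - position along the normal/height axis
    py, pz    - position in the tile's tangent plane
    tile      - assumed interface:
                tile.slope(y, z)       -> global slope channel in [0, 1]
                tile.heightfield(y, z) -> height read back from channel 1
                tile.mesh_collide(...) -> precise test against read-back mesh
    """
    if tile.slope(py, pz) < slope_threshold:
        # Flat terrain: horizontal displacement is negligible there,
        # so the cheap heightfield test is enough.
        return px <= tile.heightfield(py, pz)
    # Sloped, rocky terrain: test against the horizontally displaced mesh.
    return tile.mesh_collide(px, py, pz)
```

The design point is that the expensive mesh test only runs where the horizontal displacement actually matters, which is exactly the optimization hinted at in the comment.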
May 29, 2009 05:46 AM
Moe
That does look pretty fantastic!
May 29, 2009 04:46 PM
Ysaneya
Really nice. I implemented that in my terrain engine 2 years ago, but at that time I was still working per-vertex, so recomputing the normals was much easier.

As you've probably found out, it also helps a lot to break up the patterns when tiling textures. A small bonus :)
May 30, 2009 06:01 AM
cameni
Quote:Original post by Ysaneya
Really nice, I implemented that in my terrain engine 2 years ago, but at that time was still working per-vertex, so recomputing the normals was much easier.
Oh, it's 2 years already? I remember the article... although now that I think about it, I only found it a year ago. Anyway, it helped, because before it I wasn't sure if this approach would really give the expected results, but after seeing your screens I stopped considering other approaches [smile]
May 30, 2009 07:51 AM
dgreen02
Wow, that looks really awesome :-D
June 01, 2009 10:25 AM
Grecco
Quote:Original post by cameni
Shouldn't be such a problem, really. Currently, after the fractal map for particular quadtree tile is computed and textures and mesh are generated from it, I read back the values of first channel to CPU memory. Collisions are then checked against this heightfield map.


Okay. I'm assuming now for a minute that you only generate this detail if it is close enough to the viewer or at least in view. This would mean the horizontal displacements are "gone" if the viewer is gone and thus "others" cannot collide with them.

So in essence, the viewer "transforms" the physical model of the terrain wherever he goes and other "entities" in the world will perceive the same location differently depending on whether or not the viewer is around.

Now for most game applications, this is perfectly acceptable, but things start to get a little complicated for those "other entities" if you introduce multiple viewers (by networking for example). If both of them see the same AI craft flying around, which terrain collision model should that AI use?

And for environments where you want the simulation to be identical for every "agent" in it, regardless of whether there is a "user" viewing the mountain slope or not, you have a real dilemma. This is because the world you see on your screen and the "physics world" you have running underneath start to go "out of sync" which in turn can lead to all kinds of visual artifacts.

The only way I can currently think of to solve such a problem would be to consider the visual detail "eye-candy" and not be bothered if an AI craft flies "through" a rocky outcrop without colliding with it. (Think of rendering grass, which most of the time is pure eye-candy and usually doesn't bother AI at all).

OR... for every agent in the world, I will need to generate all this extra detail local to their position for collision purposes only so they have the "same view" of the world the player will have. But this of course will come at a huge extra cost in computing power.
June 03, 2009 06:06 AM
cameni
Quote: Original post by El Greco
Okay. I'm assuming now for a minute that you only generate this detail if it is close enough to the viewer or at least in view.
It is generated consistently for every LOD, but of course for coarse tiles it's insignificant.
Quote:...
OR... for every agent in the world, I will need to generate all this extra detail local to their position for collision purposes only so they have the "same view" of the world the player will have. But this of course will come at a huge extra cost in computing power.

If the 'agent' needs to compute the collisions, it will have to compute the detail - especially (or only) when somebody sees it. So for example, if it's a hovercraft that must pay attention to this detail, it will have to use it when someone views it (with multiple viewers it's the same). Otherwise it can simplify things probabilistically, by including an average slowdown caused by this detail, which can also be used when the viewer is distant and doesn't see the detail (note that this only applies if someone sent the hover agent into rocky terrain).

But... if the viewer already sees the details, it had to compute them anyway, and thus they are available for the agent's personification in its view. Additionally, many such agents won't need to consider this at all - they normally don't steer too close to the slopes because of the risks involved, and if they did (rescue helicopters, for example) they would have to slow down and/or expect a crash with some probability.

But computing it isn't such a problem either - or mustn't be - especially since the elevation data alone has to be generated with the same fractal algorithm, refining the coarse input height data.
June 03, 2009 07:41 AM