Navigation Mesh: Wall Collision Avoidance

Published October 02, 2018


Introduction

In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate on procedurally generated terrain. Until our last agile sprint, we only used the A* and string pulling algorithms. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe a simple way I implemented to keep agents from walking into walls.

Configuration

  • A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
  • Navigation cells and their: polygonal edges, connections (to other cells), shared edges (the line segment shared between two connected cells), centroids and normals.
  • A path generated by A* and string pulling (not tested without string pulling), consisting of waypoints on the navigation mesh.

The Algorithm

The agent is the pink low-poly humanoid and the final destination is the flag.

[Screenshots: the agent (pink low-poly humanoid) and its destination (the flag)]

The ideal (yet unoptimized) algorithm would be to cast an oriented rectangle between each pair of consecutive waypoints, with a width of two times the agent's radius; think of the agent's center position as being in the middle of that width. However, this algorithm is too complicated and would take too long to develop for our game, so I came up with another one. It has its drawbacks, but it is better suited to our game.


The algorithm is the following:

  1. For each pair of waypoints, pick the current one and the next one, and repeat the following until the next one is the last.
  2. Iterate over the edges of the current navigation cell, which is determined by the agent's 3D position. To do that, you need a spatially optimized way to find the cell closest to a 3D point. Our approach is to first partition the navigation mesh with an octree. Then, query all the cells that are close to the point, plus an extra buffer. For each candidate cell, project the position onto the cell's plane using its centroid and normal, and don't forget to snap the projected position onto the cell. Finally, compute the length of the resulting vector for each cell and pick the cell with the smallest one (see the first code sketch after this list).
  3. Convert each edge to a 2D line by discarding the Y component (UP vector).
  4. For each side, left and right, which are defined by the agent's position and its direction towards the next waypoint, cast a 2D line that starts from the agent's position, goes in one of the two directions perpendicular to the direction towards the next waypoint, and has a length equal to the defined radius.
  5. If there is an intersection on a connection and it lies on the shared part of the connection, then continue with the connected cell's edges.
  6. If there is any intersection other than the case in #5, create a new waypoint before the next waypoint. This new waypoint's position is the intersection's position translated by a length of two times the radius along the agent's current direction, treated as a 2D line. The same translation is applied to the next waypoint.
  7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent and with the same length as the distance between the current and next waypoints. Check #5. If there is an intersection on a connection and it lies on the unshared part of the connection, then apply #6 unconditionally. If there is an intersection on a simple edge, then translate the next waypoint as described in #6 (see the second code sketch after this list).
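
To make step 2 a bit more concrete, here is a minimal sketch of the closest-cell lookup, assuming jMonkeyEngine's Vector3f (our project uses jME 3.1) and a hypothetical NavCell interface; the candidate list would come from the octree query plus the extra buffer described above.

```java
import com.jme3.math.Vector3f;

import java.util.List;

public final class ClosestCellFinder {

    /** Minimal cell data assumed by this sketch. */
    public interface NavCell {
        Vector3f getCentroid();
        Vector3f getNormal();              // assumed to be unit length
        Vector3f snapToCell(Vector3f p);   // clamps a point on the cell's plane into its polygon
    }

    /**
     * Returns the cell whose snapped plane projection is closest to the given position,
     * or null if the candidate list is empty.
     */
    public static NavCell findClosestCell(Vector3f position, List<? extends NavCell> candidates) {
        NavCell best = null;
        float bestDistanceSquared = Float.MAX_VALUE;

        for (NavCell cell : candidates) {
            // Project the position onto the cell's plane: p - ((p - centroid) . normal) * normal
            Vector3f toPoint = position.subtract(cell.getCentroid());
            float distanceToPlane = toPoint.dot(cell.getNormal());
            Vector3f projected = position.subtract(cell.getNormal().mult(distanceToPlane));

            // Snap the projection onto the polygon so nearby but non-overlapping cells lose.
            Vector3f snapped = cell.snapToCell(projected);

            float distanceSquared = position.subtract(snapped).lengthSquared();
            if (distanceSquared < bestDistanceSquared) {
                bestDistanceSquared = distanceSquared;
                best = cell;
            }
        }
        return best;
    }
}
```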
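
And here is a rough sketch of the 2D pieces used in steps 4 and 5: the left/right side probes and a standard segment-segment intersection test. Vector2f is jMonkeyEngine's, but the class and method names are placeholders of mine, not the project's actual API.

```java
import com.jme3.math.Vector2f;

public final class SideProbes {

    /**
     * 2D segment-segment intersection. Returns the intersection point,
     * or null if the segments are parallel or do not cross.
     */
    public static Vector2f intersect(Vector2f p1, Vector2f p2, Vector2f q1, Vector2f q2) {
        Vector2f r = p2.subtract(p1);
        Vector2f s = q2.subtract(q1);
        float rxs = r.x * s.y - r.y * s.x;           // 2D cross product
        if (Math.abs(rxs) < 1e-6f) {
            return null;                              // parallel or collinear: ignored in this sketch
        }
        Vector2f qp = q1.subtract(p1);
        float t = (qp.x * s.y - qp.y * s.x) / rxs;    // parameter along segment (p1, p2)
        float u = (qp.x * r.y - qp.y * r.x) / rxs;    // parameter along segment (q1, q2)
        if (t < 0f || t > 1f || u < 0f || u > 1f) {
            return null;                              // crossing point lies outside a segment
        }
        return p1.add(r.mult(t));
    }

    /**
     * End points of the left and right side probes (step 4): two segments of length 'radius'
     * starting at the agent's position, perpendicular to its direction towards the next waypoint.
     */
    public static Vector2f[] sideProbeEnds(Vector2f agentPosition, Vector2f direction, float radius) {
        Vector2f dir = direction.mult(1f / direction.length());  // normalized direction
        Vector2f left = new Vector2f(-dir.y, dir.x);
        Vector2f right = new Vector2f(dir.y, -dir.x);
        return new Vector2f[] {
            agentPosition.add(left.mult(radius)),
            agentPosition.add(right.mult(radius))
        };
    }
}
```

Each probe end, together with the agent's position, forms one of the 2D segments that is tested against the cell's edges converted in step 3.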

[Screenshots illustrating the algorithm's line casts and resulting waypoints]

Here's a video of the algorithm in action:

 


Comments

swiftcoder

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?

May 11, 2018 02:13 AM
bdubreuil
13 hours ago, swiftcoder said:

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?

@miimii1205  did the string pulling algorithm, so I'll tag him. As far as I know, the second option seems to be really complex. The easiest one would actually be the first option you proposed.

 

I'm sure there are static ways that are better than what I just described. Static in the sense that they are computed before the path generation algorithm is done. However, my algorithm is dynamic: it runs at runtime and can be limited to the next two waypoints.

 

Also, the algorithm that I use to intersect two 2D lines is performant.

May 11, 2018 04:00 PM
DrEvil

Curious why you didn't use the typical nav mesh implementation where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems too, like areas too small for the agent to fit will be culled away from the resulting navigation.

May 12, 2018 02:03 AM
bdubreuil
3 hours ago, DrEvil said:

Curious why you didn't use the typical nav mesh implementation where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems too, like areas too small for the agent to fit will be culled away from the resulting navigation.

There are two reasons [that I can remember]:

  1. Our navigation meshes are generated in-house and, originally, we weren't supposed to add the wall collision avoidance module. We don't want the navigation meshes to depend on each agent's radius. There is actually a way to do what you're talking about with multilayered navigation meshes.
  2. Each agent can have a different radius.
May 12, 2018 05:20 AM
DrEvil

I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there. Especially when there are freely usable navigation mesh libraries out there like recast.

I'm mainly curious what your reasoning is though. I have spent considerable time in the past on my hobby project experimenting with a manually built un-retracted navigation mesh, but it was annoying enough to try and handle the agent radius at runtime that I shelved it. That was many years ago though. For various reasons I've felt like trying to revive it recently: for my particular use case (a bot in older Quake-based FPS engines), although I am parsing the maps' collision data to generate the navigation, there is a lot of noise and trash in the output that takes considerable effort to trim out, and still more manual effort to provide contextual information (team-specific markup, etc.).

Here's a really old video showing the tile based recast driven traditional navigation mesh.

https://www.youtube.com/watch?v=5qYgRA5oINs

The problem is that although I can trim some amount of data from these maps (like brushes marked as sky boxes, etc.), the resulting data set still contains enough collision data that I get, for instance, a lot of navigation on "top" of the world, where the brush flags can't automatically be considered relevant or not. Even though the navigation inside the world is reasonable, there is a lot of garbage navigation as well.

https://imgur.com/a/r3BbBxn (etf_2fort Recast auto navmesh; without hiding a lot of the top brush faces you don't even really see the important navigation of the world)

This is another old video of the non-retracted navmesh I was experimenting with way back as well, once I got dynamic obstacle support working. There was a lot to like about where this was heading, even though the navmesh was manually built, but I ultimately burned out on compensating for the radius stuff at runtime. There are a lot of edge cases in a real-world 3D game map.

https://www.youtube.com/watch?v=UhSNwaTV3II

I've recently gotten re-interested in my bot and am thinking about giving this approach another go. After spending a lot of time on the automatic Recast-driven building, I'm finding that I still have to spend considerable time fixing up the data: trimming out navigation where I don't want it, marking team-specific regions, and so on, so the resulting time to produce a completed navigation doesn't save much. Especially since the manual navmesh creation process is not only very fast with the way I have it set up, but also very precise and results in far fewer regions in the resulting data set. Plus I have various shortcuts that take advantage of my problem space, such as the tendency for symmetry in the game maps the bot supports, so I can lay out half the map and then mirror all the sectors over to the other half.

Here's a map where only the green areas have been laid out, and the cyan sectors have been mirrored for the other half of the map.

https://imgur.com/a/1r1qlYS

 

May 12, 2018 04:17 PM
DrEvil

There is something super appealing about the manual approach. The quality of the output is way more controllable and optimal, in terms of number of regions, size of regions, etc., and this even includes a number of manual "special" sectors, like floating "ledge" sectors that I use to cast downwards to make drop connections and such.

https://imgur.com/a/rTYiRTB

https://imgur.com/a/iUtnE9A

Just gotta solve that pesky radius stuff.

May 12, 2018 04:26 PM
bdubreuil
On 5/12/2018 at 12:17 PM, DrEvil said:

I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there. Especially when there are freely usable navigation mesh libraries out there like recast.

Initially, I tried what is described in this tutorial because our project is in Java and uses jMonkeyEngine 3.1. It was such a hassle just to configure the pathfinding libraries because CritterAI is so obscure. I had to modify the source code.

What is CritterAI? According to the mentioned tutorial:

Quote
Parameter Insight

The jme3AI system uses CritterAI, which is based off Recast and Detour navigation. The author of Recast lays out a few specific rules for NavMesh creation in this blog post, which logically apply to jme3AI. Below is a translation of this post as it pertains to jme3AI.

Also, the navmesh generator just didn't work and is extremely old. Every time I fixed something related to it, something else would break. So I just rage quit and decided that we would make our own navmesh module. That also allows us to have more control over the AI.

Another thing to note is that our game has different worlds made of randomly generated levels, and each world's geometry is quite different from the others. For example, we use a navigation mesh for the jungle level because of its organic nature, but we can use A* directly in the savannah because it's made up of voxels generated with the marching cubes algorithm.

Making our own library comes with downsides too. We cannot directly use the awesome features of the libGDX AI library, for example.

Furthermore, we have a lot of units with different radii: humanoids, snakes, bigger snakes, tigers, mini-bosses, bosses, legendary bosses and so on. That would make a lot of layers for the navigation mesh.

 

Now, for your problem and hobby project, more specifically the navigation mesh cells on top of buildings, what you can do is either check that each cell is directly or indirectly connected to a root cell, or check that each cell has at least one connection (see the sketch below for the first option).

My second proposal is easier but will likely still sometimes output garbage. My first proposal is more expensive in terms of performance and also requires a human to flag the root cell, or a root position if you can get the closest cell to a point.
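
As a rough illustration of that first proposal, here is a minimal sketch of a reachability check over cell connections; the NavCell interface is hypothetical, and anything not returned by the traversal would be discarded from the navigation mesh.

```java
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public final class NavMeshPruner {

    /** Minimal cell interface assumed by this sketch. */
    public interface NavCell {
        Collection<? extends NavCell> getConnections();
    }

    /**
     * Breadth-first traversal of the cell connection graph starting from a
     * human-flagged root cell. Cells not in the returned set (e.g. islands on
     * top of the world geometry) can be culled from the navigation mesh.
     */
    public static Set<NavCell> reachableFrom(NavCell root) {
        Set<NavCell> visited = new HashSet<>();
        Deque<NavCell> toVisit = new ArrayDeque<>();
        visited.add(root);
        toVisit.add(root);

        while (!toVisit.isEmpty()) {
            NavCell current = toVisit.poll();
            for (NavCell neighbor : current.getConnections()) {
                if (visited.add(neighbor)) {   // add() returns false if already visited
                    toVisit.add(neighbor);
                }
            }
        }
        return visited;
    }
}
```

Pruning is then just a matter of keeping only the cells contained in the returned set.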

It's also cool that you have obstacle support!

 

Our needs and problems are actually different: you work on a prebuilt level (mesh) while we work with a completely generated level. The best designer for our generated world is actually the generator itself. That's why it is the generator that creates the navigation mesh for our game. The only things that are not in the generator are the octree, the automatic connections (which are a lot of work, by the way) and the cells' metadata.

 

What do you want to do with your navmesh generator? Is its purpose only to generate the navmesh of several prebuilt maps or is it to work for a community of modders? Do you plan to add a lot of new maps?

May 14, 2018 08:53 PM
DrEvil

My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game we haven't had the need or desire to generate any navmesh data, and instead just path on the grid, since much of the pathing looks at meta data associated with the specific block types that are being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger than 1 block units. It ended up being relatively simple to get good results, while still ultimately pathfinding at the grid level. The core changes involved doing a loop based check along the horizontal axis to confirm spacing for the desired radius for both clearance as well as whether the node is 'grounded'. It's relatively simple and allows me to maintain the per block logic we use for 1 block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to try and generate any higher level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher level chunk based graph, but still keep the path following and such at the block level.
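
A minimal sketch of what such a horizontal clearance and grounded check might look like on a block grid (the block-query helpers are assumed for illustration, not Creativerse's actual code):

```java
public final class GridClearance {

    /** Assumed block queries; a real voxel engine would answer these from its chunk data. */
    public interface BlockWorld {
        boolean isPassable(int x, int y, int z);   // can a mob occupy this block?
        boolean isWalkable(int x, int y, int z);   // can a mob stand on top of this block?
    }

    /**
     * Checks that a unit spanning 'size' blocks horizontally and 'height' blocks vertically
     * fits at (x, y, z), and that it is grounded, i.e. at least one column of its footprint
     * has a walkable block directly below it.
     */
    public static boolean fitsAndGrounded(BlockWorld world, int x, int y, int z, int size, int height) {
        boolean grounded = false;
        for (int dx = 0; dx < size; dx++) {
            for (int dz = 0; dz < size; dz++) {
                // Clearance: every block in the unit's bounding box must be passable.
                for (int dy = 0; dy < height; dy++) {
                    if (!world.isPassable(x + dx, y + dy, z + dz)) {
                        return false;
                    }
                }
                // Grounded: at least one supporting block directly below the footprint.
                if (world.isWalkable(x + dx, y - 1, z + dz)) {
                    grounded = true;
                }
            }
        }
        return grounded;
    }
}
```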

May 15, 2018 02:01 AM
bdubreuil
12 hours ago, DrEvil said:

My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game we haven't had the need or desire to generate any navmesh data, and instead just path on the grid, since much of the pathing looks at meta data associated with the specific block types that are being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger than 1 block units. It ended up being relatively simple to get good results, while still ultimately pathfinding at the grid level. The core changes involved doing a loop based check along the horizontal axis to confirm spacing for the desired radius for both clearance as well as whether the node is 'grounded'. It's relatively simple and allows me to maintain the per block logic we use for 1 block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to try and generate any higher level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher level chunk based graph, but still keep the path following and such at the block level.

We have different worlds for our game and everything is still a work in progress and can change. Until yesterday, these were the planned worlds:

  1. Savannah - Zones made up of marching cubes voxels where they are applied like a heightmap
  2. Jungle - Organic brute force dungeon generation
  3. Goblin cave - Marching cubes voxels with a lot of tunnels and with layers
  4. Sky - Extremely organic and relying a lot on assets

The world we are currently working on is the jungle, which needs generated navigation meshes.

We put a lot of effort into the navmesh, and yesterday we changed the design of the jungle level generator to a classic 2D array dungeon digger. Reading your comment made me think that maybe we wasted our time on the navmesh, because we could have just used the newly designed 2D array for the A* and string pulling. We could still use our work on the sky world, but we will probably work on it in like 6 months... Damn it! We try to follow Agile methodologies and still fail to avoid wasting our time. It just proves that I'm more of a bottom-up, backend / engine programmer than what I should be for our game.

 

Nice game by the way :)

 

May 15, 2018 02:43 PM
DrEvil

I definitely understand what it's like to follow rabbit holes. They can feel like the right decision in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation, multiple restarts on various systems, lots of experimentation, ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer could do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal though. Sounds like you've got a pretty big scope planned out. Not sure what your team size is and such, but best of luck with it.

May 15, 2018 11:38 PM
bdubreuil
1 hour ago, DrEvil said:

I definitely understand what it's like to follow rabbit holes. They can feel like the right decision in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation, multiple restarts on various systems, lots of experimentation, ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer could do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal though. Sounds like you've got a pretty big scope planned out. Not sure what your team size is and such, but best of luck with it.

We are a small team of two. I'm the "lead" programmer and my friend is the lead designer; whilst I believe I'm a talented junior programmer, the word "lead" really only means that I do the most programming. I still have a lot of work to do on the maturity / professionalism side, however :/

Congrats on those jobs! My experiments with voxels and procedural generation helped me win all the internships that I really wanted in university, and I chose one at Ludia, which is, naturally, a video game development company.

It's been an extremely constructive discussion with you, so thank you very much :)  I hope everything you have planned goes well for you too!

May 16, 2018 01:44 AM
Fulcrum.013

There is a much simpler solution for humanoids: just apply a Minkowski sum to the nav mesh while creating it.

October 02, 2018 06:54 PM