
AI navigation in Left 4 Dead: Part II June 8, 2010

Posted by Cesar in working me.

After a long period of inactivity, it’s about time we continue our analysis of Michael Booth’s presentation about the AI in Left 4 Dead.

If you remember the last post on the subject, I tried to give a bit more depth to the reactive path following algorithm described in the slides. With that part implemented, we got a nice and fast path finding / following algorithm that produces a pretty optimized trajectory without looking artificial. But that’s just half the problem: the macro management half. Now we need to look at micro management and define how the bot moves when following the path.

The first step is simple obstacle avoidance. Our path finding algorithm deals with the static objects, the level itself. But it already introduces, with reactive path following, the concept of steering, which is very important and is also used at this stage. So now our bot must steer towards the next point in the path while also going around moving things that might be in the way: crates, other bots, whatever.

The simplest steering algorithm we can use is to select a direction and try to go around the object. Say the bot bumps into a box. The algorithm must pick the direction that leads to the closest way around it, left or right, and steer that way until the path towards the next goal becomes clear again. A simple way to estimate the closest way around the obstacle is to use the object’s center (for simplicity we’ll assume the center is the actual position of the object). All we do is apply a steering force pointing away from that center.

In the presentation, Booth mentions hull trace a lot, and I’ll use it for obstacle avoidance too, so it is definitely worth describing. It is not clear if the algorithm uses a collision mesh or a simple box but, for simplicity, let’s consider that it uses an OBB (oriented bounding box). Let’s assume the OBB fits the bot inside it and has the same position and orientation as the bot. Hull trace is then an algorithm that projects that OBB in a given direction and returns the first point of collision. Like a ray tracing algorithm, but with a box instead of a line. If we were using the bot’s collision mesh, we would do the same with the whole mesh instead of the OBB.

The hull trace algorithm can be a little complex because, in order to make sure we get all collisions, we need a sweep test (or a way to fake it, but I digress). It is performance intensive if we use a complex shape. With the OBB, however, the cost is minimal.
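Since the engine’s implementation isn’t public, here is a minimal sketch of the idea, assuming axis-aligned boxes instead of OBBs (an OBB version would run the same test in the box’s local frame): sweep one box along a direction and return the distance to first contact, using the classic slab method on a Minkowski-expanded target. All names here are my own, not engine code.

```cpp
#include <algorithm>
#include <cmath>

struct Box { float min[3], max[3]; }; // axis-aligned extents

// Sweeps box `a` along unit direction `dir` and returns the distance at
// which it first touches static box `b`, or a negative value on a miss.
// Slab method: expand `b` by a's half extents, then sweep a's center point.
float SweptBoxTrace(const Box& a, const Box& b, const float dir[3], float maxDist)
{
	float tEnter = 0.0f, tExit = maxDist;
	for (int i = 0; i < 3; ++i)
	{
		float half = 0.5f * (a.max[i] - a.min[i]);
		float center = 0.5f * (a.max[i] + a.min[i]);
		float lo = b.min[i] - half, hi = b.max[i] + half;
		if (std::fabs(dir[i]) < 1e-6f)
		{
			if (center < lo || center > hi)
				return -1.0f; // moving parallel to this slab, outside it
		}
		else
		{
			float t0 = (lo - center) / dir[i];
			float t1 = (hi - center) / dir[i];
			if (t0 > t1) std::swap(t0, t1);
			tEnter = std::max(tEnter, t0);
			tExit  = std::min(tExit, t1);
			if (tEnter > tExit)
				return -1.0f; // slabs never overlap at the same time
		}
	}
	return tEnter; // 0 means the boxes already touch
}
```

A real hull trace would also run this against every collidable in range and keep the smallest hit distance, which is why keeping the shape as cheap as a box matters.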

With that out of the way, here’s some C++ high level code for the obstacle avoidance algorithm:

// computes obstacle avoidance movement components
void Bot::ObstacleAvoidanceUpdate(f32 fDeltaTime)
{
	Entity* pEntity; // what it hit
	f32 fDistance; // how far it is
	Physics::HullTrace(m_vPosition, m_pHull, m_vSpeed, m_pWorld,
		pEntity, fDistance);

	// We only avoid close dynamic objects. If it is static and the path goes
	// there, it means the bot will climb
	if ((pEntity == NULL)
		|| pEntity->IsStatic()
		|| (fDistance > m_fMaxSteerDistance))
		return;

	Vector3 vObstaclePos = pEntity->GetPosition();
	Vector3 vSteeringDirection = (m_vPosition - vObstaclePos).Normalize();

	// tag the request with the issuing system so it can be prioritized
	RequestMovement(
		ComputeAvoidanceTranslation(vSteeringDirection, fDistance),
		ComputeAvoidanceSteering(vSteeringDirection, fDistance),
		eSystem_ObstacleAvoidance);
}
OK, with this our bots can now go around dynamic objects and other bots. Note that, as in the previous post, I made a call to RequestMovement(). Now that we have two systems making requests, the function shows its purpose: to select which request to follow. I also chose, again, not to show exactly how to compute the parameters for RequestMovement(). These algorithms are somewhat empirical and usually involve tweaking constant values until the behavior looks good.
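The post never shows RequestMovement() itself, so here is a hypothetical sketch of what such an arbiter could look like: each system submits a request tagged with a priority, and the highest-priority request of the frame wins. The names and the priority scheme are my own assumptions, not from the talk.

```cpp
// Hypothetical movement-request arbitration. Each subsystem (path
// following, obstacle avoidance, ...) submits one request per frame;
// only the most important one actually drives the bot.
struct MovementRequest
{
	float fTranslation; // forward speed factor
	float fSteering;    // turn rate
	int   iPriority;    // e.g. avoidance outranks path following
};

class MovementArbiter
{
public:
	MovementArbiter() : m_bHasRequest(false) {}

	void RequestMovement(const MovementRequest& rRequest)
	{
		// Keep only the most important request submitted this frame.
		if (!m_bHasRequest || rRequest.iPriority > m_Current.iPriority)
		{
			m_Current = rRequest;
			m_bHasRequest = true;
		}
	}

	// Called once per frame, after all systems have updated.
	bool SelectWinner(MovementRequest& rOut)
	{
		if (!m_bHasRequest)
			return false;
		rOut = m_Current;
		m_bHasRequest = false; // reset for the next frame
		return true;
	}

private:
	MovementRequest m_Current;
	bool m_bHasRequest;
};
```

A fancier heuristic could blend requests instead of picking one, but a strict priority pick is the simplest thing that matches the “select what to follow” description.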

Anyway, now there’s only one step left: climbing. In L4D, it is really cool how the bots climb walls and jump over ledges to reach the players. To support that, our navigation mesh, mentioned in the previous post, must also include reachable movement areas on higher ground. If we have that, all that is left is to figure out how to reach the higher points of the map.

This is what Booth talks about in the remaining slides related to navigation. Essentially, every time a bot gets close to a not-too-high obstacle on the way to the next point in the navigation mesh, be it the level itself or a movable object, it is going to attempt to jump and climb.

Let’s get to the algorithm per se. Take a look at the image above, the figures are very helpful. Once we detect there’s an obstacle ahead that justifies climbing, the very first step is to check if there’s space above the bot’s head. If there isn’t, he can’t climb. To do that, we perform the infamous hull trace upwards. Once we know there is space above, we need to know if there’s some sort of passage in front. So, starting from the bottom, we do a hull trace in the forward movement direction, raising the starting height each time, so we look for a passage at several different heights. Once a passage is found, all that is left is to pinpoint the ledge of the passageway. That is done by performing the hull trace multiple times downwards, until we get what I will call a grabbing point.

At this stage, all the bot really has to do is move. Based on the height of the grabbing point, we can decide which animation set to use in order to climb the object.
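As an illustration only, the height-to-animation mapping can be as simple as a few thresholds. The animation names and the threshold values below are completely made up; a real game would tune them against the actual animation set.

```cpp
// Hypothetical mapping from grab height (in meters) to a climb animation.
// Thresholds and names are illustrative, not from the presentation.
enum ClimbAnimation
{
	CLIMB_STEP_UP, // barely a step, just raise the foot
	CLIMB_VAULT,   // waist-high, vault over it
	CLIMB_WALL,    // grab the ledge and pull up
	CLIMB_NONE     // too high, give up
};

ClimbAnimation SelectClimbAnimation(float fGrabHeight)
{
	if (fGrabHeight <= 0.5f) return CLIMB_STEP_UP;
	if (fGrabHeight <= 1.2f) return CLIMB_VAULT;
	if (fGrabHeight <= 2.5f) return CLIMB_WALL;
	return CLIMB_NONE;
}
```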

This concludes this stage of the AI. Below is high level code with my interpretation of the algorithm that finds the climbing edge. Get ready, it is big:

bool Bot::ComputeClimbingEdge(Vector3& p_vEdge)
{
	Vector3 vSpeedDirection = m_vSpeed.Normalize();

	Entity* pEntity;
	f32 fGroundDistance;
	Physics::HullTrace(m_vPosition, m_pHull, vSpeedDirection,
		m_pWorld, pEntity, fGroundDistance);

	// We only climb static objects that are in proper range.
	// If the bot is steering to avoid the obstacle, we also
	// return
	if ((pEntity == NULL)
		|| !pEntity->IsStatic()
		|| (fGroundDistance > m_fTestClimbingDistance)
		|| (m_fSteering > m_fTestClimbingSteering))
		return false;

	// Climbing will start, now we need to find the edge

	// First the maximum height
	f32 fCeilingDistance;
	Entity* pCollidable;
	Physics::HullTrace(m_vPosition, m_pHull, m_pWorld->GetUpVector(),
		m_pWorld, pCollidable, fCeilingDistance);
	if (pCollidable == NULL)
		fCeilingDistance = m_fMaximumClimbingHeight;

	// now we look for the horizontal gap in the wall, starting from the bottom
	Vector3 vForwardTracePosition;
	for (f32 fTestHeight = 0;
		fTestHeight < fCeilingDistance;
		fTestHeight += m_fHullStep)
	{
		f32 fForwardDistance;
		vForwardTracePosition = m_vPosition
			+ (m_pWorld->GetUpVector() * fTestHeight);
		Physics::HullTrace(vForwardTracePosition, m_pHull, m_vSpeed,
			m_pWorld, pCollidable, fForwardDistance);

		// the diameter of the hull is just a guess of a decent value
		if (fForwardDistance
			>= fGroundDistance + m_pHull->GetHorizontalDiameter())
			break; // found a gap wide enough to fit through
	}

	// move forward a bit so we trace from the top of the new level
	Vector3 vDownTracePosition = vForwardTracePosition
		+ vSpeedDirection
			* (fGroundDistance + m_pHull->GetHorizontalDiameter());
	// trace down to get the precise height
	f32 fLevelDistance;
	Physics::HullTrace(vDownTracePosition, m_pHull, -m_pWorld->GetUpVector(),
		m_pWorld, pCollidable, fLevelDistance);

	// now we trace back towards the bot,
	// creating the first final edge position candidate:
	p_vEdge = vDownTracePosition - m_pWorld->GetUpVector() * fLevelDistance;
	for (f32 fBackDistance = 0;
		fBackDistance <= fGroundDistance + m_pHull->GetHorizontalDiameter();
		fBackDistance += m_fHullPrecisionStep)
	{
		Vector3 vTestEdgePosition = vDownTracePosition
			- (vSpeedDirection * fBackDistance);
		f32 fDownDistance;
		Physics::HullTrace(vTestEdgePosition, m_pHull, -m_pWorld->GetUpVector(),
			m_pWorld, pCollidable, fDownDistance);

		// once the trace drops past the new level, we stepped off the
		// ledge: the previous candidate is the edge
		if (fDownDistance > (fLevelDistance + m_fSafetyLevelDelta))
			break;

		p_vEdge = vTestEdgePosition
			- (m_pWorld->GetUpVector() * fDownDistance);
	}

	// we got a valid result in p_vEdge
	return true;
}

I should mention that some assumptions are made about the environment: we assume the passage is not too narrow and that it is at a regular height. I also assumed that, after the first check, the HullTrace() method always hits a valid entity.

The Bot::ComputeClimbingEdge() function is called in the Bot::Update() function, together with the other ones previously defined. After everything is done, the system processes the several movement requests and actually moves the bot.

I think that closes the subject, at least for now. I hope what I left unexplained is not too hard to figure out. But I’m tempted to post more on the subject, so who knows? Maybe we’ll have another one in the series.

Ah, if you decide to implement it, don’t forget that everything that goes up must come down. The bot has to check for edges on the ground too, in order to climb or jump back down to the ground!

See you space cowboys…



AI navigation in Left 4 Dead: Part I February 25, 2010

Posted by Cesar in gaming me, working me.

The other day my good friend (and great game designer) Bruno Palermo pointed me to this presentation that Mike Booth used last year at the Artificial Intelligence and Interactive Digital Entertainment Conference. I found it extremely interesting and useful. I didn’t know Valve made publication material available through their website!

I didn’t have the pleasure of watching it, but from the slides you can tell his presentation goes through a lot of cool stuff. What I want to do is focus only on the AI for pathing and movement and try to get closer to an actual implementation of the system. See it as my version of the presentation. Is that possible? I think it is, let’s try (I didn’t think this through when writing, but if you are reading this, it worked. Don’t worry and continue reading).

The most important thing is to remember that the goal is not the best possible behavior but instead the most realistic one. And an exact, perfect path or movement is not very realistic. So let’s begin.

To move the bots (let’s call them bots, right?), the system works in two phases: path finding and movement. One could think that, after path finding, moving is easy. Well, it isn’t. The L4D system computes paths on a navigation mesh, and the bots extract a rough direction from the path finding system, not an exact guide on how to get from point A to point B. That means it is up to the movement phase to figure out how to reach the next point in the path (jumping, climbing, crouching, whatever).

So the very first step towards the navigation AI is to generate or edit a navigation mesh for the map. I won’t get into specifics (googling navigation mesh should be a good start), but essentially the algorithm tracks “walkable” areas of the map and then grows rectangles that best fit those areas. The result should be a graph-like structure with a bunch of bounding boxes representing areas where the bots can walk. If a bot can move between two boxes, we add a bidirectional connection between them.
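As a minimal sketch, the resulting structure could look like this; the field names are mine, not the Source engine’s:

```cpp
#include <vector>

// Minimal sketch of the nav-mesh data described above: walkable
// rectangles on the ground plane plus bidirectional adjacency.
struct NavArea
{
	float fMinX, fMinZ, fMaxX, fMaxZ; // walkable rectangle
	float fY;                         // height of the area
	std::vector<int> lNeighbors;      // indices of reachable areas
};

struct NavMesh
{
	std::vector<NavArea> lAreas;

	// Connections are added both ways, as described in the text.
	void Connect(int iA, int iB)
	{
		lAreas[iA].lNeighbors.push_back(iB);
		lAreas[iB].lNeighbors.push_back(iA);
	}
};
```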

With the mesh properly generated (never at run-time, eh?), the path finding system searches the graph with a simple A* algorithm. After running A* on our graph of walkable rectangles, we have an ordered list of the rectangles in the path. At this point it is obvious things are already imprecise, which means the movement phase has to find the best way to go from one rectangle to the next.
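For reference, here is a plain-vanilla A* over such a graph, treating each rectangle as its center point for the distance heuristic. This is a textbook sketch under my own naming, not Valve’s implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <queue>
#include <vector>

struct Area { float fX, fZ; std::vector<int> lNeighbors; };

static float Dist(const Area& rA, const Area& rB)
{
	float dx = rA.fX - rB.fX, dz = rA.fZ - rB.fZ;
	return std::sqrt(dx * dx + dz * dz);
}

// Returns the ordered list of area indices from iStart to iGoal,
// or an empty list if no path exists.
std::vector<int> FindPath(const std::vector<Area>& lAreas, int iStart, int iGoal)
{
	const float INF = 1e30f;
	std::vector<float> lCost(lAreas.size(), INF); // cost-so-far per area
	std::vector<int> lCameFrom(lAreas.size(), -1);
	// min-heap of (cost-so-far + heuristic, area index)
	typedef std::pair<float, int> QueueEntry;
	std::priority_queue<QueueEntry, std::vector<QueueEntry>,
		std::greater<QueueEntry> > qOpen;

	lCost[iStart] = 0.0f;
	qOpen.push(QueueEntry(Dist(lAreas[iStart], lAreas[iGoal]), iStart));
	while (!qOpen.empty())
	{
		int iNode = qOpen.top().second;
		qOpen.pop();
		if (iNode == iGoal)
			break;
		for (size_t i = 0; i < lAreas[iNode].lNeighbors.size(); ++i)
		{
			int iNext = lAreas[iNode].lNeighbors[i];
			float fNewCost = lCost[iNode] + Dist(lAreas[iNode], lAreas[iNext]);
			if (fNewCost < lCost[iNext])
			{
				lCost[iNext] = fNewCost;
				lCameFrom[iNext] = iNode;
				qOpen.push(QueueEntry(
					fNewCost + Dist(lAreas[iNext], lAreas[iGoal]), iNext));
			}
		}
	}

	std::vector<int> lPath;
	if (lCost[iGoal] == INF)
		return lPath; // unreachable
	for (int i = iGoal; i != -1; i = lCameFrom[i])
		lPath.push_back(i);
	std::reverse(lPath.begin(), lPath.end());
	return lPath;
}
```

A real implementation would also weight edges by area size or hazard, but the straight-line heuristic is enough to show the shape of the search.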

The optimal approach, rejected by Booth because it looks robotic, is to head for the farthest rectangle that still has a clear, direct path to it. Because the pathing system is not the only one acting on the bot at run-time (there’s also flocking, small obstacle avoidance, etc.), this alternative also requires a lot of repathing: every time something obstructs the path to the target node, the system has to compute a new path.

Instead, the navigation system uses what Booth calls Reactive Path Following. The decision is very local: the target is the unobstructed node farthest down the path in the direction the bot is facing. Booth mentions this makes the path fluid, which wouldn’t be true unless some steering behavior were in place, so here’s my very high level interpretation of that algorithm:

// Rough example of the path update function.
void Bot::PathUpdate(f32 fDeltaTime)
{
	// unless the bot got too far from the path, we don't have to compute it
	// again. In this case, the realism of the algorithm helps performance!
	// Of course we also have to find a path if we don't have one yet.
	if (!HasPath() || (DistanceFromPath() > m_fMaxPathDistance))
	{
		m_lPathNodes = AStar::FindPath(m_pNavMesh, m_vPosition, m_vGoal);
		m_iCurrentNode = 0;
	}

	// looking for the best node to track. Notice that the next node is local,
	// the decision is made every frame.
	Node* pNextNode = m_lPathNodes[m_iCurrentNode];
	for (s32 i = m_iCurrentNode; i < m_lPathNodes.GetSize(); ++i)
	{
		if (IsFacing(m_lPathNodes[i]) && IsClearPath(m_lPathNodes[i]))
			pNextNode = m_lPathNodes[i];
	}

	// assuming some complex heuristic might take place to decide which
	// priority each system gets, we don't update the member variables
	// directly. Instead, make requests informing the current system
	RequestMovement(
		ComputePathSteering(pNextNode, fDeltaTime),
		eSystem_PathFollowing);
}

Ok, that’s it for path finding and basic path following, but there’s a lot more to cover. However, I haven’t posted for a really long time and it would take me another week to finish the whole subject. So let’s stop at this point, and in the next post I’ll continue with obstacle avoidance and ledge climbing.

See you space cowboys…
