
AI navigation in Left 4 Dead: Part II

June 8, 2010

Posted by Cesar.

After a long period of inactivity, it’s about time we continued our analysis of Michael Booth’s presentation about the AI in Left 4 Dead.

If you remember the last post on the subject, I tried to give a bit more depth to the reactive path-following algorithm described in the slides. With that part implemented, we got a nice and fast path finding / following algorithm that produces a well-optimized trajectory without looking artificial. But that’s just half the problem: the macro-management half. Now we need to look at micro-management and define how the bot moves while following the path.

The first step is simple obstacle avoidance. Our path finding algorithm deals with the static objects, i.e. the level itself. But it already introduces, with reactive path following, the concept of steering, which is very important and is also used at this stage. So now our bot must steer towards the next point in the path while also going around moving things that might be in the way: crates, other bots, whatever.
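
To make the idea of steering concrete, here is a minimal sketch of the classic "seek" behavior toward the next path point. The Vec2 type and the exact force formula are my own simplifications for illustration, not taken from the game:

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D vector for the sketch (the engine's Vector3 would be used in practice).
struct Vec2 {
    float x, y;
    Vec2 operator-(const Vec2& o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(float s) const { return {x * s, y * s}; }
    float Length() const { return std::sqrt(x * x + y * y); }
    Vec2 Normalized() const
    {
        float l = Length();
        return l > 0.0f ? Vec2{x / l, y / l} : Vec2{0.0f, 0.0f};
    }
};

// Classic "seek" steering: head for the next path point at full speed and
// steer by the difference between the desired and the current velocity.
Vec2 SeekSteering(const Vec2& vPosition, const Vec2& vVelocity,
                  const Vec2& vTarget, float fMaxSpeed)
{
    Vec2 vDesired = (vTarget - vPosition).Normalized() * fMaxSpeed;
    return vDesired - vVelocity;  // applied as an acceleration each frame
}
```

In practice the resulting force would be clamped and blended with other steering forces rather than applied raw.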

The simplest steering algorithm we can use is to select a direction and try to go around the object. Say the bot bumps into a box. The algorithm must pick the direction that leads to the closest way around it, left or right, and steer that way until the path towards the next goal becomes clear again. A simple way to pick the closest way around the obstacle is to use the object’s center (for simplicity we’ll assume the center is the actual position of the object): all we do is apply a steering force in the opposite direction.

In the presentation, Booth mentions hull trace a lot, and I’ll use it for obstacle avoidance too, so it is definitely worth describing. It is not clear whether the algorithm uses a collision mesh or a simple box but, for simplicity, let’s assume it uses an OOBB (Object-Oriented Bounding Box). Let’s assume the OOBB fits the bot inside it and has the same position and orientation as the bot. Hull trace is then an algorithm that sweeps that OOBB in a given direction and returns the first point of collision. Like a ray tracing algorithm, but with a box instead of a line. If we were using the bot’s collision mesh, we would do the same with the whole mesh instead of the OOBB.

The hull trace algorithm can be a little complex because, to make sure we catch every collision, we need a sweep test (or a way to fake it, but I digress). It is performance intensive with a complex shape; with the OOBB, however, the cost is minimal.
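
As a toy illustration of how a sweep test can be faked, here is a sketch that steps an axis-aligned box along a direction and reports the distance of the first overlap. Everything here (the AABB type, the fixed step size, the function names) is an assumption made for illustration; a real engine would sweep an oriented box or mesh analytically instead of stepping:

```cpp
#include <cassert>
#include <vector>

// Axis-aligned box for simplicity; the post assumes an oriented box (OOBB).
struct AABB { float minX, minY, maxX, maxY; };

bool Overlaps(const AABB& a, const AABB& b)
{
    return a.minX < b.maxX && b.minX < a.maxX
        && a.minY < b.maxY && b.minY < a.maxY;
}

// Faked sweep test: advance the hull in fixed steps along (dirX, dirY) and
// report the distance at which it first overlaps an obstacle.
// Returns a negative value when nothing is hit within fMaxDistance.
float SweptHullTrace(const AABB& hull, float dirX, float dirY,
                     const std::vector<AABB>& obstacles,
                     float fMaxDistance, float fStep = 0.1f)
{
    for (float d = 0.0f; d <= fMaxDistance; d += fStep)
    {
        AABB swept = { hull.minX + dirX * d, hull.minY + dirY * d,
                       hull.maxX + dirX * d, hull.maxY + dirY * d };
        for (const AABB& obstacle : obstacles)
            if (Overlaps(swept, obstacle))
                return d;
    }
    return -1.0f;
}
```

The obvious weakness of stepping is tunneling: a small obstacle can fall between two steps, which is exactly why a proper sweep test is preferable.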

With that out of the way, here’s some C++ high level code for the obstacle avoidance algorithm:

// computes obstacle avoidance movement components
void Bot::ObstacleAvoidanceUpdate(f32 fDeltaTime)
{
	Entity* pEntity; // what it hit
	f32 fDistance; // how far it is
	Physics::HullTrace(m_vPosition, m_pHull, m_vSpeed, m_pWorld,
		pEntity, fDistance);

	// We only avoid close dynamic objects. If it is static and the path goes
	// there, it means the bot will climb
	if ((pEntity == NULL)
		|| pEntity->IsStatic()
		|| (fDistance > m_fMaxSteerDistance))
	{
		return;
	}

	Vector3 vObstaclePos = pEntity->GetPosition();
	Vector3 vSteeringDirection = (m_vPosition - vObstaclePos).Normalize();

	RequestMovement(
		ComputeAvoidanceTranslation(vSteeringDirection, fDistance),
		ComputeAvoidanceSteering(vSteeringDirection, fDistance),
		OBSTACLE_AVOIDANCE);
}

OK, with this our bots can now go around dynamic objects and other bots. Note that, as in the previous post, I made a call to RequestMovement(). Now that we have two systems making requests, the function shows its purpose: to arbitrate between them and select which request to follow. I also chose, again, not to show exactly how to compute the parameters for RequestMovement(). These algorithms are somewhat empirical and usually involve tweaking constants until the behavior looks good.
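
One plausible way RequestMovement() could arbitrate is a simple priority scheme: each system files a request during the update, and the highest-priority active request wins at the end of the frame. The priorities, names, and types below are my guess for the sketch, not the game’s actual implementation:

```cpp
#include <cassert>

// Request sources, ordered so that higher values win the arbitration.
// The ordering is an assumption; the post does not list actual priorities.
enum MovementSource { PATH_FOLLOWING = 0, OBSTACLE_AVOIDANCE = 1, CLIMBING = 2 };

struct MovementRequest {
    float fTranslation;
    float fSteering;
    MovementSource eSource;
    bool bActive;
};

// Each system calls RequestMovement() during the update; the highest-priority
// active request is applied once at the end of the frame.
class MovementArbiter {
public:
    MovementArbiter() : m_Best{0.0f, 0.0f, PATH_FOLLOWING, false} {}

    void RequestMovement(float fTranslation, float fSteering, MovementSource eSource)
    {
        if (!m_Best.bActive || eSource > m_Best.eSource)
            m_Best = {fTranslation, fSteering, eSource, true};
    }

    // Returns the winning request and clears the state for the next frame.
    MovementRequest SelectAndReset()
    {
        MovementRequest result = m_Best;
        m_Best.bActive = false;
        return result;
    }

private:
    MovementRequest m_Best;
};
```

A real implementation might blend lower-priority requests instead of discarding them, but winner-takes-all is the simplest scheme that matches the call sites above.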

Anyway, now there’s only one step left: climbing. In L4D, it is really cool how the bots climb walls and jump over ledges to reach the players. To support that, our navigation mesh, mentioned in the previous post, must also include reachable areas on higher ground. With that in place, all that is left is to figure out how to reach the higher points of the map.

This is what Booth talks about in the remaining slides related to navigation. Essentially, every time a bot gets close to a not-too-tall obstacle on the way to the next point in the navigation mesh, be it the level itself or a movable object, it is going to attempt to jump and climb.

Let’s get to the algorithm per se. Take a look at the image above; the figures are very helpful. Once we detect there’s an obstacle ahead that justifies climbing, the very first step is to check whether there’s space above the bot’s head. If there isn’t, he can’t climb. To do that, we perform the infamous hull trace upwards. Once we know there is space above, we need to know if there’s an opening in front. So, starting from the bottom, we do a hull trace in the forward movement direction at several different heights, looking for a passage. Once a passage is found, all that is left is to pinpoint the ledge of the passageway. That is done by performing the hull trace downwards multiple times, until we get what I will call a grabbing point.

At this stage, all the bot really has to do is move. Based on the height of the grabbing point, we can decide which animation set to use to climb the object.
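
Selecting the animation set could be as simple as bucketing the grab height into ranges. The thresholds and animation names below are made up for illustration; the game presumably has many more climb variants:

```cpp
#include <cassert>

// Hypothetical animation identifiers.
enum ClimbAnimation { CLIMB_STEP_UP, CLIMB_VAULT, CLIMB_LEDGE_GRAB };

// Pick an animation set from the height of the grabbing point relative to
// the bot's feet. Threshold values (in meters) are guesses for the sketch.
ClimbAnimation SelectClimbAnimation(float fGrabHeight)
{
    if (fGrabHeight < 0.5f)  // knee height: just step up
        return CLIMB_STEP_UP;
    if (fGrabHeight < 1.2f)  // waist/chest height: vault over
        return CLIMB_VAULT;
    return CLIMB_LEDGE_GRAB; // above the head: full ledge climb
}
```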

This concludes this stage of the AI. Below is high-level code with my interpretation of the algorithm that finds the climbing edge. Get ready, it is big:

bool Bot::ComputeClimbingEdge(Vector3& p_vEdge)
{
	Vector3 vSpeedDirection = m_vSpeed.Normalize();

	Entity* pEntity;
	f32 fGroundDistance;
	Physics::HullTrace(m_vPosition, m_pHull, vSpeedDirection,
		m_pWorld, pEntity, fGroundDistance);

	// We only climb static objects that are in proper range.
	// If the bot is steering to avoid the obstacle, we also
	// return
	if ((pEntity == NULL)
		|| !pEntity->IsStatic()
		|| (fGroundDistance > m_fTestClimbingDistance)
		|| (m_fSteering > m_fTestClimbingSteering))
	{
		return false;
	}

	//Climbing will start, now we need to find the edge

	// First the maximum height
	f32 fCeilingDistance;
	Entity* pCollidable;
	Physics::HullTrace(m_vPosition, m_pHull, m_pWorld->GetUpVector(),
		m_pWorld, pCollidable, fCeilingDistance);
	if (pCollidable == NULL)
	{
		fCeilingDistance = m_fMaximumClimbingHeight;
	}

	// now we look for a horizontal gap in the wall, starting from the bottom
	bool bFoundGap = false;
	Vector3 vForwardTracePosition;
	for (f32 fTestHeight = 0;
		fTestHeight < fCeilingDistance;
		fTestHeight += m_fHullStep)
	{
		f32 fForwardDistance;
		vForwardTracePosition = m_vPosition
			+ (m_pWorld->GetUpVector() * fTestHeight);
		Physics::HullTrace(vForwardTracePosition, m_pHull, vSpeedDirection,
			m_pWorld, pCollidable, fForwardDistance);

		// the diameter of the hull is just a guess of a decent value
		if (fForwardDistance
			>= fGroundDistance + m_pHull->GetHorizontalDiameter())
		{
			bFoundGap = true;
			break;
		}
	}

	// no opening below the ceiling: the obstacle is too tall to climb
	if (!bFoundGap)
	{
		return false;
	}

	// move forward a bit so we trace from the top of the new level
	Vector3 vDownTracePosition = vForwardTracePosition
		+ vSpeedDirection
			* (fGroundDistance + m_pHull->GetHorizontalDiameter());
	// trace down to get the precise height
	f32 fLevelDistance;
	Physics::HullTrace(vDownTracePosition, m_pHull, -m_pWorld->GetUpVector(),
		m_pWorld, pCollidable, fLevelDistance);

	// now we trace back
	// creating first final edge position candidate:
	p_vEdge = vDownTracePosition - m_pWorld->GetUpVector() * fLevelDistance;
	for (f32 fBackDistance = 0;
		fBackDistance <= fGroundDistance + m_pHull->GetHorizontalDiameter();
		fBackDistance += m_fHullPrecisionStep)
	{
		Vector3 vTestEdgePosition = vDownTracePosition
			- (vSpeedDirection * fBackDistance);
		f32 fDownDistance;
		Physics::HullTrace(vTestEdgePosition, m_pHull, -m_pWorld->GetUpVector(),
			m_pWorld, pCollidable, fDownDistance);

		if (fDownDistance > (fLevelDistance + m_fSafetyLevelDelta))
		{
			break;
		}
		else
		{
			p_vEdge = vTestEdgePosition
				- (m_pWorld->GetUpVector() * fDownDistance);
		}
	}

	// we got a valid result in p_vEdge
	return true;
}

I should mention that some assumptions are made about the environment: we assume the passage is not too narrow and that it is at a regular height. I also assumed that, after the first check, the HullTrace() method always hits a valid entity.

The Bot::ComputeClimbingEdge() function is called in the Bot::Update() function, together with the other ones previously defined. Once everything has run, the system processes the various movement requests and actually moves the bot.

I think that closes the subject, at least for now. I hope what I left unexplained is not too hard to figure out. But I’m tempted to post more on the subject, so who knows? Maybe we’ll have another one in the series.

Ah, if you decide to implement this, don’t forget that everything that goes up must come down. The bot also has to check for ledges on the ground ahead, in order to climb or jump back down!
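
The reverse check can be sketched the same way: probe the ground a little ahead of the bot and flag a ledge when the floor drops by more than a comfortable step height. The ground-sampler callback below is a stand-in for a real downward hull trace, and all names and values are mine, not the game’s:

```cpp
#include <cassert>

// Returns the floor height at a given horizontal coordinate.
// In a real engine this would be a downward hull trace.
typedef float (*GroundHeightFn)(float x);

// Flags a ledge when the ground ahead is lower than the ground here by more
// than fStepHeight, meaning the bot should jump or climb down, not walk.
bool DetectDropAhead(float fBotX, float fProbeDistance, float fStepHeight,
                     GroundHeightFn GroundHeight)
{
    float fHere = GroundHeight(fBotX);
    float fAhead = GroundHeight(fBotX + fProbeDistance);
    return (fHere - fAhead) > fStepHeight;  // ground ahead is notably lower
}
```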

See you space cowboys…
