
Incredible prospects for next-gen sports video games

Penne Rara

Dec 27, 2016
A paper by AI researchers from Stanford demonstrates a new algorithm that can generate controllable, animated sprites from the footage of a tennis match. After the user clicks on the desired landing position of the ball, the algorithm predicts the correct ball trajectory by solving a set of ordinary differential equations that account for both ball spin (which produces the Magnus effect) and an unspecified drag model. But the core of the system, and by far its most interesting feature, is its player behavioral model. The researchers segment several matches into smaller clips named "shot cycles" (a shot cycle starts when the tennis player reacts to the incoming ball and ends after the player strikes it, and can be decomposed into two phases: pre-strike and post-strike) and classify them into different shot types (forehand topspin/underspin, backhand topspin/underspin, serve, smash, etc.).
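To give an idea of the trajectory part, here is a minimal sketch of that kind of ODE model: the ball's acceleration combines gravity, a quadratic drag term, and a Magnus term proportional to spin cross velocity. The integration scheme (forward Euler) and all coefficients are illustrative assumptions of mine, not values from the paper.

```python
def simulate_trajectory(pos, vel, spin, dt=0.001, max_steps=2000,
                        g=9.81, drag_coeff=0.005, magnus_coeff=0.0002):
    """Integrate the ball ODE with forward Euler until the ball lands (z <= 0)."""
    x, y, z = pos            # z is height above the court
    vx, vy, vz = vel
    wx, wy, wz = spin        # angular velocity vector (spin axis and rate)
    path = [(x, y, z)]
    for _ in range(max_steps):
        speed = (vx * vx + vy * vy + vz * vz) ** 0.5
        # Quadratic drag opposes the velocity direction
        ax = -drag_coeff * speed * vx
        ay = -drag_coeff * speed * vy
        az = -drag_coeff * speed * vz - g
        # Magnus force is proportional to the cross product spin x velocity
        ax += magnus_coeff * (wy * vz - wz * vy)
        ay += magnus_coeff * (wz * vx - wx * vz)
        az += magnus_coeff * (wx * vy - wy * vx)
        vx += ax * dt; vy += ay * dt; vz += az * dt
        x += vx * dt; y += vy * dt; z += vz * dt
        path.append((x, y, z))
        if z <= 0:           # ball has reached the ground
            break
    return path
```

In a controllable setting like the paper's, one would then search over initial velocity and spin so that the simulated landing point matches the spot the user clicked.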

A point is decomposed as such:
1- The code first generates a behavior for the serving player: it decides the shot type, the ball velocity off the racket, the bounce position and the player's recovery position, starting from an initial state consisting of the player's position before contact and the incoming ball trajectory. These decisions are made with Bayesian statistical models that account not only for velocity and bounce-spot distributions but also for player style.
2- The code browses the clip database and looks for the clip that is best suited to the previously generated behavior for the serving player.
3- Steps 1 and 2 are repeated for the opposing player.
4- The code enters a loop: it first computes the behavior of the next player to hit the ball, then determines whether that shot wins or loses the point, then searches for the clip that best matches both the behavior and the point outcome. Finally, it adjusts the player's recovery phase, using a least-squares optimization and an interpolation method to build a transition between the end of the current clip and the start of the next one.
5- When this is done, the code renders the reaction phase of the current player and the recovery phase of their opponent, since the "start of a shot cycle for a player is offset from that of their opponent by half a cycle".
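The loop above can be sketched in toy form. The shot-type list, the sampling distributions, the clip database format and the outcome test below are all stand-ins I made up for illustration; the paper's actual behavior models are learned Bayesian models, not these hard-coded numbers.

```python
import random

SHOT_TYPES = ["forehand_topspin", "forehand_underspin",
              "backhand_topspin", "backhand_underspin", "serve", "smash"]

def sample_behavior(rng):
    """Step 1 (toy version): sample shot type, outgoing ball speed,
    bounce position and recovery position from illustrative distributions."""
    return {
        "shot_type": rng.choice(SHOT_TYPES),
        "ball_speed": rng.gauss(25.0, 5.0),              # m/s, made up
        "bounce_pos": (rng.uniform(-4, 4), rng.uniform(6, 11)),
        "recovery_pos": (rng.uniform(-3, 3), -12.0),
    }

def find_best_clip(clip_db, behavior):
    """Step 2 (toy version): among clips of the right shot type, pick the
    one whose annotated ball speed is closest to the sampled behavior."""
    candidates = [c for c in clip_db if c["shot_type"] == behavior["shot_type"]]
    if not candidates:
        return None
    return min(candidates,
               key=lambda c: abs(c["ball_speed"] - behavior["ball_speed"]))

def play_point(clip_db, seed=0, max_shots=50):
    """Steps 3-4: alternate between the two players until the point ends."""
    rng = random.Random(seed)
    rally = []
    for shot in range(max_shots):
        player = shot % 2                     # players alternate shot cycles
        behavior = sample_behavior(rng)
        point_over = rng.random() < 0.15      # toy stand-in for the outcome model
        clip = find_best_clip(clip_db, behavior)
        rally.append((player, behavior["shot_type"], clip))
        if point_over:
            break
    return rally
```

The real system additionally blends consecutive clips (the least-squares recovery adjustment of step 4) and offsets the two players' shot cycles by half a cycle, which this sketch ignores.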

Most notably, the algorithm normalizes the appearance of a player throughout an entire match, or even across different matches (as long as he is dressed the exact same way from one match to another), in order to hallucinate missing body parts or to correct appearance differences caused by shading, for instance. This lets the code generate consistent frames whenever the best-fitting clip doesn't contain the entire body (a cropped frame) or has a shadow cast over the player.

This algorithm could be used with the ambition of building a behavioral model for other sports video games, and why not rugby video games! Take the Rugby Challenge series for instance. Passes are delivered almost on the spot, with the exact same animation for every short pass, a single different animation for long passes, and rudimentary ball trajectories. This could be considerably improved by using such an AI program to find the best motion and ball trajectory across an array of available clips. The program would no doubt be far too heavy to run inside a video game, because of its significant algorithmic complexity (probably) and memory/storage-intensive operations (surely). But game developers could use it offline to mimic novel situations and fine-tune their own player physics engine in the absence of motion capture.
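To make the rugby idea concrete, here is a hypothetical sketch of how a game could pick a pass animation from a clip library instead of reusing one fixed animation per pass type. The clip attributes and scoring weights are entirely made up for illustration; a real system would match many more features (body orientation, player speed, contact pressure, and so on).

```python
def score_clip(clip, desired_distance, desired_angle,
               w_dist=1.0, w_angle=0.5):
    """Lower score = better match between a stored clip and the desired pass.
    Weights are arbitrary illustrative values."""
    return (w_dist * abs(clip["pass_distance"] - desired_distance)
            + w_angle * abs(clip["pass_angle"] - desired_angle))

def best_pass_clip(clip_db, desired_distance, desired_angle):
    """Browse the clip database and return the best-suited pass animation."""
    return min(clip_db,
               key=lambda c: score_clip(c, desired_distance, desired_angle))
```

With a large enough library, every pass would then get an animation and ball trajectory that actually fit the situation, rather than the same canned motion each time.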

The article is summed up in the following video:

and can be found here: https://arxiv.org/pdf/2008.04524.pdf