Our method first splits the panoramic video into frames. Then, for each frame, it crops out the players, and for each crop it estimates the player's 2D pose (skeleton) using machine learning, an artificial-intelligence technique that learns to process information from examples.
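The cropping step above can be sketched as follows. This is a minimal illustration, not the actual implementation: the `crop_players` helper and the bounding boxes are hypothetical, and in practice the boxes would come from a player detector while the crops would be fed to a pose-estimation model.

```python
import numpy as np

def crop_players(frame, boxes):
    """Cut out each detected player from a frame.

    frame: H x W x 3 image array; boxes: list of (x0, y0, x1, y1) pixel boxes.
    Returns one crop per box, clipped to the frame borders.
    """
    h, w = frame.shape[:2]
    crops = []
    for x0, y0, x1, y1 in boxes:
        x0, x1 = max(0, x0), min(w, x1)
        y0, y1 = max(0, y0), min(h, y1)
        crops.append(frame[y0:y1, x0:x1].copy())
    return crops

# Hypothetical 1080p panoramic frame with two detections; the second box
# sticks out past the right border and is clipped.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
crops = crop_players(frame, [(100, 200, 180, 400), (1900, 500, 1960, 700)])
print([c.shape for c in crops])  # → [(200, 80, 3), (200, 20, 3)]
```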
Once the 2D pose has been estimated, an orientation is proposed for that crop using another artificial-intelligence technique. As a final step, our method combines each player's per-frame orientations over short time intervals to produce a more consistent orientation signal.
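The temporal combination step can be illustrated with a sliding-window circular mean. This is a sketch of the general idea, not the paper's exact aggregation rule: `smooth_orientations` and its window size are assumptions for illustration.

```python
import math

def smooth_orientations(angles_deg, window=5):
    """Combine per-frame orientation estimates over a sliding time window.

    Orientations are circular quantities, so we average unit vectors
    (sin/cos) instead of raw degrees: naively averaging 350 and 10
    would give 180 instead of the correct 0.
    """
    smoothed = []
    half = window // 2
    for i in range(len(angles_deg)):
        lo, hi = max(0, i - half), min(len(angles_deg), i + half + 1)
        s = sum(math.sin(math.radians(a)) for a in angles_deg[lo:hi])
        c = sum(math.cos(math.radians(a)) for a in angles_deg[lo:hi])
        smoothed.append(math.degrees(math.atan2(s, c)) % 360.0)
    return smoothed

# Noisy per-frame estimates around 0 degrees smooth to values near 0/360.
print(smooth_orientations([350.0, 10.0, 0.0, 355.0, 5.0], window=5))
```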
Incorporating each player's orientation throughout the match would benefit current spatio-temporal analyses such as space control, pass probability, defensive pressure, and other models that depend on players' movement and positioning over time.
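As a toy illustration of how orientation could feed such models, one could weight a pass option by how directly the passer faces the receiver. This is a hypothetical ingredient invented for illustration, not a model from this work: `orientation_weight` and its cosine form are assumptions.

```python
import math

def orientation_weight(passer_xy, receiver_xy, passer_orientation_deg):
    """Toy weight in [0, 1]: how well the passer faces the receiver.

    1.0 when the passer faces the receiver directly, 0.0 when facing
    directly away; intermediate angles fall off with the cosine.
    """
    dx = receiver_xy[0] - passer_xy[0]
    dy = receiver_xy[1] - passer_xy[1]
    to_receiver = math.degrees(math.atan2(dy, dx))
    diff = math.radians(to_receiver - passer_orientation_deg)
    return (1.0 + math.cos(diff)) / 2.0

print(orientation_weight((0, 0), (10, 0), 0.0))    # facing the receiver → 1.0
print(orientation_weight((0, 0), (10, 0), 180.0))  # facing away → 0.0
```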
Our proposal has been evaluated both visually and numerically against a portable tracking system (RealTrack System), whose data have already been validated.
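A numerical comparison of this kind could use a mean absolute angular error between the estimated and reference orientation series. The metric below is a plausible sketch, not necessarily the one used in the evaluation; the key detail is that angular differences must wrap around the circle.

```python
def mean_absolute_angular_error(est_deg, ref_deg):
    """Mean absolute difference between two orientation series, in degrees.

    Differences wrap around the circle, so 359 vs 1 counts as a
    2-degree error, not 358.
    """
    errors = []
    for e, r in zip(est_deg, ref_deg):
        d = abs(e - r) % 360.0
        errors.append(min(d, 360.0 - d))
    return sum(errors) / len(errors)

# Hypothetical estimates vs reference readings: errors 2, 5 and 10 degrees.
print(mean_absolute_angular_error([359.0, 10.0, 180.0], [1.0, 5.0, 170.0]))  # ≈ 5.67
```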