This AI Agent Uses Reinforcement Learning To Self-Drive In A Video Game – Analytics India Magazine

Posted: December 31, 2019 at 11:46 pm



One of the most widely used machine learning (ML) techniques of this year, reinforcement learning (RL) has been applied to solve complex decision-making problems. At present, most research focuses on RL algorithms that improve the performance of an AI model within some controlled environment.

Ubisoft's prototyping space, Ubisoft La Forge, has been making significant advances in AI. The goal of this prototyping space is to bridge the gap between theoretical academic work and the practical applications of AI in video games as well as in the real world. In one of our articles, we discussed how Ubisoft is mainstreaming machine learning into game development. Recently, researchers from the La Forge project at Ubisoft Montreal proposed a hybrid AI algorithm known as Hybrid SAC, which can handle the mixed action types found in video games.

Most reinforcement learning research papers focus on environments where the agent's actions are either discrete or continuous. However, when training an agent to play a video game, it is common to encounter situations where actions have both discrete and continuous components. For instance, the agent may need to drive a car by combining steering and acceleration (both continuous) with a hand brake (a discrete, binary action).

This is where Hybrid SAC comes into play. With this model, the researchers aimed to address challenges that are common in video game development, and the contribution is geared mainly towards industry practitioners, who work under a different set of constraints than academic researchers.

The approach in this research is based on Soft Actor-Critic (SAC), a model-free algorithm originally proposed for continuous control tasks. The actions encountered in video games, however, are often a mix of continuous and discrete components.
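For reference, SAC optimises the standard maximum-entropy objective from the original SAC literature (this is general background, not something introduced by the Ubisoft paper): the policy is trained to maximise expected reward plus an entropy bonus weighted by a temperature parameter α.

```latex
% Standard maximum-entropy RL objective optimised by SAC:
% expected reward plus an entropy bonus weighted by the temperature alpha.
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
         \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
```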

To deal with a mix of discrete and continuous action components, the researchers converted part of SAC's continuous output into discrete actions. They then extended this approach into Hybrid SAC, an extension of the SAC algorithm that can handle discrete, continuous and mixed discrete-continuous actions.
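The sketch below illustrates the conversion idea in Python. It is a minimal, hypothetical example and not the authors' code: a Gaussian policy head, as in standard SAC implementations, produces a few extra continuous values, and a discrete action is obtained by taking an argmax over them.

```python
# Minimal sketch (not Ubisoft's implementation) of carving a discrete action
# out of a continuous SAC-style policy output.
import torch
import torch.nn as nn


class HybridPolicy(nn.Module):
    """Outputs n_continuous continuous actions plus n_discrete_options extra
    continuous values that are discretised (argmax) into one discrete action,
    e.g. a binary hand brake."""

    def __init__(self, obs_dim, n_continuous=2, n_discrete_options=2, hidden=256):
        super().__init__()
        out_dim = n_continuous + n_discrete_options
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, out_dim)
        self.log_std = nn.Linear(hidden, out_dim)
        self.n_continuous = n_continuous

    def forward(self, obs):
        h = self.net(obs)
        mean, log_std = self.mean(h), self.log_std(h).clamp(-20, 2)
        # Reparameterised sample, squashed to [-1, 1] as in standard SAC.
        sample = torch.tanh(mean + log_std.exp() * torch.randn_like(mean))
        continuous = sample[..., :self.n_continuous]            # e.g. steering, acceleration
        discrete = sample[..., self.n_continuous:].argmax(-1)   # e.g. 0/1 hand brake
        return continuous, discrete


# Usage: actions for a batch of 4 observations of size 32.
policy = HybridPolicy(obs_dim=32)
cont, disc = policy(torch.randn(4, 32))
print(cont.shape, disc.shape)  # torch.Size([4, 2]) torch.Size([4])
```

In Hybrid SAC itself, the discrete components are handled with their own distribution rather than a hard argmax, so that the entropy and log-probability terms remain well defined; the snippet only shows the basic conversion idea described above.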

The researchers trained a vehicle in a Ubisoft game using the proposed Hybrid SAC model with two continuous actions (acceleration and steering) and one binary discrete action (hand brake). The car's objective is to follow a given path as fast as possible, and the discrete hand brake action plays a key role in keeping the car on the road at such high speeds.
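As a purely hypothetical illustration of this setup (not the actual game interface), the mixed action space can be written down with Gym spaces: two bounded continuous controls and one binary discrete control.

```python
# Hypothetical mixed action space for the driving task described above;
# the names and bounds are illustrative, not taken from the game.
import numpy as np
from gym import spaces

driving_action_space = spaces.Dict({
    "steering": spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
    "acceleration": spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
    "hand_brake": spaces.Discrete(2),  # 0 = released, 1 = pulled
})

# A random sample from this space mixes continuous and discrete parts.
print(driving_action_space.sample())
```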

Hybrid SAC exhibits performance competitive with the state of the art on parameterised-action benchmarks. The researchers showed that the model can be successfully applied to train a car on a high-speed driving task in a commercial video game, demonstrating the practical usefulness of such an algorithm for the video game industry.

While working with mixed discrete-continuous actions, the researchers gathered several lessons and shared them as advice on choosing an appropriate action representation for a given task.


A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: ambika.choudhury@analyticsindiamag.com


Posted in Machine Learning



