Controlling a 3D gaming agent in an adversarial setting with Deep Reinforcement Learning

The field of automation is expanding into our daily lives: most everyday tasks are now performed automatically, and these automated systems are largely handled by intelligent agents.

Project status: Published/In Market

Game Development, Artificial Intelligence, Graphics and Media, Cloud, Games

Intel Technologies
oneAPI, Intel Integrated Graphics, Intel GPA

Overview / Usage

In order to perform a large variety of tasks and to achieve human-level performance in complex real-world environments, Artificial Intelligence (AI) agents must be able to learn from their past experiences and gain both knowledge and an accurate representation of their environment. Traditionally, AI agents have suffered from difficulties in obtaining a good representation of their environment and then mapping this representation to an efficient control policy. Deep reinforcement learning algorithms have provided a solution to this issue. This project aims to train 3D gaming agents using different deep reinforcement learning models.
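As a sketch of how this mapping from pixels to a control policy can look, the snippet below defines a small convolutional Q-network in PyTorch together with an epsilon-greedy action selector. The frame size (84x84), the four-frame stack, and the action count are illustrative assumptions rather than fixed project values.

# Minimal sketch: a convolutional network maps stacked screen frames (the
# agent's representation of the environment) to action values, which define
# the control policy. Sizes below are illustrative assumptions.
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions: int, in_frames: int = 4):
        super().__init__()
        self.features = nn.Sequential(                       # encode raw pixels
            nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(                           # features -> Q-values
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frames / 255.0))

def select_action(net: QNetwork, frames: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy policy over the network's Q-values."""
    if random.random() < epsilon:
        return random.randrange(net.head[-1].out_features)
    with torch.no_grad():
        return int(net(frames.unsqueeze(0)).argmax(dim=1).item())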

Description:

I am training the agent in Python with deep learning libraries such as PyTorch and TensorFlow, using oneMKL for mathematical functions and oneDNN to accelerate the neural-network primitives. I will also use oneTBB to run parts of the Python program in parallel and increase processing speed.

For this, I will be using the following tools from the Intel oneAPI toolkits:

• Intel IPP for image processing of the game frames
• Intel TBB to run multiple tasks in parallel
• Intel DevCloud
• Intel MKL
• Intel Distribution for Python

Along with these, Intel Advisor and Intel Inspector will be used for analysis and debugging.
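The snippet below sketches how these pieces are typically switched on from Python: TensorFlow's oneDNN optimizations are toggled with the TF_ENABLE_ONEDNN_OPTS environment variable, and the MKL/OpenMP thread counts are set before the frameworks are imported. The exact variable names and defaults depend on the installed TensorFlow and Intel Distribution for Python versions, so treat them as assumptions to verify on the target machine.

# Sketch of wiring the Intel pieces in from Python. The environment variables
# below are the commonly documented knobs; verify them against the installed
# versions before relying on them.
import os

# Enable oneDNN graph optimizations in TensorFlow (set before importing it).
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# Cap the thread pools used by oneMKL-backed math and OpenMP regions.
os.environ["MKL_NUM_THREADS"] = "8"
os.environ["OMP_NUM_THREADS"] = "8"

import numpy as np
import tensorflow as tf

# With the Intel Distribution for Python, NumPy's BLAS/LAPACK should report an
# MKL build; this just prints the linked libraries for a quick sanity check.
np.__config__.show()
print("TensorFlow", tf.__version__, "devices:", tf.config.list_physical_devices())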

Methodology / Approach

Model: In this project, we use a reinforcement learning model. With this model, we train the agent in a 3D environment so that it learns to act on the best choice available to it. The approach is organized around four elements:

• Adversarial Games

• Humanoid Robot

• Source Code

• 3D Environment

Adversarial Games: In an adversarial game there are two agents, and each tries to increase its own utility, which in turn reduces the utility of the other. People often work on single-agent strategy games, but rarely on games in which you must increase your own utility in order to decrease the utility of the opposing agent. A minimal sketch of this zero-sum setup is shown below.
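In the sketch, whatever reward one agent earns, the other loses. The env, agent_a, and agent_b objects are hypothetical placeholders with a Gym-style step() and a simple act()/learn() interface; they stand in for whatever concrete game and learners are used.

# Minimal zero-sum self-play sketch: one agent's gain is the other's loss.
# env, agent_a, and agent_b are hypothetical placeholders.
def play_episode(env, agent_a, agent_b, max_steps=1000):
    obs = env.reset()
    for _ in range(max_steps):
        action_a = agent_a.act(obs)
        action_b = agent_b.act(obs)
        obs, reward_a, done, _ = env.step((action_a, action_b))
        reward_b = -reward_a          # zero-sum: B's utility is the negative of A's
        agent_a.learn(obs, reward_a, done)
        agent_b.learn(obs, reward_b, done)
        if done:
            break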

Humanoid Robot: Working under constrained conditions to create a general environment opens the gate to a major research area for humanoid robots. Because the concept generalizes across tasks, it can push robotics to new heights.

Source Code: Most prior work gives the agent its environment state directly from the game's source code. I don't think that is fair: humans cannot interact with the source code when they play, so to keep the competition fair the agent should not have access to it either. The agent should have the same resources as a human player and still beat the human expert to become world champion. Therefore, the agent must use the same controller to play the game and must read the state of the environment from the screen.
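The sketch below illustrates this "same resources as a human" constraint: the state is captured from the screen and actions are issued through ordinary keyboard input. It assumes the third-party mss, opencv-python, and pyautogui packages; the monitor region and key bindings are illustrative placeholders.

# Observe the game the way a human does: grab the screen rather than read the
# game's memory or source code, and act through the same keyboard controls.
import time
import cv2
import numpy as np
import pyautogui
from mss import mss

GAME_REGION = {"top": 0, "left": 0, "width": 800, "height": 600}   # placeholder window
ACTION_KEYS = ["w", "a", "s", "d"]                                  # placeholder controls

def grab_frame(region=GAME_REGION):
    """Capture the game window and reduce it to an 84x84 grayscale observation."""
    with mss() as sct:
        raw = np.array(sct.grab(region))                 # BGRA screenshot
    gray = cv2.cvtColor(raw, cv2.COLOR_BGRA2GRAY)
    return cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)

def send_action(action_index: int, hold_seconds: float = 0.05):
    """Issue the chosen action through the same controls a human would use."""
    key = ACTION_KEYS[action_index]
    pyautogui.keyDown(key)
    time.sleep(hold_seconds)
    pyautogui.keyUp(key)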

3D Environment: Most prior work uses Atari games, which are 2D rather than 3D. Some 3D games such as FIFA and GTA have been studied, but without a general concept: each game gets its own purpose-built environment, so an environment created for FIFA does not work on GTA. The main problems faced in this project were handling the environment in 3D, combining it with the adversarial setting, and managing the environment interface across episodes and iterations so that all the components work together.
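Building on the screen-capture and control sketch above, the outline below shows one way to hide each game behind a single Gym-style reset()/step() interface, so the same agent and episode loop run unchanged across titles. The reward and end-of-episode checks are game-specific hooks left as placeholders.

# General-environment sketch: wrap any screen-controlled 3D game behind one
# reset()/step() interface. Reuses grab_frame()/send_action() from the
# screen-capture sketch above; reward/termination logic is game-specific.
class ScreenGameEnv:
    def __init__(self, n_actions: int):
        self.n_actions = n_actions

    def reset(self):
        # Game-specific: restart the match/session, then return the first frame.
        return grab_frame()

    def step(self, action_index: int):
        send_action(action_index)
        obs = grab_frame()
        reward = self._reward_from_frame(obs)      # placeholder game-specific reward
        done = self._episode_finished(obs)         # placeholder termination check
        return obs, reward, done, {}

    def _reward_from_frame(self, frame):
        raise NotImplementedError("score/health parsing is game-specific")

    def _episode_finished(self, frame):
        raise NotImplementedError("end-of-episode detection is game-specific")

# Episodes and iterations: the same loop works for any ScreenGameEnv subclass.
def train(env, agent, episodes=100):
    for episode in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            obs, reward, done, _ = env.step(action)
            agent.learn(obs, reward, done)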

Technologies Used

oneAPI, oneMKL, Python, OpenCV, TensorFlow, GitHub, Qt, Keras, PyTorch, Intel oneAPI toolkits
