SuperAgent.ai

Q - Machine Learning in UNITY3D

Virtual Reality, Artificial Intelligence

Overview / Usage

This is a simple Q-learning game made with Unity 3D; it demonstrates a core part of implementing machine learning algorithms in Unity.

The algorithms we use are as follows:

Q-Learning is an off-policy algorithm for Temporal Difference learning. It can be proven that, given sufficient training under any ε-soft policy, the algorithm converges with probability 1 to a close approximation of the action-value function for an arbitrary target policy; a minimal ε-greedy sketch follows this list.
A Q-Learning agent learns the optimal policy even when actions are selected according to a more exploratory or even random policy. The iterative algorithm for SARSA is used in this project; the SARSA algorithm is a stochastic approximation to the Bellman equations for Markov Decision Processes.
TD learning, including SARSA and Q-Learning, uses the ideas of Dynamic Programming in a sample-based environment where the equalities are true in expectation.
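As an illustration of the ε-soft behaviour policy described above, here is a minimal ε-greedy action selection in Python. The function name, the dictionary-based Q-table, and the default epsilon are assumptions made for this sketch, not the project's actual Unity code.

```python
import random

def epsilon_greedy(q_table, state, n_actions, epsilon=0.1):
    """Pick an action under an epsilon-soft behaviour policy.

    With probability epsilon we explore (uniform random action);
    otherwise we exploit the current action-value estimates.
    q_table is assumed to be a dict mapping (state, action) -> value.
    """
    if random.random() < epsilon:
        return random.randrange(n_actions)  # explore
    values = [q_table.get((state, a), 0.0) for a in range(n_actions)]
    best = max(values)
    # Break ties randomly among the greedy actions.
    greedy = [a for a, v in enumerate(values) if v == best]
    return random.choice(greedy)
```

Any ε > 0 keeps the policy ε-soft: every action retains a non-zero probability of being selected, which is the condition behind the convergence claim above.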

Essentially, you can see how the update $q_\pi(s,a) = \sum_{s',r} p(s',r \mid s,a)\,\bigl(r + \gamma \sum_{a'} \pi(a' \mid s')\, q_\pi(s',a')\bigr)$ has turned into SARSA's update:

The weighted sum over state-transition and reward probabilities happens in expectation as you take many samples, so $Q(S,A) = \mathbb{E}\bigl[\text{Sampled}(R) + \gamma \sum_{a'} \pi(a' \mid S')\, q_\pi(S',a')\bigr]$ (technically you have to sample $R$ and $S'$ together). Likewise, the weighting by the current policy happens in expectation, so $Q(S,A) = \mathbb{E}\bigl[\text{Sampled}(R + \gamma Q(S',A'))\bigr]$. To change this expectation into an incremental update, allowing for non-stationarity as the policy improves over time, we add a learning rate and move each estimate towards the latest sampled value: $Q(S,A) \leftarrow Q(S,A) + \alpha\,\bigl[R + \gamma Q(S',A') - Q(S,A)\bigr]$.
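As a minimal sketch of that incremental update (the function names, the dictionary-based Q-table, and the α and γ defaults are illustrative assumptions, not the project's code): SARSA bootstraps from the action A' actually selected in S', while Q-learning bootstraps from the greedy action in S'.

```python
def sarsa_update(q_table, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy TD update: Q(S,A) <- Q(S,A) + alpha*[R + gamma*Q(S',A') - Q(S,A)]."""
    td_target = r + gamma * q_table.get((s_next, a_next), 0.0)
    q_table[(s, a)] = q_table.get((s, a), 0.0) + alpha * (td_target - q_table.get((s, a), 0.0))

def q_learning_update(q_table, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.99):
    """Off-policy TD update: bootstrap from max over a' of Q(S', a') instead of the sampled A'."""
    best_next = max(q_table.get((s_next, a2), 0.0) for a2 in range(n_actions))
    td_target = r + gamma * best_next
    q_table[(s, a)] = q_table.get((s, a), 0.0) + alpha * (td_target - q_table.get((s, a), 0.0))
```

Using the sampled A' is what makes SARSA on-policy; replacing it with the maximum over next actions is what makes Q-learning off-policy, as described above.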

The goal when doing Reinforcement Learning is to train an agent that can learn to act in ways that maximize future expected rewards within a given environment. In the last post in this series, that environment was relatively static. The state of the environment was simply which of the three possible rooms the agent was in, and the actions were choosing which chest within that room to open. Our algorithm learned the Q-function for each of these state-action pairs: Q(s, a). This Q-function corresponded to the expected future reward that would be acquired over time by taking that action within that state.
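To make that tabular Q-function concrete, Q(s, a) for the three-room environment can be stored as a small 2-D array indexed by room and chest. The number of chests and the helper function below are assumptions for illustration, not the code from that post.

```python
import numpy as np

n_rooms, n_chests = 3, 2            # three rooms from the text; chest count assumed
q = np.zeros((n_rooms, n_chests))   # Q(s, a): expected future reward per room/chest

def act(room):
    # Acting greedily means opening the highest-valued chest in the current room.
    return int(np.argmax(q[room]))
```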
