
2019 Scientific Conference on Network, Power Systems and Computing, Pages 105-108

A Novel Path Planning Algorithm Based on Q-learning and Adaptive Exploration Strategy

Ting Li, Ying Li

Corresponding Author:

Ting Li

Abstract:
In an unknown environment, how to plan a path is a fundamental problem for agents. In this paper, we propose an improved reinforcement learning algorithm, called the adaptive exploration Q-learning algorithm (AEQ), to solve the path planning problem. First, so that an agent can learn autonomously through trial and error when it knows nothing about the environment, AEQ builds on the Q-learning algorithm. Second, AEQ uses an adaptive exploration strategy aimed at speeding up convergence. The adaptive exploration strategy dynamically adjusts the exploration factor according to the current situation, so that the agent explores the environment sufficiently and makes full use of the environment information. The experimental results show that with AEQ the agent successfully reaches the goal without collision. Moreover, compared with the classical Q-learning algorithm and the SARSA algorithm, AEQ improves the convergence speed and reduces the convergence time.
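The abstract describes AEQ only qualitatively: Q-learning combined with an exploration factor that is adjusted as learning progresses. The following is a minimal Python sketch of that idea on a toy grid world; the grid layout, reward values, and the specific decay rule in adaptive_epsilon are illustrative assumptions, not the paper's actual strategy.

# A minimal sketch of Q-learning with an adaptive exploration factor on a toy
# grid world. The decay rule (shrinking epsilon with episode progress and the
# success ratio) and the grid layout are illustrative assumptions; the paper's
# AEQ strategy is only described qualitatively in the abstract.
import random

ROWS, COLS = 5, 5
START, GOAL = (0, 0), (4, 4)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}          # assumed obstacle cells
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA = 0.1, 0.9
EPS_MAX, EPS_MIN = 1.0, 0.05

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}

def step(state, a):
    """Apply action a; walls and obstacles keep the agent in place with a penalty."""
    r, c = state
    dr, dc = ACTIONS[a]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < ROWS and 0 <= nc < COLS) or (nr, nc) in OBSTACLES:
        return state, -5.0, False          # collision penalty, stay in place
    if (nr, nc) == GOAL:
        return (nr, nc), 100.0, True       # goal reward
    return (nr, nc), -1.0, False           # small step cost

def adaptive_epsilon(episode, successes, total_episodes):
    """Assumed adaptive rule: explore more early on and while successes are rare,
    exploit more as the episode count and success ratio grow."""
    progress = episode / total_episodes
    success_ratio = successes / max(1, episode)
    eps = EPS_MAX * (1.0 - progress) * (1.0 - 0.5 * success_ratio)
    return max(EPS_MIN, eps)

def train(episodes=500):
    successes = 0
    for ep in range(episodes):
        state, done = START, False
        eps = adaptive_epsilon(ep, successes, episodes)
        for _ in range(200):                       # step limit per episode
            if random.random() < eps:              # explore
                a = random.randrange(len(ACTIONS))
            else:                                  # exploit best known action
                a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
            nxt, reward, done = step(state, a)
            best_next = max(Q[(nxt, x)] for x in range(len(ACTIONS)))
            # standard Q-learning update
            Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
            state = nxt
            if done:
                successes += 1
                break
    return successes

if __name__ == "__main__":
    print("successful episodes:", train())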
Keywords:
Reinforcement learning, Path planning, Obstacle avoidance, Q-learning
Cite this paper:
Ting Li, Ying Li, A Novel Path Planning Algorithm Based on Q-learning and Adaptive Exploration Strategy. 2019 Scientific Conference on Network, Power Systems and Computing (NPSC 2019), 2019: 105-108. DOI: https://doi.org/10.33969/EECS.V3.024.