Abstract

Reinforcement Learning (RL) has been studied for some time, and a number of challenges remain in its application. In recent years, RL has seen a surge in popularity driven by deep learning. For instance, RL played a critical role in DeepMind's AlphaGo program, which defeated a top-level Go player in 2016 [1]. Despite these advances, however, considerable work remains before such systems become mainstream. One of these challenges is the ability to multitask. The authors of [1] argue that agents must be able to perform a variety of functions to achieve general AI. Multitasking, however, remains a key obstacle to the scalability of AI and RL; for example, learning 1000 different tasks should not require over 1000 hours of task-specific training. Instead, AI agents must build up a library of general knowledge and learn general skills that are common and applicable across a variety of tasks. This ability is currently absent from approaches such as the Deep Q-Network (DQN). While DQN has been shown to play a variety of games, including Atari games, there is no learning transferred across tasks: each game is learned from scratch, which does not scale.
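To make the "learned from scratch" limitation concrete, the sketch below (not from the paper; the toy environment, network size, and hyperparameters are illustrative assumptions) trains an independent DQN-style learner per task. Because each task gets freshly initialized weights and its own optimizer, nothing learned on one task carries over to the next.

```python
# Minimal sketch, assuming a generic DQN-style setup; ToyEnv and the
# tiny Q-network are hypothetical placeholders, not the authors' code.
import random
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class ToyEnv:
    """Stand-in for one Atari-like task: 4-dim observations, 2 actions."""
    obs_dim: int = 4
    n_actions: int = 2

    def reset(self):
        return torch.randn(self.obs_dim)

    def step(self, action):
        next_obs = torch.randn(self.obs_dim)
        reward = 1.0 if action == 0 else 0.0  # arbitrary toy reward
        done = random.random() < 0.1
        return next_obs, reward, done


def train_dqn_from_scratch(env: ToyEnv, episodes: int = 50) -> nn.Module:
    """One independent learner per task: new weights, new optimizer."""
    q_net = nn.Sequential(nn.Linear(env.obs_dim, 32), nn.ReLU(),
                          nn.Linear(32, env.n_actions))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma, epsilon = 0.99, 0.1

    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.randrange(env.n_actions)
            else:
                action = q_net(obs).argmax().item()
            next_obs, reward, done = env.step(action)

            # One-step TD target (replay buffer omitted for brevity).
            with torch.no_grad():
                target = reward + (0.0 if done else gamma * q_net(next_obs).max())
            loss = (q_net(obs)[action] - target) ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            obs = next_obs
    return q_net


# No knowledge is shared: every "game" is learned from scratch,
# so total training effort grows linearly with the number of tasks.
tasks = [ToyEnv() for _ in range(3)]
policies = [train_dqn_from_scratch(task) for task in tasks]
```

The multitasking the abstract calls for would instead reuse a shared library of general skills across the task list rather than rebuilding each policy independently.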

Authors: George B. Stone, Douglas A. Talbert, William Eberle

Published in: International Conference for Internet Technology and Secured Transactions (ICITST-2021)

  • Date of Conference: 7-9 December 2021
  • DOI: 10.20533/ICITST.2021.0016
  • ISBN: 978-1-913572-39-6
  • Conference Location: Virtual (London, UK)