A Latency Reduction Method for Cloud-fog Gaming based on Reinforcement Learning

Document Type: Original Article


1 Department of Computer Engineering, Shahr-e-Qods Branch, Islamic Azad University, Tehran, Iran

2 Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran


Unlike traditional gaming, where a game runs locally on the user's device, in cloud gaming an online video game runs on remote servers and is streamed directly to the user's device. This frees players from needing powerful hardware on their local computers. However, video games are latency-sensitive applications, so cloud servers located far from users are unsuitable. In fog computing, fog nodes sit in the vicinity of users and can reduce latency. In this paper, a latency reduction method based on reinforcement learning is proposed to determine which fog computing node can run a video game with the lowest latency. In the proposed method, a Principal Component Analysis (PCA) based approach extracts the most important features of each video game as the input of the learning process. The proposed method was implemented in Python. Experimental results show that, compared to some existing methods, the proposed method reduces frame latency and increases the frame rate of video games.
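The pipeline the abstract describes — PCA-compressed game features feeding a reinforcement-learning selector over candidate fog nodes — might be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature matrix, the number of fog nodes, the simulated latencies, and the bandit-style Q update are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-game feature matrix (rows: games, cols: raw metrics
# such as bitrate, resolution, CPU load). Values are purely illustrative.
X = rng.normal(size=(20, 6))

# PCA via SVD: keep the top-2 principal components as compact state features.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:2].T          # shape (20, 2)

# Discretize each game's two components into one of four coarse states.
states = (features[:, 0] > 0).astype(int) * 2 + (features[:, 1] > 0).astype(int)

n_states, n_nodes = 4, 3          # three candidate fog nodes (assumed)
Q = np.zeros((n_states, n_nodes))
base_latency = np.array([30.0, 55.0, 80.0])  # simulated mean latencies (ms)

alpha, eps = 0.1, 0.2             # learning rate, exploration rate
for episode in range(2000):
    g = rng.integers(len(states))             # pick a random game
    s = states[g]
    # epsilon-greedy node selection
    a = rng.integers(n_nodes) if rng.random() < eps else int(np.argmax(Q[s]))
    latency = base_latency[a] + rng.normal(0, 5)
    reward = -latency                          # lower latency -> higher reward
    Q[s, a] += alpha * (reward - Q[s, a])      # one-step value update

best = int(np.argmax(Q[states[0]]))
print("chosen fog node for game 0:", best)
```

Under these assumptions the learner settles on the lowest-latency node for each state; in the paper itself the reward would come from measured frame latency rather than a simulated distribution.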


