Enhancing Distributed System Performance with an Intelligent Multi-Agent Microservices Architecture
DOI:
https://doi.org/10.3991/ijim.v19i24.56299

Keywords:
Business Processes, Microservices, Multi-Agent Systems, Service Composition, Deep Reinforcement Learning, Interoperability, Quality-of-Service, REST

Abstract
In a rapidly evolving digital landscape, distributed systems are becoming increasingly complex, making it essential to adopt architectures that can adapt in real time. Business process optimization now depends on intelligent, dynamic, and modular systems. To address this need, we propose a hybrid architecture called multi-agent microservices (MAMS), which merges the principles of microservices with those of multi-agent systems (MAS). Each microservice is represented by an autonomous agent that can discover, compose, and collaborate with other services through REST interfaces while adapting in a decentralized manner to contextual changes. The novelty of our approach lies in integrating deep reinforcement learning (DRL) into the decision-making process: agents learn to select optimal service combinations based on quality-of-service (QoS) metrics such as availability, response time, and reliability. Unlike traditional methods, our system adapts continuously to load fluctuations and service failures. The main contributions of this work are intelligent and autonomous distributed orchestration, decision-making driven by adaptive learning, and significant improvements in the flexibility, resilience, and efficiency of services in complex environments. A case study on travel planning demonstrates the effectiveness of our solution, which surpasses monolithic architectures in personalization, user satisfaction, and operational efficiency.
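The QoS-driven selection described in the abstract can be sketched as a reinforcement-learning loop in which an agent learns which candidate service to invoke from reward signals derived from availability, response time, and reliability. The sketch below is illustrative only: it uses a simple tabular one-step update rather than the paper's DRL model, and all service names, QoS values, and reward weights are invented assumptions, not taken from the article.

```python
import random

# Hypothetical QoS observations for three candidate services in a
# travel-planning composition. Names and numbers are invented.
SERVICES = {
    "flights-a": {"availability": 0.99, "response_ms": 120, "reliability": 0.97},
    "flights-b": {"availability": 0.95, "response_ms": 60,  "reliability": 0.90},
    "flights-c": {"availability": 0.90, "response_ms": 300, "reliability": 0.99},
}

def reward(qos):
    """Scalar reward from QoS metrics; the weights are arbitrary assumptions."""
    latency_score = 1.0 - min(qos["response_ms"], 1000) / 1000.0
    return 0.4 * qos["availability"] + 0.3 * latency_score + 0.3 * qos["reliability"]

class ServiceSelectionAgent:
    """Epsilon-greedy value learner over a single decision: which service to call."""
    def __init__(self, actions, alpha=0.1, epsilon=0.2):
        self.q = {a: 0.0 for a in actions}   # estimated value of each service
        self.alpha, self.epsilon = alpha, epsilon

    def select(self):
        if random.random() < self.epsilon:            # explore occasionally
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)            # otherwise exploit

    def update(self, action, r):
        # One-step incremental update: Q <- Q + alpha * (r - Q)
        self.q[action] += self.alpha * (r - self.q[action])

random.seed(0)
agent = ServiceSelectionAgent(list(SERVICES))
for _ in range(500):
    choice = agent.select()
    agent.update(choice, reward(SERVICES[choice]))

best = max(agent.q, key=agent.q.get)
print("learned preference:", best)
```

Because the simulated rewards are stationary, the value estimates converge toward the true per-service rewards, and the agent settles on the service with the best weighted QoS trade-off. The paper's DRL approach replaces the table with a neural value function so the choice can also condition on runtime context such as current load or observed failures.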
References
[1] M. Hammer, “What is business process management?,” in Handbook on business process management 1: Introduction, methods, and information systems, Springer, 2014, pp. 3–16.
https://doi.org/10.1007/978-3-642-45100-3_1
[2] G. Alonso, F. Casati, H. Kuno, and V. Machiraju, “Web services,” in Web Services: Concepts, architectures and applications, Springer, 2004, pp. 123–149.
https://doi.org/10.1007/978-3-662-10876-5_5
[3] A. Dorri, S. S. Kanhere, and R. Jurdak, “Multi-agent systems: A survey,” IEEE Access, vol. 6, pp. 28573–28593, 2018.
https://doi.org/10.1109/ACCESS.2018.2831228
[4] M. Hepp, P. De Leenheer, A. De Moor, and Y. Sure, Ontology management: semantic web, semantic web services, and business applications. Springer Science & Business Media, 2007.
https://doi.org/10.1007/978-0-387-69900-4
[5] J. Jordanov, D. Simeonidis, and P. Petrov, “Containerized Microservices for Mobile Applications Deployed on Cloud Systems,” Int. J. Interact. Mob. Technol., vol. 18, no. 10, 2024.
https://doi.org/10.3991/ijim.v18i10.45929
[6] M. Fahad, N. Moalla, and Y. Ourzout, “Dynamic execution of a business process via web service selection and orchestration,” Procedia Comput. Sci., vol. 51, pp. 1655–1664, 2015.
https://doi.org/10.1016/j.procs.2015.05.299
[7] S. Zouad and M. Boufaida, “Using multi-agent microservices for a better dynamic composition of semantic web services,” in Proceedings of the 4th International Conference on Advances in Artificial Intelligence, 2020, pp. 47–52.
https://doi.org/10.1145/3441417.344142
[8] P. Ladosz, L. Weng, M. Kim, and H. Oh, “Exploration in deep reinforcement learning: A survey,” Inf. Fusion, vol. 85, pp. 1–22, 2022.
https://doi.org/10.1016/j.inffus.2022.03.003
[9] M. M. Al-Nawashi, O. M. Al-hazaimeh, N. M. Tahat, N. Gharaibeh, W. A. Abu-Ain, and T. Abu-Ain, “Deep Reinforcement Learning-Based Framework for Enhancing Cybersecurity,” Int. J. Interact. Mob. Technol., vol. 19, no. 3, 2025.
https://doi.org/10.3991/ijim.v19i03.50727
[10] J. Santos et al., “Efficient microservice deployment in Kubernetes multi-clusters through reinforcement learning,” in NOMS 2024-2024 IEEE Network Operations and Management Symposium, 2024, pp. 1–9.
https://doi.org/10.1109/NOMS59830.2024.10575912
[11] S. R. Peddinti, B. K. Pandey, A. Tanikonda, and S. R. Katragadda, “Optimizing Microservice Orchestration Using Reinforcement Learning for Enhanced System Efficiency,” Distrib. Learn. Broad Appl. Sci. Res. Annu., vol. 7, 2021.
https://ssrn.com/abstract=5119917
[12] Q. Si, J. Shi, W. Li, X. Lu, and P. Pu, “DeepMRA: An Efficient Microservices Resource Allocation Framework with Deep Reinforcement Learning in the Cloud,” in International Conference on Intelligent Computing, 2024, pp. 455–466.
https://doi.org/10.1007/978-981-97-5581-3_37
[13] L. Cui, T. Shi, R. Lu, and T. Zhang, “Autoscaling in Mobile Edge Computing Based on Multi-Agent Reinforcement Learning,” in Proceedings of the 2023 9th International Conference on Communication and Information Processing, 2023, pp. 520–527.
https://doi.org/10.1145/3638884.3638966
[14] Z. Zhang, Z. Guo, H. Zheng, Z. Li, and P. F. Yuan, “Automated architectural spatial composition via multi-agent deep reinforcement learning for building renovation,” Autom. Constr., vol. 167, p. 105702, 2024.
https://doi.org/10.1016/j.autcon.2024.105702
[15] B. Acar et al., “OPACA: Toward an Open, Language-and Platform-Independent API for Containerized Agents,” IEEE Access, vol. 12, pp. 10012–10022, 2024.
https://doi.org/10.1109/ACCESS.2024.3353613
[16] H. S. Nwana and D. T. Ndumu, “An introduction to agent technology,” in Software Agents and Soft Computing: Towards Enhancing Machine Intelligence: Concepts and Applications, Springer, 2005, pp. 1–26.
https://doi.org/10.1007/3-540-62560-7
[17] I. Nadareishvili, R. Mitra, M. McLarty, and M. Amundsen, Microservice Architecture: Aligning Principles, Practices, and Culture. O’Reilly Media, Inc., 2016.
[18] J. Jordanov, D. Simeonidis, and P. Petrov, “Containerized Microservices for Mobile Applications Deployed on Cloud Systems,” Int. J. Interact. Mob. Technol., vol. 18, no. 10, pp. 48–58, May 2024.
https://doi.org/10.3991/ijim.v18i10.45929
[19] S. Zouad and M. Boufaida, “An agent-oriented methodology for business process management,” in Business Modeling and Software Design: 10th International Symposium, BMSD 2020, Berlin, Germany, July 6–8, 2020, Proceedings, Springer International Publishing, 2020, pp. 287–296.
https://doi.org/10.1007/978-3-030-52306-0_19
[20] A. Downey, Think Python. O’Reilly Media, Inc., 2012.
[21] J. Palanca, J. A. Rincon, C. Carrascosa, V. Julián, and A. Terrasa, “A flexible agent architecture in SPADE,” in International Conference on Practical Applications of Agents and Multi-Agent Systems, 2022, pp. 320–331.
https://doi.org/10.1007/978-3-031-18192-4_26
[22] R. Smith, Docker orchestration. Packt Publishing Ltd, 2017.
[23] A. Stadnicki, F. F. Pietroń, and P. Burek, “Towards a modern ontology development environment,” Procedia Comput. Sci., vol. 176, pp. 753–762, 2020.
https://doi.org/10.1016/j.procs.2020.09.070
[24] J. Clifton and E. Laber, “Q-learning: Theory and applications,” Annu. Rev. Stat. Appl., vol. 7, pp. 279–301, 2020.
https://doi.org/10.1146/annurev-statistics-031219-041220
[25] V. Silaparasetty, Deep Learning Projects Using TensorFlow 2. Springer, 2020.
https://doi.org/10.1007/978-1-4842-5802-6
[26] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
https://doi.org/10.48550/arXiv.1312.5602
[27] M. Roderick, J. MacGlashan, and S. Tellex, “Implementing the deep Q-network,” arXiv preprint arXiv:1711.07478, 2017.
https://doi.org/10.48550/arXiv.1711.07478
[28] T. Frerix, T. Möllenhoff, M. Moeller, and D. Cremers, “Proximal backpropagation,” arXiv preprint arXiv:1706.04638, 2017.
License
Copyright (c) 2025 Sara Zouad, Mahmoud Boufaida

This work is licensed under a Creative Commons Attribution 4.0 International License.

