Multi-Objective Federated Optimization for Decentralized AI-Driven Computing Systems

Authors

  • Dr. Tran Minh Chau, Department of Artificial Intelligence, Nha Trang Research Institute, Nha Trang, Vietnam.

DOI:

https://doi.org/10.63282/3117-5481/AIJCST-V4I1P101

Keywords:

Federated Learning, Multi-Objective Optimization, Pareto Front, Differential Privacy, Secure Aggregation, Fairness in AI, Energy-Aware Learning, Communication Efficiency, Non-IID Data, Personalized FL, Robustness to Adversaries, Reinforcement Learning Scheduler

Abstract

Decentralized AI deployments must optimize beyond raw accuracy to meet real-world constraints such as latency, privacy, energy, fairness, and robustness. This paper presents a unified framework for Multi-Objective Federated Optimization (MOFO) that learns Pareto-efficient models under heterogeneous, non-IID data and volatile participation. We formulate cross-device and cross-silo federated learning as a constrained multi-objective program balancing task loss with system- and society-level objectives: end-to-end latency, communication cost, device energy, demographic parity, and adversarial robustness. The framework combines (i) adaptive scalarization with Lagrangian relaxation to enforce hard budgets, (ii) Pareto-front exploration via evolutionary search and hypervolume-guided updates, and (iii) personalized FL through meta-learning and proximal regularization to accommodate client drift. To reduce communication while preserving privacy, we integrate sparsified/quantized updates, secure aggregation, and calibrated differential privacy; a bandit/RL client scheduler selects participants by marginal Pareto gain and energy profile. Robustness is improved through gradient clipping, Byzantine-resilient aggregation, and federated knowledge distillation. We propose evaluation protocols and indicators (hypervolume, ε-indicator, fairness gaps, joules/sample, and p95 latency) and demonstrate that MOFO yields diverse Pareto-optimal models, enabling operators to trade accuracy for efficiency or fairness without retraining. Ablations show consistent gains over single-objective FL baselines under stragglers, intermittent connectivity, and non-IID shifts. The framework provides a practical path to deploying equitable, resource-aware, and trustworthy decentralized AI.
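For concreteness, one plausible formalization of the constrained multi-objective program described in the abstract is sketched below. The notation (objectives f_k, budget functions g_j, budgets b_j, client weights p_i, personalizations v_i) is illustrative and not taken from the paper's full text.

```latex
% One plausible MOFO formulation (notation illustrative): K objectives
% (task loss, p95 latency, communication, energy, fairness gap, robustness)
% aggregated over N clients, with J hard budgets enforced as constraints.
\begin{aligned}
\min_{w,\{v_i\}} \quad & F(w) = \bigl(f_1(w), \dots, f_K(w)\bigr),
  \qquad f_k(w) = \sum_{i=1}^{N} p_i\, f_{k,i}(w, v_i), \\
\text{s.t.} \quad & g_j(w) \le b_j, \qquad j = 1, \dots, J.
\end{aligned}
```

The adaptive scalarization with Lagrangian relaxation of item (i) would then optimize a weighted Lagrangian of this program, with the weights α adapted across rounds on the probability simplex and the multipliers λ ascending on budget violations:

```latex
% Scalarized Lagrangian; [x]_+ = max(x, 0) keeps multipliers non-negative.
L(w, \lambda; \alpha) = \sum_{k=1}^{K} \alpha_k f_k(w)
  + \sum_{j=1}^{J} \lambda_j \bigl(g_j(w) - b_j\bigr),
\qquad
\lambda_j \leftarrow \Bigl[\lambda_j + \eta_\lambda \bigl(g_j(w) - b_j\bigr)\Bigr]_+ .
```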
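A minimal runnable sketch of one such federated round follows, with toy quadratic client losses standing in for real training and a squared-update norm standing in for device energy; every constant and the `client_update` helper are hypothetical, chosen only to make the dual-ascent mechanics concrete.

```python
# Hedged sketch of a MOFO-style server loop: FedAvg aggregation of locally
# scalarized objectives plus Lagrangian dual ascent on an energy budget.
# Toy quadratic losses replace real models; all names/constants illustrative.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS = 10, 20
optima = rng.normal(size=(N_CLIENTS, DIM))           # non-IID client optima

def client_update(w, opt, lam_energy, steps=5, lr=0.1):
    """Local SGD on task loss + lambda-weighted energy proxy ||v - w||^2."""
    v = w.copy()
    for _ in range(steps):
        grad_task = v - opt                          # grad of 0.5 * ||v - opt||^2
        grad_energy = 2.0 * (v - w)                  # grad of energy proxy
        v -= lr * (grad_task + lam_energy * grad_energy)
    return v - w                                     # model delta to upload

w, lam = np.zeros(DIM), 0.0                          # weights, energy multiplier
ENERGY_BUDGET, ETA_LAM = 0.5, 0.05                   # hypothetical budget, dual step

for rnd in range(50):
    picked = rng.choice(N_CLIENTS, size=5, replace=False)
    deltas = [client_update(w, optima[i], lam) for i in picked]
    w += np.mean(deltas, axis=0)                     # FedAvg-style aggregation
    energy = np.mean([np.sum(d ** 2) for d in deltas])        # measured cost proxy
    lam = max(0.0, lam + ETA_LAM * (energy - ENERGY_BUDGET))  # dual ascent
    if rnd % 10 == 0:
        loss = np.mean([0.5 * np.sum((w - o) ** 2) for o in optima])
        print(f"round {rnd:2d}  task loss {loss:.3f}  energy {energy:.3f}  lambda {lam:.3f}")
```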
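The abstract's bandit/RL client scheduler admits a simple reading as an upper-confidence-bound rule that scores clients by estimated marginal Pareto gain per unit energy. The reward definition (per-round hypervolume improvement credited to all participants) and the known `energy_cost` vector below are assumptions made for illustration.

```python
# Hedged sketch of a UCB client scheduler: exploit clients whose past
# participation coincided with large hypervolume gains, normalized by
# their energy profile, while still exploring rarely seen clients.
import numpy as np

class UCBScheduler:
    def __init__(self, n_clients, energy_cost, c=1.0):
        self.counts = np.zeros(n_clients)             # times each client ran
        self.means = np.zeros(n_clients)              # mean observed HV gain
        self.energy = np.asarray(energy_cost, float)  # joules/round, assumed > 0
        self.c, self.t = c, 0

    def select(self, k):
        self.t += 1
        bonus = self.c * np.sqrt(np.log(self.t + 1) / (self.counts + 1e-9))
        score = (self.means + bonus) / self.energy    # Pareto gain per joule
        score[self.counts == 0] = np.inf              # try every client once
        return np.argsort(score)[-k:]                 # k highest-scoring clients

    def update(self, picked, hv_gain):
        for i in picked:                              # credit the round's HV gain
            self.counts[i] += 1
            self.means[i] += (hv_gain - self.means[i]) / self.counts[i]
```

After each aggregation, `update(picked, hv_gain)` would be called with the observed increase in dominated hypervolume, so the scheduler gradually favors clients whose data moves the Pareto front at low energy cost.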
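Of the proposed indicators, hypervolume is the one least likely to be familiar as a hand computation; for the two-objective minimization case it reduces to a sweep over the sorted front, as in the sketch below (the reference point is an assumed normalization, not the paper's).

```python
# Hedged sketch: 2-D hypervolume for minimization. Returns the area
# dominated by the front and bounded by the reference point; larger is
# better when comparing Pareto fronts produced by different runs.
def hypervolume_2d(points, ref):
    pts = sorted({tuple(p) for p in points})   # dedupe, sort by first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                         # non-dominated step of the front
            hv += (ref[0] - x) * (prev_y - y)  # add one rectangular slab
            prev_y = y
    return hv

front = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.1)]
print(hypervolume_2d(front, ref=(1.0, 1.0)))   # 0.32 for this toy front
```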

Published

2022-01-04

Issue

Vol. 4 No. 1 (2022)

Section

Articles

How to Cite

[1] T. M. Chau, “Multi-Objective Federated Optimization for Decentralized AI-Driven Computing Systems”, AIJCST, vol. 4, no. 1, pp. 1–12, Jan. 2022, doi: 10.63282/3117-5481/AIJCST-V4I1P101.
