Secure Distributed Computing Frameworks for AI Model Sharing in Decentralized Environments

Authors

  • Mohammed Sadik Abdullah, Department of Computer Engineering, University of Khartoum, Khartoum, Sudan.

DOI:

https://doi.org/10.63282/3117-5481/AIJCST-V4I2P102

Keywords:

Secure Model Sharing, Decentralized AI, Federated Learning, Peer-to-Peer Training, Differential Privacy, Secure Aggregation, Confidential Computing, Zero-Knowledge Proofs, Robust Aggregation, Data/Model Provenance, Decentralized Identity (DID), Verifiable Computation, Byzantine Resilience, Edge/Cloud Interoperability, Privacy-Budget Governance

Abstract

AI collaboration increasingly spans untrusted, heterogeneous nodes, from edge devices to multi-clouds, raising acute concerns around the privacy, integrity, and verifiability of shared models and updates. This paper proposes a secure distributed computing framework that unifies privacy-preserving learning, verifiable coordination, and incentive-aligned governance for decentralized AI model sharing. The architecture composes federated and peer-to-peer training with secure aggregation, differential privacy, and hardware-backed confidential computing to prevent data leakage while mitigating gradient inversion risks. Model provenance, access control, and policy enforcement are anchored via a lightweight, append-only ledger with decentralized identifiers, enabling auditability without central authorities. To counter poisoning, backdoors, and Sybil attacks, the framework integrates robust aggregation, reputation-weighted participation, and update attestation with zero-knowledge proofs for selective disclosure. A resource-aware scheduler adapts to edge variability using gossip-based dissemination, opportunistic bandwidth utilization, and erasure-coded checkpoints to preserve liveness under churn. Interoperability is ensured through portable model artifacts (e.g., ONNX), secure enclaves for cross-framework execution, and privacy budgets tracked as first-class governance assets. We outline threat models, compliance hooks for jurisdictional constraints, and a token-free contribution accounting mechanism that rewards data quality and validation work. Simulated and real-world deployments illustrate improved end-to-end trust, reduced coordination overhead, and resilient performance under adversarial conditions, positioning the framework as a practical substrate for open, secure, and accountable AI collaboration in decentralized environments.
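
The secure-aggregation step the abstract describes can be pictured with a pairwise-masking scheme in the style of Bonawitz et al.: each pair of clients derives a shared mask that cancels when all updates are summed, so the coordinator only ever sees the aggregate. The sketch below is a minimal, dropout-free illustration assuming pre-shared pair seeds; the names (masked_update, pair_seeds) are illustrative, not from the paper, and real protocols add key agreement and secret-shared mask recovery to tolerate client dropout.

```python
# Minimal sketch of pairwise-masking secure aggregation (Bonawitz-style).
# Assumption (not from the paper): every client pair shares a seed in
# advance and no client drops out mid-round.
import numpy as np

def masked_update(update, client_id, all_ids, pair_seeds):
    """Return the update plus pairwise masks that cancel in the sum."""
    masked = update.astype(np.float64).copy()
    for peer in all_ids:
        if peer == client_id:
            continue
        rng = np.random.default_rng(pair_seeds[frozenset((client_id, peer))])
        mask = rng.standard_normal(update.shape)
        # The lower-id client adds the pair's mask, the higher-id client
        # subtracts it, so each pair contributes zero to the aggregate.
        masked += mask if client_id < peer else -mask
    return masked

# Demo: three clients with 4-dimensional toy updates.
clients = [0, 1, 2]
pair_seeds = {frozenset(p): i for i, p in enumerate([(0, 1), (0, 2), (1, 2)])}
updates = {c: np.full(4, float(c + 1)) for c in clients}
aggregate = sum(masked_update(updates[c], c, clients, pair_seeds)
                for c in clients)
assert np.allclose(aggregate, sum(updates.values()))  # masks cancel exactly
```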
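Differential privacy on shared updates, which the abstract pairs with secure aggregation to blunt gradient inversion, is typically enforced by clipping each update's L2 norm and adding calibrated Gaussian noise before release. The clip norm and noise multiplier below are illustrative placeholders, not parameters reported in the paper.

```python
# Sketch of update-level differential privacy before sharing:
# clip to an L2 bound, then add Gaussian noise scaled to that bound.
import numpy as np

def privatize(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the update to clip_norm in L2, then add calibrated noise."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / max(np.linalg.norm(update), 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return update * scale + noise

update = np.array([3.0, 4.0])              # L2 norm 5.0, clipped to 1.0
print(privatize(update, rng=np.random.default_rng(0)))
```

Clipping bounds any single client's sensitivity, which is what lets the noise scale be calibrated to a fixed privacy cost per round.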
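The abstract's robust-aggregation defense against poisoning is left unspecified; a coordinate-wise trimmed mean is one standard Byzantine-tolerant rule, sketched here under that assumption. It bounds the influence of any small minority of malicious clients per coordinate.

```python
# Sketch of a Byzantine-robust aggregation rule: coordinate-wise
# trimmed mean. One standard choice; the paper does not pin down its rule.
import numpy as np

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise mean after dropping the top/bottom trim_frac values."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_frac)
    kept = stacked[k:len(updates) - k] if k > 0 else stacked
    return kept.mean(axis=0)

honest = [np.array([1.0, 1.0])] * 8
poisoned = [np.array([100.0, -100.0])] * 2      # adversarial outliers
print(trimmed_mean(honest + poisoned))          # ~[1. 1.], outliers trimmed
```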
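Treating privacy budgets as first-class governance assets on an append-only ledger suggests a spend log that refuses entries once a dataset's budget is exhausted. The hash-chained ledger below is a hypothetical sketch: the entry schema is assumed, and it uses basic sequential composition of (epsilon, delta) rather than the tighter Rényi accounting a real deployment would likely adopt.

```python
# Hypothetical sketch of privacy-budget governance: an append-only,
# hash-chained log of (epsilon, delta) spends per dataset.
import hashlib
import json
import time

class PrivacyLedger:
    """Append-only, hash-chained log of privacy-budget spends."""

    def __init__(self, epsilon_budget, delta_budget):
        self.budget = (epsilon_budget, delta_budget)
        self.spent_eps, self.spent_delta = 0.0, 0.0
        self.entries, self.head = [], "genesis"

    def spend(self, eps, delta, purpose):
        # Refuse any entry that would exceed the dataset's total budget
        # (basic sequential composition, a deliberate simplification).
        if self.spent_eps + eps > self.budget[0] or \
           self.spent_delta + delta > self.budget[1]:
            raise ValueError("privacy budget exhausted")
        entry = {"eps": eps, "delta": delta, "purpose": purpose,
                 "ts": time.time(), "prev": self.head}
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.spent_eps += eps
        self.spent_delta += delta
        self.entries.append(entry)
        return self.head            # commitment usable in an audit trail

ledger = PrivacyLedger(epsilon_budget=8.0, delta_budget=1e-5)
ledger.spend(1.0, 1e-6, "round-1 secure aggregation")
```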

Published

2022-03-13

Issue

Vol. 4 No. 2 (2022)

Section

Articles

How to Cite

[1] M. S. Abdullah, “Secure Distributed Computing Frameworks for AI Model Sharing in Decentralized Environments”, AIJCST, vol. 4, no. 2, pp. 13–22, Mar. 2022, doi: 10.63282/3117-5481/AIJCST-V4I2P102.
