Authors:
Abhilash Reddy Pabbath Reddy
Addresses:
Department of Information Technology, Axle Info, Cumming, Georgia, United States of America.
Abstract: The digitisation of medical records and the adoption of cloud infrastructure have significantly expanded the processing and storage of patient data. AI technologies have improved clinical decision support, diagnostics, and operational efficiency, but regulations such as GDPR and HIPAA require privacy-preserving techniques. This paper examines four privacy-preserving AI methods: Homomorphic Encryption, Secure Multi-Party Computation, Differential Privacy, and Federated Learning. A hybrid framework combines their strengths to enable secure, cloud-based healthcare analytics. The study uses a semi-synthetic health dataset derived from MIMIC-III (Medical Information Mart for Intensive Care) covering patient characteristics, diagnoses, treatment histories, and outcomes. The dataset comprises 50,000 anonymised patient records and preserves data heterogeneity across institutions, conditions, and age groups. Each privacy technique was implemented using Microsoft SEAL, TensorFlow Privacy, PySyft, and Google's TensorFlow Federated. A Kubernetes-based cloud testbed simulated hospital nodes communicating over encrypted channels. All models were evaluated on prediction accuracy, latency, noise impact, training time, CPU and memory utilisation, and regulatory compliance. Tabular and graphical analysis showed that the Hybrid Model best balances data security and analytical performance. This study demonstrates that privacy-preserving AI for secure, compliant, and efficient cloud-based healthcare systems is achievable and within reach, enabling real-time analytics alongside ethical and legal management of patient data.
Keywords: Privacy-Preserving AI; Healthcare Data Security; Federated Learning; Differential Privacy; Cloud-based Analytics; Healthcare Sector; Electronic Health Records; Artificial Intelligence.
Received on: 06/08/2024, Revised on: 28/10/2024, Accepted on: 10/12/2024, Published on: 03/03/2025
DOI: 10.69888/FTSHSL.2025.000363
FMDB Transactions on Sustainable Health Science Letters, 2025 Vol. 3 No. 1, Pages: 42-52