Abstract
Large language models (LLMs) are integral to cloud-based AI applications, offering robust capabilities for multimodal data processing and retrieval. However, deploying LLMs in multi-user environments poses notable challenges, particularly for protecting sensitive information. In this study, we present PMIRS, a system for secure, privacy-preserving retrieval of multimodal image and text data. PMIRS allows users to transmit images or text to a simulated cloud-based LLM environment without exposing their content, mitigating privacy risks through a combination of obfuscation, encrypted inference, and federated learning. Specifically, federated learning is used to fine-tune a lightweight model adapted from OpenAI's official CLIP codebase. During inference, query embeddings are first obfuscated through block-wise projection and then encrypted with AES in CBC mode under 128-bit keys, protecting query content from unauthorized access. Additionally, Diffie-Hellman key exchange provides secure key management in multi-user settings. Experimental results on three semantic domains from Phrase-ImageNet, a customized ImageNet variant constructed by rewriting ImageNet labels into natural-language phrases, show that PMIRS achieves F1-scores up to 0.92, precision exceeding 0.90 on small-to-medium repositories, and retrieval latency consistently under 180 milliseconds. Compared to the CLIP baseline, PMIRS improves the average F1-score by 7.67% while maintaining comparable precision. These results underscore PMIRS's practical value for secure, efficient, and privacy-preserving multimodal retrieval. Beyond our controlled experiments, PMIRS has the potential to support real-world applications such as medical image retrieval, privacy-conscious customer service bots, and enterprise data management under regulatory frameworks such as GDPR and HIPAA.
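As a rough illustration of the protection pipeline the abstract describes, the sketch below pairs a textbook-scale Diffie-Hellman exchange (for deriving a shared 128-bit key) with a key-seeded block-wise permutation standing in for the projection step. All parameters, names, and the permutation itself are illustrative assumptions, not the paper's implementation: a real deployment would use a standardized DH group, the actual projection matrices, and AES-CBC encryption of the obfuscated embedding under the derived key.

```python
import hashlib
import random
import secrets

# Toy Diffie-Hellman parameters (textbook-sized, for illustration only;
# real systems use a standardized group of 2048 bits or more).
P, G = 23, 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1   # private exponent in [1, P-2]
    pub = pow(G, priv, P)                 # public value G^priv mod P
    return priv, pub

def dh_shared_key(priv, other_pub):
    shared = pow(other_pub, priv, P)      # (G^b)^a mod P == (G^a)^b mod P
    # Derive a 128-bit key from the shared secret (hypothetical KDF;
    # the abstract only specifies AES with 128-bit keys).
    return hashlib.sha256(str(shared).encode()).digest()[:16]

def blockwise_obfuscate(embedding, key, block=4):
    # Hypothetical stand-in for block-wise projection: permute each
    # block of the embedding with a key-seeded permutation. The real
    # projection matrices are not specified in the abstract.
    rng = random.Random(key)
    out = []
    for i in range(0, len(embedding), block):
        chunk = embedding[i:i + block]
        idx = list(range(len(chunk)))
        rng.shuffle(idx)
        out.extend(chunk[j] for j in idx)
    return out

# Client and server each generate a keypair and derive the same key.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
k_client = dh_shared_key(a_priv, b_pub)
k_server = dh_shared_key(b_priv, a_pub)
assert k_client == k_server  # both sides hold the same 128-bit key

# The client obfuscates its query embedding before encryption/upload.
emb = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
obf = blockwise_obfuscate(emb, k_client)
```

In the full pipeline, `obf` would then be AES-CBC encrypted under `k_client` before leaving the device; the server, holding the same derived key, can invert both layers.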