A successful AI transformation starts with a strong security foundation. With a rapid increase in AI development and adoption, organizations need visibility into their emerging AI apps and tools. Microsoft Security provides threat protection, posture management, data security, compliance, and governance to secure AI applications that you build and use. These capabilities can also be used to help enterprises secure and govern AI apps built with the DeepSeek R1 model and gain visibility and control over the use of the seperate DeepSeek consumer app. 

Secure and govern AI apps built with the DeepSeek R1 model on Azure AI Foundry and GitHub 

Develop with trustworthy AI 

Last week, we announced DeepSeek R1’s availability on Azure AI Foundry and GitHub, joining a diverse portfolio of more than 1,800 models.   

Customers today are building production-ready AI applications with Azure AI Foundry, while accounting for their varying security, safety, and privacy requirements. Similar to other models provided in Azure AI Foundry, DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. Microsoft’s hosting safeguards for AI models are designed to keep customer data within Azure’s secure boundaries. 

With Azure AI Content Safety, built-in content filtering is available by default to help detect and block malicious, harmful, or ungrounded content, with opt-out options for flexibility. Additionally, the safety evaluation system allows customers to efficiently test their applications before deployment. These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently build and deploy AI solutions. See Azure AI Foundry and GitHub for more details.

Start with Security Posture Management

AI workloads introduce new cyberattack surfaces and vulnerabilities, especially when developers leverage open-source resources. Therefore, it’s critical to start with security posture management, to discover all AI inventories, such as models, orchestrators, grounding data sources, and the direct and indirect risks around these components. When developers build AI workloads with DeepSeek R1 or other AI models, Microsoft Defender for Cloud’s AI security posture management capabilities can help security teams gain visibility into AI workloads, discover AI cyberattack surfaces and vulnerabilities, detect cyberattack paths that can be exploited by bad actors, and get recommendations to proactively strengthen their security posture against cyberthreats.

AI security posture management in Defender for Cloud identifies an attack path to a DeepSeek R1 workload, where an Azure virtual machine is exposed to the Internet.
Figure 1. AI security posture management in Defender for Cloud detects an attack path to a DeepSeek R1 workload.

By mapping out AI workloads and synthesizing security insights such as identity risks, sensitive data, and internet exposure, Defender for Cloud continuously surfaces contextualized security issues and suggests risk-based security recommendations tailored to prioritize critical gaps across your AI workloads. Relevant security recommendations also appear within the Azure AI resource itself in the Azure portal. This provides developers or workload owners with direct access to recommendations and helps them remediate cyberthreats faster. 

Safeguard DeepSeek R1 AI workloads with cyberthreat protection

While having a strong security posture reduces the risk of cyberattacks, the complex and dynamic nature of AI requires active monitoring in runtime as well. No AI model is exempt from malicious activity and can be vulnerable to prompt injection cyberattacks and other cyberthreats. Monitoring the latest models is critical to ensuring your AI applications are protected.

Integrated with Azure AI Foundry, Defender for Cloud continuously monitors your DeepSeek AI applications for unusual and harmful activity, correlates findings, and enriches security alerts with supporting evidence. This provides your security operations center (SOC) analysts with alerts on active cyberthreats such as jailbreak cyberattacks, credential theft, and sensitive data leaks. For example, when a prompt injection cyberattack occurs, Azure AI Content Safety prompt shields can block it in real-time. The alert is then sent to Microsoft Defender for Cloud, where the incident is enriched with Microsoft Threat Intelligence, helping SOC analysts understand user behaviors with visibility into supporting evidence, such as IP address, model deployment details, and suspicious user prompts that triggered the alert. 

When a prompt injection attack occurs, Azure AI Content Safety prompt shields can detect and block it. The signal is then enriched by Microsoft Threat Intelligence, enabling security teams to conduct holistic investigations into the incident.
Figure 2. Microsoft Defender for Cloud integrates with Azure AI to detect and respond to prompt injection cyberattacks.

Additionally, these alerts integrate with Microsoft Defender XDR, allowing security teams to centralize AI workload alerts into correlated incidents to understand the full scope of a cyberattack, including malicious activities related to their generative AI applications. 

A jailbreak prompt injection attack on a Azure AI model deployment was flagged as an alert in Defender for Cloud.
Figure 3. A security alert for a prompt injection attack is flagged in Defender for Cloud

Secure and govern the use of the DeepSeek app

In addition to the DeepSeek R1 model, DeepSeek also provides a consumer app hosted on its local servers, where data collection and cybersecurity practices may not align with your organizational requirements, as is often the case with consumer-focused apps. This underscores the risks organizations face if employees and partners introduce unsanctioned AI apps leading to potential data leaks and policy violations. Microsoft Security provides capabilities to discover the use of third-party AI applications in your organization and provides controls for protecting and governing their use.

Secure and gain visibility into DeepSeek app usage 

Microsoft Defender for Cloud Apps provides ready-to-use risk assessments for more than 850 Generative AI apps, and the list of apps is updated continuously as new ones become popular. This means that you can discover the use of these Generative AI apps in your organization, including the DeepSeek app, assess their security, compliance, and legal risks, and set up controls accordingly. For example, for high-risk AI apps, security teams can tag them as unsanctioned apps and block user’s access to the apps outright.

Security teams can discover the usage of GenAI applications, assess risk factors, and tag high-risk apps as unsanctioned to block end users from accessing them.
Figure 4. Discover usage and control access to Generative AI applications based on their risk factors in Defender for Cloud Apps.

Comprehensive data security 

In addition, Microsoft Purview Data Security Posture Management (DSPM) for AI provides visibility into data security and compliance risks, such as sensitive data in user prompts and non-compliant usage, and recommends controls to mitigate the risks. For example, the reports in DSPM for AI can offer insights on the type of sensitive data being pasted to Generative AI consumer apps, including the DeepSeek consumer app, so data security teams can create and fine-tune their data security policies to protect that data and prevent data leaks. 

In the report from Microsoft Purview Data Security Posture Management for AI, security teams can gain insights into sensitive data in user prompts and unethical use in AI interactions. These insights can be broken down by apps and departments.
Figure 5. Microsoft Purview Data Security Posture Management (DSPM) for AI enables security teams to gain visibility into data risks and get recommended actions to address them.

Prevent sensitive data leaks and exfiltration  

The leakage of organizational data is among the top concerns for security leaders regarding AI usage, highlighting the importance for organizations to implement controls that prevent users from sharing sensitive information with external third-party AI applications.

Microsoft Purview Data Loss Prevention (DLP) enables you to prevent users from pasting sensitive data or uploading files containing sensitive content into Generative AI apps from supported browsers. Your DLP policy can also adapt to insider risk levels, applying stronger restrictions to users that are categorized as ‘elevated risk’ and less stringent restrictions for those categorized as ‘low-risk’. For example, elevated-risk users are restricted from pasting sensitive data into AI applications, while low-risk users can continue their productivity uninterrupted. By leveraging these capabilities, you can safeguard your sensitive data from potential risks from using external third-party AI applications. Security admins can then investigate these data security risks and perform insider risk investigations within Purview. These same data security risks are surfaced in Defender XDR for holistic investigations.

 When a user attempts to copy and paste sensitive data into the DeepSeek consumer AI application, they are blocked by the endpoint DLP policy.
Figure 6. Data Loss Prevention policy can block sensitive data from being pasted to third-party AI applications in supported browsers.

This is a quick overview of some of the capabilities to help you secure and govern AI apps that you build on Azure AI Foundry and GitHub, as well as AI apps that users in your organization use. We hope you find this useful!

To learn more and to get started with securing your AI apps, take a look at the additional resources below:  

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Securing DeepSeek and other AI systems with Microsoft Security appeared first on Microsoft Azure Blog.