Securing the Future of Innovation: I’m Now a Proofpoint Certified AI Data Security Specialist

I am thrilled to announce that I have officially earned my certification as a Proofpoint Certified AI Data Security Specialist 2025! 🎓

As Generative AI (GenAI) continues to reshape the corporate landscape, the intersection of productivity and security has never been more critical. This certification program provided deep, actionable insights into how organizations can harness the power of AI while maintaining a robust security posture against emerging threats.

Here is a summary of the core knowledge pillars covered during this intensive three-session specialization:

🛡️ Session 1: GenAI – Security Risks and Mitigation

The first session focused on the “why” behind AI security. According to the OWASP Top 10 for LLM Applications 2025, sensitive information disclosure (LLM02) now ranks as the second most critical security risk for LLM-based applications.

We explored:

  • The Spectrum of Tools: Understanding the difference between public, commercial, and private GenAI models.
  • The Risk Profile: How LLMs can inadvertently disclose PII, PHI, and even proprietary source code or algorithms.
  • Real-World Abuse: Analyzing case studies of how GenAI is abused by external threat actors and misused by insiders.
  • Mitigation Frameworks: Applying established security frameworks to create a defensive layer around AI interactions (a small prompt-redaction sketch follows this list).
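
To make the mitigation idea concrete, here is a minimal sketch of prompt-side redaction: likely PII is replaced with placeholders before a prompt ever leaves the network for a public GenAI tool. The regex patterns and function names are my own illustrative assumptions, not Proofpoint's implementation.

```python
# Minimal sketch of prompt-side PII redaction (illustrative patterns only).
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders before the prompt reaches a GenAI tool."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

if __name__ == "__main__":
    safe, hits = redact_prompt("Summarize the case for jane.doe@example.com, SSN 123-45-6789.")
    print(safe)  # Summarize the case for [EMAIL REDACTED], SSN [SSN REDACTED].
    print(hits)  # ['email', 'ssn']
```

A production DLP engine would use far richer detectors (classifiers, exact data matching, proximity rules), but the control point is the same: inspect the prompt before it leaves the organization.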

⚖️ Session 2: Reducing Data Risk in Everyday Use

Session two shifted the focus to Governance. To promote responsible AI use, organizations need visibility and control over how employees interact with these tools daily.

Key takeaways included:

  • Shadow AI Visibility: Gaining insight into both sanctioned and unsanctioned (Shadow) GenAI tools used within the network.
  • Governance Standards: Establishing clear, enforceable standards for AI usage.
  • Holistic Risk Management: Implementing technical strategies to prevent data exfiltration via AI prompts (see the policy-check sketch after this list).
  • The Human Element: The vital importance of educating and training employees on safe, responsible GenAI practices.
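
As a rough illustration of the governance angle, the sketch below checks whether an outbound GenAI request targets a sanctioned tool and whether the prompt appears to contain sensitive content, then returns an allow/warn/block decision. The allow-list, domain names, and action labels are hypothetical, not a specific product's policy engine.

```python
# Minimal sketch of a sanctioned-vs-shadow GenAI policy check (illustrative only).
from dataclasses import dataclass

SANCTIONED_GENAI = {"copilot.example-corp.com", "internal-llm.example-corp.com"}

@dataclass
class PolicyDecision:
    action: str   # "allow", "warn", or "block"
    reason: str

def evaluate_genai_request(destination: str, prompt_contains_sensitive: bool) -> PolicyDecision:
    """Decide how to handle an outbound GenAI request based on tool status and prompt content."""
    if destination not in SANCTIONED_GENAI:
        return PolicyDecision("block", f"{destination} is an unsanctioned (shadow) GenAI tool")
    if prompt_contains_sensitive:
        return PolicyDecision("warn", "sensitive data detected in prompt to a sanctioned tool")
    return PolicyDecision("allow", "sanctioned tool, no sensitive data detected")

print(evaluate_genai_request("chat.random-ai.example", prompt_contains_sensitive=False))
```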

🏗️ Session 3: Securing Sensitive Data in GenAI Development

The final session addressed the “Builders.” With 70% of organizations now developing their own AI applications and features (Applause 2025 AI Survey), the security of the development pipeline is paramount.

We delved into:

  • Custom AI Security: Understanding the unique compliance considerations when building in-house GenAI applications.
  • Safe Model Training: Best practices for training Large Language Models (LLMs) without exposing the sensitive data contained in training sets (a dataset-scrubbing sketch follows this list).
  • Modern DSPM: How Data Security Posture Management (DSPM) provides the visibility and automated protection required to safeguard data throughout the custom AI development lifecycle.
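
To illustrate the training-data concern, here is a minimal sketch that screens a fine-tuning dataset and drops records matching a simple sensitive-data pattern before they reach the model. Field names and patterns are illustrative assumptions; a real pipeline would rely on classification, broader detectors, and DSPM-driven data discovery rather than a single regex.

```python
# Minimal sketch of scrubbing a fine-tuning dataset before training (illustrative only).
import re

SENSITIVE_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\bconfidential\b", re.IGNORECASE)

def scrub_training_records(records: list[dict]) -> list[dict]:
    """Drop records whose text matches the sensitive-data screen; keep the rest."""
    return [r for r in records if not SENSITIVE_RE.search(r.get("text", ""))]

dataset = [
    {"text": "How do I reset my password?"},
    {"text": "Confidential: Q3 acquisition target list"},
]
print(scrub_training_records(dataset))  # only the first record survives
```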

Why This Matters

AI is no longer a “future” technology—it is an everyday reality. By earning this certification, I am better equipped to help organizations navigate the complex security challenges of the AI era, ensuring that innovation doesn’t come at the cost of data integrity or compliance.

A huge thank you to the team at Proofpoint for the excellent training and insights!
