Are AI tools considered potential data egress channels in your organization, and do you plan to implement controls to manage these risks?
I already define security policies targeting AI tools for my customers as follows:
Installable AI tools (e.g., ChatGPT applications)
Currently, blocking is only possible using CAP’s process blacklist.
Web-based AI (e.g., ChatGPT, Gemini)
Only attachments can be blocked in web browsers. (When using DPI, the noise is very severe, making inspection difficult…)
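To make the first policy concrete: a process blacklist typically matches the executable's file name against a denylist, ignoring the install path and case. EPP/CAP's actual matching logic is not public, so the sketch below is only illustrative, and the binary names in it are assumptions, not confirmed product identifiers.

```python
# Illustrative sketch of process-blacklist matching; the binary names
# below are hypothetical examples, not confirmed AI-tool executables.
import ntpath

AI_PROCESS_BLACKLIST = {
    "chatgpt.exe",  # hypothetical desktop-app binary name
    "claude.exe",   # hypothetical desktop-app binary name
}

def is_blacklisted(process_path: str, blacklist=AI_PROCESS_BLACKLIST) -> bool:
    """Return True if the executable's file name is on the denylist.

    Matching is case-insensitive and ignores the directory portion,
    mirroring how process-name blacklists typically behave.
    ntpath.basename handles both Windows and POSIX separators.
    """
    name = ntpath.basename(process_path).lower()
    return name in blacklist
```

A real agent would apply this check to the process table (or to process-creation events) and terminate or block matches; the function above only shows the matching rule itself.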
However, many customers are requesting keystroke-tracking functionality.
(I know that implementing this would probably be very difficult.)
That said, this is already implemented in competing products.
Hi @JeremyGold,
Thank you for joining the discussion and sharing your experience.
I believe that in our new IT reality, AI serves as both a helping hand and a challenge, particularly in the realm of data protection, where it’s seen as a potential egress channel for accidental data leakage.
To address this, we’ve initiated an internal project focused on tackling the challenges posed by major commercial AI tools. Our goal is to manage attachments and control accidental data breaches that may occur in prompts and attached files as well.
I’m excited to share that we’ve already made some progress on this front. Please follow the linked EPP ideas posts for further updates:
Hopefully, this will help you and all our customers develop new tailored AI use cases that can be addressed solely by Netwrix Endpoint Protector.
Krzysztof,
Thank you for bringing this up! I’ve actually had a few customers asking about potential AI-related data leaks as well, especially now that AI tools are being adopted so quickly.
I’m also curious about future features like smart keystroke analysis. Instead of logging everything that’s typed, it could watch only for risky patterns, such as passwords, client names, or snippets of code.
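The idea above is usually implemented as pattern matching over a buffer of captured text rather than raw key logging. This is a minimal sketch of that approach; the patterns and the "client name" watchlist entry are invented for illustration and are not EPP features.

```python
# Hypothetical sketch of "risky pattern" detection on typed text.
# The patterns below are illustrative examples only.
import re

RISKY_PATTERNS = {
    "possible password": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    "possible api key": re.compile(r"\b(sk|api)[-_][A-Za-z0-9]{16,}\b"),
    "client name": re.compile(r"\bAcme Corp\b"),  # would come from a watchlist
}

def scan_text(text: str) -> list[str]:
    """Return the labels of risky patterns found in a buffer of typed text."""
    return [label for label, rx in RISKY_PATTERNS.items() if rx.search(text)]
```

In practice the buffer would be scanned per input field or per prompt submission, and matches would trigger an alert or block rather than being stored, which keeps the privacy footprint smaller than full keystroke logging.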
I’m looking forward to seeing how EPP continues to improve and do more than just block or allow when it comes to AI tools!
-Jason
Hi Jason,
Thank you for your time, and for sharing your insights and interest in the evolution of EPP in relation to AI. We greatly value the perspectives of our partners and customers.
As mentioned earlier, we have a short-term development strategy centered on AI-related data leak channels. Our first step is to enhance visibility for common AI information egress channels, such as ChatGPT, MS Copilot, Claude, Google Gemini, X Grok, and DeepSeek. This approach is designed to gain control over information exchanges through AI interface prompts.
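One common way endpoint products gain visibility into web-based AI channels is by matching request hostnames against a list of known AI domains. EPP's actual detection mechanism is not public, so the sketch below is only a generic illustration, and the hostnames in it are assumptions.

```python
# Illustrative only: flags traffic to known AI web endpoints by hostname.
# The domain list is an assumption, not a confirmed product configuration.
from urllib.parse import urlparse

AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "copilot.microsoft.com",
    "claude.ai",
    "gemini.google.com",
    "chat.deepseek.com",
}

def is_ai_endpoint(url: str) -> bool:
    """Return True if the URL's host is a known AI domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in AI_DOMAINS)
```

Hostname matching gives coarse channel visibility without full payload inspection, which sidesteps some of the DPI noise mentioned earlier in the thread; prompt-level control still requires deeper integration.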
We recognize that many organizations are integrating AI tools as part of their business processes. Our current primary goal is to assist these organizations in protecting against the accidental misuse of sensitive data through AI prompts, particularly in non-corporate, common scenarios.
In terms of our long-term plans, we are actively exploring further stages of integration and information correlation using AI to facilitate better automated remediation actions. However, it is still too early to make specific commitments in this area. Stay tuned for more updates.
Best regards, Krzysiek