Are AI tools considered potential data egress channels in your organization, and do you plan to implement controls to manage these risks?

I already define security policies targeting AI tools for my customers as follows:

Installable AI tools (e.g., the ChatGPT desktop application)
Currently, only blocking is possible, using CAP's process blacklist.
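
For illustration, the blocking concept looks roughly like the sketch below (a minimal Python example using psutil; the process names and the terminate-on-scan approach are my own assumptions, not how CAP actually enforces its blacklist):

```python
# Minimal sketch of a process-blocklist check (illustration only).
import psutil

# Hypothetical process names for installable AI tools.
AI_PROCESS_BLOCKLIST = {"chatgpt.exe", "claude.exe"}

def terminate_blocklisted_processes():
    """Scan running processes and terminate any whose name is blocklisted."""
    killed = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name in AI_PROCESS_BLOCKLIST:
            try:
                proc.terminate()  # a real agent would block at process launch
                killed.append(name)
            except psutil.Error:
                pass  # process already exited or access was denied
    return killed

if __name__ == "__main__":
    print("Terminated:", terminate_blocklisted_processes())
```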

Web-based AI (e.g., ChatGPT, Gemini)
Only attachments can be blocked in web browsers. (When using DPI, the inspection noise is very severe, making it difficult…)
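
To illustrate why attachments are the tractable part: blocking them amounts to intercepting multipart uploads to known AI domains. A minimal mitmproxy sketch, assuming an example domain list (this is not how Endpoint Protector implements it):

```python
# mitmproxy addon sketch: block file uploads to web-based AI tools.
# Run with: mitmproxy -s block_ai_uploads.py  (illustration only)
from mitmproxy import http

# Example domains; a real policy would use a maintained list.
AI_DOMAINS = {"chatgpt.com", "gemini.google.com"}

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    content_type = flow.request.headers.get("content-type", "")
    # File attachments are typically sent as multipart/form-data, which is
    # easy to match; prompt text buried in JSON bodies is far noisier.
    if any(host == d or host.endswith("." + d) for d in AI_DOMAINS) \
            and "multipart/form-data" in content_type:
        flow.response = http.Response.make(
            403, b"Attachment upload to AI tools is blocked by policy."
        )
```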

However, many customers are requesting a feature for tracking keyboard input (keystroke logging).
(I know that implementing this would probably be very difficult.)

However, this is already implemented in competing products.

Hi @JeremyGold,

Thank you for joining the discussion and sharing your experience.
I believe that in our new IT reality, AI serves as both a helping hand and a challenge, particularly in the realm of data protection, where it’s seen as a potential egress channel for accidental data leakage.

To address this, we've initiated an internal project focused on tackling the challenges posed by major commercial AI tools. Our goal is to manage attachments and to control accidental data leaks that may occur in both prompts and attached files.
I’m excited to share that we’ve already made some progress on this front. Please follow the linked EPP ideas posts for further updates:

Hopefully, this will help you and all our customers address new, tailored AI use cases solely with Netwrix Endpoint Protector.
