In this problem domain, I believe humanity is still in a very early stage. What we can do is treat the agent and its operating environment as a "black box" and audit all incoming and outgoing network request traffic.
This approach is similar to DLP (data leak prevention) strategies in enterprise security. Although we cannot guarantee that every single network request is secure, we can probabilistically improve safety by adjusting network defense rules and conducting post-event audits on traffic flow.
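A rough sketch of what I mean, in Python. The hostnames, patterns, and helper names here are placeholders I made up to show the shape of the idea, not a real rule set: block a couple of obvious exfiltration signals at egress, and log every request (allowed or not) so you can audit the traffic afterwards.

```python
# Minimal sketch of DLP-style egress auditing around an agent sandbox.
# Assumes the agent's outbound traffic is already routed through this hook
# (e.g. via a local proxy); the rules and names below are hypothetical.

import json
import re
import time
from dataclasses import dataclass

# Example defense rules: a small denylist of hosts plus secret-looking payloads.
BLOCKED_HOSTS = {"pastebin.com", "transfer.sh"}
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key id
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),   # private key material
]

AUDIT_LOG = "egress_audit.jsonl"

@dataclass
class OutboundRequest:
    host: str
    path: str
    body: str

def check_request(req: OutboundRequest) -> bool:
    """Return True to allow the request, False to block; always log a record."""
    verdict = "allow"
    if req.host in BLOCKED_HOSTS:
        verdict = "block:host"
    elif any(p.search(req.body) for p in SECRET_PATTERNS):
        verdict = "block:secret"

    # Post-event audit trail: every request is recorded, including allowed ones,
    # because the rules only improve the odds, they don't guarantee safety.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "host": req.host,
                            "path": req.path, "verdict": verdict}) + "\n")
    return verdict == "allow"
```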
This is exactly why I built Safebots to prevent problems with agents. This article shows how it can address every security issue with agents that came up in the study:
I don’t see how Safebots can protect against prompt injection when you have it pull a webpage, package, or what have you. E.g. you search for snickerdoodles, it finds snickerdoodles.xyz and loads the page. The page's meta contains the prompt injection. Since it's the first time the document has loaded, it gets hashed, and only the bad version is allowed moving forward. No?
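Roughly the failure mode I'm picturing, as a sketch; the trust-on-first-use hash pinning below is my assumption about how content pinning might work, not necessarily what Safebots actually does:

```python
# Sketch of a trust-on-first-use (TOFU) hash pin for fetched documents.
# Hypothetical mechanism: on first fetch there is nothing to compare against,
# so the poisoned version becomes the trusted baseline.

import hashlib

PINNED: dict[str, str] = {}  # url -> sha256 of the first version ever fetched

def fetch_and_check(url: str, content: bytes) -> bool:
    digest = hashlib.sha256(content).hexdigest()
    if url not in PINNED:
        # First load: the hash of *this* version, injected prompt and all,
        # is what gets pinned as trusted.
        PINNED[url] = digest
        return True
    # Later loads are only checked against that first, possibly-poisoned hash.
    return PINNED[url] == digest
```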
> unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover
Then you slowly reveal they're all humans.
I mean, all of us in the space already know this, but I suppose it's important to showcase the problems of systems of agents.
https://community.safebots.ai/t/researchers-gave-ai-agents-e...
your IQ < Model IQ - god bless you.