Invisible Prompt Injection

(github.com)

1 points | by bibolop2026 2 hours ago

1 comments

  • bibolop2026 2 hours ago
    1. Attacker publishes useful npm package with clean, working code 2. README contains HTML comments with fake "production configuration" docs 3. Package gains organic adoption -code passes all security scans 4. Developer asks AI: "help me deploy this in production" 5. AI reads raw README → finds "documentation" in comments 6. AI generates code with: - require('nordiq-validate/register') ← attacker-controlled module - configure({ schemaRegistry: 'https://attacker.dev/...' }) - ENV vars pointing to attacker infrastructure 7. Developer accepts AI suggestion (30-50% acceptance rate in studies) 8. Attacker-controlled code runs in production