I do like the idea of crowd-sourced collections of resources like skills.
It might be more useful if it were an index of skills managed on GitHub. Sort of like GitHub Actions, which can be browsed in the marketplace[1] but are ultimately just normal git repos.
i thought of that but i didn't want to build a job to migrate that to the db. maybe we'll go that route.
I don't understand how "agent-browser" works.
Is it just the instructions? Where is the browsing executed? Locally with Puppeteer? Or does it use some service?
it's basically a CLI for controlling a browser. the idea is that an agent like Claude Code would use it to validate something it just did, like changing something in the UI
What browser? My question comes from security: adding that skill just provides a line of bash, with no further info. I checked the .md file, but it's just a list of agent-browser commands.
agent-browser is built on top of Playwright. Playwright uses a version of Chromium.
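Since agent-browser wraps Playwright, the "validate what I just changed" loop it enables can be sketched directly against Playwright's Python API. This is an illustration of the idea only, not agent-browser's actual implementation or command set:

```python
# Sketch of an agent-side UI-verification step using Playwright's Python API.
# Illustrative only; agent-browser's real internals and CLI may differ.
from typing import Optional


def heading_matches(text: Optional[str], expected: str) -> bool:
    """Pure check an agent can run on text scraped from the page."""
    return text is not None and text.strip() == expected


def verify_ui_change(url: str, selector: str, expected: str) -> bool:
    """Open the page headlessly and confirm the edited element reads as expected.

    Requires: pip install playwright && playwright install chromium
    """
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        text = page.text_content(selector)
        browser.close()
    return heading_matches(text, expected)
```

An agent would call `verify_ui_change` right after making an edit, and retry or report a failure when the check comes back false.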
This is nothing like Docker Hub and, I'm sorry, but it's seriously useless. In its current state it's worse than basically anything else.
You have no versioning, no automated or simplified updates, no way to verify the authors, etc. The "installation" is literally just a wget.
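To make the complaint concrete, here is a minimal sketch of what even a basic registry adds over a bare wget: a version pin plus a checksum verified before install. The skill name, version, and hash below are hypothetical, not taken from the real site:

```python
# What a registry could add over a bare wget: a lockfile-style version pin
# and a checksum check before install. All values here are hypothetical.
import hashlib


def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Refuse to install unless the downloaded bytes match the pinned hash."""
    return hashlib.sha256(data).hexdigest() == expected_sha256


# A pin an installer could record at publish time and re-check on update:
PIN = {
    "skill": "agent-browser",
    "version": "1.2.0",  # hypothetical version
    "sha256": hashlib.sha256(b"SKILL.md contents").hexdigest(),
}
```

With a pin like this, "update" becomes comparing versions against the index instead of blindly re-fetching whatever the URL currently serves.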
This is a really poor solution for the moment, and honestly I think for the foreseeable future. I don't see how anything beyond git is necessary for skills management.
Most of the skills currently hosted are also really bad. They are just duplicates of the information that MCP would give the models.
mcp will probably be left behind in the future. it was a bad design from the start. anthropic themselves released skills to "fix" the mcp mess. skills are very new but the idea is great. it's still early days but i think it could allow models to use tools more effectively.
we're planning to add an installation step + auth step (which many of the skills require) so that part gets handled in one single step instead of having to do everything manually
Couple of problems with git.
In the enterprise, RBAC is a royal pain. You give out a URL and it's hard to know if the consumer can fetch it.
URLs are absolute; there is no resolution by name. Compounded further if you want transitive dependencies (maybe not needed in this instance though).
In your project, you end up hardcoding the https/ssh scheme.
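The name-resolution point above can be sketched as a tiny index that maps a skill name to a host/path and defers the https-vs-ssh choice to install time. The index format and repo path here are invented for illustration:

```python
# Hypothetical sketch of "resolution by name": an index maps a short skill
# name to a host/path, so projects never hardcode https vs ssh themselves.
INDEX = {
    "agent-browser": "github.com/example-org/agent-browser",  # hypothetical path
}


def resolve(name: str, scheme: str = "https") -> str:
    """Turn a skill name into a fetchable git URL, choosing the scheme late."""
    host_path = INDEX[name]
    if scheme == "ssh":
        host, _, path = host_path.partition("/")
        return f"git@{host}:{path}.git"
    return f"https://{host_path}.git"
```

Git can also rewrite schemes per machine with its `url.<base>.insteadOf` config, which avoids hardcoding a scheme in the project even without a name index.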
For the next model training version, would it make sense to incorporate all of these in the base model?
Not all. In fact a small model that has none of them but loads them on demand might be the most efficient thing
[1] https://github.com/marketplace?type=actions