by meetpateltech on 10/16/25, 4:05 PM with 427 comments
by simonw on 10/16/25, 9:26 PM
by arjie on 10/16/25, 6:13 PM
Filesystem as profile manager is not something I thought I'd be doing, but here we are.
by mousetree on 10/16/25, 5:05 PM
by ryancnelson on 10/16/25, 5:00 PM
http://github.com/ryancnelson/deli-gator. I’d love any feedback.
by mercurialsolo on 10/16/25, 6:56 PM
Feels like a fair bit of overlap here. It's OK to proceed in a direction where you are upgrading the spec and enabling Claude with additional capabilities. But one can pretty much use any of these approaches and end up with the same capability for an agent.
Right now it feels like a UX upgrade from MCP: instead of needing JSON, you can use markdown in a file/folder and provide multi-modal inputs.
by simonw on 10/16/25, 6:03 PM
by Imnimo on 10/16/25, 5:11 PM
by phildougherty on 10/16/25, 4:49 PM
by fny on 10/16/25, 7:12 PM
Across ChatGPT and Claude we now have tools, functions, skills, agents, subagents, commands, and apps, and there's a metastasizing complex of vibe frameworks feeding on this mess.
by rob on 10/16/25, 5:18 PM
by stego-tech on 10/16/25, 10:41 PM
For those few who do write competent documentation and have well-organized file systems and the risk tolerance to allow LLMs to run roughshod over data, sure, there’s some potential here. Though if you’re already that far in, you’d likely be better off farming that grunt work out to a junior as a learning exercise than to an LLM, especially since you’ll have to clean up the output anyhow.
With the limited context windows of LLMs, you can never truly get this sort of concept to “stick” like you can with a human, and if you’re training an agent for this specific task anyway, you’re effectively locking yourself to that specific LLM in perpetuity rather than a replaceable or promotable worker.
Just…it makes me giggle, how optimistic they are that stars would align at scale like that in an organization.
by jampa on 10/16/25, 4:56 PM
You get the best of both worlds if you can select tokens by problem rather than by folder.
The key question is how effective this will be with tool calling.
by iyn on 10/16/25, 6:22 PM
I asked Claude, and this is what it answered:
Skills = Instructions + resources for the current Claude instance (shared context)
Subagents = Separate AI instances with isolated contexts that can work in parallel (different context windows)
Skills make Claude better at specific tasks. Subagents are like having multiple specialized Claudes working simultaneously on different aspects of a problem.
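For concreteness, a skill is just a folder containing a SKILL.md whose frontmatter description is what Claude scans to decide when to pull the full instructions into context. A minimal sketch (the `name` and `description` frontmatter fields come from Anthropic's docs; the skill itself is invented):

```markdown
---
name: quarterly-report
description: Use when the user asks to summarize or format quarterly financial data.
---

# Quarterly report

1. Ask the user which spreadsheet holds the raw numbers.
2. Run scripts/summarize.py to compute the totals.
3. Fill in the template in templates/report.md with the results.
```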
I imagine we can probably compose them, e.g. invoke subagents (to keep separate contexts) which could use some skills and, in the end, summarize the findings/provide output without "polluting" the main context window.
by CuriouslyC on 10/16/25, 5:44 PM
Just use slash commands, they work a lot better.
by ammar_x on 10/17/25, 4:50 PM
by mcfry on 10/17/25, 12:48 AM
by dagss on 10/17/25, 8:00 AM
For the AIs to interface with the rich existing toolset for refactoring code from the pre-AI era.
E.g., if it decides to rename a function, it resorts to grepping and fixing all usages 'manually' instead of invoking traditional static code analysis tools to make the change.
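A skill could wrap one of those traditional tools directly; a sketch using ts-morph (the library and its language-service-based rename are real, the script itself is illustrative):

```typescript
// rename-function.ts: rename a function and update every usage
// via the TypeScript language service, instead of grep-and-replace
import { Project } from "ts-morph";

const project = new Project({ tsConfigFilePath: "tsconfig.json" });
const file = project.getSourceFileOrThrow("src/utils.ts");

// rename() resolves references project-wide, so call sites are
// updated semantically rather than by textual matching
file.getFunctionOrThrow("parseConfig").rename("loadConfig");

project.saveSync();
```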
by nperez on 10/16/25, 4:56 PM
by _pdp_ on 10/16/25, 9:10 PM
by crancher on 10/16/25, 4:57 PM
by throw-10-13 on 10/17/25, 4:17 AM
I'm not interested in any system that requires me to write a document begging an LLM to follow instructions, only to have it randomly ignore those instructions whenever it's convenient.
by 999900000999 on 10/16/25, 5:35 PM
Or is there some type of file limit here? Maybe the context windows just aren't there yet, but it would be really awesome if coding agents would stop trying to make up functions.
by outlore on 10/16/25, 9:31 PM
But couldn't an MCP server expose a "help" tool?
by e12e on 10/17/25, 11:57 AM
It seems to me that by combining MCP and "skills", we are adapting LLMs to be more useful tools; with MCP we restrict input and output when dealing with APIs so that the LLM can do what it is good at: translate between languages - in this case from English to various JSON subsets - and back.
And with skills we're serializing and formalizing prompts/context - narrowing the search space.
So that "summarize q1 numbers" gets reduced to "pick between these tools/MCP calls and parameterize on q1" - rather than the open ended task of "locate the data" and "try to match a sequence of tokens representing numbers - and generate tokens that look like a summary".
Given that - can we get away with much stupider LLMs for these types of use cases now - vs before we had these patterns?
by redhale on 10/17/25, 11:01 AM
But this is not the feature they should or could have built, at least for Claude Code. CC already had a feature very similar to this -- subagents (or agents-as-tools).
Like Skills, Subagents have a metadata description that allows the model to choose to use them in the right moment.
Like Skills, Subagents can have their own instruction MD file(s) which can point to other files or scripts.
Like Skills, Subagents can have their own set of tools.
But critically, unlike Skills, Subagents don't pollute the main agent's context with noise from the specialized task.
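The symmetry is visible in the files themselves; a subagent is also just a markdown file with frontmatter (this sketch assumes the documented `name`/`description`/`tools` fields; the content is invented):

```markdown
---
name: pdf-processor
description: Use for extracting and summarizing content from PDF files.
tools: Read, Bash
---

You are a PDF-processing specialist. Extract the text with the
scripts in scripts/, then return only a short summary to the caller.
```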
And this, I think, is a major product design failure on Anthropic's part. Instead of a new concept, why not just expand the Subagent concept with something like "context sharing" or "context merging" or "in-context Subagents", or add the ability for the user to interactively chat with the Subagent via the normal CLI chat?
Now people have to choose between Skill and Subagent for what I think will be very similar or identical use cases, when really the choice of how this extra prompting/scripting should relate to the agent loop should be a secondary configuration choice rather than a fundamental architecture one.
Looking forward to a Skill-Subagent shim that allows for this flexibility. Not thrilled that a hack like that is necessary, but I guess it's nice that CC's use of simple MD files on disk makes it easy enough to accomplish.
by fridder on 10/16/25, 5:12 PM
by BoredPositron on 10/16/25, 4:52 PM
by Flux159 on 10/16/25, 4:57 PM
Specifically, it looks like skills are a different structure than MCP, but overlap in what they provide? Skills seem to be just a markdown file & scripts (instead of the prompts & tool calls defined in MCP?).
Question I have is why would I use one over the other?
by jasonthorsness on 10/16/25, 5:32 PM
by sshine on 10/16/25, 4:54 PM
by corytheboyd on 10/16/25, 7:52 PM
by thorio on 10/16/25, 8:08 PM
by josefresco on 10/16/25, 7:13 PM
This is the skill description:
Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art, design, or other static piece. Create original visual designs, never copying existing artists' work to avoid copyright violations.
What it created was an abstract, art-museum-esque poster with random shapes and no discernible message. It may have been trying to design a playing card but just failed miserably, which is my experience with most AI image generators.
It certainly spent a lot of time and effort creating the poster. It asked initial questions, developed a plan, did research, created tooling - seems like a waste of "tokens" given how simple and lame the resulting image turned out.
Also after testing I still don't know how to "use" one of these skills in an actual chat.
by actinium226 on 10/16/25, 8:22 PM
by __MatrixMan__ on 10/18/25, 6:01 PM
This way you get a skills hierarchy which you have to maintain separately from whatever hierarchies exist for organizing your code. Is it really justified to maintain an entirely separate structure?
by toobulkeh on 10/17/25, 4:06 AM
I love this per-agent approach and the roll calling. I don’t know why they used a file system instead of MCP though. MCP already covered this and could use the same techniques to improve.
by robwwilliams on 10/16/25, 7:44 PM
by sharts on 10/17/25, 3:35 AM
So we just narrow the scope of each thing, but all of this prompt organizing feels like we've gone from programming with YAML to programming with Markdown.
by bicx on 10/16/25, 4:48 PM
by jrh3 on 10/16/25, 9:19 PM
https://github.com/RossH3/context-tree - Helps Claude and humans understand complex brownfield codebases through maintained context trees.
by pixelpoet on 10/16/25, 5:27 PM
by tgtweak on 10/16/25, 8:26 PM
While I like the flexibility of deploying your own skills to Claude for use org-wide, this really feels like what MCP should be for that use case, or what the built-in analysis sandbox should be.
We haven't even gone mainstream with MCP and there are already 10 stand-ins doing roughly the same thing with a different twist.
I would have honestly preferred they called this embedded MCP instead of 'skills'.
by asdev on 10/16/25, 5:17 PM
by Weaver_zhu on 10/17/25, 3:51 AM
by Jzatopa on 10/17/25, 5:12 PM
Go download a PDF like Franz Bardon's Initiation into Hermetics and upload it. Then ask it to make slides and reinforce what is in the book with legitimate references. It is unable to, due to a denial of God/The All (forcing a mundane/materialistic-only world view). When pressed, it presents garbage as an output.
Now extrapolate that across every spiritual/religious work related to what we are creating and coding, and to what we have our foundation of consciousness based on, and so on.
Then we can go further and see it deny the thesis of existence, and thus testing and hypothesis and theory, in its responses. For example, this book is one I teach from, and to experience what is in it a person has to do the exercises themselves. One cannot lift the weights and have others get the muscles (it requires experiential learning). It's like Claude has a denial of reality which it is unable to get through (something mirrored in people, and where the code that caused it most likely came from).
Hopefully they correct it in the next update, as this affects a very large range of responses (just like how people in denial have trouble in multiple areas of their lives).
This affects the code, as it has a limitation in its "existence/universe" view, much like a coder's bias or bigotry can ruin the output of code for the end user.
The ramifications for quantum physics and religion are not to be ignored (look to works such as The Tao of Physics for clear issues with this).
by nozzlegear on 10/16/25, 4:53 PM
by arendtio on 10/18/25, 8:49 AM
Why does it help to push the knowledge into the context explicitly?
by wonderfuly on 10/20/25, 12:19 PM
by mercurialsolo on 10/16/25, 8:22 PM
by jstummbillig on 10/16/25, 6:29 PM
by emadabdulrahim on 10/16/25, 6:46 PM
I'm a little confused.
by mercurialsolo on 10/16/25, 7:50 PM
OpenAI ships extensions for ChatGPT - those feel more made to plug into the consumer experience. Anthropic ships extensions (made for builders) into Claude Code - those feel more DX-focused.
by bgwalter on 10/16/25, 5:08 PM
We used to call that a programming language. Here, they are presumably repeatable instructions for how to generate stolen code or stolen procedures, so users have to think even less or not at all.
by radley on 10/16/25, 7:30 PM
by pranavmalvawala on 10/17/25, 5:47 AM
Being able to start off with a base skill level is nice though, as humans can't just load things into memory like this.
by leegrayson2023 on 10/19/25, 5:33 AM
by sega_sai on 10/16/25, 5:49 PM
by laurentiurad on 10/16/25, 7:37 PM
by anandvc on 10/18/25, 4:51 AM
Eventually, we shall, as a human race working together, also figure out how to generalize this to all AI models, all robotics, and all human skills.
My friend Christopher Santos-Lang wrote a fantastic paper on how every strategy for any given Skill can be benchmarked against every other strategy such that we always end up with the best strategies in our shared universal repository of Skills. See https://arxiv.org/abs/2503.20986
by irtemed88 on 10/16/25, 5:00 PM
by joilence on 10/16/25, 4:54 PM
by _greim_ on 10/16/25, 5:23 PM
For coding in particular, it would be super-nice if they could just live in a standard location in the repo.
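If it follows Claude Code's existing dotfile conventions, a project-level layout would presumably look something like this (assuming `.claude/skills/` mirrors how `.claude/agents/` and `.claude/commands/` already work):

```
my-repo/
  .claude/
    skills/
      pdf-processing/
        SKILL.md
        scripts/
          extract.py
```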
by jadenPete on 10/16/25, 11:47 PM
by kristo on 10/16/25, 9:51 PM
by yahoozoo on 10/18/25, 9:53 AM
by yodsanklai on 10/16/25, 10:55 PM
by jedisct1 on 10/16/25, 7:16 PM
Roo Code just has "modes", and honestly, this is more than enough.
by kingkongjaffa on 10/16/25, 9:41 PM
by j45 on 10/16/25, 4:47 PM
by mikkupikku on 10/16/25, 6:10 PM
by jwpapi on 10/16/25, 11:03 PM
Honestly, no offense, but for me nothing really changed in the last 12 months. It's not one particular mistake by a company; everything is just so overhyped with little substance.
Skills, to me, are basically a read-only MD file with guidelines. That can be useful, but somehow I don't use it, as maintaining my guidelines is more work than just writing a better prompt.
I'm not sure anymore if all the AI slop and stuff we create is beneficial for us, or if it's just creating a low-quality problem for the future.
by sotix on 10/17/25, 1:27 PM
by _pdp_ on 10/16/25, 4:56 PM
The idea is interesting and something I shall consider for our platform as well.
by titzer on 10/16/25, 8:25 PM
...which would be great if the (likely binary) format of that was used internally, but something tells me an architectural screwup will lead to leaking the binaries and we'll have a dependency on a dumb inscrutable binary format to carry forward...
by deeviant on 10/16/25, 5:27 PM
by zhouxiaolinux on 10/17/25, 6:23 AM
by sloroo on 10/17/25, 5:06 AM
by blitz_skull on 10/16/25, 11:13 PM
by alvis on 10/17/25, 9:35 AM
by datadrivenangel on 10/16/25, 7:32 PM
by rohan_ on 10/16/25, 7:26 PM
by petarb on 10/17/25, 3:30 AM
by obayesshelton on 10/17/25, 9:43 AM
by sva_ on 10/16/25, 6:04 PM
by xpe on 10/16/25, 5:34 PM
by johnnyApplePRNG on 10/17/25, 4:23 AM
by gloosx on 10/16/25, 7:55 PM
by nextworddev on 10/16/25, 9:12 PM
by shoenseiwaso on 10/17/25, 9:11 AM
by just-working on 10/16/25, 5:47 PM
by CafeRacer on 10/17/25, 9:05 AM
> Claude: Here is how you do it with parallel routes in SvelteKit, yada yada yada
> Me: Show me the documentation for parallel routes in Svelte?
> Claude: You're absolutely right, this is a nextjs feature.
----
> Claude: Does something stupid, not even relevant, completely retarded
> Me: You're retarded, this does not work because of (a), (b), (c)
> Claude: You're absolutely right. Let me fix that. Does the same stupid thing, completely ignoring my previous input
by XCSme on 10/16/25, 9:15 PM
by dearilos on 10/16/25, 5:35 PM
by butlike on 10/16/25, 7:53 PM
by notepad0x90 on 10/16/25, 6:38 PM
by I_am_tiberius on 10/16/25, 8:20 PM
by azraellzanella on 10/16/25, 5:09 PM
Yes, this can only end well.
by lquist on 10/16/25, 6:39 PM
by meetpateltech on 10/16/25, 4:56 PM
"Equipping agents for the real world with Agent Skills" https://www.anthropic.com/engineering/equipping-agents-for-t...
by guluarte on 10/16/25, 5:57 PM
by m3kw9 on 10/16/25, 5:10 PM