from Hacker News

Claude Skills

by meetpateltech on 10/16/25, 4:05 PM with 427 comments

https://www.anthropic.com/engineering/equipping-agents-for-t...
  • by simonw on 10/16/25, 9:26 PM

    Just published this about skills: "Claude Skills are awesome, maybe a bigger deal than MCP"

    https://simonwillison.net/2025/Oct/16/claude-skills/

  • by arjie on 10/16/25, 6:13 PM

    It's pretty neat that they're adding these things. In my projects, I have a `bin/claude` subdirectory where I ask it to put scripts etc. that it builds. In the claude.md I then note that it should look there for tools. It does a pretty good job of this. To be honest, the thing I most need is context-management helpers like "start a claude with this set of MCPs, then that set, and so on". Instead, right now I have separate subdirectories that I treat as projects (which are supported as profiles in Claude) and launch a `claude` from. The advantage of the `bin/claude` in each of these is that it functions as a longer-cycle learning mechanism. My Claude instantly knows how to analyze certain BigQuery datasets and where to find the credentials file and so on.
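
    For illustration, the kind of helper that ends up in that `bin/claude` directory might look like the sketch below; the credentials path and default query are placeholders, not anything from the actual setup.

      #!/usr/bin/env python3
      """Query helper Claude can call instead of rediscovering the setup every session."""
      # Illustrative sketch only: the credentials path and default query are made up.
      import os
      import sys

      from google.cloud import bigquery  # assumes google-cloud-bigquery is installed

      # Tell the client where the credentials live so Claude never has to ask.
      os.environ.setdefault(
          "GOOGLE_APPLICATION_CREDENTIALS",
          os.path.expanduser("~/secrets/bq-service-account.json"),  # placeholder path
      )

      def run(sql: str) -> None:
          """Run a query and print the rows as tab-separated values."""
          client = bigquery.Client()
          for row in client.query(sql).result():
              print("\t".join(str(value) for value in row.values()))

      if __name__ == "__main__":
          run(sys.argv[1] if len(sys.argv) > 1 else "SELECT 1")

    A one-line note in claude.md ("project tools live in `bin/claude/`") is then enough for the model to find and reuse it.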

    Filesystem as profile manager is not something I thought I'd be doing, but here we are.

  • by mousetree on 10/16/25, 5:05 PM

    I'm perplexed why they would use such a silly example in their demo video (rotating an image of a dog upside down and cropping). Surely they can find more compelling examples of where these skills could be used?
  • by ryancnelson on 10/16/25, 5:00 PM

    The uptake on Claude Skills seems to have a lot of momentum already! I was fascinated on Tuesday by “Superpowers”, https://blog.fsck.com/2025/10/09/superpowers/ … and then packaged up all the tool-building I’ve been working on for a while into somewhat tidy skills that I can delegate agents to:

    http://github.com/ryancnelson/deli-gator (I’d love any feedback)

  • by mercurialsolo on 10/16/25, 6:56 PM

    Subagents, MCP, skills - I wonder how they are supposed to interact with each other?

    Feels like a fair bit of overlap here. It's OK to proceed in a direction where you are upgrading the spec and enabling Claude with additional capabilities. But one can pretty much use any of these approaches and end up with the same capability for an agent.

    Right now it feels like a UX upgrade over MCP: instead of needing JSON, you can use Markdown in a file/folder and provide multi-modal inputs.

  • by simonw on 10/16/25, 6:03 PM

    I accidentally leaked the existence of these last Friday, glad they officially exist now! https://simonwillison.net/2025/Oct/10/claude-skills/
  • by Imnimo on 10/16/25, 5:11 PM

    I feel like a danger with this sort of thing is that the capability of the system to use the right skill is limited by the little blurb you give about what the skill is for. Contrast with the way a human learns skills - as we gain experience with a skill, we get better at understanding when it's the right tool for the job. But Claude is always starting from ground zero and skimming your descriptions.
  • by phildougherty on 10/16/25, 4:49 PM

    getting hard to keep up with skills, plugins, marketplaces, connectors, add-ons, yada yada
  • by fny on 10/16/25, 7:12 PM

    I fear the conceptual churn we're going to endure in the coming years will rival frontend dev.

    Across ChatGPT and Claude we now have tools, functions, skills, agents, subagents, commands, and apps, and there's a metastasizing complex of vibe frameworks feeding on this mess.

  • by rob on 10/16/25, 5:18 PM

    Subagents, plugins, skills, hooks, mcp servers, output styles, memory, extended thinking... seems like a bunch of stuff you can configure in Claude Code that overlap in a lot of areas. Wish they could figure out a way to simplify things.
  • by stego-tech on 10/16/25, 10:41 PM

    I’m kind of in stitches over this. Claude’s “skills” are dependent upon developers writing competent documentation and keeping it up to date…which most seemingly can’t even do for actual code they write, never mind a brute-force black box like an LLM.

    For those few who do write competent documentation and have well-organized file systems and the risk tolerance to allow LLMs to run roughshod over data, sure, there’s some potential here. Though if you’re already that far in, you’d likely be better off farming that grunt work out to a Junior as a learning exercise than to an LLM, especially since you’ll have to clean up the output anyhow.

    With the limited context windows of LLMs, you can never truly get this sort of concept to “stick” like you can with a human, and if you’re training an agent for this specific task anyway, you’re effectively locking yourself to that specific LLM in perpetuity rather than a replaceable or promotable worker.

    Just…it makes me giggle, how optimistic they are that the stars would align at scale like that in an organization.

  • by jampa on 10/16/25, 4:56 PM

    I think this is great. A problem with huge codebases is that CLAUDE.md files become bloated with niche workflows like CI and E2E testing. Combined with MCPs, this pollutes the context window and eventually degrades performance.

    You get the best of both worlds if you can select tokens by problem rather than by folder.

    The key question is how effective this will be with tool calling.

  • by iyn on 10/16/25, 6:22 PM

    Does anyone know how skills relate to subagents? It seems that subagents have more capabilities (e.g. they can access the internet), but there's a lot of overlap.

    I asked Claude and this is what it answered:

      Skills = Instructions + resources for the current Claude instance (shared context)
      Subagents = Separate AI instances with isolated contexts that can work in parallel (different context windows)
      Skills make Claude better at specific tasks. Subagents are like having multiple specialized Claudes working simultaneously on different aspects of a problem.
    
    I imagine we can probably compose them, e.g. invoke subagents (to keep separate contexts) which could use some skills and, in the end, summarize the findings/provide output without "polluting" the main context window.
  • by CuriouslyC on 10/16/25, 5:44 PM

    Anything the model chooses to use is going to waste context and get utilized poorly. Also, the more skills you have, the worse they're going to be. It's subagents v2.

    Just use slash commands, they work a lot better.

  • by ammar_x on 10/17/25, 4:50 PM

    Claude Skills seem to be the option that offers the highest flexibility for adding capabilities with the most simplicity. Better than MCP in my opinion. Hope it becomes a standard and gets adopted by OpenAI and the rest of the labs.
  • by mcfry on 10/17/25, 12:48 AM

    This is just... rebranding for instructions and files? lol. Love how the instructions for creating a skill are buried. Marketing go brr.
  • by dagss on 10/17/25, 8:00 AM

    Here's what I'd like:

    For the AIs to interface with the rich existing toolset for refactoring code from the pre-AI era.

    E.g., if it decides to rename a function, it resorts to grepping and fixing all usages 'manually', instead of invoking traditional static code analysis tools to do the change.

  • by nperez on 10/16/25, 4:56 PM

    Seems like a more organized way to do the equivalent of a folder full of md files + instructing the LLM to ls that folder and read the ones it needs
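
    A rough sketch of that pattern (the folder layout and the frontmatter-style "description:" field are illustrative, not Anthropic's actual format):

      from pathlib import Path

      SKILLS_DIR = Path("skills")  # e.g. skills/pdf-tools.md, skills/bigquery.md

      def list_skills() -> dict[str, str]:
          """Return {skill name: one-line description}; only this index stays in context up front."""
          index = {}
          for md in SKILLS_DIR.glob("*.md"):
              for line in md.read_text().splitlines():
                  if line.startswith("description:"):
                      index[md.stem] = line.removeprefix("description:").strip()
                      break
          return index

      def load_skill(name: str) -> str:
          """Read the full instructions only once the model decides the skill is relevant."""
          return (SKILLS_DIR / f"{name}.md").read_text()

    The cheap index is always in context; the full markdown (and any bundled scripts) only gets read on demand.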
  • by _pdp_ on 10/16/25, 9:10 PM

    I predict there will be some sort of open-source package manager project soon. Download skills from some 3rd-party website and run them inside Claude. The risks of supply chain issues will be obvious, but nobody will care - at least not in the short term.
  • by crancher on 10/16/25, 4:57 PM

    Seems like the exact same thing, from the front page a few days ago: https://github.com/obra/superpowers/tree/main
  • by throw-10-13 on 10/17/25, 4:17 AM

    Architectural churn brought to you by VC funded marketing.

    I'm not interested in any system that requires me to write a document begging an LLM to follow instructions, only to have it randomly ignore those instructions whenever it's convenient.

  • by 999900000999 on 10/16/25, 5:35 PM

    Can I just tell it to read the entire Godot source repo as a skill?

    Or is there some type of file limit here? Maybe the context windows just aren't there yet, but it would be really awesome if coding agents would stop trying to make up functions.

  • by outlore on 10/16/25, 9:31 PM

    I'm struggling to see how this is different from prepackaged prompts. Simon's article talks about skill metadata being used by the model to look up the full prompt as a way to save on context usage. That is analogous to the model calling --help when it needs to use a CLI tool without needing to load up the full man pages ahead of time.

    But couldn't an MCP server expose a "help" tool?
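
    Something like the sketch below would do it, assuming the official Python MCP SDK's FastMCP helper (the docs folder and file naming are made up):

      from pathlib import Path

      from mcp.server.fastmcp import FastMCP  # assumes the official Python MCP SDK is installed

      DOCS = Path("docs/tools")  # hypothetical folder of per-tool usage notes

      mcp = FastMCP("help-server")

      @mcp.tool()
      def tool_help(tool_name: str) -> str:
          """Return the full usage notes for a tool, loaded only when the model asks for them."""
          page = DOCS / f"{tool_name}.md"
          return page.read_text() if page.exists() else f"No help found for {tool_name}"

      if __name__ == "__main__":
          mcp.run()

    The difference several commenters point at is that skills get the same lazy loading with nothing but files on disk, no server process required.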

  • by e12e on 10/17/25, 11:57 AM

    With these patterns emerging, does anyone know how local LLMs are faring?

    It seems to me that by combining MCP and "skills", we are adapting LLMs to be more useful tools; with MCP we restrict input and output when dealing with APIs so that the LLM can do what it is good at: translating between languages - in this case from English to various JSON subsets - and back.

    And with skills we're serializing and formalizing prompts/context - narrowing the search space.

    So that "summarize q1 numbers" gets reduced to "pick between these tools/MCP calls and parameterize on q1" - rather than the open ended task of "locate the data" and "try to match a sequence of tokens representing numbers - and generate tokens that look like a summary".

    Given that - can we get away with much stupider LLMs for these types of use cases now - vs before we had these patterns?

  • by redhale on 10/17/25, 11:01 AM

    This is interesting, and I think there are use cases where this feature may make sense.

    But this is not the feature they should or could have built, at least for Claude Code. CC already had a feature very similar to this -- subagents (or agents-as-tools).

    Like Skills, Subagents have a metadata description that allows the model to choose to use them in the right moment.

    Like Skills, Subagents can have their own instruction MD file(s) which can point to other files or scripts.

    Like Skills, Subagents can have their own set of tools.

    But critically, unlike Skills, Subagents don't pollute the main agent's context with noise from the specialized task.

    And this, I think, is a major product design failure on Anthropic's part. Instead of a new concept, why not just expand the Subagent concept with something like "context sharing" or "context merging" or "in-context Subagents", or add the ability for the user to interactively chat with the Subagent via the normal CLI chat?

    Now people have to choose between Skill and Subagent for what I think will be very similar or identical use cases, when really the choice of how this extra prompting/scripting should relate to the agent loop should be a secondary configuration choice rather than a fundamental architecture one.

    Looking forward to a Skill-Subagent shim that allows for this flexibility. Not thrilled that a hack like that is necessary, but I guess it's nice that CC's use of simple MD files on disk makes it easy enough to accomplish.

  • by fridder on 10/16/25, 5:12 PM

    All of these random features are just pushing me further towards model-agnostic tools like goose
  • by BoredPositron on 10/16/25, 4:52 PM

    It is a bit ironic that the better the models get, the more user input they seem to need.
  • by Flux159 on 10/16/25, 4:57 PM

    I wonder how this works with mcpb (renamed from dxt Desktop extensions): https://github.com/anthropics/mcpb

    Specifically, it looks like skills are a different structure than MCP, but overlap in what they provide? Skills seem to be just a markdown file & then scripts (instead of the prompts & tool calls defined in MCP?).

    Question I have is why would I use one over the other?

  • by jasonthorsness on 10/16/25, 5:32 PM

    When the skill is used locally in Claude Code does it still run in a virtual machine? Like some sort of isolation container with the target directory mounted?
  • by sshine on 10/16/25, 4:54 PM

    I love how the promise of free labor motivates everyone to become API first, document their practices, and plan ahead in writing before coding.
  • by corytheboyd on 10/16/25, 7:52 PM

    I’ll give it a fair go, but how is it not going to have the same problem of _maybe_ using MCP tools? The same problem of trying to add to your prompt “only answer if you are 100% correct”? A skill just sounds like more markdown that is fed into context, but with a cool name that sounds impressive, and some indexing of the defined skills on start (same as MCP tools?)
  • by thorio on 10/16/25, 8:08 PM

    How about using some of those skills to make that page mobile-ready...
  • by josefresco on 10/16/25, 7:13 PM

    I just tested the canvas-design skill and the results were pretty awful.

    This is the skill description:

    Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art, design, or other static piece. Create original visual designs, never copying existing artists' work to avoid copyright violations.

    What it created was an abstract, art-museum-esque poster with random shapes and no discernible message. It may have been trying to design a playing card but just failed miserably, which is my experience with most AI image generators.

    It certainly spent a lot of time and effort creating the poster. It asked initial questions, developed a plan, did research, created tooling - it seems like a waste of "tokens" given how simple and lame the resulting image turned out.

    Also after testing I still don't know how to "use" one of these skills in an actual chat.

  • by actinium226 on 10/16/25, 8:22 PM

    It's an interesting idea (among many) to try to address the problem of LLMs getting off task, but I notice that there's no evaluation in the blog post. Like, ok cool, you've added "skills," but is there any evidence that they're useful or are we just grasping at straws here?
  • by __MatrixMan__ on 10/18/25, 6:01 PM

    I don't understand why it was necessary to develop a new protocol for this. LSP already exists for discovering what relevant functions there are to call; wouldn't it be better to have a single source of truth for function docs rather than a markdown file which points the agent at the function so that it can then read more or less the same information from the function's docstring?

    This way you get a skills hierarchy which you have to maintain separately from whatever hierarchies exist for organizing your code. Is it really justified to maintain an entirely separate structure?

  • by toobulkeh on 10/17/25, 4:06 AM

    I implemented a rudimentary version of this based on some BabyAGI loops, called autolearn: autolearn.dev

    I love this per-agent approach and the roll calling. I don’t know why they used a file system instead of MCP though. MCP already covered this and could use the same techniques to improve.

  • by robwwilliams on 10/16/25, 7:44 PM

    Could be helpful. I often edit scientific papers and grant applications. Orienting Claude on the frontend of each project works but an “Editing Skill” set could be more general and make interactions with Claude more clued in to goals instead of starting stateless.
  • by sharts on 10/17/25, 3:35 AM

    Isn’t all of everything just a bundle of prompts and scripts in various folders with some shortcuts to them all?

    So we just narrow the scope of each thing, but all of this prompt organizing feels like we've gone from programming with YAML to programming with Markdown.

  • by bicx on 10/16/25, 4:48 PM

    Interesting. For Claude Code, this seems to have generous overlap with existing practice of having markdown "guides" listed for access in the CLAUDE.md. Maybe skills can simply make managing such guides more organized and declarative.
  • by jrh3 on 10/16/25, 9:19 PM

    The tools I build for Claude Code keep reducing back to just using Claude Code and watching Anthropic add what I need. This is my tool for brownfield projects with Claude Code. I added skills based on https://blog.fsck.com/2025/10/09/superpowers/

    https://github.com/RossH3/context-tree - Helps Claude and humans understand complex brownfield codebases through maintained context trees.

  • by pixelpoet on 10/16/25, 5:27 PM

    Aside: I really love Anthropic's design language, so beautiful and functional.
  • by tgtweak on 10/16/25, 8:26 PM

    Eventually (and not even that far out), LLMs will be able to churn up their own "skills" using their sandbox code environments - and possibly recycle them through context on a per-user basis.

    While I like the flexibility of deploying your own skills to claude for use org-wide, this really feels like what MCP should be for that use case, or what built-in analysis sandbox should be.

    We haven't even gone mainstream with MCP and there are already 10 stand-ins doing roughly the same thing with a different twist.

    I would have honestly preferred they called this embedded MCP instead of 'skills'.

  • by asdev on 10/16/25, 5:17 PM

    I wonder what the accuracy is for Claude to always follow a Skill accurately. I've had trouble getting LLMs to follow specific workflows 100% consistently without skipping or missing steps.
  • by Weaver_zhu on 10/17/25, 3:51 AM

    I recall recent work [ACE](https://www.arxiv.org/abs/2510.04618) and [GEPA](https://arxiv.org/abs/2507.19457) where models are improved by adapting and adopting different kinds of prompts. The improvements are expected to generalize better than fine-tuning.
  • by Jzatopa on 10/17/25, 5:12 PM

    It also has some of what I call "consciousness" blocks.

    Go download a PDF like Franz Bardon's Initiation into Hermetics and upload it. Then ask it to make slides and reinforce what is in the book with legitimate references. It is unable to do so due to a denial of God/The All (forcing a mundane/materialistic-only world view). When pressed it presents garbage as an output.

    Now extrapolate that across every spiritual/religious work related to what we are creating, coding, have our foundation of consciousness based on and so on.

    Then we can go further and see it deny theses of existence, and thus testing and hypothesis and theory, in its response. For example, this book is one I teach from, and to experience what is in it a person has to do the exercises themselves. One cannot lift the weights and have others get the muscles (it requires experiential learning). It's like Claude has a denial of reality which it is unable to get through (something mirrored in people, and where the code that caused it most likely came from).

    Hopefully they correct it in the next update, as this affects a very large range of responses in reality (just like how people in denial have trouble in multiple areas of their lives).

    This affects the code, as it has a limitation in its "existence/universe" view, much like how a coder's bias or bigotry can ruin the output of code for the end user.

    The ramifications for Quantum physics and religion are not to be ignored (look to works such as the Tao of Physics for clear issues with this)

  • by nozzlegear on 10/16/25, 4:53 PM

    It superficially reminds me of the old "Alexa Skills" thing (I'm not even sure if Alexa still has "Skills"). It might just be the name making that connection for me.
  • by arendtio on 10/18/25, 8:49 AM

    So the real question here is, why do you need such a feature in the first place? I mean, it is helpful, even when the LLM creates the skill files itself. But why does it make such a big difference when the knowledge can be generated by the LLM?

    Why does it help to push the knowledge into the context explicitly?

  • by wonderfuly on 10/20/25, 12:19 PM

    A curated list of Claude Skills: https://mcpservers.org/claude-skills
  • by mercurialsolo on 10/16/25, 8:22 PM

    The way this is headed, I also see a burgeoning class of tools emerging: MCP servers, skill managers, sub-agent builders. It feels like the patterns and protocols need a clearer explanation of how they synthesize into a practical dev (extension) toolkit that is useful across multiple surfaces, e.g. chat vs coding vs media gen.
  • by jstummbillig on 10/16/25, 6:29 PM

    ELI5: How is a skill different from a tool?
  • by emadabdulrahim on 10/16/25, 6:46 PM

    So skills are basically preset system prompts, assuming different roles etc.? Or is there more to it?

    I'm a little confused.

  • by mercurialsolo on 10/16/25, 7:50 PM

    One sharp contrast I do see between OpenAI and Anthropic is how the product extensions are built around their flagship products.

    OpenAI ships extensions for ChatGPT that plug more into the consumer experience. Anthropic ships extensions (made for builders) into Claude Code that feel more DX-oriented.

  • by bgwalter on 10/16/25, 5:08 PM

    "Skills are repeatable and customizable instructions that Claude can follow in any chat."

    We used to call that a programming language. Here, they are presumably repeatable instructions on how to generate stolen code or stolen procedures so users have to think even less or not at all.

  • by radley on 10/16/25, 7:30 PM

    It will be interesting to see how this is structured. I was already doing something similar with Claude Projects & Instructions, MCP, and Obsidian. I'm hoping that Skills can cascade (from general to specific) and/or be combined between projects.
  • by pranavmalvawala on 10/17/25, 5:47 AM

    I like where this is heading. In the coming months, I'm expecting Claude to learn skills automatically based on my inputs over time.

    Being able to start off with a base skill level is nice though, as humans can't just load things into memory like this.

  • by leegrayson2023 on 10/19/25, 5:33 AM

    I have built a website for learning and sharing Claude skills. English and Chinese have been added; more languages and skill cases will be added soon.
  • by sega_sai on 10/16/25, 5:49 PM

    There seems to be a lot of overlap of this with MCP tools. Also presumably if there are a lot of skills, they will be too big for the context and one would need some way to find the right one. It is unclear how well this approach will scale.
  • by laurentiurad on 10/16/25, 7:37 PM

    AGI nowhere near
  • by anandvc on 10/18/25, 4:51 AM

    Claude Skills are going to be the end of all human mental labor, but in a good way.

    Eventually, we shall, as a human race working together, also figure out how to generalize this to all AI models, all robotics, and all human skills.

    My friend Christopher Santos-Lang wrote a fantastic paper on how every strategy for any given Skill can be benchmarked against every other strategy such that we always end up with the best strategies in our shared universal repository of Skills. See https://arxiv.org/abs/2503.20986

  • by irtemed88 on 10/16/25, 5:00 PM

    Can someone explain the differences between this and Agents in Claude Code? Logically they seem similar. From my perspective it seems like Skills are more well-defined in their behavior and function?
  • by joilence on 10/16/25, 4:54 PM

    If I understand correctly, it looks like a `skill` is an instructed usage/pattern of tools, so it saves the LLM agent's effort at trial & error when using tools? And it's basically just a prompt.
  • by _greim_ on 10/16/25, 5:23 PM

    > Developers can also easily create, view, and upgrade skill versions through the Claude Console.

    For coding in particular, it would be super-nice if they could just live in a standard location in the repo.

  • by jadenPete on 10/16/25, 11:47 PM

    What benefit do skills offer beyond writing good, human-centric documentation and either checking it into your codebase or making it accessible via an MCP server?
  • by kristo on 10/16/25, 9:51 PM

    How is this different from commands? They're automatically invoked? How does claude decide when to use a skill? How specific do I need to write my skill?
  • by yahoozoo on 10/18/25, 9:53 AM

    Can skills and other various Claude “addons” be used globally if stored in, say, “~/.claude”?
  • by yodsanklai on 10/16/25, 10:55 PM

    I'd like to fast forward to a time where these tools are stable and mature so we can focus on coding again
  • by jedisct1 on 10/16/25, 7:16 PM

    Too many options, this is getting very confusing.

    Roo Code just has "modes", and honestly, this is more than enough.

  • by kingkongjaffa on 10/16/25, 9:41 PM

    What's the difference in use case between a claude-skill and making a task specific claude project?
  • by j45 on 10/16/25, 4:47 PM

    I wonder if Claude Skills will help return Claude back to the level of performance it had a few months ago.
  • by mikkupikku on 10/16/25, 6:10 PM

    I'd love a Skill for effective use of subagents in Claude Code. I'm still struggling with that.
  • by jwpapi on 10/16/25, 11:03 PM

    I’m really fatigued by all these releases.

    Honestly no offense, but for me nothing really changed in the last 12 months. It’s not one particular mistake by a company but everything is just so overhyped with little substance.

    Skills to me are basically providing a read-only md file with guidelines. Which can be useful, but somehow I don't use it, as maintaining my guidelines is more work than just writing a better prompt.

    I’m not sure anymore if all the AI slop and stuff we create is beneficial for us or if it’s just creating a low-quality problem in the future.

  • by sotix on 10/17/25, 1:27 PM

    I've been using Claude at work for the past two months, and the other day I realized that during that time, I haven't had my previously weekly aha moment while in the shower or on a walk where the solution to a problem suddenly came to me. Claude has robbed me of that joy, which is why I got into software engineering. Now I review its slop or the slop that other engineers make with it. I think I'll take a walk today.
  • by _pdp_ on 10/16/25, 4:56 PM

    At first I wasn't sure what this is. Upon further inspection skills are effectively a bunch of markdown files and scripts that get unzipped at the right time and used as context. The scripts are executed to get deterministic output.
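
    For a sense of what those bundled scripts might look like, here is a purely illustrative sketch of the kind of deterministic helper a skill could ship (the file names and the operation are made up, echoing the rotate-and-crop demo):

      #!/usr/bin/env python3
      """Example of a deterministic helper a skill might bundle: same input, same output."""
      import sys

      from PIL import Image  # assumes Pillow is available in the execution sandbox

      def rotate_and_crop(src: str, dst: str) -> None:
          """Flip the image upside down, then crop a centered square."""
          img = Image.open(src).rotate(180)
          side = min(img.size)
          left = (img.width - side) // 2
          top = (img.height - side) // 2
          img.crop((left, top, left + side, top + side)).save(dst)

      if __name__ == "__main__":
          rotate_and_crop(sys.argv[1], sys.argv[2])

    The model only needs to know from the markdown that the script exists and what it does; running it keeps the pixel work out of the context entirely.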

    The idea is interesting and something I shall consider for our platform as well.

  • by titzer on 10/16/25, 8:25 PM

    While not generally a bad idea, I find it amusing that they are reinventing shared libraries where the code format is...English. So the obvious next step is "precompiling" skills to a form that is better for Claude internally.

    ...which would be great if the (likely binary) format of that was used internally, but something tells me an architectural screwup will lead to leaking the binaries and we'll have a dependency on a dumb inscrutable binary format to carry forward...

  • by deeviant on 10/16/25, 5:27 PM

    Basically just rules/workflows from cursor/windsurf, but with a UI.
  • by zhouxiaolinux on 10/17/25, 6:23 AM

    What is the fundamental difference between it and agents, slash commands, or MCP?
  • by sloroo on 10/17/25, 5:06 AM

    It says a 3-minute read there, but the YouTube videos alone are 2 minutes :(
  • by blitz_skull on 10/16/25, 11:13 PM

    It’s not clear to me how this is better than MCP. Can someone ELI5?
  • by alvis on 10/17/25, 9:35 AM

    Is Anthropic killing its own plugin just days after it was born????
  • by datadrivenangel on 10/16/25, 7:32 PM

    So sort of like MCP prompt templates except not prompt templates?
  • by rohan_ on 10/16/25, 7:26 PM

    Cursor launched this a while ago with "Cursor Rules"
  • by petarb on 10/17/25, 3:30 AM

    So it’s a folder of prompts specific for the task at hand?
  • by obayesshelton on 10/17/25, 9:43 AM

    Is this not just a serverless function without the API?
  • by sva_ on 10/16/25, 6:04 PM

    All this AI, and yet it can't render properly on mobile.
  • by xpe on 10/16/25, 5:34 PM

    Better when blastin' Skills by Gang Starr (headphones recommended if at work):

    https://www.youtube.com/watch?v=Lgmy9qlZElc

  • by johnnyApplePRNG on 10/17/25, 4:23 AM

    Macros seems like a better name than Skills, no?!
  • by gloosx on 10/16/25, 7:55 PM

    wow, this news post layout is not fitting the screen on mobile... Couldn't these 10x programmers vibecode a proper mobile version?
  • by nextworddev on 10/16/25, 9:12 PM

    What is this, tools for Claude web app?
  • by shoenseiwaso on 10/17/25, 9:11 AM

    $ claude load skill kungfu
  • by just-working on 10/16/25, 5:47 PM

    I simply do not care about anything AI now. I have a severe revulsion to it. I miss the before times.
  • by CafeRacer on 10/17/25, 9:05 AM

    Meanwhile Claude

    > Claude: Here is how you do it with parallel routes in SvelteKit yada yada yada

    > Me: Show me the documentation for parallel routes for svelte?

    > Claude: You're absolutely right, this is a nextjs feature.

    ----

    > Claude: Does something stupid, not even relevant, completely retarded

    > Me: You're retarded, this does not work because of (a), (b), (c)

    > Claude: You're absolutely right. Let me fix that. Does same stupid thing, completely ignoring my previous input

  • by XCSme on 10/16/25, 9:15 PM

    Isn't this just RAG?
  • by dearilos on 10/16/25, 5:35 PM

    We're trying to solve a similar problem at wispbit - this is an interesting way to do it!
  • by butlike on 10/16/25, 7:53 PM

    Great, so now I can script the IDE...err, I mean LLM. I can't help but feel like we've been here before, and the magic is wearing thin.
  • by notepad0x90 on 10/16/25, 6:38 PM

    Just me or is Anthropic doing a lot better of a job at marketing than OpenAI and Google?
  • by I_am_tiberius on 10/16/25, 8:20 PM

    Every release of these companies makes me angry because I know they take advantage of all the people who release content to the public. They just consume and take the profit. In addition to that Anthropic has shown that they don't care about our privacy AT ALL.
  • by azraellzanella on 10/16/25, 5:09 PM

    "Keep in mind, this feature gives Claude access to execute code. While powerful, it means being mindful about which skills you use—stick to trusted sources to keep your data safe."

    Yes, this can only end well.

  • by lquist on 10/16/25, 6:39 PM

    lol how is this not optimized for mobile
  • by meetpateltech on 10/16/25, 4:56 PM

    Detailed engineering blog:

    "Equipping agents for the real world with Agent Skills" https://www.anthropic.com/engineering/equipping-agents-for-t...

  • by guluarte on 10/16/25, 5:57 PM

    great! another set of files the models will completely ignore like CLAUDE.md
  • by m3kw9 on 10/16/25, 5:10 PM

    I feel like this is making things more complicated than they need to be. LLMs should automatically do this behind the scenes; you won’t even see it.