from Hacker News

Most RESTful APIs aren't really RESTful

by BerislavLopac on 7/9/25, 7:04 AM with 564 comments

  • by cjpearson on 7/9/25, 9:27 AM

    I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle. When I see "REST API" I can safely assume the following:

    - The API returns JSON

    - CRUD actions are mapped to POST/GET/PUT/DELETE

    - The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec

    - There's a decent chance listing endpoints were changed to POST to support complex filters

    Like Agile, CI or DevOps, you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood.
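
    In other words, the commonly understood shape looks roughly like this (a throwaway Flask sketch, purely illustrative; the resource name and routes are invented, not from the article):

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        WIDGETS = {}  # toy in-memory store

        @app.post("/widgets")                 # create
        def create_widget():
            wid = str(len(WIDGETS) + 1)
            WIDGETS[wid] = request.get_json()
            return jsonify({"id": wid}), 201

        @app.get("/widgets/<wid>")            # read
        def get_widget(wid):
            return jsonify(WIDGETS[wid])

        @app.delete("/widgets/<wid>")         # delete
        def delete_widget(wid):
            WIDGETS.pop(wid, None)
            return "", 204

        # And, sure enough, listing with complex filters tends to end up as
        # POST /widgets/search rather than GET with query params.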

  • by mixedbit on 7/9/25, 9:45 AM

    When I was working on my first HTTP-based API 13 years ago, based on many comments about true REST, I decided to first study what REST should really be. I read Fielding's paper cover to cover, I read the RESTful Web Services Cookbook from O'Reilly, and then proceeded to work around Django idioms to provide a REST API. This was a bit of cargo-cult thinking on my end; I didn't truly understand how REST would benefit my service. It took me several more years and several more HTTP APIs to understand that, in the case of these services, there were no benefits.

    The vision of an API that is self-discoverable and that works with a generic client is not practical in most cases. I think that perhaps the AWS dashboard, with its multitude of services, has some generic UI code that allows it to handle these services without service-specific logic, but I doubt even that.

    Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing are left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you made a lot of extra effort to end up with the same solution that non-REST services implement - a service provides an API and JS code to work with the API (or a command-line client that works with the API), but there is no client code reuse at all.

    I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.

  • by salmonellaeater on 7/9/25, 9:06 AM

    Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.

    Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.

    REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:

      - Government portals for publicly accessible information, like legal codes, weather reports, or property records
    
      - Government portals for filing forms and other interactions
    
      - Open data initiatives like Wikipedia and OpenStreetMap
    
    Considering these examples, it makes sense that policing of what "REST" means comes from the more academically-minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.
  • by recursivedoubts on 7/9/25, 12:45 PM

    This is a very good and detailed review of the concepts of REST, kudos to the author.

    One additional point I would add is that making use of the REST-ful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze:

    https://htmx.org/essays/hypermedia-clients

    https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans

  • by _heimdall on 7/9/25, 12:00 PM

    What's often missed when this topic comes up is the question of who the back end API is intended for.

    REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page; the end user of that API is the person using a browser. MCP is a more recent example: that protocol is only needed because they want agents talking to APIs they don't own, and need a solution for discoverability and interpretability in a sea of JSON RPC APIs.

    When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better documented and specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping, for much the same reason: I'm building both ends of the app and it's faster to ignore separation of concerns.

    edit: typo

  • by BrenBarn on 7/9/25, 8:16 PM

    > Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier. It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.

    It "was perceived as" a barrier because it is a barrier. It "felt easier" because it is easier. The by-the-book REST principles aren't a good cost-benefit tradeoff for common cases.

    It is like saying that your microwave should just have one button that you press to display a menu of "set timer", "cook", "defrost", etc., and then one other button you use to select from the menu, and then when you choose one it shows another menu of what power level and then another for what time, etc. It's more cumbersome than just having some built-in buttons and learning what they do.

    I actually own a device that works in that one-button way. It's an OBD engine code reader. It only has two buttons, basically "next" and "select" and everything is menus. Even for a use case that basically only has two operations ("read the codes" and "clear a code"), it is noticeably cumbersome.

    Also, the fact that people still suggest it's indispensable to read Fielding's dissertation is the kind of thing that should give everyone pause. If the ideas are good there should be many alternative statements for general audiences or different perspectives. No one says that you don't truly understand physics unless you read Newton's Principia.
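
    Concretely, the two client styles the quoted passage is contrasting look roughly like this (a sketch; the base URL and the "orders" link relation are my own illustrations, not from the article):

        import requests

        BASE = "https://api.example.com"  # hypothetical service

        # Documentation-driven style: read the docs once, hardcode the template.
        orders = requests.get(f"{BASE}/users/42/orders").json()

        # Hypermedia-driven style: fetch the user, then follow whatever link
        # the server advertises under an agreed-upon relation name.
        user = requests.get(f"{BASE}/users/42").json()
        orders = requests.get(user["_links"]["orders"]["href"]).json()

    Either way the client still has to know that an "orders" relation exists and what it means, which is part of why the first version keeps winning.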

  • by Scarblac on 7/9/25, 8:10 AM

    UI designers want control over the look of the page in detail. E.g. some actions that can be taken on a resource are a large button and some are hidden in a menu or not rendered in the UI at all.

    A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.

    So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.

  • by dwaltrip on 7/9/25, 10:59 AM

    I'll never understand why the HATEOAS meme hasn't died.

    Is anyone using it? Anywhere?

    What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?

  • by sublinear on 7/9/25, 8:36 AM

    > By using HATEOAS and referencing schema definitions (such as XSD or JSON Schema) from within your resource representations, you can enable clients to understand the structure of the data and navigate the API dynamically.

    I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.

    I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.

  • by mschaef on 7/9/25, 1:06 PM

    > The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.

    Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URL's are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.

    At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides those semantics. Moving it from a URL hierarchy to a document structure doesn't provide a huge amount of added value. (Particularly in a world where essentially all of the server API's are defined in terms of URL patterns routing to handlers. This is explicit hardcoded encouragement to think in a style in opposition to the HATEOAS philosophy.)

    I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.

    (This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic API's written in a purely HATEOAS style.)

  • by nirui on 7/10/25, 5:46 AM

    Not weird at all if people don't strictly follow a standard.

    The world of programming, just like the real world, has a lot of misguided doctrines that looked really good on paper, but not on application.

    For example:

        "_links": {
          ....
          "cancel": { "href": "/orders/123/cancel", "method": "POST" }
        }
    
    Why "POST"?

    And what POST do you send? A bare POST with no data, or with parameters in its body?

    What if you also want to GET the status of cancellation? Change the type of `method` to an array so you can `"method": ["POST", "GET"]`?

    What if you want to cancel the cancellation? Do you do `POST /orders/123/cancel/cancel HTTP/...`, or `DELETE /orders/123/cancel HTTP/...`?

    So, people adapt, making an originally very pure and "based" standard into something they can actually use. After all, all of those things are meant to be productive, rather than ideological.

  • by dingi on 7/9/25, 11:18 AM

    At some point, we built REST clients so generic they could handle nearly any use case. Honestly, building truly RESTful APIs has been easy for ages: just render HTML on the server and send it to the browser. That's 100% REST with no fuss.

    The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.

  • by alkonaut on 7/9/25, 10:31 AM

    Similarly, I call Java programs "Object Oriented programs" despite Alan Kay's protests that it isn't at all what Object Orientation was described as in early papers.

    The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?

    I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.

  • by thom on 7/9/25, 8:28 AM

    I struggle to believe that any API in history has been improved by the developer more faithfully following REST’s strictures. The closest we’ve come to actually decoupled, self describing APIs is MCP, and that required inventing actual AIs to understand them.
  • by dewey on 7/9/25, 8:15 AM

    Academically it might be correct, but shipping real features will in most cases be more important than hitting some textbook definition of correctness.
  • by bravesoul2 on 7/9/25, 8:52 AM

    Drake meme for me:

    REST = Hell No

    GQL = Hell No.

    RPC with status codes = Grin and point.

    I like to get stuff done.

    Imagine you are forced to organize your code files like REST. Folder is a noun. Functions are verbs. One per folder. Etc. Would drive you nuts.

    Why do this for API unless the API really really fits that style (rare).

    GQL is expensive to parse and hides information from proxies (200 for everything)

  • by makeitdouble on 7/9/25, 7:59 AM

    It felt easier going through the post after reading these bits near the end:

    > The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience

    > Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.

  • by bazoom42 on 7/9/25, 5:19 PM

    Just call it an HTTP API and everyone is happy. People forget REST was never intended for APIs in the first place. REST was designed for information systems navigated by humans, not programs.
  • by nchmy on 7/9/25, 11:50 AM

    The article is seemingly accurate, but isn't particularly useful as it is written in FAR too technical of a style.

    If anyone wants to learn more about all of this, https://htmx.org/essays and their free https://hypermedia.systems book are wonderful.

    You could also check out https://data-star.dev for an even better approach to this.

  • by layer8 on 7/9/25, 11:50 AM

    This doesn’t provide any good arguments for why Roy Fielding’s conception should be taken as the gospel of how things should be done. At best, it points out that what we call REST now isn’t what Roy Fielding wanted.

    Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.

  • by pradn on 7/9/25, 11:39 AM

    > REST isn’t about exposing your internal object model over HTTP — it’s about building distributed systems that behave like the web.

    I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way - with benefits at the API level that the other levels of the web stack have ("client decoupling, evolvability, dynamic interaction").

  • by coolhand2120 on 7/9/25, 6:29 PM

    I politely pointed out that this previous submission "Stop using REST for state synchronization" (https://news.ycombinator.com/item?id=43997286) was not in fact ReST at all, but just an HTTP API, and I was downvoted for it. You would think that programming is a safe place to be pedantic.

    It's all HTTP API unless you're actually doing ReST in which case you're probably doing it wrong.

    ReST and HATEOAS are great ideas until you actually stop and think about it, then you'll find that they only work as ideas in some idealized world that real HTTP clients do not exist in.

  • by jaapz on 7/9/25, 8:16 AM

    Wasn't the entire point of calling an API RESTful that it's explicitly not REST, but only kind of REST-like?

    Also, who determined these rules are the definition of RESTful?

  • by spankalee on 7/9/25, 3:39 PM

    Good.

    Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.

  • by Scubabear68 on 7/9/25, 2:57 PM

    I have always said that HATEOAS starting with “HATE” is highly descriptive of my attitude toward it.

    It is a fundamentally flawed concept that does not work in the real world. Full stop.

  • by renerick on 7/9/25, 5:03 PM

    Htmx essays have already been mentioned, so here are my thoughts on the matter. I feel like to have a productive discussion of REST and HATEOAS, we must first agree on the basics. Repeating my own comment from a couple of weeks ago: H stands for hypermedia, and hypermedia is a type of media that uses a common format for representing some server-driven state and embedding hypermedia controls, which are presented by a back-end-agnostic hypermedia client to the user for discoverability and interaction.

    As such, JSON-driven APIs can't be REST, since there is no common format for representing hypermedia controls, which means that there's no way to implement a hypermedia client which can present those controls to the user and facilitate interactions. Is there such an implementation? Yes: HTML is the hypermedia, <input>s and <button>s are the controls, and browsers are the clients. REST and HATEOAS are designed for humans, and trying to somehow combine them with machine-to-machine interaction results in awkward implementations, blurry definitions and overcomplication.

    The Richardson maturity model is a clear indication of those problems. I see it as an admission of "well, there isn't much practicality in doing proper REST for machine-to-machine comms, but that's fine, you can do only some parts of it and it still counts". I'm not saying we shouldn't use its ideas - resource-based URLs are nice, using features of HTTP is reasonable - but under the name REST it leads to constant arguments between the "dissertation" crowd and "the industry has moved on" crowd. The worst/best part is that both those crowds are totally right, and this argument will continue for as long as we use HTTP.

  • by gabesullice on 7/9/25, 2:44 PM

    > If you are building a public API for external developers you don’t control, invest in HATEOAS. If you are building a backend for a single frontend controlled by your own team, a simpler RPC-style API may be the more practical choice.

    My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.

    Externally, you must cater to somebody who hard-coded a URL into their curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them, and move on.

  • by cowsandmilk on 7/9/25, 10:11 AM

    At my FAANG company, the central framework team has taken to calling what people do in reality "HTTP bindings". https://smithy.io/2.0/spec/http-bindings.html
  • by ceving on 7/9/25, 11:45 AM

    It is not sufficient to crawl the API. The client also needs to know how to display the forms which collect the data for the links presented by the API. If you want to crawl the API you also have to crawl the whole client GUI.
  • by phamilton on 7/9/25, 3:02 PM

    I think we should focus less on API schemas and more on just copying how browsers work.

    Some examples:

    It should be far more common for HTTP clients to have well-supported and heavily used cookie jar implementations.

    We should lean on Accept headers much more, especially with multiple mime-types and/or wildcards.

    HTTP clients should have caching plugins to automatically respect caching headers.

    There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
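
    As a concrete sketch (using Python's requests purely as an example of a stock client; the endpoint is made up), a session object already gives you a browser-ish cookie jar and Accept handling almost for free:

        import requests

        session = requests.Session()  # keeps a cookie jar across requests, like a browser
        session.headers["Accept"] = "application/json, text/html;q=0.8, */*;q=0.1"

        resp = session.get("https://api.example.com/orders")  # hypothetical endpoint
        resp.raise_for_status()

        # A caching plugin could key off these headers instead of reimplementing
        # HTTP caching on top of itself:
        print(resp.headers.get("Cache-Control"), resp.headers.get("ETag"))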

  • by TekMol on 7/9/25, 8:01 AM

    You know what type of API I like best?

        /draw_point?x=7&y=20&r=255&g=0&b=0
        /get_point?x=7&y=20
        /delete_point?x=7&y=20
    
    Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by writing it directly into the url bar), the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark.

    This is also how HN does it:

        /vote?id=44507373&how=up&auth=...
  • by gsibble on 7/9/25, 6:28 PM

    I built a company that actually did implement HATEOAS in our API. It was a nightmare. So much processing time was spent on every request setting up all the URLs and actions that could be taken. And no one used it for anything anyways. Our client libraries used it, but we had full control over them anyways and, if anything, it made the libraries more complex.

    While I agree it's an interesting idea in theory, it's unnecessary in the real world and has a lot of downsides.

  • by sporkland on 7/10/25, 3:21 PM

    As someone who criticized a number of my employers' APIs for not being sufficiently RESTful, especially with regard to HATEOAS, I eventually realized the challenge is the clients. App developers and client developers mostly just want to deal with structured objects that they've built fixed-function UX around (including the top level), and they want to construct URLs on the client. It takes a special kind of developer to want to build the little mini-browsers everywhere that HATEOAS would require, and the same goes for the server side.

    I think LLMs are going to be the biggest shift in terms of actually driving more truly RESTful APIs. Though LLMs are probably equally happy to take REST-ish responses, they are able to effectively deal with arbitrary self-describing payloads.

    MCP at its core seems to be designed around the fact that you've got an initial request to get the schema and then the payload, which works great for a lot of our not-quite-REST APIs, but you could see over time just doing away with the extra ceremony, doing it all in one request, and effectively moving back in the direction of true REST.

  • by HumblyTossed on 7/9/25, 5:37 PM

    Didn't we go through all this years ago, determine that we should invent a new term - REST-like - and so put this all to bed?
  • by globular-toast on 7/10/25, 7:06 AM

    This seems to mostly boil down to including links rather than just IDs and having the client "just know" how to use those IDs.

    Django Rest Framework seems to do this by default. There seems very little reason not to include links over hardcoding URLs in clients. Imagine just being able to restructure your backend and clients just follow along. No complicated migrations etc. I suspect many people just live with crappy backends because it's too difficult to coordinate the rollout of a v2 API.

    However, this doesn't cover everything. There's still a ton of "out of band" information shared between client and server. Maybe there's a way to embed Swagger-style docs directly into an API and truly decouple server and client, but it would seem to take a lot more than just using links over IDs.

    Still I think there's nothing to lose by using links over IDs. Just do it on your next API (or use something like DRF that does it for you).
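
    A minimal sketch of the difference (the URLs and field names here are invented):

        import requests

        # ID-only style: the client has to know how to turn the ID into a URL.
        order = {"id": 123, "customer_id": 42}
        customer = requests.get(
            f"https://api.example.com/customers/{order['customer_id']}"
        ).json()

        # Link style: the client just follows whatever href the server returned,
        # so the backend can restructure its URLs without breaking anyone.
        order = {"id": 123, "_links": {"customer": {"href": "https://api.example.com/customers/42"}}}
        customer = requests.get(order["_links"]["customer"]["href"]).json()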

  • by commandlinefan on 7/9/25, 2:07 PM

    Most databases aren't relational, either, in the sense that Codd defined relational. They are, instead, useful.
  • by leourbina on 7/9/25, 8:25 AM

    This post follows the general, highly academic/dogmatic tone that I've seen when certain folks talk about REST. Most of the article talks about what _not_ to do, and has very little detail on how to actually do it.

    The idea of having the client and server decoupled via a REST API that is itself discoverable, and that allows independent deployment, seems like a great advantage.

    However, the article lacks even the simplest example of an API done the "wrong" vs the "right" way. Say I have a TODO API: how do I make it so that it uses HATEOAS (also, who's coming up with these acronyms…smh)?

    Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
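
    For what it's worth, here is roughly what a hypermedia-flavored TODO response might look like, written as a Python literal (the rel names and fields are made up, since the article gives no example):

        todo = {
            "title": "Write the design doc",
            "completed": False,
            "_links": {
                "self":     {"href": "/todos/7"},
                "complete": {"href": "/todos/7/complete", "method": "POST"},
                "delete":   {"href": "/todos/7", "method": "DELETE"},
            },
        }

        # The "wrong" way is the same payload minus _links, with the client
        # hardcoding /todos/{id}/complete from the documentation instead.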

  • by gabesullice on 7/9/25, 2:29 PM

    The thing to internalize about "true" REST is that HN (and the rest of the web) is really a RESTful web service. You visit the homepage, a hypermedia format is delivered to a generic client (your browser), and its resources (pages, sections, profiles, etc) can all be navigated to by following links.

    Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.

    Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".

    JavaScript even provides "code on-demand", as it's called in Fielding's paper.

    From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.

    Now, it's an entirely different story when it comes to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.

    Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.

    But even building a truly generic client isn't the hardest thing about building RESTful APIs—the hardest thing is that the web depends entirely on having a human-in-the-loop and your standard API integration's purpose is to eliminate having a human in the loop.

    For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).

    The folly is in thinking that there's some design pattern out there that's better than REST, without understanding that the actual problem to be solved by that elusive, perfect paradigm is how you'll be able to refactor your API when your API's clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and then hard-coded it in a curl command, instead of conscientiously and meticulously reading documentation, semantically looking up the URL at runtime, following redirects, and handling failures gracefully.

  • by cryptos on 7/11/25, 10:05 AM

    The big problem I see with the practical adoption of REST principles is that a human can easily interpret a document and pick the desired transition (e.g. follow a link), but a program cannot do that so easily. Maybe in the age of AI it becomes more realistic, but most of the time some "RESTful" back-end is used by a certain front-end application. What is needed is basically more like RPC. Maybe SOAP was closer to what we actually do and need. The specification and code generation were much better than what we have now with mediocre OpenAPI code generators.

    Maybe gRPC or something like that will fill the gap ...

  • by JaggerJo on 7/9/25, 6:09 PM

    REST almost never is worth it. It’s a nice idea, but in practice things often are more complicated.

    API quality is often not relevant to the business after it passes the “mostly works” bar.

    I’ll just use plain http or RPC when it’s not important and spend more time on things that make a difference.

  • by dekhn on 7/9/25, 3:40 PM

    I see a lot of people who read Fielding's thesis and found it interesting.

    I did not find it interesting. I found it excessively theoretical and prescriptive. It led to a lot of people arguing pedantically over things that just weren't important.

    I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.

    Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup")

  • by phendrenad2 on 7/10/25, 5:13 AM

    I think that all of the unemployed CS grads are rediscovering the "best practices" of the last 40 years in lieu of working. Well, just remember: every HATEOAS-conforming REST API, every chaos-monkey-enabled Microservice-Oriented Architecture, every app that someone spent tons of time hacking down the cyclomatic complexity score of, every meticulously UML-diagrammed four-tier architecture, has had its main engineers laid off and replaced by a crack team of junior engineers who adulterated it down to spaghetti code. In the post-AI world, features talk, architecture walks.
  • by darqis on 7/9/25, 10:42 PM

    I don't understand why no one, or barely anyone, is using GraphQL. It's the evolution of all that REST crap.

      query ($name: String!) {
        greeting(where: {name: $name}) {
          response
        }
      }
    
    or

      mutation ($input: CreatePostInput!) {
        createPost(input: $input) {
          id
          createTime
          title
          content
          tags {
            id
            slug
            name
          }
          
        }
      }
    
    and so on, instead of having to manually glue together responses and relations.

    It's literally SQL over the wire without needing to write SQL.

    The payload is JSON, the response is JSON. EZ.

  • by beders on 7/9/25, 5:33 PM

    I always urge software architects (are they still around?) and senior engineers in charge of APIs to think very carefully about the consumers of the API.

    If the only consumer is your own UI, you should use a much more integrated RPC style that helps you be fast. Forget about OpenAPI etc: Use a tool or library that makes it dead simple to provide data the UI needs.

    If you have a consumer outside your organization: a RESTish API it is.

    If your consumer is supposed to be generic and can "discover" your API, RESTful is the way to go.

    But no one writes generic ones anymore. We already have the ultimate one: the browser.

  • by kgwxd on 7/9/25, 1:04 PM

    Unless you really read and followed the paper, just call it a web api and tell your sales people to do the same. Calling it REST makes you sound like a manager that hasn't done any actual dev in 15 years.
  • by dSebastien on 7/18/25, 6:41 AM

    A few years ago I spent an unreasonable amount of time creating a RESTful API design guide for my employer. The goal was to standardize the way APIs would be created for all systems. We argued over HATEOAS, true REST with hypermedia, custom media types, status codes etc.

    We ended up with what I consider to be a solid design guide rooted in the correct use of Web standards. Not REST but RESTful. Clear and understandable, uniform, etc.

    At the end of the day though the real challenge was more to make people adhere to those conventions. Why? Because most developers don't care at all. They want to finish their "Agile" sprint on time. They don't care about architecture, correctness, enterprise-wide homogeneity etc. Beyond the lack of ONE actual standard, that's the other real major problem.

    https://github.com/NationalBankBelgium/REST-API-Design-Guide...

  • by Traubenfuchs on 7/9/25, 10:44 AM

    REST(ful) API issues can all be resolved with one addition:

    Adding actions to it!

    POST api/registration / api/signup? All of this sucks. Posting or putting on api/user? Also doesn't feel right.

    POST to api/user:signup

    Boom! Full REST for entities + actions with custom requests and responses for actions!

    How do I make a restful filter call? GET request params are not enough…

    You POST to api/user:search, boom!

    (I prefer to use the description RESTful API, instead of REST API - everyone fails to implement pure REST anyways, and it's unnecessarily limited.)

  • by temporallobe on 7/9/25, 6:09 PM

    Some of this is sensible. I especially like the idea of an interactive starting point which gives you useful links and info, but I can see how that would be difficult with more complex calls — showing examples and providing rich documentation would be difficult. Otherwise, just follow the recommendations for REST verbs (so what if they mostly map to CRUD?), and document your API well. Tools like Swagger really make this quite easy.
  • by dolmen on 7/9/25, 9:20 PM

    HATEOAS might make a comeback, as it might be useful to expose an API to AI agents that would browse a service.

    On the other hand, agents could just as well understand an OpenAPI document, as the description of each path/schema can be much more verbose than HATEOAS. There is a reason why OpenAPI-style APIs are favored: less verbosity in the payload. If the cost of agents is based on their consumption/production of tokens, verbosity matters.

  • by 0x445442 on 7/9/25, 11:49 AM

    In my experience REST is just a code word for a distributed glob of function calls which communicate via JSON. It's a development and maintenance nightmare.
  • by ApeWithCompiler on 7/9/25, 12:34 PM

    I tried to follow the approach with hypermedia and discoverable resources/actions in my hobby projects. But I "failed" at the point where this would mean additional HTTP calls from a client to "discover" a resource/its actions. Given the latency of an HTTP call, relatively speaking, this was not convincing for me.
  • by swiezy2 on 7/16/25, 7:55 PM

    > There's a decent chance listing endpoints were changed to POST to support complex filters

    why not do everything in POST?

  • by bps4484 on 7/9/25, 10:39 PM

    "Reductio Ad Roy Feldium" is the internet addage[1] that as in a hacker news discussion about a rest api grows, the probabilty someone cites roy felding's dissertation approaches 1. I'm glad this post cut right to the chase!

    [1] ok it's not an internet adage. I invented it and joke with friends about it

  • by kamranjon on 7/9/25, 3:04 PM

    I am wondering if anyone can resolve this misunderstanding of REST for me…

    If the backend provides a _links map which contains “orders” for example in the list - doesn’t the front end need to still understand what that key represents? Is there another piece I am missing that would actually decouple the front end from the backend?

  • by pharaohgeek on 7/9/25, 12:40 PM

    ElasticSearch and OpenSearch are certainly egregiously guilty of this. Their API is an absolute nightmare to work with if you don't have a supported native client. Why such a popular project doesn't have an easy-to-use OpenAPI spec document in this day and age is beyond me.
  • by jillesvangurp on 7/9/25, 9:42 AM

    If you want to produce better APIs, try consuming them. A lot of places have this clean split between backend and frontend teams. They barely talk to each other sometimes. And a pattern I've seen over and over again is that some product manager decides feature X is needed. The backend team goes to work and delivers some API for feature X and then the frontend team has to consume the API. These APIs aren't necessarily very good if the backend people don't understand how the frontend uses them.

    The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.

    You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.

    All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.

    The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
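
    As a sketch of that last approach (the URLs and field names are illustrative only): keep the bare userId in the payload and let the client resolve and cache the profile separately.

        import functools
        import requests

        @functools.lru_cache(maxsize=1024)
        def get_user(user_id: str) -> dict:
            # Profiles change rarely, so cache them client-side instead of
            # joining user data into every API response on the server.
            return requests.get(f"https://api.example.com/users/{user_id}").json()

        order = requests.get("https://api.example.com/orders/123").json()
        author = get_user(order["userId"])  # cheap after the first lookup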

  • by have-a-break on 7/9/25, 6:07 PM

    Worse, most if not all "REST" apps have security vulnerabilities because of how browser front-ends handle authentication.

    To handle authentication "properly" you have to use cookies or sessions, which inherently make apps not RESTful.

  • by karel-3d on 7/9/25, 8:42 AM

    Nooooo not this discourse again.
  • by tacitusarc on 7/9/25, 2:43 PM

    See https://stackoverflow.com/a/29520505/771665

    The term has caused so much bikeshedding and unnecessary confusion.

  • by webprofusion on 7/10/25, 3:09 AM

    Purist ideas rarely survive contact with reality, or something.

    Likewise, if the founders of the web took one look at a full-on React-based site, they would shriek in horror at what's now the de facto standard.

  • by pjmlp on 7/9/25, 1:17 PM

    Basically JSON-RPC really, and a better use of HTTP verbs, most of the time.
  • by elzbardico on 7/9/25, 11:15 AM

    And not everything in reality maps nicely to hypermedia conventions. The problem with REST is trying to shoehorn a lot of problems into a set of abstractions that were initially created for documents.
  • by l0b0 on 7/10/25, 9:45 AM

    It's mostly just semantic drift. "REST" is less of a mouthful than "JSON over HTTP". Nobody ever realised the potential of discoverability.
  • by lotyrin on 7/9/25, 4:14 PM

    HATEOAS + a document type description which includes (ideally internationalized) natural-language descriptions in addition to machine-readable ones is what MCP should have been.
  • by jasonm23 on 7/10/25, 6:29 AM

    Turn back lest you be dragged into the RESTy bikeshed
  • by deathanatos on 7/9/25, 7:52 PM

    I just spent a good portion of the day trying to figure out how GCP's allegedly "RESTful" (it's not) API names resources. If only there was a universal identifier for resources…

    But no, a service account in GCP has no less than ~4 identifiers. And the API endpoint I wanted to call needed to know which resource, so the question then is "which of the 4 identifiers do I feed it?" The right answer? None of them.

    The "right" answer is that you need to manually build a string, a concatenate a bunch of static pieces with the project ID and the object's ID to form a more IDer ID. So now we need the project ID … and projects have two of those. So the right answer is that exactly 1 of the 8 different permutations works (if we don't count the constant string literals involved in the string building).

    Just give me a URI, and then let me pass that URI, FFS.

  • by theknarf on 7/9/25, 11:22 AM

    Ironically, it feels like GraphQL is more RESTful than most REST APIs, if we want to follow Fielding's paper.
  • by pbreit on 7/9/25, 8:14 AM

    RESTful APIs are not RESTful because REST is meh. Our API includes HATEOAS links and I have never, not once, witnessed their actual use (but they do double the size of response payloads).

    It’s interesting that Stripe still even uses form-post on requests.

  • by h1fra on 7/9/25, 8:06 AM

    Who cares, honestly? I never understood this debate; nobody has ever produced a perfect RESTful API anyway
  • by pipes on 7/9/25, 6:07 PM

    I just call them http APIs. Is this too far wrong? Actually a genuine question.
  • by osigurdson on 7/9/25, 11:38 AM

    We collectively glazed over Roy Fielding's dissertation, didn't really see the point, liked the sound of the word "REST" and used it to describe whatever we wanted to do with http / json. Sorry, Roy, but you can keep HATEOAS - no one is going to take that from you.
  • by rswail on 7/9/25, 4:11 PM

    I love all the comments here saying that you can't build a proper UX/UI with a "perfect" REST API, even though browsers do it all day, every day.

    REST includes code-on-demand as part of the style; HTTP allows for that with the "Link" header, and HTML does via <script>.

  • by JackSlateur on 7/9/25, 9:16 PM

    How does HATEOAS work with parameters?

    I mean... ok, you have the bookmark URI, aka the entrypoint.

    From there, you get links to stuff. The client still needs to "know" their identifiers, but anyway.

    But the params of the routes... and I am not only speaking of their type, I am also speaking of their meaning... how would that work?

    I think it cannot, so the client code must "know" them, again via out-of-band mechanisms.

    And at this point, the whole thing is useless and we just use OpenAPI.

  • by collyw on 7/10/25, 9:26 AM

    Fine. They are not actually RESTful. But does it actually matter?
  • by liendolucas on 7/9/25, 11:19 AM

    https://htmx.org/img/memes/dbtohtml.png

    LMAO at all companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.

    I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.

  • by hosh on 7/10/25, 4:25 PM

    My biggest takeaway from Roy Fielding's dissertation wasn't how to construct a RESTful architecture or what is the one true REST, but how to understand any computer architecture -- particularly its constraints -- in order to design and implement appropriate systems. I can easily identify anti-patterns (even in implementations) because they violate the constraints, which in turn takes away from the properties of the architecture. This also quickly allows me to evaluate and understand libraries, runtimes, topologies, and so forth.

    I used to get caught up in what is REST and what is not, and that misses the point. It's similar to how Christopher Alexander's ideas about pattern languages get used now in a way that misses the point. Alexander was cited in the introductory chapter of Fielding's dissertation. These are all very big ideas with broad applicability and great depth.

    When combined with Promise Theory, this gives a dynamic view of systems.

  • by harshitaneja on 7/9/25, 9:19 AM

    I spent years fussing about getting all of my APIs to fit the definition of REST and to do HATEOAS properly. I spent way too much time trying to conform everything as an action on a resource. Now, don't get me wrong. It is quite helpful to try to model things as stateless resources with a limited set of actions on them, and to think about idempotency for specific actions in ways I don't think we did properly in the SOAP days (at least I didn't). And in many cases it led to less brittle interfaces which were easier to reason about.

    I still like REST and try to use it as much as I can when developing interfaces, but I am not beholden to it. There are many cases which are not resources or are not stateless, and sure, you can find some obtuse way to make them resources, but that at times either leads to bad abstractions that don't convey the vocabulary of the underlying system (and thus over time creates a rift in context between the interface and the underlying logic), or we expose underlying implementation details because they happen to be easier to model as resources.

  • by 3cats-in-a-coat on 7/9/25, 7:56 PM

    Indeed, and I find it funny that the debate even exists.
  • by stephenlf on 7/10/25, 1:21 AM

    > A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. In general, any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. [Failure here implies that identification is not separated from interaction.]

    What the heck does this mean? Does it mean that my API isn’t REST if it can’t interpret “http://example.com/path/to/resource” in the same way it interprets “COM<example>::path.to.resource”? Is it saying my API should support HTTP, FTP, SMB, and ODBC all the same? What am I missing?

  • by somat on 7/10/25, 4:56 AM

    As far as I know, the only actual REST implementation as Fielding envisioned it, a system where you send the entire representational state of the program with each request, is the system Fielding coined the term REST to describe: the Web.

    Has any other system done this, where you send the whole application state with each request? Project Xanadu?

    I do find it funny how Fielding basically said "hey, look at the web, isn't that a weird way to structure a program, let's talk about it" and everyone sort of suffered a collective mental brain fart and replied "oh, you mean HTTP, got it".

  • by ChrisMarshallNY on 7/9/25, 9:48 PM

    Eh. I won't write "pure" REST, because it's difficult to use, and I don't know if I have ever seen a tool that uses it as such. I know why it was designed that way, but I have never needed that.

    I tend to use REST-like methods to select mode (POST, GET, DELETE, PATCH, etc.), but the data is usually a simple set of URL arguments (or associated data). I don't really get too bent out of shape about ensuring the data is an XML/JSON/Whatever match for the model structure. I'll often use it coming out, but not going in.

  • by cryptonector on 7/9/25, 4:26 PM

    > The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.

    Eh, "a small change in a server’s URI structure" breaks links, so already you're in trouble.

    But sure, embedding [local-parts of] URIs in the contents (or headers) exchanged is indeed very useful.

  • by bertails on 7/9/25, 10:16 PM

    "REST" is our industry's most successful collective delusion: everyone knows it's wrong, everyone knows we're using it wrong, and somehow that works better than being right.
  • by k__ on 7/9/25, 5:07 PM

    Hot take: HATEOAS only works when humans are navigating.
  • by imtringued on 7/9/25, 9:21 AM

    I find it pretty shocking that this was written in 2025 without a mention of the fact that the only clients that are evolvable enough to interface with a REST API can be categorized into these three types:

    1. Browsers and "API Browsers" (think something like Swagger)

    2. Human and Artificial Intelligence (basically LLMs)

    3. Clients downloaded from the server

    You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.

    REST and HATEOAS were intentionally developed against the common use case of a static, non-evolving client, such as an Android app that isn't a browser.

    Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).

    If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.

    If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.

    In REST clients are not allowed to have any out of band information about the structure or schema of the API.

    You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.

    Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.

    Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".

    Now you might argue, hey but I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two dimensional pixel array to be displayed onto a screen, surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except, REST doesn't define or propose any mechanism for application specific media types. You must register your media type globally for all humanity at IANA or go bust.

    This might come across as a rant, but it is meant to be informative, so I'll tell you what REST and HATEOAS are good for: building micro-browsers relying on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using e.g. the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORMS document into an HTML form, or an Android form, or a form inside your MMO which happens to have a registration form built into the game itself, rather than, say, the launcher.
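
    Roughly, from memory of the HAL-FORMS drafts (so treat the exact keys as an assumption rather than gospel), such a form description looks something like this, written out as a Python literal:

        registration_form = {
            "_links": {"self": {"href": "/users/signup"}},
            "_templates": {
                "default": {
                    "title": "Sign up",
                    "method": "POST",
                    "contentType": "application/json",
                    "properties": [
                        {"name": "email",    "required": True, "prompt": "Email address"},
                        {"name": "password", "required": True, "prompt": "Password"},
                    ],
                },
            },
        }

        # An evolvable client renders this as an HTML form, an Android form, or an
        # in-game form; it never hardcodes which fields the signup endpoint takes.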

    Needless to say, this is completely useless for machine to machine communication, which is where the phrase "REST API" is most commonly (ab)used.

    Now for one final comment on this article in particular:

    >Why aren’t most APIs truly RESTful?

    >The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams.

    This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast majority of situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.

    >These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”

    This feels like a complete reversal and shows that the author of this blog post himself doesn't understand the practical implications of his own blog post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens during the runtime of the application. It's literally not allowed to generate code in REST, because it prevents your client from evolving at runtime.

    >making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.

    Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?

    >Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.

    Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine to machine communication use case. The browser use-case only works because humans already possess general intelligence.

    >It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.

    Now the author is using snark to appeal to emotions by equating the simplest and most irrelevant problem with the hardest problem, in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the 'orders' URI at runtime." Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle more than just an API where the "/user/{id}/orders" link is stored under _links. The server is allowed to give the link for "/user/{id}/orders" a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.

    >In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.

    Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.

    [0] Whose contents may only be processed in a structure oblivious way

  • by spelunker on 7/9/25, 4:47 PM

    Ah yes - nobody is doing REST correctly. My favorite form of bikeshedding.
  • by mring33621 on 7/9/25, 2:53 PM

    r/noshitsherlock

    for a lot of places, POST with JSON body is REST