from Hacker News

Distributed Locks with Redis (2014)

by thenewwazoo on 8/19/24, 4:34 PM with 37 comments

  • by zinodaur on 8/22/24, 12:48 AM

    Martin Kleppmann has some interesting thoughts on Redlock:

    > I think the Redlock algorithm is a poor choice because it is “neither fish nor fowl”: it is unnecessarily heavyweight and expensive for efficiency-optimization locks, but it is not sufficiently safe for situations in which correctness depends on the lock.

    https://martin.kleppmann.com/2016/02/08/how-to-do-distribute...

  • by eurleif on 8/22/24, 5:02 AM

    I helped! :) (A little.)

    http://antirez.com/news/77

    > The Hacker News user eurleif noticed how it is possible to reacquire the lock as a strategy if the client notices it is taking too much time in order to complete the operation. This can be done by just extending an existing lock, sending a script that extends the expire if the value stored at the key is the expected one. If there are no new partitions, and we try to extend the lock enough in advance so that the keys will not expire, there is the guarantee that the lock will be extended.
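
    For reference, a minimal sketch of that extension pattern using redis-py; the key name, token scheme, and TTLs are illustrative assumptions, not taken from the post:

        import uuid
        import redis

        r = redis.Redis()

        # Acquire: the value is a random token proving ownership of the lock.
        token = uuid.uuid4().hex
        acquired = r.set("my-lock", token, nx=True, px=10000)

        # Extend the TTL only if the value stored at the key is still our token.
        EXTEND_LOCK = """
        if redis.call("GET", KEYS[1]) == ARGV[1] then
            return redis.call("PEXPIRE", KEYS[1], ARGV[2])
        else
            return 0
        end
        """
        extend = r.register_script(EXTEND_LOCK)

        if acquired:
            # Returns 1 on success, 0 if the lock expired or changed hands.
            extend(keys=["my-lock"], args=[token, 10000])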

  • by alexey-salmin on 8/22/24, 1:47 AM

    Something I don't enjoy about remote/distributed locks is that, unlike distributed transactions, they're usually unable to provide any strict guarantees about the things they protect.

    E.g. if your algorithm is:

    1) Acquire the distributed lock

    2) Do the thing

    3) Release the lock

    And if the node goes dark for a while between steps 1 and 2 (e.g. under 100% CPU load), by the time it reaches step 2 the lock may have already expired and another node may be holding it, resulting in a race. Adding steps like "1.1) double/triple-check the lock is still held" obviously doesn't help, because the node can go dark right after the check and resume operation at step 2. The probability of this is not too high, but still: no guarantees. Furthermore, at a certain scale you actually do start seeing rogue nodes that were deemed dead hours ago suddenly coming back to life and doing unpleasant things.
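
    As a sketch of the pattern being criticized (redis-py; the key names and the do_the_thing stub are illustrative), no amount of re-checking can close the gap marked below:

        import uuid
        import redis

        r = redis.Redis()
        token = uuid.uuid4().hex

        def do_the_thing():
            ...  # the operation the lock is meant to protect

        if r.set("job-lock", token, nx=True, px=5000):   # 1) acquire the lock
            assert r.get("job-lock") == token.encode()   # 1.1) re-check: still ours
            # <-- a pause here (GC, 100% CPU) that outlives the TTL lets another
            #     node acquire the lock, and both nodes run step 2 concurrently.
            do_the_thing()                               # 2) do the thing
            if r.get("job-lock") == token.encode():      # 3) release only if ours
                r.delete("job-lock")                     # (itself racy without Lua)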

    The rule of thumb usually is "keep locks within the same transaction space as the thing they protect", and often you don't even need locks in that case: transactions alone can be enough. If you're trying to protect something that is inherently un-transactional then, well, good luck, because these efforts are always probabilistic in nature.
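
    A minimal illustration of that rule of thumb, assuming psycopg2 and a hypothetical accounts table: the row lock lives in the same Postgres transaction as the data it protects, so there is no TTL to outlive and no separate lock service to disagree with.

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        with conn:  # the transaction itself delimits the critical section
            with conn.cursor() as cur:
                # Row lock held until commit, released atomically with it.
                cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                            (42,))
                (balance,) = cur.fetchone()
                cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                            (balance - 10, 42))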

    A good use case for a remote lock is when it's not actually used to guarantee consistency or avoid races, but merely to prevent duplicate computation for cost/performance reasons. For all other cases I outright recommend avoiding them.

  • by jetru on 8/22/24, 1:30 AM

    Famously badly broken for anything mission-critical.

  • by Bogdanp on 8/22/24, 5:50 AM

    To counter some of the hate in this thread: I have used this to great success as an "opportunistic" locking mechanism to, for example, reduce load on a Postgres database. The winner of the race to acquire the lock would run the (expensive) query then cache the result. On lock release, the nodes waiting on the lock would then try to read the cache before trying to acquire the lock again.
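
    A rough sketch of that pattern, assuming redis-py and illustrative key names (not the actual code being described):

        import time
        import uuid
        import redis

        r = redis.Redis()

        def run_expensive_query():
            return b"expensive result"  # placeholder for the costly Postgres query

        def get_report():
            while True:
                cached = r.get("report:cache")
                if cached is not None:
                    return cached                 # fast path: already computed
                token = uuid.uuid4().hex
                if r.set("report:lock", token, nx=True, px=30000):
                    try:
                        result = run_expensive_query()
                        r.set("report:cache", result, ex=300)
                        return result
                    finally:
                        if r.get("report:lock") == token.encode():
                            r.delete("report:lock")
                time.sleep(0.1)                   # lost the race; poll the cache

    Polling stands in for "waiting on the lock" here; the original could equally use pub/sub or a blocking primitive.
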
  • by awinter-py on 8/22/24, 3:23 AM

    redis is the easiest-to-host lock server, and that's worth the risk in some applications (depending on the consequences of errors, obv)

    inspiring + slightly terrifying that rather than a single server-side implementation, every client is responsible for implementing the algorithm itself

    if postgres provided a fast kv cache and a lock primitive it would own

  • by AtlasBarfed on 8/22/24, 12:50 AM

    Where's the Jepsen test suite?

    Without it, this is alphaware at best.

  • by anonzzzies on 8/22/24, 2:57 AM

    I have seen (ad hoc) implementations go quite badly many times. I encounter them quite a bit as a distributed replacement for some kind of db transaction where the db is something like RDS; someone thought it would be smart to write 'things at scale' (they read about it on reddit etc.) when a db transaction would have been the correct solution, and they didn't need the scale anyway (or underestimated the capabilities of current mysql/postgres).