Different ways to write concurrency safe database operations in spring boot


Recently I came across a very interesting resource on Twitter that talks about "lost update anomalies" in databases. It was very brief and offered no explanation, so this is my humble attempt to explain it as intuitively as possible 🙂

A lost update anomaly happens when there are multiple simultaneous transactions. Updates from one transaction may get lost because other transactions overwrite them with their own results.

Let's examine it through an example. Consider the method below, which performs the withdrawal.

public void withdraw(Long accountId, double amount) {
    Account account = accountRepository.findById(accountId)
            .orElseThrow(() -> new IllegalStateException("account does not exist: " + accountId));

    double newBalance = account.getBalance() - amount;
    if (newBalance < 0) {
        throw new IllegalStateException("there's not enough balance");
    }
    account.setBalance(newBalance);
    accountRepository.save(account);
}
This works as expected as long as there is only a single transaction at any given point in time. But what happens when there are multiple simultaneous transactions?

In such cases, the above code won't work as expected. The balance read by Thread 1 can go stale before it writes newBalance back: Thread 2 may read the same starting balance and overwrite Thread 1's result, corrupting the data. Annotating the method with @Transactional does not change this behaviour; it merely defines the transactional boundary of your application.
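The interleaving can be made concrete with a small plain-Java sketch. The Account class here is a stand-in for the entity, and the two "threads" are replayed deterministically rather than actually run concurrently:

```java
// A deterministic replay of the problematic interleaving, in plain Java
// (a sketch; Account here is a stand-in, not the JPA entity).
public class LostUpdateDemo {

    static class Account {
        private double balance;
        Account(double balance) { this.balance = balance; }
        double getBalance() { return balance; }
        void setBalance(double balance) { this.balance = balance; }
    }

    // Replays the interleaving: both "transactions" read the same starting
    // balance before either writes its result back.
    public static double simulate() {
        Account account = new Account(100.0);
        double read1 = account.getBalance(); // txn 1 reads 100
        double read2 = account.getBalance(); // txn 2 also reads 100
        account.setBalance(read1 - 30);      // txn 1 writes 70
        account.setBalance(read2 - 20);      // txn 2 writes 80 -> txn 1's withdrawal is lost
        return account.getBalance();
    }

    public static void main(String[] args) {
        // Serial execution would leave 50; this interleaving leaves 80.
        System.out.println(simulate());
    }
}
```

Run serially, the two withdrawals would leave a balance of 50; with this interleaving the second write clobbers the first, leaving 80.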

How do we prevent the lost update anomaly here?

Note that Spring follows the isolation level of the underlying datastore by default. The default isolation level of Postgres is READ_COMMITTED: a query sees only data committed before it began, and never sees uncommitted data or changes committed by concurrent transactions during its execution.

We can solve the problem by making it an atomic update operation!
How? By using a native update query that performs the update directly in the DB, rather than the regular ORM style of "select, modify and save".

Here is the withdraw method, changed from the previous version:

 @Transactional
 public void withdraw(Long accountId, double amount) {
     Double currentBalance = accountRepository.getBalance(accountId);
     if (currentBalance < amount) {
         throw new IllegalStateException("there's not enough balance");
     }
     accountRepository.update(accountId, amount);
 }
So, we used a custom update method instead of the usual save method. How exactly does this update method look?

Here is the update method added in the repository class:

 @Modifying(clearAutomatically = true, flushAutomatically = true)
 @Transactional
 @Query(nativeQuery = true,
        value = """
                update account
                set balance = (balance - :amount)
                where id = :accountId
                """)
 public int update(Long accountId, Double amount);
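For completeness, the getBalance method used in the service above is not shown in the original, so treat the exact query below as a sketch of what it is assumed to look like:

```java
// Hypothetical companion query for the balance check in withdraw
@Query(nativeQuery = true, value = "select balance from account where id = :accountId")
Double getBalance(Long accountId);
```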

Note that we have the @Transactional annotation on both methods. But they belong to two different beans: one in the service and another in the repository class. So the update method follows its own transaction definition.

@Modifying makes Spring execute the annotated query as an UPDATE instead of a SELECT. As the EntityManager might contain outdated entities after the modifying query executes, Spring does not clear it automatically; hence we explicitly set clearAutomatically = true. We also want any managed entities in the persistence context flushed before the modifying query executes; hence the usage of flushAutomatically = true.

More ways to achieve concurrency safety

  1. Use a pessimistic lock for any updates

    Use the below annotation with your existing transactional annotation:
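The annotation itself did not survive in the original text; with Spring Data JPA the usual choice is @Lock with PESSIMISTIC_WRITE, which makes Postgres issue a SELECT ... FOR UPDATE. A sketch under that assumption (the method name findByIdForUpdate is illustrative):

```java
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // Postgres issues "select ... for update": concurrent writers block
    // until the surrounding transaction commits or rolls back.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select a from Account a where a.id = :accountId")
    Account findByIdForUpdate(Long accountId);
}
```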

  2. Use datastore-specific advisory locks

    Use the pg_try_advisory_xact_lock advisory lock for Postgres, along with a timeout and a key (usually the DB primary key).

    Use it with a retry template, so that it keeps retrying, up to the mentioned timeout, to acquire the lock on the primary key.


     public void tryWithLock(String key, Duration timeout, Runnable operation) {
         lock(key, timeout);
         operation.run(); // your DB updates run here
     }

     private void lock(final String key, Duration timeout) {
         // tries to acquire the lock until the timeout expires
         retryTemplate.execute(retryContext -> {
             boolean acquired = jdbcTemplate.queryForObject(
                     "select pg_try_advisory_xact_lock(pg_catalog.hashtextextended(?, 0))",
                     Boolean.class, key);
             if (!acquired) {
                 throw new AdvisoryLockNotAcquiredException("Advisory lock not acquired for key '" + key + "'");
             }
             return null;
         });
     }

    You may directly use the advisory lock inside a JPA query if that makes the approach simpler.


      @Query(value = """
              select c
              from Account c
              where c.id = :accountId
              and pg_try_advisory_xact_lock(
                  pg_catalog.hashtextextended('account', c.id)
              ) is true
              """)
      public Account findByIdWithPessimisticAdvisoryLocking(Long accountId);
  3. Use an optimistic lock with a version number in the POJO class

    Add an attribute annotated with @Version to your POJO class,
    and then use your regular Spring Data JPA queries to fetch and update data.
    Spring Data JPA will automatically check the version before writing updates to the DB. If a conflicting write has happened in the meantime, the transaction aborts and the client can re-attempt it with the new version. This works well for high-volume systems where conflicts are rare.
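The version check that @Version performs on write can be sketched in plain Java as a compare-and-set against the version column (a simulation of "UPDATE ... WHERE id = ? AND version = ?", not Spring code):

```java
// Plain-Java sketch of optimistic locking with a version number.
public class OptimisticLockDemo {

    public static class Row {
        public double balance;
        public long version;
        public Row(double balance, long version) { this.balance = balance; this.version = version; }
    }

    // Succeeds only if the version read earlier is still the current one.
    public static boolean tryUpdate(Row row, double newBalance, long expectedVersion) {
        if (row.version != expectedVersion) {
            return false; // conflicting write happened -> optimistic lock failure
        }
        row.balance = newBalance;
        row.version++; // bump version, invalidating other in-flight readers
        return true;
    }

    public static void main(String[] args) {
        Row row = new Row(100.0, 0);
        long v1 = row.version;                      // txn 1 reads version 0
        long v2 = row.version;                      // txn 2 reads version 0
        boolean first = tryUpdate(row, 70.0, v1);   // txn 1 commits
        boolean second = tryUpdate(row, 80.0, v2);  // txn 2 fails and must retry
        System.out.println(first + " " + second + " " + row.balance);
    }
}
```

Unlike the lost-update scenario earlier, the second writer is rejected instead of silently overwriting the first.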

  4. Use pessimistic NO_WAIT locking


     @Lock(LockModeType.PESSIMISTIC_WRITE)
     @QueryHints(@QueryHint(name = "javax.persistence.lock.timeout", value = LockOptions.NO_WAIT + ""))
     @Query("select c from Account c where c.id = :accountId")
     public Account findByIdWithPessimisticNoWaitLocking(Long accountId);

    In this case, threads are not blocked indefinitely waiting for a writer to release the lock. Instead, because javax.persistence.lock.timeout is set to NO_WAIT, lock acquisition fails immediately with an exception. We can handle this exception and retry the transaction if required.
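The retry part can be sketched framework-free in plain Java. Names like LockFailure and withRetry are illustrative, not Spring or Hibernate API:

```java
import java.util.function.Supplier;

// Sketch of retrying an operation that fails fast when the lock is busy.
public class NoWaitRetry {

    public static class LockFailure extends RuntimeException {}

    // Re-invokes the operation up to maxAttempts times on lock failure,
    // rethrowing the failure once the attempts are exhausted.
    public static <T> T withRetry(Supplier<T> op, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.get();
            } catch (LockFailure e) {
                if (attempt >= maxAttempts) throw e;
            }
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated operation: lock busy twice, succeeds on the third attempt.
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new LockFailure();
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In real code the operation body would call the repository method and translate the persistence exception (e.g. a lock timeout) into the retryable failure.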