A recent outage led me to investigate Postgres unique constraints more deeply. Postgres implements unique constraints by creating a unique index – an index that can only contain unique values.¹ It turns out that unique indices and concurrent transactions can interact in nasty and surprising ways. Before I get into the "why", here are the implications:

When two transactions insert the same value into a unique index, one transaction will wait for the other transaction to finish before proceeding. Moreover, you can cause deadlock using only inserts.

1. Avoid inserting duplicate keys into tables with unique constraints and relying on the unique constraint to save you. This is especially important if multiple transactions do this concurrently.
2. Avoid inserting large numbers of records during the same transaction into a table with a unique constraint, especially if you broke rule #1.
3. Avoid long transactions involving tables with unique constraints, especially if you broke rule #1.

Failure to follow these guidelines can result in your application leaking database connections and eventually hanging. Consider:

- application thread 1 in transaction 1 inserts a
- application thread 2 in transaction 2 inserts a
- transaction 2 and application thread 2 are now both blocked until transaction 1 is committed or rolled back
- application thread 1 now waits for something from application thread 2

You now have an application-level deadlock. The Postgres deadlock detector cannot save you. In most cases you will leak database connections until your application hangs. A similar behavior can occur if application thread 1 is blocked for an unrelated reason, e.g. …. This seems far-fetched, but I've seen it multiple times in practice in seemingly reasonable code.

If you were a bit luckier, you could have created deadlock within Postgres. The Postgres deadlock detector will detect the deadlock after deadlock_timeout (typically one second)² and cancel one of the transactions. It's not uncommon to be in the worst-case scenario described above until, by sheer luck, insert ordering triggers the deadlock detector and cancels the transactions.

In the case where only one transaction is writing to a unique index, the process is straightforward. Postgres simply looks up the tuple we're attempting to insert in the unique index. If it already exists, our insertion will fail because we violated a unique constraint. Here's the relevant function in the Postgres source.

If two transactions are writing to the index concurrently, the situation is more complex. For the purposes of our example we'll have transaction a and transaction b. Both transactions attempt to insert value v into the same table. Suppose transaction a has inserted v first but has not yet committed.

A Postgres index stores not only committed data, but also data written by ongoing transactions. Transaction b inserts v, which causes Postgres to check the unique index. Postgres will look for the tuple we're attempting to insert in both the committed and "dirty" (not-yet-committed) sections of the index. In this case, it will find that another in-progress transaction (transaction a) has already inserted v. Postgres handles this situation by having transaction b wait until transaction a completes.

Every transaction holds an exclusive lock on its own transaction ID while the transaction is in progress. If you want to wait for another transaction to finish, you can attempt to acquire a lock on that transaction ID, which will be granted when that transaction finishes. In our example, Postgres will determine the transaction ID of the other transaction (transaction a) and transaction b will attempt to acquire a lock on the transaction ID of transaction a. Transaction b is now blocked waiting for transaction a to finish. There are 3 possible outcomes, ordered best to worst:

1. transaction a commits, and transaction b fails with the message duplicate key value violates unique constraint "…"
2. If the two transactions are each inserting multiple rows into the table, transaction a may attempt to insert a key previously inserted by transaction b. This causes transaction a to attempt to acquire a lock on transaction b. Now b is waiting on a and a is waiting on b. Deadlock! Postgres detects this after deadlock_timeout and one of the transactions is aborted with this somewhat confusing deadlock error:

```
ERROR: deadlock detected
DETAIL: Process 81941 waits for ShareLock on transaction 924071; blocked by process 81944.
Process 81944 waits for ShareLock on transaction 924069; blocked by process 81941.
CONTEXT: while inserting index tuple (0,20) in relation "…"
```

3. If transaction a is never committed, transaction b will remain stalled. Infinite wait, connection pileup, or other bad things™.
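The application-level deadlock scenario can be simulated without a database at all. This is a minimal sketch, not database code: a `threading.Lock` stands in for transaction 1's uncommitted insert of `a`, and a `queue.Queue` stands in for whatever application thread 1 is waiting to receive from thread 2 (both names are illustrative).

```python
import threading
import queue

row_lock = threading.Lock()          # models the lock on the inserted key `a`
handoff: "queue.Queue[str]" = queue.Queue()

def app_thread_1() -> None:
    with row_lock:                   # "transaction 1" inserts a...
        handoff.get()                # ...then waits on thread 2: stuck forever

def app_thread_2() -> None:
    with row_lock:                   # "transaction 2" inserts a: blocks on thread 1
        handoff.put("done")          # never reached

t1 = threading.Thread(target=app_thread_1, daemon=True)
t2 = threading.Thread(target=app_thread_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=1.0); t2.join(timeout=1.0)

# Both threads are still alive after the timeout: an application-level
# deadlock that Postgres's deadlock detector never sees, because one of
# the two waits happens outside the database.
print(t1.is_alive() and t2.is_alive())  # True
```

The key property, as the post notes, is that the cycle spans the application and the database, so neither side alone can detect it.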
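The dirty-index check described above can be sketched as a toy model (this is illustrative Python, not Postgres internals): the unique index records which transaction wrote each key and whether that write has committed, and an insert that collides with an uncommitted entry must wait on the owning transaction's ID.

```python
class UniqueIndex:
    """Toy model of a unique index that also sees uncommitted ('dirty') entries."""

    def __init__(self) -> None:
        # key -> (owning transaction id, committed?)
        self.entries: dict[str, tuple[int, bool]] = {}

    def insert(self, key: str, txid: int):
        """Return ('ok', txid), ('error', msg) on a committed duplicate,
        or ('wait', other_txid) when the duplicate is still in flight."""
        if key in self.entries:
            other_txid, committed = self.entries[key]
            if committed:
                return ("error", "duplicate key value violates unique constraint")
            # Another in-progress transaction already inserted this key:
            # block until that transaction's ID lock is released.
            return ("wait", other_txid)
        self.entries[key] = (txid, False)
        return ("ok", txid)

index = UniqueIndex()
print(index.insert("v", 924069))  # transaction a inserts v first
print(index.insert("v", 924071))  # transaction b must wait on a's txid
```

Running this prints `('ok', 924069)` then `('wait', 924069)`: transaction b cannot yet fail with a duplicate-key error, because transaction a might still roll back.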
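Outcome 2 above is a cycle in a wait-for graph: b waits on a, and a waits on b. The following sketch is simplified well beyond Postgres's real deadlock-detection algorithm, but it shows the core idea the detector relies on: follow waiter-to-holder edges and report deadlock if you ever revisit a transaction.

```python
def has_deadlock(waits_for: dict[int, int]) -> bool:
    """True if following `waiter -> holder` edges revisits a transaction ID."""
    for start in waits_for:
        seen: set[int] = set()
        node = start
        while node in waits_for:
            if node in seen:
                return True          # cycle: a genuine deadlock
            seen.add(node)
            node = waits_for[node]
    return False

# transaction b (924071) waiting on a (924069) is ordinary blocking...
print(has_deadlock({924071: 924069}))                  # False
# ...until a also waits on b: the crossed multi-row inserts of outcome 2.
print(has_deadlock({924071: 924069, 924069: 924071}))  # True
```

This also shows why outcome 3 is the worst case: a transaction that simply never commits creates no cycle, so there is nothing for a detector of this shape to find.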