Is there an existing issue for the same bug?
Branch Name
3.0-dev
Commit ID
9016584
Other Environment Information
- Hardware parameters: cn.s8c64g
- OS type: Linux
- Others:
- CN: 33366666-3331-3934-3564-643931656131
- tenant/account: ws_bf2d347f
- transaction mode: Pessimistic
Actual Behavior
Large `INSERT INTO ... SELECT ...` statements against `jst_receipts_tables.purchase_order_detail` can hang for a long time when the target table has a real primary key. Profiles show the statement entering `Compile.prePipelineInitializer()` and blocking in `lockop.LockTable()` / `lockservice.waiter.wait()` before the pipeline starts. Once the table lock is acquired, the statement can hold it for the whole source scan and block later writers on the same table.
Expected Behavior
For pessimistic `INSERT ... SELECT ...`, table-lock semantics should be preserved, but the table lock should not be acquired in `prePipelineInitializer()` before the source side starts running. The lock hold window should be minimized to reduce long blocking on the target table.
Steps to Reproduce
- Create a target table with a real primary key.
- In a pessimistic transaction, run a large `INSERT INTO purchase_order_detail ... SELECT ...` that scans a large source dataset.
- Observe that the statement can hang before pipeline execution, with profiles showing `lockservice.waiter.wait -> lockop.LockTable -> Compile.lockTable -> Compile.prePipelineInitializer`.
- Remove the primary key from the target table and rerun the same statement. The statement finishes much faster and no longer hits the same table-lock path.
Additional information
- The locked table id observed in production was 5396701, which maps to `jst_receipts_tables.purchase_order_detail`.
- Root cause analysis showed the lock target was generated from the real PK and escalated to a table lock, then acquired too early in the execution lifecycle.
- The fix keeps table-lock semantics for pessimistic INSERTs, but defers pre-pipeline table locking for `Query_INSERT` so the lock is acquired by the pipeline `LockOp` instead of `prePipelineInitializer()`.