15 changes: 15 additions & 0 deletions pkg/common/bitmap/bitmap.go
@@ -168,6 +168,15 @@ func (n *Bitmap) Len() int64 {
return n.len
}

// RecalculateCount recounts the number of set bits from the data array.
// This is used to fix count after concurrent bitmap corruption.
func (n *Bitmap) RecalculateCount() {
n.count = 0
for _, w := range n.data {
n.count += int64(bits.OnesCount64(w))
🟡 RecalculateCount() treats the symptom, not the cause

This function exists to fix count/data inconsistency after "concurrent bitmap corruption" (per the comment). But if concurrent access can corrupt the count, it can also corrupt the data bits themselves. Recalculating count from (possibly corrupted) data only makes the count consistent with whatever garbage is in data.

The real fix should prevent the concurrent corruption in the first place. Consider this a temporary band-aid and track the root cause separately.
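
One possible shape for that real fix, sketched minimally. The wrapper type, its name, and where it would live are hypothetical, not part of this PR (assumes `sync` and the bitmap package are imported):

```go
// GuardedBitmap serializes mutations so count and data can never
// diverge, instead of repairing count after the fact.
type GuardedBitmap struct {
	mu sync.RWMutex
	bm bitmap.Bitmap
}

func (g *GuardedBitmap) Add(row uint64) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.bm.Add(row) // count and data are updated under one lock
}

func (g *GuardedBitmap) Contains(row uint64) bool {
	g.mu.RLock()
	defer g.mu.RUnlock()
	return g.bm.Contains(row)
}
```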

}
}

// Size return number of bytes in n.data
// XXX WTF Note that this size is not the same as InitWithSize.
func (n *Bitmap) Size() int {
@@ -349,7 +358,13 @@ func (n *Bitmap) TryExpandWithSize(size int) {
return
✅ TryExpandWithSize zeroing — correct fix

This is a legitimate bug fix. When len(n.data) < newCap <= cap(n.data), the n.data = n.data[:newCap] re-slice exposes backing-array slots that may contain stale data from a previous use. Zeroing them is correct.

Note: the newCap > cap(n.data) branch (line 348) allocates via make() which zero-initializes in Go, so it doesn't need this treatment.
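
For reference, a standalone program demonstrating the bug class this hunk fixes: re-slicing within capacity re-exposes whatever the backing array previously held.

```go
package main

import "fmt"

func main() {
	s := make([]uint64, 4, 8) // len=4, cap=8, all zero
	s = append(s, 0xDEADBEEF) // len=5; slot 4 now holds data
	s = s[:4]                 // shrink: slot 4 is beyond len but not erased
	s = s[:6]                 // grow within cap: slots 4..5 re-exposed, not zeroed
	fmt.Printf("%#x %#x\n", s[4], s[5]) // 0xdeadbeef 0x0: stale data survives
}
```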

}
if len(n.data) < newCap {
oldLen := len(n.data)
n.data = n.data[:newCap]
// Zero out newly exposed slots to avoid reading stale data
// left over from a previous use of the same backing array.
for i := oldLen; i < newCap; i++ {
n.data[i] = 0
}
}
}

9 changes: 7 additions & 2 deletions pkg/container/nulls/nulls.go
@@ -135,7 +135,7 @@ func TryExpand(nsp *Nulls, size int) {

// Contains returns true if the integer is contained in the Nulls
func (nsp *Nulls) Contains(row uint64) bool {
🔴 Removing EmptyByFlag() short-circuit masks the root cause

EmptyByFlag() checks n.count == 0 — a fast O(1) short-circuit on one of the hottest paths in the engine. Removing it means you've discovered that count and data are inconsistent (count=0 but bits are set). The correct fix is to find what's corrupting the count/data invariant, not to stop trusting count.

Also note the contradiction with other changes in this PR: the `EmptyByFlag()` → `IsEmpty()` replacements in vector.go still rely on `count` being accurate (`IsEmpty()` returns `n.count == 0`). If count is untrustworthy here, it's untrustworthy there too.

This will cause a measurable performance regression on every Contains() call across the entire engine.
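
If the goal is to find the corrupter rather than stop trusting `count`, a debug-only assertion is one option. In this sketch the helper name is invented, and it would have to live inside the bitmap package since it touches unexported fields (imports `fmt` and `math/bits`):

```go
// assertCountConsistent panics when the cached count diverges from the
// actual popcount of data; call it at suspect sites in debug builds.
func (n *Bitmap) assertCountConsistent() {
	var actual int64
	for _, w := range n.data {
		actual += int64(bits.OnesCount64(w))
	}
	if actual != n.count {
		panic(fmt.Sprintf("bitmap invariant broken: count=%d actual=%d", n.count, actual))
	}
}
```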

return nsp != nil && !nsp.np.EmptyByFlag() && nsp.np.Contains(row)
return nsp != nil && nsp.np.Contains(row)
Action required

1. Bitmap contains race panic 🐞 Bug ☼ Reliability

Nulls.Contains now always calls bitmap.Bitmap.Contains, and some union paths rely on IsEmpty()
instead of EmptyByFlag(), increasing read access to bitmaps during concurrent expansion. Since
Bitmap.TryExpandWithSize sets len before resizing data, a concurrent Contains() can index
past data and panic.
Agent Prompt
### Issue description
`Bitmap.TryExpandWithSize` can temporarily expose `len` that is larger than `data` length. With the PR’s increased reliance on `Contains()` in concurrent contexts, this can crash with an index-out-of-range panic.

### Issue Context
This PR is explicitly addressing concurrent bitmap corruption; hardening the core bitmap read path avoids turning “wrong null detection” into process crashes.

### Fix Focus Areas
- Add a defensive bound check in `Bitmap.Contains` (and any other direct `n.data[idx]` reads) to return false if `idx >= uint64(len(n.data))` (see the sketch after the file list below).
- Optionally reorder `TryExpandWithSize` to grow `data` before publishing `len`.

- pkg/common/bitmap/bitmap.go[231-238]
- pkg/common/bitmap/bitmap.go[348-363]
- pkg/container/nulls/nulls.go[136-139]
- pkg/container/vector/vector.go[1340-1364]
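
A minimal sketch of the suggested bound check; the bit-indexing details assume the usual 64-bits-per-word layout and are not copied from the repository:

```go
func (n *Bitmap) Contains(row uint64) bool {
	idx := row >> 6 // word index: 64 bits per word
	if idx >= uint64(len(n.data)) {
		// len may be published before data is grown by a concurrent
		// TryExpandWithSize; treat out-of-range as "not set".
		return false
	}
	return n.data[idx]&(1<<(row&63)) != 0
}
```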


}

func Contains(nsp *Nulls, row uint64) bool {
@@ -226,8 +226,13 @@ func Range(nsp *Nulls, start, end, bias uint64, b *Nulls) {
}

b.np.InitWithSize(int64(end + 1 - bias))

// Take a snapshot of the source bitmap to prevent reading inconsistent
// state when the source is being concurrently modified by a parallel
// Prepare call on the same operator chain.
snap := nsp.Clone()

🔴 Clone() on every Range() call — severe hot-path performance regression

Range() is called frequently during null propagation. Cloning the entire bitmap on every call is very expensive. If the source bitmap is being concurrently modified by a parallel Prepare call, the correct fix is to add synchronization at the caller level (or ensure the operator chain is not shared), not to defensively clone on every Range() invocation.

This also only provides a snapshot — if the underlying concurrency bug remains, the snapshot may still be taken from an inconsistent state (torn read).
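
As an illustration of caller-level synchronization, here is a sketch in which every name is hypothetical; the right owner for the lock depends on how the operator chain is actually shared:

```go
// sharedNulls pairs the bitmap with the lock that guards it; writers
// (e.g. a parallel Prepare) take mu.Lock(), readers take mu.RLock().
type sharedNulls struct {
	mu  sync.RWMutex
	nsp *nulls.Nulls
}

func (s *sharedNulls) rangeInto(start, end, bias uint64, b *nulls.Nulls) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	nulls.Range(s.nsp, start, end, bias, b) // no per-call Clone needed
}
```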

for ; start < end; start++ {
if nsp.np.Contains(start) {
if snap.np.Contains(start) {
b.np.Add(start - bias)
}
}

Action required

2. Range skips corrupted nulls 🐞 Bug ≡ Correctness

nulls.Range still returns early based on EmptyByFlag() (which is count==0), so if count is
corrupted to 0 while data has set bits, Range will drop NULLs entirely. This can make
Vector.Window/CloneWindowTo silently lose NULL markers and produce incorrect results.
Agent Prompt
### Issue description
`nulls.Range` still gates on `EmptyByFlag()` which depends on `count`. In the exact corruption scenario this PR is targeting (stale `count`), `Range` can incorrectly return early and drop NULLs in windowed/sliced vectors.

### Issue Context
The PR adds a snapshot clone inside `Range`, but the early-return happens before the snapshot and can mask real nulls.

### Fix Focus Areas
- Avoid using `EmptyByFlag()` as the early-exit condition.
- Consider: clone first, then call `snap.np.RecalculateCount()` and check `snap.np.IsEmpty()` before iterating (sketched after the file list below).

- pkg/container/nulls/nulls.go[223-239]
- pkg/common/bitmap/bitmap.go[171-178]
- pkg/common/bitmap/bitmap.go[193-197]
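
Putting those two points together, the reordered function could look roughly like this. The sketch is assembled from names in this PR's diff (`Clone`, `RecalculateCount`, `IsEmpty`); any logic of the original function outside the shown hunks is assumed:

```go
func Range(nsp *Nulls, start, end, bias uint64, b *Nulls) {
	if nsp == nil {
		return
	}
	snap := nsp.Clone() // snapshot before any emptiness decision
	snap.np.RecalculateCount()
	if snap.np.IsEmpty() { // now trustworthy: count matches snap's data
		return
	}
	b.np.InitWithSize(int64(end + 1 - bias))
	for ; start < end; start++ {
		if snap.np.Contains(start) {
			b.np.Add(start - bias)
		}
	}
}
```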


52 changes: 30 additions & 22 deletions pkg/container/vector/vector.go
@@ -1329,6 +1329,9 @@ func (v *Vector) Copy(w *Vector, vi, wi int64, mp *mpool.MPool) error {
if w.GetNulls().Contains(uint64(wi)) {
v.GetNulls().Set(uint64(vi))
} else {
// Ensure bitmap len covers vi so that Contains() works correctly
// for all rows. TryExpand is needed because Unset doesn't expand.
nulls.TryExpand(v.GetNulls(), int(vi)+1)
v.GetNulls().Unset(uint64(vi))
}
return nil
@@ -1343,7 +1346,7 @@ func GetUnionAllFunction(typ types.Type, mp *mpool.MPool) func(v, w *Vector) err
u64Length := uint64(moreLength)

moreNp := more.GetBitmap()
if moreNp == nil || moreNp.EmptyByFlag() || moreLength == 0 {
if moreNp == nil || moreNp.IsEmpty() || moreLength == 0 {
return
}

@@ -1355,6 +1358,9 @@ func GetUnionAllFunction(typ types.Type, mp *mpool.MPool) func(v, w *Vector) err
if moreNp.Contains(0) {
dst.Set(u64offset)
}
// Ensure bitmap len covers the full range so Contains() works
// correctly for all rows, even trailing non-null rows.
nulls.TryExpand(dst, oldLength+moreLength)
}

switch typ.Oid {
@@ -2085,30 +2091,25 @@ func GetUnionAllFunction(typ types.Type, mp *mpool.MPool) func(v, w *Vector) err
var err error
vs := toSliceOfLengthNoTypeCheck[types.Varlena](v, v.length+w.length)

bm := w.nsp.GetBitmap()
if bm != nil && !bm.EmptyByFlag() {
for i := range ws {
if w.gsp.Contains(uint64(i)) {
nulls.Add(&v.gsp, uint64(v.length))
}
if bm.Contains(uint64(i)) {
nulls.Add(&v.nsp, uint64(v.length))
} else {
err = BuildVarlenaFromVarlena(v, &vs[v.length], &ws[i], &w.area, mp)
if err != nil {
return err
}
}
v.length++
// Always use null-aware path to prevent losing null information
// when bitmap state is inconsistent due to concurrent access.
for i := range ws {
if w.gsp.Contains(uint64(i)) {
nulls.Add(&v.gsp, uint64(v.length))
}
} else {
for i := range ws {
if w.nsp.Contains(uint64(i)) {
nulls.Add(&v.nsp, uint64(v.length))
} else {
err = BuildVarlenaFromVarlena(v, &vs[v.length], &ws[i], &w.area, mp)
if err != nil {
return err
}
v.length++
}
v.length++
}
// Ensure bitmap len covers all rows so Contains() works correctly
if v.nsp.Count() > 0 {
nulls.TryExpand(&v.nsp, v.length)
}

🟡 Removing Varlena fast path — performance regression

The old code had a fast path for vectors with no nulls (bm == nil || bm.EmptyByFlag()) that skipped per-row null checks. This is now removed in favor of always checking w.nsp.Contains(uint64(i)) for every row.

For large batches with no nulls (the common case), this adds an unnecessary Contains() call per row. If the concern is that EmptyByFlag() is unreliable due to count corruption, that's the same root cause that should be fixed at the bitmap level.
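
If `count` can be made trustworthy again, the fast path could be restored at batch granularity. A rough in-context fragment using this PR's own `RecalculateCount()`; whether one recount per batch is acceptable here is an open question, and the `gsp` handling is elided:

```go
bm := w.nsp.GetBitmap()
if bm != nil {
	bm.RecalculateCount() // O(words) once per batch, not per row
}
if bm == nil || bm.IsEmpty() {
	// no nulls in w: copy every row without per-row null checks
	for i := range ws {
		if err = BuildVarlenaFromVarlena(v, &vs[v.length], &ws[i], &w.area, mp); err != nil {
			return err
		}
		v.length++
	}
} else {
	// null-aware loop as added by this PR
}
```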

return nil
}
@@ -2647,7 +2648,7 @@ func unionT[T int32 | int64](v, w *Vector, sels []T, mp *mpool.MPool) error {
}
} else {
tlen := v.GetType().TypeSize()
if !w.nsp.EmptyByFlag() {
if !w.nsp.IsEmpty() {
for i, sel := range sels {
if w.gsp.Contains(uint64(sel)) {
nulls.Add(&v.gsp, uint64(oldLen+i))
@@ -2750,7 +2751,7 @@ func (v *Vector) UnionBatch(w *Vector, offset int64, cnt int, flags []uint8, mp
vCol = toSliceOfLengthNoTypeCheck[types.Varlena](v, v.length+addCnt)
ToSliceNoTypeCheck(w, &wCol)

if !w.nsp.EmptyByFlag() {
if !w.nsp.IsEmpty() {
if flags == nil {
for i := 0; i < cnt; i++ {
if w.gsp.Contains(uint64(offset) + uint64(i)) {
@@ -2785,6 +2786,8 @@ func (v *Vector) UnionBatch(w *Vector, offset int64, cnt int, flags []uint8, mp
v.length++
}
}
// Ensure bitmap len covers all rows after union
nulls.TryExpand(&v.nsp, v.length)
} else {
if flags == nil {
for i := 0; i < cnt; i++ {
@@ -2815,7 +2818,7 @@ func (v *Vector) UnionBatch(w *Vector, offset int64, cnt int, flags []uint8, mp
}
} else {
tlen := v.GetType().TypeSize()
if !w.nsp.EmptyByFlag() {
if !w.nsp.IsEmpty() {
if flags == nil {
for i := 0; i < cnt; i++ {
if w.gsp.Contains(uint64(offset) + uint64(i)) {
@@ -3425,6 +3428,11 @@ func appendOneFixed[T any](vec *Vector, val T, isNull bool, mp *mpool.MPool) err
if isNull {
nulls.Add(&vec.nsp, uint64(length))
} else {
// Ensure bitmap len covers the new position so Contains() works
// correctly even when the last appended rows are non-null.
if vec.nsp.Count() > 0 {
nulls.TryExpand(&vec.nsp, vec.length)
}

🟡 TryExpand on every non-null appendOneFixed — hot path cost

```go
if vec.nsp.Count() > 0 {
    nulls.TryExpand(&vec.nsp, vec.length)
}
```

This runs on every non-null appendOneFixed call when the vector has any previous nulls. For a batch of 8192 rows where only row 0 is null, this calls TryExpand 8191 times. The expand is a no-op when already big enough, but the function call overhead on this hot path is unnecessary.

Consider doing this once after the batch is fully built, or tracking whether expansion is needed with a flag.
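
One way to hoist the cost out of the per-row path; the flag field is invented for illustration:

```go
// In appendOneFixed: replace the per-row TryExpand with a cheap flag set.
if isNull {
	nulls.Add(&vec.nsp, uint64(length))
} else if vec.nsp.Count() > 0 {
	vec.nspNeedsExpand = true // hypothetical bool field on Vector
}

// Once, when the batch is sealed (e.g. in a SetLength/finish hook):
if vec.nspNeedsExpand {
	nulls.TryExpand(&vec.nsp, vec.length)
	vec.nspNeedsExpand = false
}
```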

var col []T
ToSliceNoTypeCheck(vec, &col)
col[length] = val
7 changes: 4 additions & 3 deletions pkg/sql/colexec/lockop/lock_op.go
@@ -18,9 +18,6 @@ import (
"bytes"
"context"
"fmt"
"strings"
"time"

"github.com/matrixorigin/matrixone/pkg/common/moerr"
"github.com/matrixorigin/matrixone/pkg/common/morpc"
"github.com/matrixorigin/matrixone/pkg/common/reuse"
@@ -44,6 +41,8 @@ import (
"github.com/matrixorigin/matrixone/pkg/vm/engine"
"github.com/matrixorigin/matrixone/pkg/vm/process"
"go.uber.org/zap"
"strings"
"time"
)

var (
@@ -155,6 +154,7 @@ func callNonBlocking(
}

lockOp.ctr.lockCount += int64(result.Batch.RowCount())

if err = performLock(result.Batch, proc, lockOp, analyzer, -1); err != nil {
return result, err
}
@@ -243,6 +243,7 @@ func performLock(
WithLockTable(target.lockTable, target.changeDef).
WithHasNewVersionInRangeFunc(lockOp.ctr.hasNewVersionInRange),
)

if lockOp.logger.Enabled(zap.DebugLevel) {
lockOp.logger.Debug("lock result",
zap.Uint64("table", target.tableID),
9 changes: 6 additions & 3 deletions pkg/sql/colexec/preinsert/preinsert.go
@@ -139,9 +139,8 @@ func (preInsert *PreInsert) constructColBuf(proc *proc, bat *batch.Batch, first
return err
}
} else {
preInsert.ctr.canFreeVecIdx[idx] = true
if bat.Vecs[idx].IsConst() {
preInsert.ctr.canFreeVecIdx[idx] = true
//expand const vector
typ := bat.Vecs[idx].GetType()
tmpVec := vector.NewOffHeapVecWithType(*typ)
if err = vector.GetUnionAllFunction(*typ, proc.Mp())(tmpVec, bat.Vecs[idx]); err != nil {
Expand All @@ -150,7 +149,11 @@ func (preInsert *PreInsert) constructColBuf(proc *proc, bat *batch.Batch, first
}
preInsert.ctr.buf.Vecs[idx] = tmpVec
} else {
preInsert.ctr.buf.SetVector(int32(idx), bat.Vecs[idx])
dupVec, dupErr := bat.Vecs[idx].Dup(proc.Mp())

🟡 Dup() for every non-const column — memory regression

The old code used SetVector() (zero-copy). The new code does a full Dup() for every non-const, non-generated column on every batch. For high-throughput INSERT workloads, this doubles memory usage per batch.

If the issue is that a downstream operator modifies the shared vector (causing the bitmap corruption), the proper fix is either:

  1. Fix the downstream operator to copy-on-write, or
  2. Identify which specific column needs isolation (likely only the unique key column)

Rather than copying all columns unconditionally.
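
For instance, the copy could be gated on which columns actually need isolation. Here `needsIsolation` is an invented predicate standing in for whatever analysis identifies the mutated column:

```go
if needsIsolation(idx) {
	// isolate only the column a downstream operator mutates
	dupVec, dupErr := bat.Vecs[idx].Dup(proc.Mp())
	if dupErr != nil {
		return dupErr
	}
	preInsert.ctr.buf.Vecs[idx] = dupVec
} else {
	preInsert.ctr.buf.SetVector(int32(idx), bat.Vecs[idx]) // zero-copy, as before
}
```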

if dupErr != nil {
return dupErr
}
preInsert.ctr.buf.Vecs[idx] = dupVec
}
}
}
3 changes: 0 additions & 3 deletions pkg/sql/colexec/projection/projection.go
@@ -79,9 +79,6 @@ func (projection *Projection) Call(proc *process.Process) (vm.CallResult, error)
}
// for projection operator, all Vectors of projectBat come from executor.Eval
// and will not be modified within projection operator. so we can used the result of executor.Eval directly.
// (if operator will modify vector/agg of batch, you should make a copy)
// however, it should be noted that since they directly come from executor.Eval
// these vectors cannot be free by batch.Clean directly and must be handed over executor.Free
projection.ctr.buf.Vecs[i] = vec
}
projection.maxAllocSize = max(projection.maxAllocSize, projection.ctr.buf.Size())
8 changes: 5 additions & 3 deletions pkg/sql/colexec/value_scan/types.go
@@ -41,9 +41,10 @@ type ValueScan struct {

type container struct {
// nowIdx indicates which data should send to next operator now.
nowIdx int
start int
end int
nowIdx int
start int
end int
prepared bool
}

func init() {
@@ -73,6 +74,7 @@ func (valueScan *ValueScan) Reset(proc *process.Process, _ bool, _ error) {
valueScan.runningCtx.nowIdx = 0
valueScan.runningCtx.start = 0
valueScan.runningCtx.end = 0
valueScan.runningCtx.prepared = false

//for prepare stmt, valuescan batch vecs do not need to reset, when next execute, prepare just copy data to vecs, length is same to last execute
for i := 0; i < valueScan.ColCount; i++ {
15 changes: 15 additions & 0 deletions pkg/sql/colexec/value_scan/value_scan.go
@@ -67,6 +67,13 @@ func (valueScan *ValueScan) makeValueScanBatch(proc *process.Process) (err error
// select * from (values row(1,1), row(2,2), row(3,3)) a;
bat := valueScan.Batchs[0]

// Skip evalRowsetData if already done (prevents concurrent bitmap corruption
// when the same scope is started multiple times by nested MergeRun)
if valueScan.runningCtx.prepared {
return nil

🟡 prepared flag — may mask legitimate re-execution

The prepared flag prevents evalRowsetData from running more than once between Reset() calls. If the root cause is that MergeRun calls the same scope multiple times concurrently, this flag (without synchronization) has a TOCTOU race: two goroutines can both read prepared == false before either sets it to true.

Also, the RecalculateCount() call at line 91-95 after evalRowsetData acknowledges that bitmap state is already corrupted at this point. If evalRowsetData itself isn't the source of corruption (the comment says it's from "nested MergeRun"), then the prepared guard won't prevent the corruption — it happens from a different code path.
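
If the guard stays, it at least needs to be race-free. A sketch using an atomic compare-and-swap; note this only prevents a double run, it does not make the losing goroutine wait for the winner to finish (a per-cycle `sync.Once` or a mutex would):

```go
// container.prepared becomes atomic.Bool (sync/atomic) instead of bool;
// Reset() would call valueScan.runningCtx.prepared.Store(false).
if !valueScan.runningCtx.prepared.CompareAndSwap(false, true) {
	return nil // another goroutine already claimed the evaluation
}
```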

}
valueScan.runningCtx.prepared = true

for i := 0; i < valueScan.ColCount; i++ {
exprList = valueScan.ExprExecLists[i]
if len(exprList) == 0 {
Expand All @@ -78,6 +85,14 @@ func (valueScan *ValueScan) makeValueScanBatch(proc *process.Process) (err error
}
}

// Fix bitmap count/data inconsistency that can occur when the same
// operator chain is started multiple times by nested MergeRun.
for _, vec := range bat.Vecs {
if vec != nil && !vec.IsConst() && vec.Length() > 0 {
vec.GetNulls().GetBitmap().RecalculateCount()
}
}

return nil
}

9 changes: 9 additions & 0 deletions pkg/sql/compile/analyze_module.go
@@ -295,6 +295,9 @@ func (c *Compile) fillPlanNodeAnalyzeInfo(stats *statistic.StatsInfo) {
//----------------------------------------------------------------------------------------------------------------------

func ConvertScopeToPhyScope(scope *Scope, receiverMap map[*process.WaitRegister]int) models.PhyScope {
if scope == nil {
return models.PhyScope{}
}
phyScope := models.PhyScope{
Magic: scope.Magic.String(),
Mcpu: int8(scope.NodeInfo.Mcpu),
@@ -319,6 +322,9 @@
}

func UpdatePreparePhyScope(scope *Scope, phyScope models.PhyScope) bool {
if scope == nil {
return true
}
res := UpdatePreparePhyOperator(scope.RootOp, phyScope.RootOperator)
if !res {
return false
@@ -714,6 +720,9 @@ func explainScopes(scopes []*Scope, gap int, rmp map[*process.WaitRegister]int,
// It includes header information of Scope, data source information, and pipeline tree information.
// In addition, it recursively displays information from any PreScopes.
func explainSingleScope(scope *Scope, index int, gap int, rmp map[*process.WaitRegister]int, option *ExplainOption, buffer *bytes.Buffer) {
if scope == nil {
return
}
gapNextLine(gap, buffer)

// Scope Header
6 changes: 6 additions & 0 deletions pkg/sql/compile/compile.go
@@ -228,6 +228,9 @@ func (c *Compile) Reset(proc *process.Process, startAt time.Time, fill func(*bat
}

func UpdateScopeTxnOffset(scope *Scope, txnOffset int) {
if scope == nil {
return
}
scope.TxnOffset = txnOffset
for i := range scope.PreScopes {
UpdateScopeTxnOffset(scope.PreScopes[i], txnOffset)
@@ -302,6 +305,9 @@ func (c *Compile) clear() {
}

func (c *Compile) addAllAffectedRows(s *Scope) {
if s == nil {
return
}
for _, ps := range s.PreScopes {
c.addAllAffectedRows(ps)
}
6 changes: 6 additions & 0 deletions pkg/sql/compile/compile2.go
@@ -647,6 +647,9 @@ func (c *Compile) InitPipelineContextToRetryQuery() {
// buildContextFromParentCtx build the context for the pipeline tree.
// the input parameter is the whole tree's parent context.
func (s *Scope) buildContextFromParentCtx(parentCtx context.Context) {
if s == nil {
return
}
receiverCtx := s.Proc.BuildPipelineContext(parentCtx)

// build context for receiver.
@@ -664,6 +667,9 @@ func setContextForParallelScope(parallelScope *Scope, originalContext context.Co

// build context for data entry.
for _, prePipeline := range parallelScope.PreScopes {
if prePipeline == nil {
continue
}
prePipeline.buildContextFromParentCtx(parallelScope.Proc.Ctx)
}
}
6 changes: 6 additions & 0 deletions pkg/sql/compile/debugTools.go
@@ -122,6 +122,9 @@ func DebugShowScopes(ss []*Scope, level DebugLevel) string {
// genReceiverMap recursively traverses the Scope tree and generates unique identifiers (integers) for
// each WaitRegister in Scope.
func genReceiverMap(s *Scope, mp map[*process.WaitRegister]int) {
if s == nil {
return
}
for i := range s.PreScopes {
genReceiverMap(s.PreScopes[i], mp)
}
@@ -147,6 +150,9 @@ func showScopes(scopes []*Scope, gap int, rmp map[*process.WaitRegister]int, lev
// It includes header information of Scope, data source information, and pipeline tree information.
// In addition, it recursively displays information from any PreScopes.
func showSingleScope(scope *Scope, index int, gap int, rmp map[*process.WaitRegister]int, level DebugLevel, buffer *bytes.Buffer) {
if scope == nil {
return
}
gapNextLine(gap, buffer)

// Scope Header