
feat: optimize get next app tag as sender #23239

Merged
Thunkar merged 2 commits into merge-train/fairies from gj/optimize_get_next_app_tag_as_sender
May 13, 2026

Conversation

Contributor

@Thunkar Thunkar commented May 13, 2026

Summary

Cuts the round-trip cost of aztec_prv_getNextAppTagAsSender by firing the logs query and the receipts query for already-known pending tx hashes in parallel.

A second-pass receipt query still runs when the logs query surfaces previously-unseen pending tx hashes, but only for those.

Changes

  • sync_sender_tagging_indexes.ts: pre-fetch known-pending tx hashes from the store, then Promise.all([loadAndStoreNewTaggingIndexes, getStatusChangeOfPending(known)]). Diff against the post-load store snapshot to find newly-discovered pending and fetch their receipts in a conditional follow-up call.
  • get_status_change_of_pending.ts: exports StatusChange, EMPTY_STATUS_CHANGE, mergeStatusChanges for use by the sync loop.
  • Tests: four new cases covering pre-existing-pending finalization, the no-pending no-logs RPC-skip path, mixed known/newly-discovered pending in one window, and idempotent rediscovery.
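The sync flow described above can be sketched as follows. This is a hedged reconstruction from the PR description, not the actual implementation: the function signatures, the `TxHash`/`StatusChange` shapes, and the store accessor `getKnownPendingTxHashes` are all assumptions standing in for the real PXE internals.

```typescript
// Hypothetical types standing in for the real PXE store and node interfaces.
type TxHash = string;
type StatusChange = { finalized: TxHash[]; dropped: TxHash[] };

const EMPTY_STATUS_CHANGE: StatusChange = { finalized: [], dropped: [] };

function mergeStatusChanges(a: StatusChange, b: StatusChange): StatusChange {
  return {
    finalized: [...a.finalized, ...b.finalized],
    dropped: [...a.dropped, ...b.dropped],
  };
}

async function syncSenderTaggingIndexes(
  loadAndStoreNewTaggingIndexes: () => Promise<void>,
  getStatusChangeOfPending: (hashes: TxHash[]) => Promise<StatusChange>,
  getKnownPendingTxHashes: () => Promise<TxHash[]>,
): Promise<StatusChange> {
  // 1. Snapshot the known-pending tx hashes before the logs query runs.
  const knownPending = await getKnownPendingTxHashes();

  // 2. Fire the logs query and the receipts query for known hashes in parallel,
  //    skipping the receipts RPC entirely when nothing is pending.
  const [, knownChanges] = await Promise.all([
    loadAndStoreNewTaggingIndexes(),
    knownPending.length > 0
      ? getStatusChangeOfPending(knownPending)
      : Promise.resolve(EMPTY_STATUS_CHANGE),
  ]);

  // 3. Diff against the post-load store snapshot: anything pending now that was
  //    not pending before was surfaced by the logs query.
  const postLoadPending = await getKnownPendingTxHashes();
  const newlyDiscovered = postLoadPending.filter(h => !knownPending.includes(h));

  // 4. Conditional second-pass receipt query, only for the newly discovered hashes.
  const newChanges =
    newlyDiscovered.length > 0
      ? await getStatusChangeOfPending(newlyDiscovered)
      : EMPTY_STATUS_CHANGE;

  return mergeStatusChanges(knownChanges, newChanges);
}
```

Under these assumptions, a tx hash that was already pending before the sync and one discovered mid-window each get exactly one receipt query, in the first and second pass respectively.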

@Thunkar Thunkar requested a review from nchamo May 13, 2026 10:38
@Thunkar Thunkar self-assigned this May 13, 2026
Contributor

@nchamo nchamo left a comment

Great work!

Comment on lines +390 to +391
expect(aztecNode.getTxReceipt).toHaveBeenCalledWith(preExistingTxHash);
expect(aztecNode.getTxReceipt).toHaveBeenCalledWith(newlyDiscoveredTxHash);
Contributor


Should we check it was called twice?
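A self-contained sketch of the stricter assertion the reviewer suggests. The per-argument `toHaveBeenCalledWith` checks above would pass even if the receipt fetch ran extra times; a call-count check closes that gap. `createMock` below is a tiny hand-rolled stand-in for a Jest mock (in the real test this would be `expect(aztecNode.getTxReceipt).toHaveBeenCalledTimes(2)`), and the tx hash strings are hypothetical.

```typescript
// Minimal stand-in for a Jest mock fn that records its call arguments.
function createMock<T>() {
  const calls: T[] = [];
  const fn = (arg: T) => { calls.push(arg); };
  return Object.assign(fn, { calls });
}

const getTxReceipt = createMock<string>();

// Simulate the sync loop fetching one receipt per pending tx hash.
getTxReceipt("preExistingTxHash");
getTxReceipt("newlyDiscoveredTxHash");

// Equivalent of expect(getTxReceipt).toHaveBeenCalledTimes(2): guards against
// redundant receipt fetches that per-argument checks alone would not catch.
console.assert(getTxReceipt.calls.length === 2);
console.assert(getTxReceipt.calls.includes("preExistingTxHash"));
console.assert(getTxReceipt.calls.includes("newlyDiscoveredTxHash"));
```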

@Thunkar Thunkar enabled auto-merge (squash) May 13, 2026 12:32
@Thunkar Thunkar merged commit e8a5f8d into merge-train/fairies May 13, 2026
14 checks passed
@Thunkar Thunkar deleted the gj/optimize_get_next_app_tag_as_sender branch May 13, 2026 12:53
AztecBot pushed a commit that referenced this pull request May 13, 2026
@AztecBot
Collaborator

✅ Successfully backported to backport-to-v4-next-staging #23236.

AztecBot added a commit that referenced this pull request May 14, 2026
BEGIN_COMMIT_OVERRIDE
feat: package sqlite kv-store backend for stricter browser envs (#23089)
fix(pxe): sync target contract before cross-contract utility call (#23225)
fix(ci): swap slack_notify args in CLI acceptance test (#23241)
feat: optimize get next app tag as sender (#23239)
fix(noir): noirfmt nested_utility_contract main.nr (#23246)
chore(aztec-nr): mark emit_event_in_public as #[inline_never] to shrink public dispatch (#23161)
END_COMMIT_OVERRIDE


3 participants