feat(firehose): record cache extraction #6846

Merged — ekjotmultani merged 10 commits into feat/amplify-firehose-client-feature (Apr 13, 2026).
Conversation
refactor(kinesis): extract shared record cache into amplify_record_cache_dart

Create amplify_record_cache_dart package with shared caching infrastructure:
- RecordCacheException hierarchy (const constructors)
- Record/RecordInput models (partitionKey optional, dataSize caller-computed)
- RecordStorage base + SqliteRecordStorage, InMemoryRecordStorage, IndexedDbRecordStorage
- RecordCacheDatabase (Drift, parameterized dbPrefix)
- Sender interface + SendResult (replaces KDS-specific PutRecordsResult)
- RecordClient, AutoFlushScheduler, FlushStrategy, FlushData, RecordData, ClearCacheData
- Platform resolution (VM/web/stub conditional exports)

Update amplify_kinesis_dart to depend on the shared package:
- KinesisSender implements the Sender interface (sendBatch replaces putRecords)
- Partition key validation moved from RecordStorage to AmplifyKinesisClient
- createKinesisRecordInputNow computes dataSize with the partition key
- All test imports updated; zero behavioral change
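The service-agnostic seam described above is the Sender interface. A minimal sketch follows; `Sender`, `SendResult`, `Record`, and `sendBatch` are named in the commit message, but every field and constructor shape here is an illustrative assumption, not the package's actual API:

```dart
/// Sketch of the shared seam. Field names and constructor shapes are
/// assumptions; only the type and method names come from the commit message.
class Record {
  const Record({required this.data, this.partitionKey, required this.dataSize});
  final List<int> data;
  final String? partitionKey; // optional: not every service uses one
  final int dataSize; // caller-computed, per the commit message
}

class SendResult {
  const SendResult({required this.sent, required this.retryable});
  final List<Record> sent; // delivered; safe to evict from the cache
  final List<Record> retryable; // failed; keep cached for the next flush
}

/// Service-specific adapters (e.g. KinesisSender) implement this, so the
/// shared RecordClient never references a KDS-specific result type.
abstract class Sender {
  Future<SendResult> sendBatch(List<Record> records);
}
```

This is what lets the same RecordClient/flush machinery back both Kinesis and Firehose: each service only has to adapt its own batch-put call behind `sendBatch`.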
The `aft generate workflows` command regenerated the entire dependabot.yaml with entries for many unrelated packages. Reverting to keep this PR scoped to the record cache extraction only.
…lation The barrel file was unconditionally exporting record_storage_indexeddb.dart, which imports dart:js_interop. This caused VM tests to fail with "Dart library 'dart:js_interop' is not available on this platform". IndexedDB storage is only reachable through the platform conditional export (record_storage_platform_web.dart), never through the barrel.
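The platform-conditional export referred to here follows Dart's standard conditional-export pattern. A sketch, assuming stub/VM file names (only record_storage_platform_web.dart is named in the commit):

```dart
// record_storage_platform.dart (sketch). Only this conditional export may
// reach the IndexedDB implementation; if the package barrel exports
// record_storage_indexeddb.dart directly, VM builds pull in
// dart:js_interop and fail to compile.
export 'record_storage_platform_stub.dart'
    if (dart.library.io) 'record_storage_platform_vm.dart'
    if (dart.library.js_interop) 'record_storage_platform_web.dart';
```

The stub keeps the export resolvable on platforms with neither `dart:io` nor `dart:js_interop`, which is why the commit lists "VM/web/stub" as the three resolution targets.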
… validation test Drift warns when multiple RecordCacheDatabase instances exist simultaneously. Close the default client first, then reassign so tearDown handles cleanup.
The 10 MiB test record exceeded the 5 MiB batch limit, causing getRecordsByStream to filter it out. Bumped to 20 MiB to match the cache size.
All test files were using 5 MiB for maxBytesPerBatch, but the KDS default was 10 MiB (matching maxRecordSizeBytes). This mismatch caused getRecordsByStream to filter out records at the max size limit.
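A sketch of why the two limits must agree (illustrative code, not the package's actual getRecordsByStream): any record larger than maxBytesPerBatch can never fit in a batch, so a batcher that enforces only the batch limit silently drops records that maxRecordSizeBytes says are valid.

```dart
/// Illustrative batching loop: records above maxBytesPerBatch are
/// unreachable, so maxBytesPerBatch must be at least maxRecordSizeBytes
/// or max-size records silently vanish from every flush.
List<List<int>> planBatches(List<int> recordSizes, int maxBytesPerBatch) {
  final batches = <List<int>>[];
  var current = <int>[];
  var currentBytes = 0;
  for (final size in recordSizes) {
    if (size > maxBytesPerBatch) continue; // can never fit: filtered out
    if (currentBytes + size > maxBytesPerBatch) {
      batches.add(current); // close the full batch, start a new one
      current = <int>[];
      currentBytes = 0;
    }
    current.add(size);
    currentBytes += size;
  }
  if (current.isNotEmpty) batches.add(current);
  return batches;
}
```

With maxBytesPerBatch at 5 MiB and a valid 10 MiB record, the `continue` branch fires and the record is never sent, which is exactly the test failure the fix addresses.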
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@                    Coverage Diff                     @@
##    feat/amplify-firehose-client-feature   #6846   +/- ##
===========================================================
- Coverage                        43.23%   43.22%   -0.02%
===========================================================
  Files                               99       99
  Lines                             7769     7769
  Branches                          3400     3400
===========================================================
- Hits                              3359     3358       -1
- Misses                            4410     4411       +1
```
jvh-aws requested changes — Apr 8, 2026
```diff
- /// The partition key for the record.
- TextColumn get partitionKey => text()();
+ /// The partition key (empty string for services that don't use it).
+ TextColumn get partitionKey => text().withDefault(const Constant(''))();
```
Contributor
I think we should avoid just keeping the column around for firehose but instead remove it from the schema and queries in that case (see e.g. Android)
Restore detailed dartdoc comments on AmplifyKinesisClient methods (create, kinesisClient, record, flush, clearCache, disable, close, _wrapError) and the error code comment in KinesisSender that were accidentally removed during the extraction rewrite.
…ing to empty string KDS always provides a partition key, so `?? ''` would silently hide bugs. Use `!` to assert non-null instead, since KDS records always have partitionKey set.
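The distinction reads roughly like this (hypothetical helper; only partitionKey is from the source):

```dart
/// Hypothetical sketch of the fail-fast choice: KDS records are always
/// created with a partition key, so a null here is a bug worth surfacing.
String kdsPartitionKey(String? partitionKey) {
  // `?? ''` would mask the bug by sending an empty key to the service;
  // `!` throws immediately at the point where the invariant broke.
  return partitionKey!;
}
```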
…internal, document drift dev dep
- Export defaultRecoverySuggestion from the shared barrel (remove the hide clause)
- KDS exception file now uses the shared constant instead of its own
- Mark amplify_record_cache_dart as internal (publish_to: none)
- Add a comment explaining why drift is a dev dependency in KDS
…g change The table name in SQLite is derived from the Drift class name. Renaming from KinesisRecords to CachedRecords would change the table from 'kinesis_records' to 'cached_records', breaking existing users. Firehose is in the Kinesis family so the name is semantically fine. Regenerated .g.dart and updated all references.
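For context, Drift derives the SQLite table name by snake_casing the table class name, so the class name is effectively part of the on-disk schema. A sketch (the columns shown are illustrative, not the package's actual schema):

```dart
import 'package:drift/drift.dart';

// Drift maps the class name to the table name: KinesisRecords becomes
// 'kinesis_records'. Renaming the class to CachedRecords would create a
// fresh 'cached_records' table and orphan existing users' cached rows.
class KinesisRecords extends Table {
  IntColumn get id => integer().autoIncrement()();
  TextColumn get streamName => text()();
  TextColumn get partitionKey => text()();
  BlobColumn get data => blob()();
}
```

Drift does support overriding the SQL name (so the class could in principle be renamed while pinning the old table name), but keeping the class name avoids both the migration and the regeneration churn.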
Merged commit ec31ede into feat/amplify-firehose-client-feature — 27 of 29 checks passed.
ekjotmultani added a commit that referenced this pull request on Apr 13, 2026: refactor(kinesis): extract shared record cache into amplify_record_cache_dart
ekjotmultani added a commit that referenced this pull request on Apr 16, 2026: refactor(kinesis): extract shared record cache into amplify_record_cache_dart
ekjotmultani added a commit that referenced this pull request on Apr 17, 2026: refactor(kinesis): extract shared record cache into amplify_record_cache_dart
Issue #, if available:
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.