# Examples

This document provides a "cookbook" of common usage patterns for the `LockFreeSpscQueue`.

## 1. Initialization

The `LockFreeSpscQueue` is an index manager that operates on a user-provided memory buffer. The user is responsible for creating and owning the buffer and for passing a `std::span` over it to the queue's constructor.

The buffer's capacity **must** be a power of two.

```cpp
#include "LockFreeSpscQueue.h"

#include <cstddef>
#include <string>
#include <vector>

// Define a data type to be used in the queue.
struct Message {
    int id;
    std::string payload;
};

// Define the capacity. It must be a power of two.
constexpr std::size_t QUEUE_CAPACITY = 128; // 2^7

// Create the memory buffer that will be shared between threads.
std::vector<Message> shared_data_buffer(QUEUE_CAPACITY);

// Create the queue, giving it a non-owning view of our buffer.
LockFreeSpscQueue<Message> queue(shared_data_buffer);
```
**Lifetime Note:** The user must ensure that `shared_data_buffer` outlives the `queue` object.

---

## 2. Writing to the Queue (Producer)

There are three primary ways to write to the queue, each suited to a different use case.

### High-Frequency Writes: `WriteTransaction`

This is the **highest-performance method** for producers that generate many individual items in a tight loop. It amortizes the cost of atomic operations across many fast, non-atomic pushes.

```cpp
#include <string>
#include <thread>

void high_frequency_producer(LockFreeSpscQueue<Message>& queue)
{
    for (int i = 0; i < 100; ) {
        // 1. Try to start a transaction for a batch of up to 16 items.
        //    This returns an optional; it is empty if the queue is too full.
        if (auto transaction = queue.try_start_write(16)) {
            // 2. We got a reservation. Push items into it.
            //    This `try_push` is extremely fast (non-atomic).
            while (i < 100 && transaction->try_push({i, "some data"})) {
                // Item was successfully pushed into the transaction's reserved space.
                i++;
            }
            // 3. The transaction automatically commits the pushed items
            //    when it goes out of scope here.
        } else {
            // The queue was too full to start a transaction; yield to the consumer.
            std::this_thread::yield();
        }
    }
}
```
*Move semantics are also supported: `transaction->try_push(std::move(my_message));`*

### Batch Writes (Convenience): `try_write`

This is the **recommended method for transferring a pre-prepared batch of data**. It is safe, convenient, and efficient: it accepts a lambda that is invoked only if space is available.

```cpp
#include <algorithm>
#include <span>
#include <thread>
#include <vector>

void batch_producer(LockFreeSpscQueue<Message>& queue)
{
    // Prepare a local batch of data to send.
    std::vector<Message> local_batch = { {0, "a"}, {1, "b"}, {2, "c"} };

    size_t items_written = 0;
    while (items_written < local_batch.size()) {
        // Create a view of the items that still need to be written.
        std::span<const Message> sub_batch(local_batch.data() + items_written,
                                           local_batch.size() - items_written);

        // Ask `try_write` to write the sub-batch. It writes as many items as
        // it can and returns the count. The lambda performs the actual copy.
        items_written += queue.try_write(sub_batch.size(), [&](auto block1, auto block2) {
            std::copy_n(sub_batch.begin(), block1.size(), block1.begin());
            if (!block2.empty()) {
                std::copy_n(sub_batch.begin() + block1.size(), block2.size(), block2.begin());
            }
        });

        // If the queue was full, items_written does not advance; yield to
        // give the consumer a chance to drain instead of spinning flat out.
        if (items_written < local_batch.size()) {
            std::this_thread::yield();
        }
    }
}
```

### Batch Writes (Low-Level): `prepare_write`

This is the low-level "engine" on which the other write methods are built. It provides maximum control by returning a `WriteScope` object that grants direct `std::span` access to the underlying buffer.

```cpp
#include <algorithm>
#include <vector>

void low_level_batch_producer(LockFreeSpscQueue<Message>& queue,
                              const std::vector<Message>& data)
{
    // Ask to reserve up to `data.size()` slots in the queue for writing.
    auto write_scope = queue.prepare_write(data.size());

    // `get_items_written()` returns the actual number of slots reserved,
    // which may be less than requested if the queue was partially full.
    size_t items_to_write = write_scope.get_items_written();

    if (items_to_write > 0) {
        auto block1 = write_scope.get_block1();
        auto block2 = write_scope.get_block2();

        // Copy data into the two contiguous memory blocks.
        std::copy_n(data.begin(), block1.size(), block1.begin());
        if (!block2.empty()) {
            std::copy_n(data.begin() + block1.size(), block2.size(), block2.begin());
        }
    }
    // The write is automatically committed when `write_scope` is destroyed.
}
```

---

## 3. Reading from the Queue (Consumer)

There are two primary ways to read from the queue.

### Batch Reads (Convenience): `try_read`

This is the **recommended and most efficient method** for consuming data. It accepts a lambda that is given direct `std::span` access to the largest available contiguous blocks of readable data.

```cpp
#include <iostream>
#include <thread>

void batch_consumer(LockFreeSpscQueue<Message>& queue)
{
    // Ask to read up to 16 items at a time.
    const size_t items_read = queue.try_read(16, [&](auto block1, auto block2) {
        // Process all items in the first contiguous block.
        for (const auto& msg : block1) {
            std::cout << "Consumer: Got ID " << msg.id << "\n";
        }
        // Process all items in the second (wrapped-around) block.
        for (const auto& msg : block2) {
            std::cout << "Consumer: Got ID " << msg.id << "\n";
        }
    });

    if (items_read == 0) {
        // The queue was empty.
        std::this_thread::yield();
    }
}
```

### Batch Reads (Low-Level): `prepare_read`

This is the low-level counterpart to `prepare_write`. It returns a `ReadScope` object that provides direct access to the readable memory blocks.

```cpp
#include <vector>

void low_level_batch_consumer(LockFreeSpscQueue<Message>& queue)
{
    std::vector<Message> consumed_data;

    // Ask to read up to 16 items from the queue.
    auto read_scope = queue.prepare_read(16);

    if (read_scope.get_items_read() > 0) {
        // Copy the data from the queue's buffer into our local vector.
        auto block1 = read_scope.get_block1();
        consumed_data.insert(consumed_data.end(), block1.begin(), block1.end());

        auto block2 = read_scope.get_block2();
        if (!block2.empty()) {
            consumed_data.insert(consumed_data.end(), block2.begin(), block2.end());
        }
    }
    // The read is automatically committed when `read_scope` is destroyed.
}
```