Guide

A comprehensive guide to using ring-buffer-macro, covering all modes, configuration options, and patterns.

Installation

cargo add ring-buffer-macro

Requires Rust 2021 edition. The crate has zero runtime dependencies — only proc-macro build dependencies (syn, quote, proc-macro2).

Basic Usage

The macro takes a tuple struct and generates a complete ring buffer implementation. The single field defines the element type.

use ring_buffer_macro::ring_buffer;

#[ring_buffer(8)]
struct MessageQueue(String);

fn main() {
    let mut queue = MessageQueue::new();
    queue.enqueue("hello".to_string()).unwrap();
    queue.enqueue("world".to_string()).unwrap();

    while let Some(msg) = queue.dequeue() {
        println!("{}", msg);
    }
}

The buffer operates as a FIFO (First In, First Out) queue with fixed capacity. When full, enqueue returns Err(item) so you can recover the rejected value.
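
For example, filling the buffer to its capacity of 8 and then attempting one more enqueue hands the value back rather than dropping it (a small sketch reusing the MessageQueue type from above):

let mut queue = MessageQueue::new();
for i in 0..8 {
    queue.enqueue(format!("msg {}", i)).unwrap();
}
// The ninth enqueue is rejected; the value is returned, not dropped.
assert_eq!(queue.enqueue("overflow".to_string()), Err("overflow".to_string()));
// Items come back in insertion order.
assert_eq!(queue.dequeue(), Some("msg 0".to_string()));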

Working with Generics

Generic type parameters are fully preserved through macro expansion:

#[ring_buffer(10)]
struct GenericBuffer<T: Clone>(T);

let mut buf: GenericBuffer<String> = GenericBuffer::new();
buf.enqueue("hello".to_string()).unwrap();
assert_eq!(buf.dequeue(), Some("hello".to_string()));

Nested generics work too:

#[ring_buffer(5)]
struct NestedBuffer<T: Clone>(Vec<T>);

let mut buf: NestedBuffer<i32> = NestedBuffer::new();
buf.enqueue(vec![1, 2, 3]).unwrap();

Visibility Modifiers

Visibility is preserved from your struct definition. The generated methods inherit the same visibility as the struct.

// Public buffer — generates pub methods
#[ring_buffer(8)]
pub struct PublicBuffer(i32);

// Crate-visible buffer — generates pub(crate) methods
#[ring_buffer(8)]
pub(crate) struct CrateBuffer(i32);
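
For instance, a crate-visible buffer defined inside a module can be used from anywhere else in the same crate, but is not exported outside it (the module and identifiers below are illustrative):

mod telemetry {
    use ring_buffer_macro::ring_buffer;

    #[ring_buffer(32)]
    pub(crate) struct EventBuffer(u64);
}

fn main() {
    // Generated methods inherit pub(crate), so this compiles within the crate.
    let mut events = telemetry::EventBuffer::new();
    events.enqueue(7).unwrap();
    assert_eq!(events.dequeue(), Some(7));
}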

Buffer Modes

Standard Mode (default)

Single-threaded mode using Clone semantics. Best for simple use cases that do not need to share the buffer across threads.

#[ring_buffer(16)]
struct SimpleBuffer(i32);

let mut buf = SimpleBuffer::new();
for i in 0..16 {
    buf.enqueue(i).unwrap();
}
assert!(buf.is_full());

SPSC Mode

Lock-free single-producer single-consumer mode. Uses atomic operations with Acquire/Release ordering. Requires T: Send.

use ring_buffer_macro::ring_buffer;
use std::sync::Arc;
use std::thread;

#[ring_buffer(capacity = 256, mode = "spsc")]
struct SpscBuffer(i32);

fn main() {
    let buf = Arc::new(SpscBuffer::new());
    let (producer, consumer) = buf.split();

    // Producer thread
    let p = thread::spawn(move || {
        for i in 0..100 {
            while producer.try_enqueue(i).is_err() {
                std::hint::spin_loop();
            }
        }
    });

    // Consumer thread
    let c = thread::spawn(move || {
        let mut received = Vec::new();
        while received.len() < 100 {
            if let Some(item) = consumer.try_dequeue() {
                received.push(item);
            }
        }
        received
    });

    p.join().unwrap();
    let data = c.join().unwrap();
    assert_eq!(data.len(), 100);
}

The split() method enforces the SPSC contract at the type level — the producer handle can only enqueue, and the consumer handle can only dequeue.
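
The Acquire/Release ordering mentioned above is what makes the handoff safe: everything the producer writes before its Release store becomes visible to the consumer once an Acquire load observes that store. A minimal standalone illustration of the pairing (not the macro's generated code):

use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread;

// Illustration only, not the macro's generated code: the Release store
// publishes the payload written before it, and the Acquire load that sees
// the flag is guaranteed to also see that payload.
static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);    // write the payload
        READY.store(true, Ordering::Release); // publish it
    });

    let consumer = thread::spawn(|| {
        while !READY.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}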

MPSC Mode

Multiple producers, single consumer. Uses CAS-based (Compare-And-Swap) coordination for producer slot allocation. Requires T: Send.

use ring_buffer_macro::ring_buffer;
use std::sync::Arc;
use std::thread;

#[ring_buffer(capacity = 128, mode = "mpsc")]
struct MpscBuffer(i32);

fn main() {
    let buf = Arc::new(MpscBuffer::new());
    let consumer = buf.consumer();

    // Spawn multiple producers
    let handles: Vec<_> = (0..4).map(|id| {
        let producer = buf.producer(); // Cloneable handle
        thread::spawn(move || {
            for i in 0..10 {
                while producer.try_enqueue(id * 10 + i).is_err() {
                    std::hint::spin_loop();
                }
            }
        })
    }).collect();

    // Consume all items
    let mut count = 0;
    while count < 40 {
        if let Some(_item) = consumer.try_dequeue() {
            count += 1;
        }
    }

    for h in handles { h.join().unwrap(); }
}
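
Conceptually, the CAS-based slot allocation reduces to a claim loop: each producer reads the tail index, tries to advance it by one, and retries with the freshly observed value if another producer got there first. A standalone sketch of that loop (illustrative only; the macro's generated code may differ):

use std::sync::atomic::{AtomicUsize, Ordering};

// Illustration only, not the macro's generated code. `head` is the consumer's
// position; the distance between tail and head tells us how full the ring is.
fn claim_slot(tail: &AtomicUsize, capacity: usize, head: usize) -> Option<usize> {
    let mut current = tail.load(Ordering::Relaxed);
    loop {
        if current - head == capacity {
            return None; // buffer full, nothing to claim
        }
        match tail.compare_exchange_weak(current, current + 1, Ordering::AcqRel, Ordering::Relaxed) {
            Ok(claimed) => return Some(claimed % capacity), // slot index to write into
            Err(observed) => current = observed,            // lost the race, retry
        }
    }
}

fn main() {
    let tail = AtomicUsize::new(0);
    assert_eq!(claim_slot(&tail, 4, 0), Some(0));
    assert_eq!(claim_slot(&tail, 4, 0), Some(1));
}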

Error Handling

enqueue() returns Result<(), T>. If the buffer is full, you get the item back:

match buf.enqueue(42) {
    Ok(()) => println!("enqueued"),
    Err(rejected) => println!("buffer full, {} rejected", rejected),
}

dequeue() returns Option<T>. Returns None if the buffer is empty:

match buf.dequeue() {
    Some(item) => println!("got {}", item),
    None => println!("buffer empty"),
}

Power-of-Two Optimization

Uses a bitwise AND instead of modulo for index wrapping, which is faster on most architectures. Capacity must be a power of two — enforced at compile time.

#[ring_buffer(capacity = 256, power_of_two = true)]
struct FastBuffer(u64);
// (index + 1) & (capacity - 1) instead of (index + 1) % capacity
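
The two wrapping expressions agree whenever the capacity is a power of two, as this quick standalone check (unrelated to the macro's generated code) confirms:

fn main() {
    let capacity: usize = 256;
    assert!(capacity.is_power_of_two());
    for index in 0..1024usize {
        // Masking with capacity - 1 keeps only the low bits, which is exactly
        // what the modulo computes when capacity is a power of two.
        assert_eq!((index + 1) & (capacity - 1), (index + 1) % capacity);
    }
    println!("mask wrap matches modulo wrap for capacity {}", capacity);
}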

Cache-Line Padding

Prevents false sharing in concurrent scenarios by aligning head and tail indices to 64-byte cache line boundaries.

#[ring_buffer(capacity = 1024, mode = "spsc", cache_padded = true)]
struct PaddedBuffer(i32);

Cache-line padding increases memory usage but can significantly improve performance when producer and consumer run on different CPU cores.
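
The mechanism behind the option is ordinary alignment: giving each index its own 64-byte-aligned wrapper guarantees the head and tail counters never share a cache line. A hand-written sketch of the idea (illustrative only; not the macro's actual output):

use std::sync::atomic::AtomicUsize;

// Illustration only, not the macro's actual output: the 64-byte alignment
// forces each counter onto its own cache line, so the producer updating the
// tail does not invalidate the line holding the head (and vice versa).
#[repr(align(64))]
struct CachePadded<T>(T);

struct Indices {
    head: CachePadded<AtomicUsize>,
    tail: CachePadded<AtomicUsize>,
}

fn main() {
    let _indices = Indices {
        head: CachePadded(AtomicUsize::new(0)),
        tail: CachePadded(AtomicUsize::new(0)),
    };
    // Size is rounded up to the alignment, so each padded counter fills a full line.
    assert_eq!(std::mem::align_of::<CachePadded<AtomicUsize>>(), 64);
    assert_eq!(std::mem::size_of::<CachePadded<AtomicUsize>>(), 64);
}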

Blocking Operations

Enable blocking variants with blocking = true. Waiting is implemented internally with a Mutex and Condvar, but the enqueue/dequeue data path itself remains lock-free.

#[ring_buffer(capacity = 64, mode = "spsc", blocking = true)]
struct BlockingBuffer(i32);

let buf = Arc::new(BlockingBuffer::new());
let (producer, consumer) = buf.split();

// Blocks until space is available
producer.enqueue_blocking(42);
// Blocks until an item is available
let item = consumer.dequeue_blocking();
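
As a usage sketch under the same assumptions as the snippet above (the split() and blocking calls shown there), the blocking variants remove the need for the spin loops used in the earlier SPSC example:

use std::sync::Arc;
use std::thread;

// Sketch reusing the BlockingBuffer type defined above.
fn main() {
    let buf = Arc::new(BlockingBuffer::new());
    let (producer, consumer) = buf.split();

    let p = thread::spawn(move || {
        for i in 0..1000 {
            producer.enqueue_blocking(i); // parks while the buffer is full
        }
    });

    let c = thread::spawn(move || {
        let mut sum: i64 = 0;
        for _ in 0..1000 {
            sum += i64::from(consumer.dequeue_blocking()); // parks while empty
        }
        sum
    });

    p.join().unwrap();
    assert_eq!(c.join().unwrap(), (0i64..1000).sum::<i64>());
}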