BPF maps: how the kernel and userspace actually share data
April 12, 2026 · kernel notes · ebpf · bpf-maps · linux
BPF programs run in kernel context with a tiny stack and no heap. They cannot allocate memory, cannot call most kernel functions, and cannot persist anything between invocations. BPF maps are how this changes.
A map is a kernel data structure that BPF programs read and write through helper functions, and userspace processes read and write through syscalls. The map outlives the BPF program. It’s how counters survive between packets, how policy gets pushed in, and how observations get pulled out.
This is a working understanding of the map types you’ll meet in real code.
BPF_MAP_TYPE_HASH and BPF_MAP_TYPE_LRU_HASH
A regular hash map. Key-value lookup, fixed maximum size, eviction is your problem.
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__type(key, __u32);
__type(value, __u64);
__uint(max_entries, 1024);
} my_hash SEC(".maps");
Two things hurt people:
First, max_entries is a hard cap. When the map is full, bpf_map_update_elem returns -E2BIG. If you don’t check the return value, you silently lose updates. For maps that grow with input you don’t control (source IPs, session IDs), use BPF_MAP_TYPE_LRU_HASH instead — it evicts the least-recently-used entry to make room. This is almost always what you want for “track every X you see.”
Second, hash maps lock per-bucket. High-contention hot keys serialize. For per-packet counters, use BPF_MAP_TYPE_PERCPU_HASH and sum in userspace.
BPF_MAP_TYPE_ARRAY and BPF_MAP_TYPE_PERCPU_ARRAY
Fixed-size, integer-keyed. The PERCPU variant gives each CPU its own copy of every value — when you increment, you only touch your CPU’s slot. No contention.
struct {
__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
__type(key, __u32);
__type(value, __u64);
__uint(max_entries, 16);
} stats SEC(".maps");
Userspace reads all CPU values and sums them:
values := make([]uint64, ebpf.MustPossibleCPU())
if err := stats.Lookup(uint32(0), &values); err != nil {
    log.Fatal(err)
}
var total uint64
for _, v := range values {
    total += v
}
If you forget the per-CPU sum and just read index 0 of the slice, you only see CPU 0's slot, and your totals come out low by roughly a factor of the CPU count. This is a classic bug.
BPF_MAP_TYPE_RINGBUF
Multi-producer, single-consumer ring buffer for sending events from the kernel to userspace. All CPUs share one buffer, and events come out in submission order. It replaces the older per-CPU BPF_MAP_TYPE_PERF_EVENT_ARRAY for most use cases.
struct {
__uint(type, BPF_MAP_TYPE_RINGBUF);
__uint(max_entries, 256 * 1024);
} events SEC(".maps");
In the BPF program you reserve space, fill it, and submit:
struct event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
if (!e) return 0; // buffer full, drop the event
e->src_ip = ip->saddr;
bpf_ringbuf_submit(e, 0);
Userspace reads with ringbuf.NewReader(map) (cilium/ebpf) or equivalent. The kernel never blocks on a full buffer. If userspace can’t keep up, events drop. Size your buffer for peak burst, not average rate.
BPF_MAP_TYPE_PROG_ARRAY
A map of BPF programs. Lets one BPF program tail-call another. Useful for breaking long programs into pieces (the verifier has instruction count limits) or for plugin-style designs.
You’ll meet this when reading complex eBPF projects. You’ll rarely need it in your own code at first.
What can a map’s value be?
Anything plain-old-data. Integers, structs, fixed-size arrays. Not pointers (the kernel can’t trust them). Not variable-length data (use ringbuf for that).
The struct must have a predictable layout. If you share a struct between BPF and userspace (Go, Rust, C), align fields explicitly, or better, generate the userspace types with bpf2go or an equivalent tool so both sides agree by construction. A subtle layout mismatch doesn't error — it silently returns wrong values.
Reading from userspace doesn’t pause the kernel
Map updates from BPF are safe under heavy traffic. Map reads from userspace are also safe — they don’t lock the kernel side. But for counters incrementing millions of times per second, you’ll see torn reads if you read individual entries with naive code. Use atomic helpers or accept the imprecision; per-second rate is fine, exact instantaneous count is hard.
Practical advice
For a first BPF program, you almost always want one BPF_MAP_TYPE_PERCPU_ARRAY for counters and one BPF_MAP_TYPE_RINGBUF for events. Add hash maps when you need keyed state. Add LRU when the key space is unbounded.
If a tutorial uses BPF_MAP_TYPE_HASH for source IP tracking, the tutorial is wrong. Source IPs are unbounded; use LRU.