Consuming iterator for Hashmap types #1125

Open
esemeniuc opened this issue Dec 28, 2024 · 1 comment

@esemeniuc
I'm trying to log per-IP packet counts, incrementing them in a HashMap. Currently I can only use .iter() to read the current values and then delete entries with a separate call to remove() afterwards. This is not ideal, since counts added between the read and the subsequent delete are lost. Something like drain, iter_mut, or some kind of atomic swap would be ideal.
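
For reference, the racy pattern looks roughly like this (a minimal sketch, assuming a HASH map named REQUEST_COUNTERS keyed by big-endian u32 IPv4 addresses with u64 counts, and a loaded aya object named `bpf`):

    use std::net::Ipv4Addr;

    use aya::maps::HashMap;

    let mut counters: HashMap<_, u32, u64> =
        HashMap::try_from(bpf.map_mut("REQUEST_COUNTERS").unwrap()).unwrap();

    // Step 1: read a snapshot of all entries.
    let snapshot: Vec<(u32, u64)> = counters.iter().filter_map(Result::ok).collect();

    // Step 2: delete what was just read. Any increments the eBPF program
    // makes between step 1 and step 2 are silently lost.
    for (key, count) in &snapshot {
        println!("{}: {count}", Ipv4Addr::from(u32::from_be(*key)));
        let _ = counters.remove(key);
    }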

@esemeniuc (Author)

The current workaround is to extract the map's file descriptor and use libbpf-sys to fetch and delete entries in batches:

    use std::net::Ipv4Addr;
    use std::os::raw::{c_int, c_void};
    use std::str::FromStr;
    use std::time::Duration;

    use libbpf_sys::{bpf_map_batch_opts, bpf_map_lookup_and_delete_batch, __u32};
    use log::warn; // or tracing::warn, depending on the logging crate in use

    let request_counters_map = bpf.map_mut("REQUEST_COUNTERS").unwrap();
    // Hack: scrape the raw fd out of the map's Debug representation, since
    // it isn't otherwise exposed here.
    let map_fd = {
        let re = regex::Regex::new(r"fd:\s*(\d+)").unwrap();
        let map_info = format!("{request_counters_map:?}");
        let map_fd = u32::from_str(&re.captures(&map_info).expect("No match found.")[1])
            .expect("Failed to parse map fd") as c_int;
        dbg!(map_info, map_fd);
        map_fd
    };

    tokio::spawn(async move {
        const FETCH_INTERVAL: Duration = Duration::from_secs(5);
        let mut ip_stats = Vec::new();

        const BATCH_SIZE: usize = 64;

        loop {
            tokio::time::sleep(FETCH_INTERVAL).await;
            ip_stats.clear();

            // Opaque batch cursors for the kernel's batch API; starting from
            // 0 begins at the start of the map (the kernel treats a NULL
            // in_batch the same way for hash maps).
            let mut in_batch: u32 = 0;
            let mut out_batch: u32 = 0;
            let mut keys = [0u32; BATCH_SIZE];
            let mut values = [0u64; BATCH_SIZE];

            // Repeatedly call bpf_map_lookup_and_delete_batch until ENOENT
            // to retrieve *and remove* all entries in the map.
            loop {
                // `count` is in/out: the capacity we offer going in, the
                // number of entries actually returned coming out.
                let mut count = BATCH_SIZE as __u32;
                let ret = unsafe {
                    bpf_map_lookup_and_delete_batch(
                        map_fd,
                        // in_batch / out_batch must be passed as pointers:
                        &mut in_batch as *mut u32 as *mut c_void,
                        &mut out_batch as *mut u32 as *mut c_void,
                        keys.as_mut_ptr() as *mut c_void,
                        values.as_mut_ptr() as *mut c_void,
                        &mut count,
                        std::ptr::null::<bpf_map_batch_opts>(),
                    )
                };

                // Process the returned elements. `count` is updated even when
                // the call fails with -ENOENT, so consume it before checking
                // `ret`.
                for i in 0..count as usize {
                    // Convert from big-endian IPv4 to Rust’s Ipv4Addr
                    let ip = Ipv4Addr::from(u32::from_be(keys[i]));
                    ip_stats.push((ip, values[i]));
                }

                if ret < 0 {
                    // ENOENT means we've consumed all keys
                    if ret == -libc::ENOENT {
                        break;
                    } else if ret == -libc::EFAULT {
                        // see: https://libbpf.readthedocs.io/en/latest/api.html#:~:text=LIBBPF_API%20int-,bpf_map_lookup_and_delete_batch
                        warn!("Lost {count} entries (deleted without being returned)");
                        break;
                    } else {
                        warn!("Error calling bpf_map_lookup_and_delete_batch: {ret}");
                        break;
                    }
                }

                // Prepare for next batch call
                in_batch = out_batch;
            }

            // `ip_stats` now holds everything drained in this interval;
            // log or export it from here.
        }
    });
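
For reuse, the same batch call can be factored into a standalone helper, e.g. (a sketch under the same assumptions; `drain_u32_u64_map` is a hypothetical name, not an aya or libbpf API):

    use std::os::raw::{c_int, c_void};

    use libbpf_sys::{bpf_map_batch_opts, bpf_map_lookup_and_delete_batch, __u32};

    /// Drain all (key, value) pairs from a u32 -> u64 HASH map, returning
    /// everything that was removed.
    fn drain_u32_u64_map(map_fd: c_int) -> Vec<(u32, u64)> {
        const BATCH_SIZE: usize = 64;
        let mut out = Vec::new();
        let mut in_batch: u32 = 0;
        let mut out_batch: u32 = 0;
        let mut keys = [0u32; BATCH_SIZE];
        let mut values = [0u64; BATCH_SIZE];

        loop {
            let mut count = BATCH_SIZE as __u32;
            let ret = unsafe {
                bpf_map_lookup_and_delete_batch(
                    map_fd,
                    &mut in_batch as *mut u32 as *mut c_void,
                    &mut out_batch as *mut u32 as *mut c_void,
                    keys.as_mut_ptr() as *mut c_void,
                    values.as_mut_ptr() as *mut c_void,
                    &mut count,
                    std::ptr::null::<bpf_map_batch_opts>(),
                )
            };
            // The final call returns -ENOENT but may still hand back a
            // partial batch, so collect before checking `ret`.
            out.extend((0..count as usize).map(|i| (keys[i], values[i])));
            if ret < 0 {
                // -ENOENT: map fully drained; other errors also stop here.
                break;
            }
            // Continue from where the previous batch left off.
            in_batch = out_batch;
        }
        out
    }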
