Set the dropping sampling ratio for each syscall individually #1309

Open · wants to merge 3 commits into base: master
2 changes: 1 addition & 1 deletion driver/API_VERSION
@@ -1 +1 @@
5.0.0
5.1.0
2 changes: 1 addition & 1 deletion driver/SCHEMA_VERSION
@@ -1 +1 @@
2.9.0
2.10.0
2 changes: 1 addition & 1 deletion driver/bpf/fillers.h
@@ -4939,7 +4939,7 @@ FILLER(sched_drop, false)
/*
* ratio
*/
return bpf_push_u32_to_ring(data, data->settings->sampling_ratio);
return bpf_push_u32_to_ring(data, 0);
Contributor:
If I'm correct, this is going to cause a lot of pain for us :( We actually do rely on getting the sampling ratio in drop events, and it would require a major redesign once there's no single sampling ratio.

It feels like a way forward would be to keep the notion of a single sampling ratio and disable it only when a consumer uses the per-syscall ioctl (and we'd never do that then).

something like:

if(settings->sampling_ratio != PER_SYSCALL_SAMPLING_RATIO)
{
    sampling_ratio = settings->sampling_ratio;
}
else
{
    sampling_ratio = settings->sampling_ratios[syscall_id];
}

and in the ioctl that sets per-syscall sampling ratios, just set settings->sampling_ratio = PER_SYSCALL_SAMPLING_RATIO as well
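A minimal, self-contained sketch of that sentinel approach (PER_SYSCALL_SAMPLING_RATIO, the struct, and the helpers below are hypothetical illustrations, not current driver API):

#include <stdint.h>

#define SYSCALL_TABLE_SIZE 512
/* Hypothetical sentinel: "per-syscall ratios are in effect". */
#define PER_SYSCALL_SAMPLING_RATIO 0

struct settings_sketch {
	uint32_t sampling_ratio;                      /* single global ratio */
	uint32_t sampling_ratios[SYSCALL_TABLE_SIZE]; /* per-syscall ratios */
};

/* What the drop logic and the drop-event fillers would report/use. */
static uint32_t effective_sampling_ratio(const struct settings_sketch *s, uint32_t syscall_id)
{
	if (s->sampling_ratio != PER_SYSCALL_SAMPLING_RATIO)
		return s->sampling_ratio;
	return s->sampling_ratios[syscall_id];
}

/* The per-syscall ioctl would switch the consumer into per-syscall mode. */
static void set_per_syscall_ratio(struct settings_sketch *s, uint32_t syscall_id, uint32_t ratio)
{
	s->sampling_ratios[syscall_id] = ratio;
	s->sampling_ratio = PER_SYSCALL_SAMPLING_RATIO;
}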

Author:
If there are both global and per-syscall ratios, which one should be reported here? The global one? Or the ratio of the syscall being sampled? In the kmod driver, it seems difficult to obtain the syscall being sampled, because its mechanism may delay the insertion of drop events.

}

/* In this kernel version the instruction limit was bumped to 1000000 */
17 changes: 12 additions & 5 deletions driver/bpf/plumbing_helpers.h
@@ -493,6 +493,7 @@ static __always_inline bool drop_event(void *ctx,
struct scap_bpf_settings *settings,
enum syscall_flags drop_flags)
{
long id;
if (!settings->dropping_mode)
return false;

@@ -563,19 +564,25 @@ static __always_inline bool drop_event(void *ctx,
if (drop_flags & UF_ALWAYS_DROP)
return true;

id = bpf_syscall_get_nr(ctx);
if (id < 0 || id >= SYSCALL_TABLE_SIZE)
{
return false;
}

if (state->tail_ctx.ts % 1000000000 >= 1000000000 /
settings->sampling_ratio) {
if (!settings->is_dropping) {
settings->is_dropping = true;
settings->sampling_ratio[id]) {
if (!settings->is_dropping[id]) {
settings->is_dropping[id] = true;
state->tail_ctx.evt_type = PPME_DROP_E;
return false;
}

return true;
}

if (settings->is_dropping) {
settings->is_dropping = false;
if (settings->is_dropping[id]) {
settings->is_dropping[id] = false;
state->tail_ctx.evt_type = PPME_DROP_X;
return false;
}
4 changes: 2 additions & 2 deletions driver/bpf/types.h
@@ -233,10 +233,10 @@ struct scap_bpf_settings {
uint64_t boot_time;
void *socket_file_ops;
uint32_t snaplen;
uint32_t sampling_ratio;
uint32_t sampling_ratio[SYSCALL_TABLE_SIZE];
bool do_dynamic_snaplen;
bool dropping_mode;
bool is_dropping;
bool is_dropping[SYSCALL_TABLE_SIZE];
bool drop_failed;
bool tracers_enabled;
uint16_t fullcapture_port_range_start;
95 changes: 74 additions & 21 deletions driver/main.c
@@ -394,6 +394,7 @@ static void check_remove_consumer(struct ppm_consumer_t *consumer, int remove_fr
static int ppm_open(struct inode *inode, struct file *filp)
{
int ret;
int i;
int in_list = false;
#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 20)
int ring_no = iminor(filp->f_path.dentry->d_inode);
@@ -531,9 +532,12 @@ static int ppm_open(struct inode *inode, struct file *filp)
*/
consumer->dropping_mode = 0;
consumer->snaplen = SNAPLEN;
consumer->sampling_ratio = 1;
consumer->sampling_interval = 0;
consumer->is_dropping = 0;
for (i = 0; i < SYSCALL_TABLE_SIZE; i++)
{
consumer->sampling_ratio[i] = 1;
consumer->sampling_interval[i] = 0;
consumer->is_dropping[i] = false;
}
consumer->do_dynamic_snaplen = false;
consumer->drop_failed = false;
consumer->need_to_insert_drop_e = 0;
@@ -956,17 +960,23 @@ static long ppm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
switch (cmd) {
case PPM_IOCTL_DISABLE_DROPPING_MODE:
{
int i;
vpr_info("PPM_IOCTL_DISABLE_DROPPING_MODE, consumer %p\n", consumer_id);

consumer->dropping_mode = 0;
consumer->sampling_interval = 1000000000;
consumer->sampling_ratio = 1;
for (i = 0; i < SYSCALL_TABLE_SIZE; i++)
{
consumer->sampling_interval[i] = 1000000000;
consumer->sampling_ratio[i] = 1;
consumer->is_dropping[i] = false;
}

ret = 0;
goto cleanup_ioctl;
}
case PPM_IOCTL_ENABLE_DROPPING_MODE:
{
int i;
u32 new_sampling_ratio;

consumer->dropping_mode = 1;
@@ -987,10 +997,13 @@ static long ppm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
goto cleanup_ioctl;
}

consumer->sampling_interval = 1000000000 / new_sampling_ratio;
consumer->sampling_ratio = new_sampling_ratio;
for (i = 0; i < SYSCALL_TABLE_SIZE; i++)
{
consumer->sampling_interval[i] = 1000000000 / new_sampling_ratio;
consumer->sampling_ratio[i] = new_sampling_ratio;
}

vpr_info("new sampling ratio: %d\n", new_sampling_ratio);
vpr_info("new default sampling ratio: %d\n", new_sampling_ratio);
Contributor:
This isn't really setting just the default; it overwrites any previously configured per-syscall sampling ratios.
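One possible way to address that, sketched with hypothetical field names (an explicit-override flag per syscall that the global ioctl respects):

#include <stdbool.h>
#include <stdint.h>

#define SYSCALL_TABLE_SIZE 512

struct consumer_sketch {
	uint32_t sampling_ratio[SYSCALL_TABLE_SIZE];
	uint32_t sampling_interval[SYSCALL_TABLE_SIZE];
	bool ratio_set_explicitly[SYSCALL_TABLE_SIZE]; /* set by the per-syscall ioctl */
};

static void apply_global_ratio(struct consumer_sketch *c, uint32_t new_ratio)
{
	int i;
	for (i = 0; i < SYSCALL_TABLE_SIZE; i++) {
		if (c->ratio_set_explicitly[i])
			continue; /* keep the per-syscall override */
		c->sampling_ratio[i] = new_ratio;
		c->sampling_interval[i] = 1000000000 / new_ratio;
	}
}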


ret = 0;
goto cleanup_ioctl;
@@ -1186,6 +1199,43 @@ static long ppm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
ret = 0;
goto cleanup_ioctl;
}
case PPM_IOCTL_SET_DROPPING_RATIO:
Contributor:
Bikeshedding names is always fun, but I'd rather keep the SAMPLING in the name here (e.g. PPM_IOCTL_SET_SYSCALL_SAMPLING_RATIO?). Otherwise I'd keep wondering if sampling_ratio and dropping_ratio are the same thing and why do we need two ioctls to manage them.

Also, please consider (not forcing this in any way, just please consider :)) passing a (two-u32) struct by pointer instead of unpacking the arguments from a single (by-value) u64.
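A hedged sketch of the struct-by-pointer variant (the struct, ioctl definition, and names below are illustrative, not the actual API in this PR):

/* Shared between userspace and the driver. */
struct ppm_syscall_sampling_arg {
	uint32_t syscall_id;     /* index into the syscall table */
	uint32_t sampling_ratio; /* power of two in 1..128 */
};

/* _IOW instead of _IO, so the argument is a pointer to the struct above. */
#define PPM_IOCTL_SET_SYSCALL_SAMPLING_RATIO \
	_IOW(PPM_IOCTL_MAGIC, 35, struct ppm_syscall_sampling_arg)

/* Kernel side, inside the ioctl handler: */
struct ppm_syscall_sampling_arg sarg;
if (copy_from_user(&sarg, (void __user *)arg, sizeof(sarg))) {
	ret = -EFAULT;
	goto cleanup_ioctl;
}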

{
u32 syscall_to_set = (arg >> 32) - SYSCALL_TABLE_ID0;
u32 new_sampling_ratio = (u32)arg;

vpr_info("PPM_IOCTL_SET_DROPPING_RATIO, syscall(%u), ratio(%u), consumer %p\n", syscall_to_set, new_sampling_ratio, consumer_id);

if (syscall_to_set >= SYSCALL_TABLE_SIZE) {
pr_err("invalid syscall %u\n", syscall_to_set);
Contributor:
This will appear in the kernel log without any extra context, so maybe add a few words here (for example, that we're in this particular ioctl). I'm sure there's precedent for cryptic log messages, but let's try to make things better one line at a time :)
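For example (purely illustrative wording):

pr_err("PPM_IOCTL_SET_DROPPING_RATIO: invalid syscall id %u\n", syscall_to_set);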

ret = -EINVAL;
goto cleanup_ioctl;
}

if ((g_syscall_table[syscall_to_set].flags & (UF_NEVER_DROP | UF_ALWAYS_DROP))) {
ret = -EPERM;
goto cleanup_ioctl;
}

if (new_sampling_ratio != 1 &&
new_sampling_ratio != 2 &&
new_sampling_ratio != 4 &&
new_sampling_ratio != 8 &&
new_sampling_ratio != 16 &&
new_sampling_ratio != 32 &&
new_sampling_ratio != 64 &&
new_sampling_ratio != 128) {
pr_err("invalid sampling ratio %u\n", new_sampling_ratio);
ret = -EINVAL;
goto cleanup_ioctl;
}

consumer->sampling_interval[syscall_to_set] = 1000000000 / new_sampling_ratio;
consumer->sampling_ratio[syscall_to_set] = new_sampling_ratio;
pr_info("new sampling ratio %u, %u\n", syscall_to_set, new_sampling_ratio);
ret = 0;
goto cleanup_ioctl;
}
default:
ret = -ENOTTY;
goto cleanup_ioctl;
@@ -1660,6 +1710,7 @@ static inline int drop_nostate_event(ppm_event_code event_type,
static inline int drop_event(struct ppm_consumer_t *consumer,
ppm_event_code event_type,
enum syscall_flags drop_flags,
long table_index,
nanoseconds ns,
struct pt_regs *regs)
{
@@ -1682,21 +1733,22 @@ static inline int drop_event(struct ppm_consumer_t *consumer,
ASSERT((drop_flags & UF_NEVER_DROP) == 0);
return 1;
}
if (table_index != -1) {
if (consumer->sampling_interval[table_index] < SECOND_IN_NS &&
/* do_div replaces ns2 with the quotient and returns the remainder */
do_div(ns2, SECOND_IN_NS) >= consumer->sampling_interval[table_index]) {
if (!consumer->is_dropping[table_index]) {
consumer->is_dropping[table_index] = true;
record_drop_e(consumer, ns, drop_flags);
}

if (consumer->sampling_interval < SECOND_IN_NS &&
/* do_div replaces ns2 with the quotient and returns the remainder */
do_div(ns2, SECOND_IN_NS) >= consumer->sampling_interval) {
if (consumer->is_dropping == 0) {
consumer->is_dropping = 1;
record_drop_e(consumer, ns, drop_flags);
return 1;
}

return 1;
}

if (consumer->is_dropping == 1) {
consumer->is_dropping = 0;
record_drop_x(consumer, ns, drop_flags);
if (consumer->is_dropping[table_index]) {
consumer->is_dropping[table_index] = false;
record_drop_x(consumer, ns, drop_flags);
}
}
}

@@ -1742,7 +1794,7 @@ static int record_event_consumer(struct ppm_consumer_t *consumer,
int drop = 1;
int32_t cbres = PPM_SUCCESS;
int cpu;
long table_index;
long table_index = -1;
int64_t retval;

if (tp_type < INTERNAL_EVENTS && !(consumer->tracepoints_attached & (1 << tp_type)))
@@ -1807,6 +1859,7 @@
if (drop_event(consumer,
event_type,
drop_flags,
table_index,
ns,
event_datap->event_info.syscall_data.regs))
return res;
12 changes: 6 additions & 6 deletions driver/modern_bpf/helpers/base/maps_getters.h
@@ -31,9 +31,9 @@ static __always_inline bool maps__get_dropping_mode()
return g_settings.dropping_mode;
}

static __always_inline uint32_t maps__get_sampling_ratio()
static __always_inline uint32_t maps__get_sampling_ratio(u32 syscall_id)
{
return g_settings.sampling_ratio;
return g_settings.sampling_ratio[syscall_id];
}

static __always_inline bool maps__get_drop_failed()
@@ -65,14 +65,14 @@ static __always_inline uint16_t maps__get_statsd_port()

/*=============================== KERNEL CONFIGS ===========================*/

static __always_inline bool maps__get_is_dropping()
static __always_inline bool maps__get_is_dropping(u32 syscall_id)
{
return is_dropping;
return is_dropping[syscall_id];
}

static __always_inline void maps__set_is_dropping(bool value)
static __always_inline void maps__set_is_dropping(u32 syscall_id, bool value)
{
is_dropping = value;
is_dropping[syscall_id] = value;
}

/*=============================== KERNEL CONFIGS ===========================*/
15 changes: 10 additions & 5 deletions driver/modern_bpf/helpers/interfaces/attached_programs.h
@@ -55,28 +55,33 @@ static __always_inline bool sampling_logic(void* ctx, u32 id, enum intrumentatio
return true;
}

if((bpf_ktime_get_boot_ns() % SECOND_TO_NS) >= (SECOND_TO_NS / maps__get_sampling_ratio()))
if (id < 0 || id >= SYSCALL_TABLE_SIZE)
{
return false;
}

if((bpf_ktime_get_boot_ns() % SECOND_TO_NS) >= (SECOND_TO_NS / maps__get_sampling_ratio(id)))
{
/* If we are starting the dropping phase we need to notify the userspace, otherwise, we
* simply drop our event.
* PLEASE NOTE: this logic is not per-CPU so it is best effort!
*/
if(!maps__get_is_dropping())
if(!maps__get_is_dropping(id))
{
/* Here we are not sure we can send the drop_e event to userspace
* if the buffer is full, but this is not essential even if we lose
* an iteration we will synchronize again the next time the logic is enabled.
*/
maps__set_is_dropping(true);
maps__set_is_dropping(id, true);
bpf_tail_call(ctx, &extra_event_prog_tail_table, T1_DROP_E);
bpf_printk("unable to tail call into 'drop_e' prog");
}
return true;
}

if(maps__get_is_dropping())
if(maps__get_is_dropping(id))
{
maps__set_is_dropping(false);
maps__set_is_dropping(id, false);
bpf_tail_call(ctx, &extra_event_prog_tail_table, T1_DROP_X);
bpf_printk("unable to tail call into 'drop_x' prog");
}
2 changes: 1 addition & 1 deletion driver/modern_bpf/maps/maps.h
@@ -83,7 +83,7 @@ __weak struct capture_settings g_settings;
* @brief Variable used only kernel side to understand when we need to send
* `DROP_E` and `DROP_X` events
*/
__weak bool is_dropping;
__weak bool is_dropping[SYSCALL_TABLE_SIZE];

/*=============================== BPF GLOBAL VARIABLES ===============================*/

@@ -22,7 +22,7 @@ int BPF_PROG(t1_drop_e)

/*=============================== COLLECT PARAMETERS ===========================*/

ringbuf__store_u32(&ringbuf, maps__get_sampling_ratio());
ringbuf__store_u32(&ringbuf, 0);
Contributor:
Same as for the other engines, we can't work without a reliable sampling ratio report (as seen by the driver) :<


/*=============================== COLLECT PARAMETERS ===========================*/

@@ -47,7 +47,7 @@ int BPF_PROG(t1_drop_x)

/*=============================== COLLECT PARAMETERS ===========================*/

ringbuf__store_u32(&ringbuf, maps__get_sampling_ratio());
ringbuf__store_u32(&ringbuf, 0);

/*=============================== COLLECT PARAMETERS ===========================*/

3 changes: 2 additions & 1 deletion driver/modern_bpf/shared_definitions/struct_definitions.h
@@ -17,6 +17,7 @@
*/
#define AUXILIARY_MAP_SIZE 128 * 1024

#define SYSCALL_TABLE_SIZE 512
/**
* @brief General settings shared among all the CPUs.
*
@@ -26,7 +27,7 @@ struct capture_settings
uint64_t boot_time; /* boot time. */
uint32_t snaplen; /* we use it when we want to read a maximum size from an event and no more. */
bool dropping_mode; /* this flag actives the sampling logic */
uint32_t sampling_ratio; /* this config tells tracepoints when they have to drop events */
uint32_t sampling_ratio[SYSCALL_TABLE_SIZE]; /* this config tells tracepoints when they have to drop events */
bool drop_failed; /* whether to drop failed syscalls (exit events) */
bool do_dynamic_snaplen; /* enforce snaplen according to the event content */
uint16_t fullcapture_port_range_start; /* first interesting port */
6 changes: 3 additions & 3 deletions driver/ppm_consumer.h
@@ -21,10 +21,10 @@ struct ppm_consumer_t {
struct ppm_ring_buffer_context *ring_buffers;
#endif
u32 snaplen;
u32 sampling_ratio;
uint16_t sampling_ratio[SYSCALL_TABLE_SIZE];
bool do_dynamic_snaplen;
u32 sampling_interval;
int is_dropping;
u32 sampling_interval[SYSCALL_TABLE_SIZE];
bool is_dropping[SYSCALL_TABLE_SIZE];
int dropping_mode;
bool drop_failed;
volatile int need_to_insert_drop_e;
1 change: 1 addition & 0 deletions driver/ppm_events_public.h
@@ -2076,6 +2076,7 @@ struct ppm_evt_hdr {
#define PPM_IOCTL_DISABLE_TP _IO(PPM_IOCTL_MAGIC, 32)
#define PPM_IOCTL_ENABLE_DROPFAILED _IO(PPM_IOCTL_MAGIC, 33)
#define PPM_IOCTL_DISABLE_DROPFAILED _IO(PPM_IOCTL_MAGIC, 34)
#define PPM_IOCTL_SET_DROPPING_RATIO _IO(PPM_IOCTL_MAGIC, 35)
#endif // CYGWING_AGENT

extern const struct ppm_name_value socket_families[];
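For reference, based on the argument unpacking in the driver/main.c hunk above ((arg >> 32) - SYSCALL_TABLE_ID0 for the syscall, the low 32 bits for the ratio), a userspace caller would presumably pack the two values along these lines (a sketch, not code from this PR; the helper name and the exact header providing SYSCALL_TABLE_ID0 are assumptions):

#include <stdint.h>
#include <sys/ioctl.h>
/* PPM_IOCTL_SET_DROPPING_RATIO and SYSCALL_TABLE_ID0 come from the driver headers. */

static int set_syscall_dropping_ratio(int devfd, uint32_t syscall_nr, uint32_t ratio)
{
	/* High 32 bits: the syscall, offset by SYSCALL_TABLE_ID0 (which the driver subtracts).
	 * Low 32 bits: the sampling ratio, a power of two between 1 and 128. */
	uint64_t arg = ((uint64_t)(syscall_nr + SYSCALL_TABLE_ID0) << 32) | ratio;
	return ioctl(devfd, PPM_IOCTL_SET_DROPPING_RATIO, arg);
}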
2 changes: 1 addition & 1 deletion driver/ppm_fillers.c
@@ -4525,7 +4525,7 @@ int f_sched_drop(struct event_filler_arguments *args)
/*
* ratio
*/
res = val_to_ring(args, args->consumer->sampling_ratio, 0, false, 0);
res = val_to_ring(args, 0, 0, false, 0);
Contributor:
Same. We need it :(

if (unlikely(res != PPM_SUCCESS))
return res;
