Merge tag 'trace-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "New feature:

   - A new "func-no-repeats" option in tracefs/options directory.

     When set, the function tracer detects whether the current function
     being traced is the same as the previous one, and instead of
     recording every occurrence, it keeps track of the number of times
     the function repeats in a row. When another function is then
     recorded, it writes a new event showing the function that
     repeated, the number of times it repeated, and the timestamp of
     when the last repeated function occurred (a minimal sketch of the
     idea follows below).
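
     A minimal sketch of the repeat-detection idea (kernel-context C;
     these names are hypothetical, not the in-tree implementation):

       struct func_repeat_state {
               unsigned long   last_ip;   /* address of last traced function */
               u64             count;     /* times it repeated in a row */
               u64             last_ts;   /* timestamp of last repetition */
       };

       /* Return true if this call repeats the previous function and
        * should be counted instead of recorded. */
       static bool func_repeat_check(struct func_repeat_state *s,
                                     unsigned long ip, u64 ts)
       {
               if (ip == s->last_ip) {
                       s->count++;        /* suppress the event, just count */
                       s->last_ts = ts;
                       return true;
               }
               /* New function: a func_repeats event carrying s->last_ip,
                * s->count and s->last_ts would be written here, before
                * recording the new function. */
               s->last_ip = ip;
               s->count = 0;
               return false;
       }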

  Enhancements:

   - In order to implement the above "func-no-repeats" option, the ring
     buffer can now give the accurate timestamp of an event as it is
     being recorded, instead of having to record an absolute timestamp
     for all events. This helps the histogram code, which no longer
     needs to waste ring buffer space.

   - New validation logic to make sure all trace events that access
     dereferenced pointers do so in a safe way, and will warn otherwise.

  Fixes:

   - No longer limit the PIDs of tasks that are recorded for
     "saved_cmdlines" to PID_MAX_DEFAULT (32768), as systemd now allows
     for a much larger range. The old limit caused the mapping of PIDs
     to task names to be dropped for all tasks with a PID greater than
     32768.

   - Change trace_clock_global() to never block; the blocking could
     cause a deadlock.

  Clean ups:

   - Typos, prototype fixes, and removal of duplicate or unused code.

   - Better management of ftrace_page allocations"

* tag 'trace-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (32 commits)
  tracing: Restructure trace_clock_global() to never block
  tracing: Map all PIDs to command lines
  ftrace: Reuse the output of the function tracer for func_repeats
  tracing: Add "func_no_repeats" option for function tracing
  tracing: Unify the logic for function tracing options
  tracing: Add method for recording "func_repeats" events
  tracing: Add "last_func_repeats" to struct trace_array
  tracing: Define new ftrace event "func_repeats"
  tracing: Define static void trace_print_time()
  ftrace: Simplify the calculation of page number for ftrace_page->records some more
  ftrace: Store the order of pages allocated in ftrace_page
  tracing: Remove unused argument from ring_buffer_time_stamp()
  tracing: Remove duplicate struct declaration in trace_events.h
  tracing: Update create_system_filter() kernel-doc comment
  tracing: A minor cleanup for create_system_filter()
  kernel: trace: Mundane typo fixes in the file trace_events_filter.c
  tracing: Fix various typos in comments
  scripts/recordmcount.pl: Make vim and emacs indent the same
  scripts/recordmcount.pl: Make indent spacing consistent
  tracing: Add a verifier to check string pointers for trace events
  ...
torvalds committed May 3, 2021
2 parents 6f8ee8d + aafe104 commit 9b1f61d
Showing 43 changed files with 1,187 additions and 297 deletions.
2 changes: 1 addition & 1 deletion arch/microblaze/include/asm/ftrace.h
@@ -13,7 +13,7 @@ extern void ftrace_call_graph(void);
#endif

#ifdef CONFIG_DYNAMIC_FTRACE
-/* reloction of mcount call site is the same as the address */
+/* relocation of mcount call site is the same as the address */
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
return addr;
2 changes: 1 addition & 1 deletion arch/nds32/kernel/ftrace.c
@@ -236,7 +236,7 @@ void __naked return_to_handler(void)
"bal ftrace_return_to_handler\n\t"
"move $lp, $r0 \n\t"

-/* restore state nedded by the ABI */
+/* restore state needed by the ABI */
"lmw.bim $r0,[$sp],$r1,#0x0 \n\t");
}

4 changes: 2 additions & 2 deletions arch/powerpc/include/asm/ftrace.h
@@ -12,7 +12,7 @@

#ifdef __ASSEMBLY__

-/* Based off of objdump optput from glibc */
+/* Based off of objdump output from glibc */

#define MCOUNT_SAVE_FRAME \
stwu r1,-48(r1); \
@@ -52,7 +52,7 @@ extern void _mcount(void);

static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
-/* reloction of mcount call site is the same as the address */
+/* relocation of mcount call site is the same as the address */
return addr;
}

2 changes: 1 addition & 1 deletion arch/sh/kernel/ftrace.c
@@ -67,7 +67,7 @@ static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
* Modifying code must take extra care. On an SMP machine, if
* the code being modified is also being executed on another CPU
* that CPU will have undefined results and possibly take a GPF.
-* We use kstop_machine to stop other CPUS from exectuing code.
+* We use kstop_machine to stop other CPUS from executing code.
* But this does not stop NMIs from happening. We still need
* to protect against that. We separate out the modification of
* the code to take care of this.
2 changes: 1 addition & 1 deletion arch/sparc/include/asm/ftrace.h
@@ -17,7 +17,7 @@ void _mcount(void);
#endif

#ifdef CONFIG_DYNAMIC_FTRACE
-/* reloction of mcount call site is the same as the address */
+/* relocation of mcount call site is the same as the address */
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
return addr;
2 changes: 1 addition & 1 deletion fs/tracefs/inode.c
@@ -477,7 +477,7 @@ struct dentry *tracefs_create_dir(const char *name, struct dentry *parent)
*
* The instances directory is special as it allows for mkdir and rmdir to
* to be done by userspace. When a mkdir or rmdir is performed, the inode
-* locks are released and the methhods passed in (@mkdir and @rmdir) are
+* locks are released and the methods passed in (@mkdir and @rmdir) are
* called without locks and with the name of the directory being created
* within the instances directory.
*
4 changes: 2 additions & 2 deletions include/linux/ftrace.h
@@ -33,7 +33,7 @@
/*
* If the arch's mcount caller does not support all of ftrace's
* features, then it must call an indirect function that
-* does. Or at least does enough to prevent any unwelcomed side effects.
+* does. Or at least does enough to prevent any unwelcome side effects.
*/
#if !ARCH_SUPPORTS_FTRACE_OPS
# define FTRACE_FORCE_LIST_FUNC 1
@@ -389,7 +389,7 @@ DECLARE_PER_CPU(int, disable_stack_tracer);
*/
static inline void stack_tracer_disable(void)
{
-/* Preemption or interupts must be disabled */
+/* Preemption or interrupts must be disabled */
if (IS_ENABLED(CONFIG_DEBUG_PREEMPT))
WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
this_cpu_inc(disable_stack_tracer);
5 changes: 3 additions & 2 deletions include/linux/ring_buffer.h
@@ -61,7 +61,8 @@ enum ring_buffer_type {

unsigned ring_buffer_event_length(struct ring_buffer_event *event);
void *ring_buffer_event_data(struct ring_buffer_event *event);
-u64 ring_buffer_event_time_stamp(struct ring_buffer_event *event);
+u64 ring_buffer_event_time_stamp(struct trace_buffer *buffer,
+				 struct ring_buffer_event *event);

/*
* ring_buffer_discard_commit will remove an event that has not
@@ -180,7 +181,7 @@ unsigned long ring_buffer_commit_overrun_cpu(struct trace_buffer *buffer, int cp
unsigned long ring_buffer_dropped_events_cpu(struct trace_buffer *buffer, int cpu);
unsigned long ring_buffer_read_events_cpu(struct trace_buffer *buffer, int cpu);

-u64 ring_buffer_time_stamp(struct trace_buffer *buffer, int cpu);
+u64 ring_buffer_time_stamp(struct trace_buffer *buffer);
void ring_buffer_normalize_time_stamp(struct trace_buffer *buffer,
int cpu, u64 *ts);
void ring_buffer_set_clock(struct trace_buffer *buffer,
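The prototype changes above alter how callers obtain timestamps. A hedged
sketch of the new calling convention (the two wrappers are invented for
illustration; only the ring_buffer_*() calls come from this header):

static u64 example_event_time(struct trace_buffer *buffer,
			      struct ring_buffer_event *event)
{
	/* Accurate timestamp of @event, now resolved through @buffer. */
	return ring_buffer_event_time_stamp(buffer, event);
}

static u64 example_buffer_now(struct trace_buffer *buffer)
{
	/* Current time of the buffer's clock; the cpu argument is gone. */
	return ring_buffer_time_stamp(buffer);
}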
25 changes: 25 additions & 0 deletions include/linux/seq_buf.h
@@ -71,6 +71,31 @@ static inline unsigned int seq_buf_used(struct seq_buf *s)
return min(s->len, s->size);
}

+/**
+ * seq_buf_terminate - Make sure buffer is nul terminated
+ * @s: the seq_buf descriptor to terminate.
+ *
+ * This makes sure that the buffer in @s is nul terminated and
+ * safe to read as a string.
+ *
+ * Note, if this is called when the buffer has overflowed, then
+ * the last byte of the buffer is zeroed, and the len will still
+ * point passed it.
+ *
+ * After this function is called, s->buffer is safe to use
+ * in string operations.
+ */
+static inline void seq_buf_terminate(struct seq_buf *s)
+{
+	if (WARN_ON(s->size == 0))
+		return;
+
+	if (seq_buf_buffer_left(s))
+		s->buffer[s->len] = 0;
+	else
+		s->buffer[s->size - 1] = 0;
+}

/**
* seq_buf_get_buf - get buffer to write arbitrary data to
* @s: the seq_buf handle
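A possible use of the new seq_buf_terminate() helper (a sketch: the function
below is hypothetical, while seq_buf_init() and seq_buf_printf() are existing
seq_buf API):

static void example_seq_buf(void)
{
	char buf[64];
	struct seq_buf s;

	seq_buf_init(&s, buf, sizeof(buf));
	seq_buf_printf(&s, "pid=%d", 42);

	/* Guarantee nul termination (even on overflow) before using
	 * buf as a C string. */
	seq_buf_terminate(&s);
	pr_info("%s\n", buf);
}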
8 changes: 4 additions & 4 deletions include/linux/trace_events.h
@@ -206,7 +206,7 @@ static inline unsigned int tracing_gen_ctx_dec(void)

trace_ctx = tracing_gen_ctx();
/*
-* Subtract one from the preeption counter if preemption is enabled,
+* Subtract one from the preemption counter if preemption is enabled,
* see trace_event_buffer_reserve()for details.
*/
if (IS_ENABLED(CONFIG_PREEMPTION))
@@ -404,7 +404,6 @@ trace_get_fields(struct trace_event_call *event_call)
return event_call->class->get_fields(event_call);
}

-struct trace_array;
struct trace_subsystem_dir;

enum {
@@ -640,7 +639,8 @@ enum event_trigger_type {
extern int filter_match_preds(struct event_filter *filter, void *rec);

extern enum event_trigger_type
-event_triggers_call(struct trace_event_file *file, void *rec,
+event_triggers_call(struct trace_event_file *file,
+		    struct trace_buffer *buffer, void *rec,
struct ring_buffer_event *event);
extern void
event_triggers_post_call(struct trace_event_file *file,
@@ -664,7 +664,7 @@ trace_trigger_soft_disabled(struct trace_event_file *file)

if (!(eflags & EVENT_FILE_FL_TRIGGER_COND)) {
if (eflags & EVENT_FILE_FL_TRIGGER_MODE)
-event_triggers_call(file, NULL, NULL);
+event_triggers_call(file, NULL, NULL, NULL);
if (eflags & EVENT_FILE_FL_SOFT_DISABLED)
return true;
if (eflags & EVENT_FILE_FL_PID_FILTER)
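A hypothetical call site reflecting the widened event_triggers_call()
prototype above (the wrapper is invented; the event_triggers_*() calls are
from this header, with the ring buffer now threaded through):

static void example_triggers(struct trace_event_file *file,
			     struct trace_buffer *buffer, void *rec,
			     struct ring_buffer_event *event)
{
	enum event_trigger_type tt;

	tt = event_triggers_call(file, buffer, rec, event);
	if (tt)
		event_triggers_post_call(file, tt);
}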
2 changes: 1 addition & 1 deletion include/linux/tracepoint.h
@@ -465,7 +465,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
* *
* * The declared 'local variable' is called '__entry'
* *
-* * __field(pid_t, prev_prid) is equivalent to a standard declariton:
+* * __field(pid_t, prev_prid) is equivalent to a standard declaration:
* *
* * pid_t prev_pid;
* *
2 changes: 1 addition & 1 deletion include/trace/events/io_uring.h
@@ -49,7 +49,7 @@ TRACE_EVENT(io_uring_create,
);

/**
-* io_uring_register - called after a buffer/file/eventfd was succesfully
+* io_uring_register - called after a buffer/file/eventfd was successfully
* registered for a ring
*
* @ctx: pointer to a ring context structure
2 changes: 1 addition & 1 deletion include/trace/events/rcu.h
@@ -48,7 +48,7 @@ TRACE_EVENT(rcu_utilization,
* RCU flavor, the grace-period number, and a string identifying the
* grace-period-related event as follows:
*
* "AccReadyCB": CPU acclerates new callbacks to RCU_NEXT_READY_TAIL.
* "AccReadyCB": CPU accelerates new callbacks to RCU_NEXT_READY_TAIL.
* "AccWaitCB": CPU accelerates new callbacks to RCU_WAIT_TAIL.
* "newreq": Request a new grace period.
* "start": Start a grace period.
2 changes: 1 addition & 1 deletion include/trace/events/sched.h
@@ -174,7 +174,7 @@ DEFINE_EVENT(sched_wakeup_template, sched_waking,
TP_ARGS(p));

/*
-* Tracepoint called when the task is actually woken; p->state == TASK_RUNNNG.
+* Tracepoint called when the task is actually woken; p->state == TASK_RUNNING.
* It is not always called from the waking context.
*/
DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
2 changes: 1 addition & 1 deletion include/trace/events/timer.h
@@ -119,7 +119,7 @@ TRACE_EVENT(timer_expire_entry,
* When used in combination with the timer_expire_entry tracepoint we can
* determine the runtime of the timer callback function.
*
-* NOTE: Do NOT derefernce timer in TP_fast_assign. The pointer might
+* NOTE: Do NOT dereference timer in TP_fast_assign. The pointer might
* be invalid. We solely track the pointer.
*/
DEFINE_EVENT(timer_class, timer_expire_exit,
6 changes: 3 additions & 3 deletions init/main.c
@@ -405,7 +405,7 @@ static int __init bootconfig_params(char *param, char *val,
return 0;
}

-static void __init setup_boot_config(const char *cmdline)
+static void __init setup_boot_config(void)
{
static char tmp_cmdline[COMMAND_LINE_SIZE] __initdata;
const char *msg;
@@ -472,7 +472,7 @@ static void __init setup_boot_config(const char *cmdline)

#else

-static void __init setup_boot_config(const char *cmdline)
+static void __init setup_boot_config(void)
{
/* Remove bootconfig data from initrd */
get_boot_config_from_initrd(NULL, NULL);
@@ -895,7 +895,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
pr_notice("%s", linux_banner);
early_security_init();
setup_arch(&command_line);
-setup_boot_config(command_line);
+setup_boot_config();
setup_command_line(command_line);
setup_nr_cpu_ids();
setup_per_cpu_areas();
4 changes: 2 additions & 2 deletions kernel/trace/fgraph.c
@@ -42,7 +42,7 @@ bool ftrace_graph_is_dead(void)
}

/**
-* ftrace_graph_stop - set to permanently disable function graph tracincg
+* ftrace_graph_stop - set to permanently disable function graph tracing
*
* In case of an error int function graph tracing, this is called
* to try to keep function graph tracing from causing any more harm.
@@ -117,7 +117,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,

/*
* Skip graph tracing if the return location is served by direct trampoline,
-* since call sequence and return addresses is unpredicatable anymore.
+* since call sequence and return addresses are unpredictable anyway.
* Ex: BPF trampoline may call original function and may skip frame
* depending on type of BPF programs attached.
*/