Mirror of https://github.com/torvalds/linux.git, synced 2026-03-08 01:04:41 +01:00
Merge tag 'trace-v7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing updates from Steven Rostedt:
"User visible changes:
- Add an entry into MAINTAINERS file for RUST versions of code
There's now RUST code for tracing and static branches. To
differentiate that code from the C code, add entries in for the
RUST version (with "[RUST]" around it) so that the right
maintainers get notified on changes.
- New bitmask-list option added to tracefs
When this is set, bitmasks in trace events are not displayed as hex
numbers, but instead as lists: e.g. 0-4,6,8 instead of 0000015f
- New show_event_filters file in tracefs
Instead of having to search all events/*/*/filter for any active
filters enabled in the trace instance, the file show_event_filters
will list them so that there's only one file that needs to be
examined to see if any filters are active.
- New show_event_triggers file in tracefs
Instead of having to search all events/*/*/trigger for any active
triggers enabled in the trace instance, the file
show_event_triggers will list them so that there's only one file
that needs to be examined to see if any triggers are active.
- Have traceoff_on_warning disable the trace_printk buffer too
Recently, trace_printk() output could be directed to trace instances
other than the top-level instance. But if traceoff_on_warning
triggers, it doesn't stop the buffer that trace_printk() writes to,
and that data can easily be lost by being overwritten. Have
traceoff_on_warning also disable the instance that trace_printk()
is being written to.
- Update the hist_debug file to show what function the field uses
When CONFIG_HIST_TRIGGERS_DEBUG is enabled, a hist_debug file
exists for every event. This displays the internal data of any
histogram enabled for that event. But it was missing the function
that is called to process each of its fields, which is very useful
information when debugging histograms.
- Up the histogram stack size from 16 to 31
Stack traces can be used as keys for event histograms. Currently
the size of the stack that is stored is limited to just 16 entries.
But the storage space in the histogram is 256 bytes, meaning that
it can store up to 31 entries (plus one for the count of entries).
Instead of letting that space go to waste, up the limit from 16 to
31. This makes the keys much more useful.
- Fix permissions of per CPU file buffer_size_kb
The per-CPU buffer_size_kb file was incorrectly set to read-only
in a previous cleanup. It should be writable.
- Reset "last_boot_info" if the persistent buffer is cleared
The last_boot_info shows address information of a persistent ring
buffer if it contains data from a previous boot. It is cleared when
recording starts again, but it is not cleared when the buffer is
reset. The data is useless after a reset so clear it on reset too.
Internal changes:
- A change was made to allow tracepoint callbacks to have preemption
enabled, and instead be protected by SRCU. This required some
updates to the callbacks for perf and BPF.
perf needed to disable preemption directly in its callback because
it expects preemption disabled in the later code.
BPF needed to disable migration, as its code expects to run
completely on the same CPU.
- Have irq_work wake up other CPU if current CPU is "isolated"
When there's a waiter waiting on ring buffer data and a new event
happens, an irq work is triggered to wake up that waiter. This is
noisy on isolated CPUs (running NO_HZ_FULL). Trigger an IPI to a
house keeping CPU instead.
- Use proper free of trigger_data instead of open coding it in.
- Remove redundant call of event_trigger_reset_filter()
It was called again at the start of a function that was invoked
immediately afterward, making the first call redundant.
- Workqueue cleanups
- Report errors if tracing_update_buffers() were to fail.
- Make the enum update workqueue generic for other parts of tracing
On boot up, a work queue is created to convert enum names into
their numbers in the trace event format files. This work queue can
also be used for other aspects of tracing that take some time and
shouldn't run from the init call code.
The blk_trace initialization takes a bit of time. Have the
initialization code moved to the new tracing generic work queue
function.
- Skip kprobe boot event creation call if there's no kprobes defined
on cmdline
The kprobe initialization to set up kprobes if they are defined on
the cmdline requires taking the event_mutex lock. This can be held
by other tracing code doing initialization for a long time. Since
kprobes added to the kernel command line need to be set up
immediately, as they may be tracing early initialization code, they
cannot be postponed to a work queue and must be set up in the
initcall code.
If there's no kprobe on the kernel cmdline, there's no reason to
take the mutex and slow down the boot up code waiting to get the
lock only to find out there's nothing to do. Simply exit early
if there are no kprobes on the kernel cmdline.
If there are kprobes on the cmdline, then someone cares more about
tracing than about boot speed.
- Clean up the trigger code a bit
- Move code out of trace.c and into their own files
trace.c is now over 11,000 lines of code and has become more
difficult to maintain. Start splitting it up so that related code
lives in its own files.
Move all the trace_printk() related code into trace_printk.c.
Move the __always_inline stack functions into trace.h.
Move the pid filtering code into a new trace_pid.c file.
- Better define the max latency and snapshot code
The latency tracers have a "max latency" buffer that is a copy of
the main buffer and gets swapped with it when a new high latency is
detected. This preserves the trace leading up to the highest latency
seen so far; the max_latency buffer itself is never written to and is
only used to save the last max-latency trace.
A while ago a snapshot feature was added to tracefs to allow user
space to perform the same logic. It could also enable events to
trigger a "snapshot" if one of their fields hit a new high. This
was built on top of the latency max_latency buffer logic.
Because snapshots came later, they were dependent on the latency
tracers to be enabled. In reality, the latency tracers depend on
the snapshot code and not the other way around. It was just that
they came first.
Restructure the code and the kconfigs to have the latency tracers
depend on snapshot code instead. This actually simplifies the logic
a bit and allows more to be disabled when the latency tracers are
not configured but the snapshot code is.
- Fix a "false sharing" in the hwlat tracer code
The loop to search for latency in hardware was using a variable
that could be changed by user space for each sample. If the user
changes this variable, it can cause bus contention, and reading
that variable can show up as a large latency in the trace, causing a
false positive. Read this variable at the start of the sample with
a READ_ONCE() into a local variable and keep the code from sharing
cache lines with readers.
- Fix function graph tracer static branch optimization code
When only one tracer is defined for function graph tracing, it uses
a static branch to call that tracer directly. When another tracer
is added, it goes into loop logic to call all the registered
callbacks.
The code was incorrect when going back to one tracer: it never
re-enabled the static branch to restore the optimization.
- And other small fixes and cleanups"
* tag 'trace-v7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (46 commits)
function_graph: Restore direct mode when callbacks drop to one
tracing: Fix indentation of return statement in print_trace_fmt()
tracing: Reset last_boot_info if ring buffer is reset
tracing: Fix to set write permission to per-cpu buffer_size_kb
tracing: Fix false sharing in hwlat get_sample()
tracing: Move d_max_latency out of CONFIG_FSNOTIFY protection
tracing: Better separate SNAPSHOT and MAX_TRACE options
tracing: Add tracer_uses_snapshot() helper to remove #ifdefs
tracing: Rename trace_array field max_buffer to snapshot_buffer
tracing: Move pid filtering into trace_pid.c
tracing: Move trace_printk functions out of trace.c and into trace_printk.c
tracing: Use system_state in trace_printk_init_buffers()
tracing: Have trace_printk functions use flags instead of using global_trace
tracing: Make tracing_update_buffers() take NULL for global_trace
tracing: Make printk_trace global for tracing system
tracing: Move ftrace_trace_stack() out of trace.c and into trace.h
tracing: Move __trace_buffer_{un}lock_*() functions to trace.h
tracing: Make tracing_selftest_running global to the tracing subsystem
tracing: Make tracing_disabled global for tracing system
tracing: Clean up use of trace_create_maxlat_file()
...
commit 3c6e577d5a
31 changed files with 1392 additions and 1077 deletions
Documentation/trace/ftrace.rst:

@@ -684,6 +684,22 @@ of ftrace. Here is a list of some of the key files:

 	See events.rst for more information.

+  show_event_filters:
+
+	A list of events that have filters. This shows the
+	system/event pair along with the filter that is attached to
+	the event.
+
+	See events.rst for more information.
+
+  show_event_triggers:
+
+	A list of events that have triggers. This shows the
+	system/event pair along with the trigger that is attached to
+	the event.
+
+	See events.rst for more information.
+
   available_events:

 	A list of events that can be enabled in tracing.
Documentation/trace/ftrace.rst:

@@ -1290,6 +1306,15 @@ Here are the available options:

 	This will be useful if you want to find out which hashed
 	value is corresponding to the real value in trace log.

+  bitmask-list
+	When enabled, bitmasks are displayed as a human-readable list of
+	ranges (e.g., 0,2-5,7) using the printk "%*pbl" format specifier.
+	When disabled (the default), bitmasks are displayed in the
+	traditional hexadecimal bitmap representation. The list format is
+	particularly useful for tracing CPU masks and other large bitmasks
+	where individual bit positions are more meaningful than their
+	hexadecimal encoding.
+
   record-cmd
 	When any event or tracer is enabled, a hook is enabled
 	in the sched_switch trace point to fill comm cache
MAINTAINERS (15 changed lines):
@@ -25250,6 +25250,7 @@ STATIC BRANCH/CALL
 M:	Peter Zijlstra <peterz@infradead.org>
 M:	Josh Poimboeuf <jpoimboe@kernel.org>
 M:	Jason Baron <jbaron@akamai.com>
+M:	Alice Ryhl <aliceryhl@google.com>
 R:	Steven Rostedt <rostedt@goodmis.org>
 R:	Ard Biesheuvel <ardb@kernel.org>
 S:	Supported
@@ -25261,6 +25262,9 @@ F: include/linux/jump_label*.h
 F:	include/linux/static_call*.h
 F:	kernel/jump_label.c
 F:	kernel/static_call*.c
+F:	rust/helpers/jump_label.c
+F:	rust/kernel/generated_arch_static_branch_asm.rs.S
+F:	rust/kernel/jump_label.rs

 STI AUDIO (ASoC) DRIVERS
 M:	Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
@@ -26727,6 +26731,17 @@ F: scripts/tracing/
 F:	scripts/tracepoint-update.c
 F:	tools/testing/selftests/ftrace/

+TRACING [RUST]
+M:	Alice Ryhl <aliceryhl@google.com>
+M:	Steven Rostedt <rostedt@goodmis.org>
+R:	Masami Hiramatsu <mhiramat@kernel.org>
+R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+L:	linux-trace-kernel@vger.kernel.org
+L:	rust-for-linux@vger.kernel.org
+S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
+F:	rust/kernel/tracepoint.rs
+
 TRACING MMIO ACCESSES (MMIOTRACE)
 M:	Steven Rostedt <rostedt@goodmis.org>
 M:	Masami Hiramatsu <mhiramat@kernel.org>
include/linux/trace_events.h:

@@ -38,7 +38,10 @@ const char *trace_print_symbols_seq_u64(struct trace_seq *p,
 		*symbol_array);
 #endif

-const char *trace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
+struct trace_iterator;
+struct trace_event;
+
+const char *trace_print_bitmask_seq(struct trace_iterator *iter, void *bitmask_ptr,
 				    unsigned int bitmask_size);

 const char *trace_print_hex_seq(struct trace_seq *p,

@@ -54,9 +57,6 @@ trace_print_hex_dump_seq(struct trace_seq *p, const char *prefix_str,
 		int prefix_type, int rowsize, int groupsize,
 		const void *buf, size_t len, bool ascii);

-struct trace_iterator;
-struct trace_event;
-
 int trace_raw_output_prep(struct trace_iterator *iter,
 			  struct trace_event *event);
 extern __printf(2, 3)
include/linux/trace_seq.h:

@@ -114,7 +114,11 @@ extern void trace_seq_putmem_hex(struct trace_seq *s, const void *mem,
 extern int trace_seq_path(struct trace_seq *s, const struct path *path);

 extern void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
 			      int nmaskbits);

+extern void trace_seq_bitmask_list(struct trace_seq *s,
+				   const unsigned long *maskp,
+				   int nmaskbits);
+
 extern int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str,
 			      int prefix_type, int rowsize, int groupsize,

@@ -137,6 +141,12 @@ trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
 {
 }

+static inline void
+trace_seq_bitmask_list(struct trace_seq *s, const unsigned long *maskp,
+		       int nmaskbits)
+{
+}
+
 static inline int trace_print_seq(struct seq_file *m, struct trace_seq *s)
 {
 	return 0;
include/linux/tracepoint.h:

@@ -108,14 +108,15 @@ void for_each_tracepoint_in_module(struct module *mod,
 * An alternative is to use the following for batch reclaim associated
 * with a given tracepoint:
 *
- * - tracepoint_is_faultable() == false: call_rcu()
+ * - tracepoint_is_faultable() == false: call_srcu()
 * - tracepoint_is_faultable() == true: call_rcu_tasks_trace()
 */
 #ifdef CONFIG_TRACEPOINTS
+extern struct srcu_struct tracepoint_srcu;
 static inline void tracepoint_synchronize_unregister(void)
 {
 	synchronize_rcu_tasks_trace();
-	synchronize_rcu();
+	synchronize_srcu(&tracepoint_srcu);
 }
 static inline bool tracepoint_is_faultable(struct tracepoint *tp)
 {

@@ -275,13 +276,13 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 	return static_branch_unlikely(&__tracepoint_##name.key);\
 }

 #define __DECLARE_TRACE(name, proto, args, cond, data_proto)	\
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
 	static inline void __do_trace_##name(proto)		\
 	{							\
 		TRACEPOINT_CHECK(name)				\
 		if (cond) {					\
-			guard(preempt_notrace)();		\
+			guard(srcu_fast_notrace)(&tracepoint_srcu); \
 			__DO_TRACE_CALL(name, TP_ARGS(args));	\
 		}						\
 	}							\
include/trace/perf.h:

@@ -71,6 +71,7 @@ perf_trace_##call(void *__data, proto)			\
 	u64 __count __attribute__((unused));		\
 	struct task_struct *__task __attribute__((unused)); \
							\
+	guard(preempt_notrace)();			\
 	do_perf_trace_##call(__data, args);		\
 }

@@ -85,9 +86,8 @@ perf_trace_##call(void *__data, proto)			\
 	struct task_struct *__task __attribute__((unused)); \
							\
 	might_fault();					\
-	preempt_disable_notrace();			\
+	guard(preempt_notrace)();			\
 	do_perf_trace_##call(__data, args);		\
-	preempt_enable_notrace();			\
 }

 /*
include/trace/stages/stage3_trace_output.h:

@@ -39,7 +39,7 @@
 		void *__bitmask = __get_dynamic_array(field);		\
 		unsigned int __bitmask_size;				\
 		__bitmask_size = __get_dynamic_array_len(field);	\
-		trace_print_bitmask_seq(p, __bitmask, __bitmask_size);	\
+		trace_print_bitmask_seq(iter, __bitmask, __bitmask_size); \
 	})

 #undef __get_cpumask

@@ -51,7 +51,7 @@
 		void *__bitmask = __get_rel_dynamic_array(field);	\
 		unsigned int __bitmask_size;				\
 		__bitmask_size = __get_rel_dynamic_array_len(field);	\
-		trace_print_bitmask_seq(p, __bitmask, __bitmask_size);	\
+		trace_print_bitmask_seq(iter, __bitmask, __bitmask_size); \
 	})

 #undef __get_rel_cpumask
include/trace/trace_events.h:

@@ -436,6 +436,7 @@ __DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
 static notrace void						\
 trace_event_raw_event_##call(void *__data, proto)		\
 {								\
+	guard(preempt_notrace)();				\
 	do_trace_event_raw_event_##call(__data, args);		\
 }

@@ -447,9 +448,8 @@ static notrace void					\
 trace_event_raw_event_##call(void *__data, proto)		\
 {								\
 	might_fault();						\
-	preempt_disable_notrace();				\
+	guard(preempt_notrace)();				\
 	do_trace_event_raw_event_##call(__data, args);		\
-	preempt_enable_notrace();				\
 }

 /*
kernel/rcu/srcutree.c:

@@ -789,7 +789,8 @@ void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor)
 	struct srcu_data *sdp;

 	/* NMI-unsafe use in NMI is a bad sign, as is multi-bit read_flavor values. */
-	WARN_ON_ONCE((read_flavor != SRCU_READ_FLAVOR_NMI) && in_nmi());
+	WARN_ON_ONCE(read_flavor != SRCU_READ_FLAVOR_NMI &&
+		     read_flavor != SRCU_READ_FLAVOR_FAST && in_nmi());
 	WARN_ON_ONCE(read_flavor & (read_flavor - 1));

 	sdp = raw_cpu_ptr(ssp->sda);
kernel/trace/Kconfig:

@@ -136,6 +136,7 @@ config BUILDTIME_MCOUNT_SORT

 config TRACER_MAX_TRACE
 	bool
+	select TRACER_SNAPSHOT

 config TRACE_CLOCK
 	bool

@@ -425,7 +426,6 @@ config IRQSOFF_TRACER
 	select GENERIC_TRACER
 	select TRACER_MAX_TRACE
 	select RING_BUFFER_ALLOW_SWAP
-	select TRACER_SNAPSHOT
 	select TRACER_SNAPSHOT_PER_CPU_SWAP
 	help
 	  This option measures the time spent in irqs-off critical

@@ -448,7 +448,6 @@ config PREEMPT_TRACER
 	select GENERIC_TRACER
 	select TRACER_MAX_TRACE
 	select RING_BUFFER_ALLOW_SWAP
-	select TRACER_SNAPSHOT
 	select TRACER_SNAPSHOT_PER_CPU_SWAP
 	select TRACE_PREEMPT_TOGGLE
 	help

@@ -470,7 +469,6 @@ config SCHED_TRACER
 	select GENERIC_TRACER
 	select CONTEXT_SWITCH_TRACER
 	select TRACER_MAX_TRACE
-	select TRACER_SNAPSHOT
 	help
 	  This tracer tracks the latency of the highest priority task
 	  to be scheduled in, starting from the point it has woken up.

@@ -620,7 +618,6 @@ config TRACE_SYSCALL_BUF_SIZE_DEFAULT

 config TRACER_SNAPSHOT
 	bool "Create a snapshot trace buffer"
-	select TRACER_MAX_TRACE
 	help
 	  Allow tracing users to take snapshot of the current buffer using the
 	  ftrace interface, e.g.:

@@ -628,6 +625,9 @@ config TRACER_SNAPSHOT
 	      echo 1 > /sys/kernel/tracing/snapshot
 	      cat snapshot

+	  Note, the latency tracers select this option. To disable it,
+	  all the latency tracers need to be disabled.
+
 config TRACER_SNAPSHOT_PER_CPU_SWAP
 	bool "Allow snapshot to swap per CPU"
 	depends on TRACER_SNAPSHOT
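In Kconfig terms, the inversion described in the pull message reduces to the shape below: the snapshot option stands alone, and the max-latency machinery and the latency tracers select it indirectly (a simplified sketch, not the full file):

```kconfig
config TRACER_SNAPSHOT
	bool "Create a snapshot trace buffer"

config TRACER_MAX_TRACE
	bool
	select TRACER_SNAPSHOT

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	select TRACER_MAX_TRACE
```

With this shape, enabling IRQSOFF_TRACER pulls in TRACER_MAX_TRACE and, through it, TRACER_SNAPSHOT, while TRACER_SNAPSHOT alone no longer drags in the max-latency code.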
kernel/trace/Makefile:

@@ -68,6 +68,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
 obj-$(CONFIG_TRACING) += trace_seq.o
 obj-$(CONFIG_TRACING) += trace_stat.o
+obj-$(CONFIG_TRACING) += trace_printk.o
+obj-$(CONFIG_TRACING) += trace_pid.o
 obj-$(CONFIG_TRACING) += pid_list.o
 obj-$(CONFIG_TRACING_MAP) += tracing_map.o
 obj-$(CONFIG_PREEMPTIRQ_DELAY_TEST) += preemptirq_delay_test.o
kernel/trace/blktrace.c:

@@ -1832,7 +1832,9 @@ static struct trace_event trace_blk_event = {
 	.funcs = &trace_blk_event_funcs,
 };

-static int __init init_blk_tracer(void)
+static struct work_struct blktrace_works __initdata;
+
+static int __init __init_blk_tracer(void)
 {
 	if (!register_trace_event(&trace_blk_event)) {
 		pr_warn("Warning: could not register block events\n");

@@ -1852,6 +1854,25 @@ static int __init init_blk_tracer(void)
 	return 0;
 }

+static void __init blktrace_works_func(struct work_struct *work)
+{
+	__init_blk_tracer();
+}
+
+static int __init init_blk_tracer(void)
+{
+	int ret = 0;
+
+	if (trace_init_wq) {
+		INIT_WORK(&blktrace_works, blktrace_works_func);
+		queue_work(trace_init_wq, &blktrace_works);
+	} else {
+		ret = __init_blk_tracer();
+	}
+
+	return ret;
+}
+
 device_initcall(init_blk_tracer);

 static int blk_trace_remove_queue(struct request_queue *q)
kernel/trace/bpf_trace.c:

@@ -2076,7 +2076,7 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
 	struct bpf_run_ctx *old_run_ctx;
 	struct bpf_trace_run_ctx run_ctx;

-	cant_sleep();
+	rcu_read_lock_dont_migrate();
 	if (unlikely(!bpf_prog_get_recursion_context(prog))) {
 		bpf_prog_inc_misses_counter(prog);
 		goto out;

@@ -2085,13 +2085,12 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
 	run_ctx.bpf_cookie = link->cookie;
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);

-	rcu_read_lock();
 	(void) bpf_prog_run(prog, args);
-	rcu_read_unlock();

 	bpf_reset_run_ctx(old_run_ctx);
 out:
 	bpf_prog_put_recursion_context(prog);
+	rcu_read_unlock_migrate();
 }

 #define UNPACK(...) __VA_ARGS__
kernel/trace/fgraph.c:

@@ -1303,7 +1303,7 @@ static void ftrace_graph_enable_direct(bool enable_branch, struct fgraph_ops *gops)
 	static_call_update(fgraph_func, func);
 	static_call_update(fgraph_retfunc, retfunc);
 	if (enable_branch)
-		static_branch_disable(&fgraph_do_direct);
+		static_branch_enable(&fgraph_do_direct);
 }

 static void ftrace_graph_disable_direct(bool disable_branch)
kernel/trace/ftrace.c:

@@ -1147,6 +1147,7 @@ struct ftrace_page {
 };

 #define ENTRY_SIZE sizeof(struct dyn_ftrace)
+#define ENTRIES_PER_PAGE_GROUP(order) ((PAGE_SIZE << (order)) / ENTRY_SIZE)

 static struct ftrace_page *ftrace_pages_start;
 static struct ftrace_page *ftrace_pages;

@@ -3873,7 +3874,7 @@ static int ftrace_allocate_records(struct ftrace_page *pg, int count,
 	*num_pages += 1 << order;
 	ftrace_number_of_groups++;

-	cnt = (PAGE_SIZE << order) / ENTRY_SIZE;
+	cnt = ENTRIES_PER_PAGE_GROUP(order);
 	pg->order = order;

 	if (cnt > count)

@@ -7668,7 +7669,7 @@ static int ftrace_process_locs(struct module *mod,
 	long skip;

 	/* Count the number of entries unused and compare it to skipped. */
-	pg_remaining = (PAGE_SIZE << pg->order) / ENTRY_SIZE - pg->index;
+	pg_remaining = ENTRIES_PER_PAGE_GROUP(pg->order) - pg->index;

 	if (!WARN(skipped < pg_remaining, "Extra allocated pages for ftrace")) {

@@ -7676,7 +7677,7 @@ static int ftrace_process_locs(struct module *mod,

 	for (pg = pg_unuse; pg && skip > 0; pg = pg->next) {
 		remaining += 1 << pg->order;
-		skip -= (PAGE_SIZE << pg->order) / ENTRY_SIZE;
+		skip -= ENTRIES_PER_PAGE_GROUP(pg->order);
 	}

 	pages -= remaining;
@@ -4,6 +4,7 @@
*
* Copyright (C) 2008 Steven Rostedt <srostedt@redhat.com>
*/
#include <linux/sched/isolation.h>
#include <linux/trace_recursion.h>
#include <linux/trace_events.h>
#include <linux/ring_buffer.h>
@@ -4013,19 +4014,36 @@ static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer)
rb_end_commit(cpu_buffer);
}
static bool
rb_irq_work_queue(struct rb_irq_work *irq_work)
{
int cpu;
/* irq_work_queue_on() is not NMI-safe */
if (unlikely(in_nmi()))
return irq_work_queue(&irq_work->work);
/*
* If CPU isolation is not active, cpu is always the current
* CPU, and the following is equivalent to irq_work_queue().
*/
cpu = housekeeping_any_cpu(HK_TYPE_KERNEL_NOISE);
return irq_work_queue_on(&irq_work->work, cpu);
}
static __always_inline void
rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
{
if (buffer->irq_work.waiters_pending) {
buffer->irq_work.waiters_pending = false;
/* irq_work_queue() supplies its own memory barriers */
irq_work_queue(&buffer->irq_work.work);
rb_irq_work_queue(&buffer->irq_work);
}
if (cpu_buffer->irq_work.waiters_pending) {
cpu_buffer->irq_work.waiters_pending = false;
/* irq_work_queue() supplies its own memory barriers */
irq_work_queue(&cpu_buffer->irq_work.work);
rb_irq_work_queue(&cpu_buffer->irq_work);
}
if (cpu_buffer->last_pages_touch == local_read(&cpu_buffer->pages_touched))
@@ -4045,7 +4063,7 @@ rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
cpu_buffer->irq_work.wakeup_full = true;
cpu_buffer->irq_work.full_waiters_pending = false;
/* irq_work_queue() supplies its own memory barriers */
irq_work_queue(&cpu_buffer->irq_work.work);
rb_irq_work_queue(&cpu_buffer->irq_work);
}
#ifdef CONFIG_RING_BUFFER_RECORD_RECURSION
kernel/trace/trace.c: 1058 changes (diff suppressed because it is too large)
@@ -131,7 +131,7 @@ enum trace_type {
#define FAULT_STRING "(fault)"
#define HIST_STACKTRACE_DEPTH 16
#define HIST_STACKTRACE_DEPTH 31
#define HIST_STACKTRACE_SIZE (HIST_STACKTRACE_DEPTH * sizeof(unsigned long))
#define HIST_STACKTRACE_SKIP 5
@@ -332,29 +332,33 @@ struct trace_array {
struct list_head list;
char *name;
struct array_buffer array_buffer;
#ifdef CONFIG_TRACER_MAX_TRACE
#ifdef CONFIG_TRACER_SNAPSHOT
/*
* The max_buffer is used to snapshot the trace when a maximum
* The snapshot_buffer is used to snapshot the trace when a maximum
* latency is reached, or when the user initiates a snapshot.
* Some tracers will use this to store a maximum trace while
* it continues examining live traces.
*
* The buffers for the max_buffer are set up the same as the array_buffer
* When a snapshot is taken, the buffer of the max_buffer is swapped
* with the buffer of the array_buffer and the buffers are reset for
* the array_buffer so the tracing can continue.
* The buffers for the snapshot_buffer are set up the same as the
* array_buffer. When a snapshot is taken, the buffer of the
* snapshot_buffer is swapped with the buffer of the array_buffer
* and the buffers are reset for the array_buffer so the tracing can
* continue.
*/
struct array_buffer max_buffer;
struct array_buffer snapshot_buffer;
bool allocated_snapshot;
spinlock_t snapshot_trigger_lock;
unsigned int snapshot;
#ifdef CONFIG_TRACER_MAX_TRACE
unsigned long max_latency;
#ifdef CONFIG_FSNOTIFY
struct dentry *d_max_latency;
#ifdef CONFIG_FSNOTIFY
struct work_struct fsnotify_work;
struct irq_work fsnotify_irqwork;
#endif
#endif
#endif /* CONFIG_FSNOTIFY */
#endif /* CONFIG_TRACER_MAX_TRACE */
#endif /* CONFIG_TRACER_SNAPSHOT */
/* The below is for memory mapped ring buffer */
unsigned int mapped;
unsigned long range_addr_start;
@@ -380,7 +384,7 @@ struct trace_array {
*
* It is also used in other places outside the update_max_tr
* so it needs to be defined outside of the
* CONFIG_TRACER_MAX_TRACE.
* CONFIG_TRACER_SNAPSHOT.
*/
arch_spinlock_t max_lock;
#ifdef CONFIG_FTRACE_SYSCALLS
|
|||
extern struct trace_array *trace_array_find_get(const char *instance);
|
||||
|
||||
extern u64 tracing_event_time_stamp(struct trace_buffer *buffer, struct ring_buffer_event *rbe);
|
||||
extern int tracing_set_filter_buffering(struct trace_array *tr, bool set);
|
||||
extern int tracing_set_clock(struct trace_array *tr, const char *clockstr);
|
||||
|
||||
extern bool trace_clock_in_ns(struct trace_array *tr);
|
||||
|
||||
extern unsigned long trace_adjust_address(struct trace_array *tr, unsigned long addr);
|
||||
|
||||
extern struct trace_array *printk_trace;
|
||||
|
||||
/*
|
||||
* The global tracer (top) should be the first trace array added,
|
||||
* but we check the flag anyway.
|
||||
|
|
@@ -661,6 +666,8 @@ trace_buffer_iter(struct trace_iterator *iter, int cpu)
return iter->buffer_iter ? iter->buffer_iter[cpu] : NULL;
}
extern int tracing_disabled;
int tracer_init(struct tracer *t, struct trace_array *tr);
int tracing_is_enabled(void);
void tracing_reset_online_cpus(struct array_buffer *buf);
@@ -672,7 +679,6 @@ int tracing_release_generic_tr(struct inode *inode, struct file *file);
int tracing_open_file_tr(struct inode *inode, struct file *filp);
int tracing_release_file_tr(struct inode *inode, struct file *filp);
int tracing_single_release_file_tr(struct inode *inode, struct file *filp);
bool tracing_is_disabled(void);
bool tracer_tracing_is_on(struct trace_array *tr);
void tracer_tracing_on(struct trace_array *tr);
void tracer_tracing_off(struct trace_array *tr);
@@ -772,6 +778,7 @@ extern cpumask_var_t __read_mostly tracing_buffer_mask;
extern unsigned long nsecs_to_usecs(unsigned long nsecs);
extern unsigned long tracing_thresh;
extern struct workqueue_struct *trace_init_wq __initdata;
/* PID filtering */
@@ -790,22 +797,22 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
struct trace_pid_list **new_pid_list,
const char __user *ubuf, size_t cnt);
#ifdef CONFIG_TRACER_MAX_TRACE
#ifdef CONFIG_TRACER_SNAPSHOT
void update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu,
void *cond_data);
void update_max_tr_single(struct trace_array *tr,
struct task_struct *tsk, int cpu);
#ifdef CONFIG_FSNOTIFY
#define LATENCY_FS_NOTIFY
#if defined(CONFIG_TRACER_MAX_TRACE) && defined(CONFIG_FSNOTIFY)
# define LATENCY_FS_NOTIFY
#endif
#endif /* CONFIG_TRACER_MAX_TRACE */
#ifdef LATENCY_FS_NOTIFY
void latency_fsnotify(struct trace_array *tr);
#else
static inline void latency_fsnotify(struct trace_array *tr) { }
#endif
#endif /* CONFIG_TRACER_SNAPSHOT */
#ifdef CONFIG_STACKTRACE
void __trace_stack(struct trace_array *tr, unsigned int trace_ctx, int skip);
@@ -816,6 +823,18 @@ static inline void __trace_stack(struct trace_array *tr, unsigned int trace_ctx,
}
#endif /* CONFIG_STACKTRACE */
#ifdef CONFIG_TRACER_MAX_TRACE
static inline bool tracer_uses_snapshot(struct tracer *tracer)
{
return tracer->use_max_tr;
}
#else
static inline bool tracer_uses_snapshot(struct tracer *tracer)
{
return false;
}
#endif
void trace_last_func_repeats(struct trace_array *tr,
struct trace_func_repeats *last_info,
unsigned int trace_ctx);
@@ -865,6 +884,7 @@ extern int trace_selftest_startup_nop(struct tracer *trace,
struct trace_array *tr);
extern int trace_selftest_startup_branch(struct tracer *trace,
struct trace_array *tr);
extern bool __read_mostly tracing_selftest_running;
/*
* Tracer data references selftest functions that only occur
* on boot up. These can be __init functions. Thus, when selftests
@@ -877,6 +897,7 @@ static inline void __init disable_tracing_selftest(const char *reason)
}
/* Tracers are seldom changed. Optimize when selftests are disabled. */
#define __tracer_data __read_mostly
#define tracing_selftest_running 0
#endif /* CONFIG_FTRACE_STARTUP_TEST */
extern void *head_page(struct trace_array_cpu *data);
@@ -1414,6 +1435,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
C(COPY_MARKER, "copy_trace_marker"), \
C(PAUSE_ON_TRACE, "pause-on-trace"), \
C(HASH_PTR, "hash-ptr"), /* Print hashed pointer */ \
C(BITMASK_LIST, "bitmask-list"), \
FUNCTION_FLAGS \
FGRAPH_FLAGS \
STACK_FLAGS \
@@ -1567,6 +1589,47 @@ char *trace_user_fault_read(struct trace_user_buf_info *tinfo,
const char __user *ptr, size_t size,
trace_user_buf_copy copy_func, void *data);
static __always_inline void
trace_event_setup(struct ring_buffer_event *event,
int type, unsigned int trace_ctx)
{
struct trace_entry *ent = ring_buffer_event_data(event);
tracing_generic_entry_update(ent, type, trace_ctx);
}
static __always_inline struct ring_buffer_event *
__trace_buffer_lock_reserve(struct trace_buffer *buffer,
int type,
unsigned long len,
unsigned int trace_ctx)
{
struct ring_buffer_event *event;
event = ring_buffer_lock_reserve(buffer, len);
if (event != NULL)
trace_event_setup(event, type, trace_ctx);
return event;
}
static __always_inline void
__buffer_unlock_commit(struct trace_buffer *buffer, struct ring_buffer_event *event)
{
__this_cpu_write(trace_taskinfo_save, true);
/* If this is the temp buffer, we need to commit fully */
if (this_cpu_read(trace_buffered_event) == event) {
/* Length is in event->array[0] */
ring_buffer_write(buffer, event->array[0], &event->array[1]);
/* Release the temp buffer */
this_cpu_dec(trace_buffered_event_cnt);
/* ring_buffer_unlock_commit() enables preemption */
preempt_enable_notrace();
} else
ring_buffer_unlock_commit(buffer);
}
static inline void
__trace_event_discard_commit(struct trace_buffer *buffer,
struct ring_buffer_event *event)
@@ -2087,6 +2150,7 @@ extern const char *__stop___tracepoint_str[];
void trace_printk_control(bool enabled);
void trace_printk_start_comm(void);
void trace_printk_start_stop_comm(int enabled);
int trace_keep_overwrite(struct tracer *tracer, u64 mask, int set);
int set_tracer_flag(struct trace_array *tr, u64 mask, int enabled);
@@ -2237,6 +2301,37 @@ static inline void sanitize_event_name(char *name)
*name = '_';
}
#ifdef CONFIG_STACKTRACE
void __ftrace_trace_stack(struct trace_array *tr,
struct trace_buffer *buffer,
unsigned int trace_ctx,
int skip, struct pt_regs *regs);
static __always_inline void ftrace_trace_stack(struct trace_array *tr,
struct trace_buffer *buffer,
unsigned int trace_ctx,
int skip, struct pt_regs *regs)
{
if (!(tr->trace_flags & TRACE_ITER(STACKTRACE)))
return;
__ftrace_trace_stack(tr, buffer, trace_ctx, skip, regs);
}
#else
static inline void __ftrace_trace_stack(struct trace_array *tr,
struct trace_buffer *buffer,
unsigned int trace_ctx,
int skip, struct pt_regs *regs)
{
}
static inline void ftrace_trace_stack(struct trace_array *tr,
struct trace_buffer *buffer,
unsigned int trace_ctx,
int skip, struct pt_regs *regs)
{
}
#endif
/*
* This is a generic way to read and write a u64 value from a file in tracefs.
*
@@ -649,6 +649,22 @@ bool trace_event_ignore_this_pid(struct trace_event_file *trace_file)
}
EXPORT_SYMBOL_GPL(trace_event_ignore_this_pid);
/**
* trace_event_buffer_reserve - reserve space on the ring buffer for an event
* @fbuffer: information about how to save the event
* @trace_file: the instance file descriptor for the event
* @len: The length of the event
*
* The @fbuffer has information about the ring buffer and data will
* be added to it to be used by the call to trace_event_buffer_commit().
* The @trace_file is the descriptor with information about the status
* of the given event for a specific trace_array instance.
* The @len is the length of data to save for the event.
*
* Returns a pointer to the data on the ring buffer or NULL if the
* event was not reserved (event was filtered, too big, or the buffer
* simply was disabled for write).
*/
void *trace_event_buffer_reserve(struct trace_event_buffer *fbuffer,
struct trace_event_file *trace_file,
unsigned long len)
@@ -1662,6 +1678,82 @@ static void t_stop(struct seq_file *m, void *p)
mutex_unlock(&event_mutex);
}
static int get_call_len(struct trace_event_call *call)
{
int len;
/* Get the length of "<system>:<event>" */
len = strlen(call->class->system) + 1;
len += strlen(trace_event_name(call));
/* Set the index to 32 bytes to separate event from data */
return len >= 32 ? 1 : 32 - len;
}
/**
* t_show_filters - seq_file callback to display active event filters
* @m: The seq_file interface for formatted output
* @v: The current trace_event_file being iterated
*
* Checks whether a filter is applied to the current event file in the
* iteration and, if so, prints the system name, event name, and the
* filter string.
*/
static int t_show_filters(struct seq_file *m, void *v)
{
struct trace_event_file *file = v;
struct trace_event_call *call = file->event_call;
struct event_filter *filter;
int len;
guard(rcu)();
filter = rcu_dereference(file->filter);
if (!filter || !filter->filter_string)
return 0;
len = get_call_len(call);
seq_printf(m, "%s:%s%*.s%s\n", call->class->system,
trace_event_name(call), len, "", filter->filter_string);
return 0;
}
/**
* t_show_triggers - seq_file callback to display active event triggers
* @m: The seq_file interface for formatted output
* @v: The current trace_event_file being iterated
*
* Iterates through the trigger list of the current event file and prints
* each active trigger's configuration using its associated print
* operation.
*/
static int t_show_triggers(struct seq_file *m, void *v)
{
struct trace_event_file *file = v;
struct trace_event_call *call = file->event_call;
struct event_trigger_data *data;
int len;
/*
* The event_mutex is held by t_start(), protecting the
* file->triggers list traversal.
*/
if (list_empty(&file->triggers))
return 0;
len = get_call_len(call);
list_for_each_entry_rcu(data, &file->triggers, list) {
seq_printf(m, "%s:%s%*.s", call->class->system,
trace_event_name(call), len, "");
data->cmd_ops->print(m, data);
}
return 0;
}
#ifdef CONFIG_MODULES
static int s_show(struct seq_file *m, void *v)
{
@@ -2176,7 +2268,7 @@ static int subsystem_open(struct inode *inode, struct file *filp)
struct event_subsystem *system = NULL;
int ret;
if (tracing_is_disabled())
if (unlikely(tracing_disabled))
return -ENODEV;
/* Make sure the system still exists */
@@ -2489,6 +2581,8 @@ ftrace_event_npid_write(struct file *filp, const char __user *ubuf,
static int ftrace_event_avail_open(struct inode *inode, struct file *file);
static int ftrace_event_set_open(struct inode *inode, struct file *file);
static int ftrace_event_show_filters_open(struct inode *inode, struct file *file);
static int ftrace_event_show_triggers_open(struct inode *inode, struct file *file);
static int ftrace_event_set_pid_open(struct inode *inode, struct file *file);
static int ftrace_event_set_npid_open(struct inode *inode, struct file *file);
static int ftrace_event_release(struct inode *inode, struct file *file);
@@ -2507,6 +2601,20 @@ static const struct seq_operations show_set_event_seq_ops = {
.stop = s_stop,
};
static const struct seq_operations show_show_event_filters_seq_ops = {
.start = t_start,
.next = t_next,
.show = t_show_filters,
.stop = t_stop,
};
static const struct seq_operations show_show_event_triggers_seq_ops = {
.start = t_start,
.next = t_next,
.show = t_show_triggers,
.stop = t_stop,
};
static const struct seq_operations show_set_pid_seq_ops = {
.start = p_start,
.next = p_next,
@@ -2536,6 +2644,20 @@ static const struct file_operations ftrace_set_event_fops = {
.release = ftrace_event_release,
};
static const struct file_operations ftrace_show_event_filters_fops = {
.open = ftrace_event_show_filters_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
};
static const struct file_operations ftrace_show_event_triggers_fops = {
.open = ftrace_event_show_triggers_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
};
static const struct file_operations ftrace_set_event_pid_fops = {
.open = ftrace_event_set_pid_open,
.read = seq_read,
@@ -2680,6 +2802,34 @@ ftrace_event_set_open(struct inode *inode, struct file *file)
return ret;
}
/**
* ftrace_event_show_filters_open - open interface for show_event_filters
* @inode: The inode of the file
* @file: The file being opened
*
* Connects the show_event_filters file to the sequence operations
* required to iterate over and display active event filters.
*/
static int
ftrace_event_show_filters_open(struct inode *inode, struct file *file)
{
return ftrace_event_open(inode, file, &show_show_event_filters_seq_ops);
}
/**
* ftrace_event_show_triggers_open - open interface for show_event_triggers
* @inode: The inode of the file
* @file: The file being opened
*
* Connects the show_event_triggers file to the sequence operations
* required to iterate over and display active event triggers.
*/
static int
ftrace_event_show_triggers_open(struct inode *inode, struct file *file)
{
return ftrace_event_open(inode, file, &show_show_event_triggers_seq_ops);
}
static int
ftrace_event_set_pid_open(struct inode *inode, struct file *file)
{
@@ -3963,11 +4113,6 @@ void trace_put_event_file(struct trace_event_file *file)
EXPORT_SYMBOL_GPL(trace_put_event_file);
#ifdef CONFIG_DYNAMIC_FTRACE
/* Avoid typos */
#define ENABLE_EVENT_STR "enable_event"
#define DISABLE_EVENT_STR "disable_event"
struct event_probe_data {
struct trace_event_file *file;
unsigned long count;
@@ -4400,6 +4545,12 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
if (!entry)
return -ENOMEM;
trace_create_file("show_event_filters", TRACE_MODE_READ, parent, tr,
&ftrace_show_event_filters_fops);
trace_create_file("show_event_triggers", TRACE_MODE_READ, parent, tr,
&ftrace_show_event_triggers_fops);
nr_entries = ARRAY_SIZE(events_entries);
e_events = eventfs_create_events_dir("events", parent, events_entries,
@@ -1375,7 +1375,7 @@ static void free_filter_list_tasks(struct rcu_head *rhp)
struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
INIT_RCU_WORK(&filter_list->rwork, free_filter_list_work);
queue_rcu_work(system_wq, &filter_list->rwork);
queue_rcu_work(system_dfl_wq, &filter_list->rwork);
}
/*
@@ -105,38 +105,44 @@ enum field_op_id {
FIELD_OP_MULT,
};
#define FIELD_FUNCS \
C(NOP, "nop"), \
C(VAR_REF, "var_ref"), \
C(COUNTER, "counter"), \
C(CONST, "const"), \
C(LOG2, "log2"), \
C(BUCKET, "bucket"), \
C(TIMESTAMP, "timestamp"), \
C(CPU, "cpu"), \
C(COMM, "comm"), \
C(STRING, "string"), \
C(DYNSTRING, "dynstring"), \
C(RELDYNSTRING, "reldynstring"), \
C(PSTRING, "pstring"), \
C(S64, "s64"), \
C(U64, "u64"), \
C(S32, "s32"), \
C(U32, "u32"), \
C(S16, "s16"), \
C(U16, "u16"), \
C(S8, "s8"), \
C(U8, "u8"), \
C(UMINUS, "uminus"), \
C(MINUS, "minus"), \
C(PLUS, "plus"), \
C(DIV, "div"), \
C(MULT, "mult"), \
C(DIV_POWER2, "div_power2"), \
C(DIV_NOT_POWER2, "div_not_power2"), \
C(DIV_MULT_SHIFT, "div_mult_shift"), \
C(EXECNAME, "execname"), \
C(STACK, "stack"),
#undef C
#define C(a, b) HIST_FIELD_FN_##a
enum hist_field_fn {
HIST_FIELD_FN_NOP,
HIST_FIELD_FN_VAR_REF,
HIST_FIELD_FN_COUNTER,
HIST_FIELD_FN_CONST,
HIST_FIELD_FN_LOG2,
HIST_FIELD_FN_BUCKET,
HIST_FIELD_FN_TIMESTAMP,
HIST_FIELD_FN_CPU,
HIST_FIELD_FN_COMM,
HIST_FIELD_FN_STRING,
HIST_FIELD_FN_DYNSTRING,
HIST_FIELD_FN_RELDYNSTRING,
HIST_FIELD_FN_PSTRING,
HIST_FIELD_FN_S64,
HIST_FIELD_FN_U64,
HIST_FIELD_FN_S32,
HIST_FIELD_FN_U32,
HIST_FIELD_FN_S16,
HIST_FIELD_FN_U16,
HIST_FIELD_FN_S8,
HIST_FIELD_FN_U8,
HIST_FIELD_FN_UMINUS,
HIST_FIELD_FN_MINUS,
HIST_FIELD_FN_PLUS,
HIST_FIELD_FN_DIV,
HIST_FIELD_FN_MULT,
HIST_FIELD_FN_DIV_POWER2,
HIST_FIELD_FN_DIV_NOT_POWER2,
HIST_FIELD_FN_DIV_MULT_SHIFT,
HIST_FIELD_FN_EXECNAME,
HIST_FIELD_FN_STACK,
FIELD_FUNCS
};
/*
@@ -3157,7 +3163,7 @@ static inline void __update_field_vars(struct tracing_map_elt *elt,
u64 var_val;
/* Make sure stacktrace can fit in the string variable length */
BUILD_BUG_ON((HIST_STACKTRACE_DEPTH + 1) * sizeof(long) >= STR_VAR_LEN_MAX);
BUILD_BUG_ON((HIST_STACKTRACE_DEPTH + 1) * sizeof(long) > STR_VAR_LEN_MAX);
for (i = 0, j = field_var_str_start; i < n_field_vars; i++) {
struct field_var *field_var = field_vars[i];
@@ -5854,6 +5860,12 @@ const struct file_operations event_hist_fops = {
};
#ifdef CONFIG_HIST_TRIGGERS_DEBUG
#undef C
#define C(a, b) b
static const char * const field_funcs[] = { FIELD_FUNCS };
static void hist_field_debug_show_flags(struct seq_file *m,
unsigned long flags)
{
@@ -5918,6 +5930,7 @@ static int hist_field_debug_show(struct seq_file *m,
seq_printf(m, " type: %s\n", field->type);
seq_printf(m, " size: %u\n", field->size);
seq_printf(m, " is_signed: %u\n", field->is_signed);
seq_printf(m, " function: hist_field_%s()\n", field_funcs[field->fn_num]);
return 0;
}
@@ -6518,6 +6531,26 @@ static bool existing_hist_update_only(char *glob,
return updated;
}
/*
* Set or disable using the per CPU trace_buffer_event when possible.
*/
static int tracing_set_filter_buffering(struct trace_array *tr, bool set)
{
guard(mutex)(&trace_types_lock);
if (set && tr->no_filter_buffering_ref++)
return 0;
if (!set) {
if (WARN_ON_ONCE(!tr->no_filter_buffering_ref))
return -EINVAL;
--tr->no_filter_buffering_ref;
}
return 0;
}
static int hist_register_trigger(char *glob,
struct event_trigger_data *data,
struct trace_event_file *file)
@@ -6907,11 +6940,9 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
out_unreg:
event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
out_free:
event_trigger_reset_filter(cmd_ops, trigger_data);
remove_hist_vars(hist_data);
kfree(trigger_data);
trigger_data_free(trigger_data);
destroy_hist_data(hist_data);
goto out;
@@ -499,9 +499,9 @@ static unsigned int trace_stack(struct synth_trace_event *entry,
return len;
}
static notrace void trace_event_raw_event_synth(void *__data,
u64 *var_ref_vals,
unsigned int *var_ref_idx)
static void trace_event_raw_event_synth(void *__data,
u64 *var_ref_vals,
unsigned int *var_ref_idx)
{
unsigned int i, n_u64, val_idx, len, data_size = 0;
struct trace_event_file *trace_file = __data;
@@ -1347,18 +1347,13 @@ traceon_trigger(struct event_trigger_data *data,
{
struct trace_event_file *file = data->private_data;
if (file) {
if (tracer_tracing_is_on(file->tr))
return;
tracer_tracing_on(file->tr);
return;
}
if (tracing_is_on())
if (WARN_ON_ONCE(!file))
return;
tracing_on();
if (tracer_tracing_is_on(file->tr))
return;
tracer_tracing_on(file->tr);
}
static bool
@@ -1368,13 +1363,11 @@ traceon_count_func(struct event_trigger_data *data,
{
struct trace_event_file *file = data->private_data;
if (file) {
if (tracer_tracing_is_on(file->tr))
return false;
} else {
if (tracing_is_on())
return false;
}
if (WARN_ON_ONCE(!file))
return false;
if (tracer_tracing_is_on(file->tr))
return false;
if (!data->count)
return false;
@@ -1392,18 +1385,13 @@ traceoff_trigger(struct event_trigger_data *data,
{
struct trace_event_file *file = data->private_data;
if (file) {
if (!tracer_tracing_is_on(file->tr))
return;
tracer_tracing_off(file->tr);
return;
}
if (!tracing_is_on())
if (WARN_ON_ONCE(!file))
return;
tracing_off();
if (!tracer_tracing_is_on(file->tr))
return;
tracer_tracing_off(file->tr);
}
static bool
@@ -1413,13 +1401,11 @@ traceoff_count_func(struct event_trigger_data *data,
{
struct trace_event_file *file = data->private_data;
if (file) {
if (!tracer_tracing_is_on(file->tr))
return false;
} else {
if (!tracing_is_on())
return false;
}
if (WARN_ON_ONCE(!file))
return false;
if (!tracer_tracing_is_on(file->tr))
return false;
if (!data->count)
return false;
@@ -1481,10 +1467,10 @@ snapshot_trigger(struct event_trigger_data *data,
{
struct trace_event_file *file = data->private_data;
if (file)
tracing_snapshot_instance(file->tr);
else
tracing_snapshot();
if (WARN_ON_ONCE(!file))
return;
tracing_snapshot_instance(file->tr);
}
static int
@@ -1570,10 +1556,10 @@ stacktrace_trigger(struct event_trigger_data *data,
{
struct trace_event_file *file = data->private_data;
if (file)
__trace_stack(file->tr, tracing_gen_ctx_dec(), STACK_SKIP);
else
trace_dump_stack(STACK_SKIP);
if (WARN_ON_ONCE(!file))
return;
__trace_stack(file->tr, tracing_gen_ctx_dec(), STACK_SKIP);
}
static int
@@ -102,9 +102,9 @@ struct hwlat_sample {
/* keep the global state somewhere. */
static struct hwlat_data {
struct mutex lock; /* protect changes */
struct mutex lock; /* protect changes */
u64 count; /* total since reset */
atomic64_t count; /* total since reset */
u64 sample_window; /* total sampling window (on+off) */
u64 sample_width; /* active sampling portion of window */
@@ -193,8 +193,7 @@ void trace_hwlat_callback(bool enter)
* get_sample - sample the CPU TSC and look for likely hardware latencies
*
* Used to repeatedly capture the CPU TSC (or similar), looking for potential
* hardware-induced latency. Called with interrupts disabled and with
* hwlat_data.lock held.
* hardware-induced latency. Called with interrupts disabled.
*/
static int get_sample(void)
{
@@ -204,6 +203,7 @@ static int get_sample(void)
time_type start, t1, t2, last_t2;
s64 diff, outer_diff, total, last_total = 0;
u64 sample = 0;
u64 sample_width = READ_ONCE(hwlat_data.sample_width);
u64 thresh = tracing_thresh;
u64 outer_sample = 0;
int ret = -1;
@@ -267,7 +267,7 @@ static int get_sample(void)
if (diff > sample)
sample = diff; /* only want highest value */
} while (total <= hwlat_data.sample_width);
} while (total <= sample_width);
barrier(); /* finish the above in the view for NMIs */
trace_hwlat_callback_enabled = false;
@@ -285,8 +285,7 @@ static int get_sample(void)
if (kdata->nmi_total_ts)
do_div(kdata->nmi_total_ts, NSEC_PER_USEC);
hwlat_data.count++;
s.seqnum = hwlat_data.count;
s.seqnum = atomic64_inc_return(&hwlat_data.count);
s.duration = sample;
s.outer_duration = outer_sample;
s.nmi_total_ts = kdata->nmi_total_ts;
@@ -832,7 +831,7 @@ static int hwlat_tracer_init(struct trace_array *tr)
hwlat_trace = tr;
hwlat_data.count = 0;
atomic64_set(&hwlat_data.count, 0);
tr->max_latency = 0;
save_tracing_thresh = tracing_thresh;
@@ -2048,6 +2048,10 @@ static __init int init_kprobe_trace(void)
trace_create_file("kprobe_profile", TRACE_MODE_READ,
NULL, NULL, &kprobe_profile_ops);
/* If no 'kprobe_event=' cmd is provided, return directly. */
if (kprobe_boot_events_buf[0] == '\0')
return 0;
setup_boot_kprobe_events();
return 0;
@ -2079,7 +2083,7 @@ static __init int kprobe_trace_self_tests_init(void)
|
|||
struct trace_kprobe *tk;
|
||||
struct trace_event_file *file;
|
||||
|
||||
if (tracing_is_disabled())
|
||||
if (unlikely(tracing_disabled))
|
||||
return -ENODEV;
|
||||
|
||||
if (tracing_selftest_disabled)
@@ -194,13 +194,37 @@ trace_print_symbols_seq_u64(struct trace_seq *p, unsigned long long val,
 EXPORT_SYMBOL(trace_print_symbols_seq_u64);
 #endif

+/**
+ * trace_print_bitmask_seq - print a bitmask to a sequence buffer
+ * @iter: The trace iterator for the current event instance
+ * @bitmask_ptr: The pointer to the bitmask data
+ * @bitmask_size: The size of the bitmask in bytes
+ *
+ * Prints a bitmask into a sequence buffer as either a hex string or a
+ * human-readable range list, depending on the instance's "bitmask-list"
+ * trace option. The bitmask is formatted into the iterator's temporary
+ * scratchpad rather than the primary sequence buffer. This avoids
+ * duplication and pointer-collision issues when the returned string is
+ * processed by a "%s" specifier in a TP_printk() macro.
+ *
+ * Returns a pointer to the formatted string within the temporary buffer.
+ */
 const char *
-trace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
+trace_print_bitmask_seq(struct trace_iterator *iter, void *bitmask_ptr,
 			unsigned int bitmask_size)
 {
-	const char *ret = trace_seq_buffer_ptr(p);
+	struct trace_seq *p = &iter->tmp_seq;
+	const struct trace_array *tr = iter->tr;
+	const char *ret;
+
+	trace_seq_init(p);
+	ret = trace_seq_buffer_ptr(p);
+
+	if (tr->trace_flags & TRACE_ITER(BITMASK_LIST))
+		trace_seq_bitmask_list(p, bitmask_ptr, bitmask_size * 8);
+	else
+		trace_seq_bitmask(p, bitmask_ptr, bitmask_size * 8);

-	trace_seq_bitmask(p, bitmask_ptr, bitmask_size * 8);
 	trace_seq_putc(p, 0);

 	return ret;
kernel/trace/trace_pid.c (new file, 246 lines)

@@ -0,0 +1,246 @@
// SPDX-License-Identifier: GPL-2.0

#include "trace.h"

/**
 * trace_find_filtered_pid - check if a pid exists in a filtered_pid list
 * @filtered_pids: The list of pids to check
 * @search_pid: The PID to find in @filtered_pids
 *
 * Returns true if @search_pid is found in @filtered_pids, and false otherwise.
 */
bool
trace_find_filtered_pid(struct trace_pid_list *filtered_pids, pid_t search_pid)
{
	return trace_pid_list_is_set(filtered_pids, search_pid);
}

/**
 * trace_ignore_this_task - should a task be ignored for tracing
 * @filtered_pids: The list of pids to check
 * @filtered_no_pids: The list of pids not to be traced
 * @task: The task that should be ignored if not filtered
 *
 * Checks if @task should be traced or not from @filtered_pids.
 * Returns true if @task should *NOT* be traced.
 * Returns false if @task should be traced.
 */
bool
trace_ignore_this_task(struct trace_pid_list *filtered_pids,
		       struct trace_pid_list *filtered_no_pids,
		       struct task_struct *task)
{
	/*
	 * If filtered_no_pids is not empty, and the task's pid is listed
	 * in filtered_no_pids, then return true.
	 * Otherwise, if filtered_pids is empty, that means we can
	 * trace all tasks. If it has content, then only trace pids
	 * within filtered_pids.
	 */

	return (filtered_pids &&
		!trace_find_filtered_pid(filtered_pids, task->pid)) ||
		(filtered_no_pids &&
		 trace_find_filtered_pid(filtered_no_pids, task->pid));
}

/**
 * trace_filter_add_remove_task - Add or remove a task from a pid_list
 * @pid_list: The list to modify
 * @self: The current task for fork or NULL for exit
 * @task: The task to add or remove
 *
 * If adding a task, if @self is defined, the task is only added if @self
 * is also included in @pid_list. This happens on fork and tasks should
 * only be added when the parent is listed. If @self is NULL, then the
 * @task pid will be removed from the list, which would happen on exit
 * of a task.
 */
void trace_filter_add_remove_task(struct trace_pid_list *pid_list,
				  struct task_struct *self,
				  struct task_struct *task)
{
	if (!pid_list)
		return;

	/* For forks, we only add if the forking task is listed */
	if (self) {
		if (!trace_find_filtered_pid(pid_list, self->pid))
			return;
	}

	/* "self" is set for forks, and NULL for exits */
	if (self)
		trace_pid_list_set(pid_list, task->pid);
	else
		trace_pid_list_clear(pid_list, task->pid);
}

/**
 * trace_pid_next - Used for seq_file to get to the next pid of a pid_list
 * @pid_list: The pid list to show
 * @v: The last pid that was shown (+1 the actual pid to let zero be displayed)
 * @pos: The position of the file
 *
 * This is used by the seq_file "next" operation to iterate the pids
 * listed in a trace_pid_list structure.
 *
 * Returns the pid+1 as we want to display pid of zero, but NULL would
 * stop the iteration.
 */
void *trace_pid_next(struct trace_pid_list *pid_list, void *v, loff_t *pos)
{
	long pid = (unsigned long)v;
	unsigned int next;

	(*pos)++;

	/* pid already is +1 of the actual previous bit */
	if (trace_pid_list_next(pid_list, pid, &next) < 0)
		return NULL;

	pid = next;

	/* Return pid + 1 to allow zero to be represented */
	return (void *)(pid + 1);
}
/**
 * trace_pid_start - Used for seq_file to start reading pid lists
 * @pid_list: The pid list to show
 * @pos: The position of the file
 *
 * This is used by seq_file "start" operation to start the iteration
 * of listing pids.
 *
 * Returns the pid+1 as we want to display pid of zero, but NULL would
 * stop the iteration.
 */
void *trace_pid_start(struct trace_pid_list *pid_list, loff_t *pos)
{
	unsigned long pid;
	unsigned int first;
	loff_t l = 0;

	if (trace_pid_list_first(pid_list, &first) < 0)
		return NULL;

	pid = first;

	/* Return pid + 1 so that zero can be the exit value */
	for (pid++; pid && l < *pos;
	     pid = (unsigned long)trace_pid_next(pid_list, (void *)pid, &l))
		;
	return (void *)pid;
}

/**
 * trace_pid_show - show the current pid in seq_file processing
 * @m: The seq_file structure to write into
 * @v: A void pointer of the pid (+1) value to display
 *
 * Can be directly used by seq_file operations to display the current
 * pid value.
 */
int trace_pid_show(struct seq_file *m, void *v)
{
	unsigned long pid = (unsigned long)v - 1;

	seq_printf(m, "%lu\n", pid);
	return 0;
}

/* 128 should be much more than enough */
#define PID_BUF_SIZE 127

int trace_pid_write(struct trace_pid_list *filtered_pids,
		    struct trace_pid_list **new_pid_list,
		    const char __user *ubuf, size_t cnt)
{
	struct trace_pid_list *pid_list;
	struct trace_parser parser;
	unsigned long val;
	int nr_pids = 0;
	ssize_t read = 0;
	ssize_t ret;
	loff_t pos;
	pid_t pid;

	if (trace_parser_get_init(&parser, PID_BUF_SIZE + 1))
		return -ENOMEM;

	/*
	 * Always recreate a new array. The write is an all or nothing
	 * operation. Always create a new array when adding new pids by
	 * the user. If the operation fails, then the current list is
	 * not modified.
	 */
	pid_list = trace_pid_list_alloc();
	if (!pid_list) {
		trace_parser_put(&parser);
		return -ENOMEM;
	}

	if (filtered_pids) {
		/* copy the current bits to the new max */
		ret = trace_pid_list_first(filtered_pids, &pid);
		while (!ret) {
			ret = trace_pid_list_set(pid_list, pid);
			if (ret < 0)
				goto out;

			ret = trace_pid_list_next(filtered_pids, pid + 1, &pid);
			nr_pids++;
		}
	}

	ret = 0;
	while (cnt > 0) {

		pos = 0;

		ret = trace_get_user(&parser, ubuf, cnt, &pos);
		if (ret < 0)
			break;

		read += ret;
		ubuf += ret;
		cnt -= ret;

		if (!trace_parser_loaded(&parser))
			break;

		ret = -EINVAL;
		if (kstrtoul(parser.buffer, 0, &val))
			break;

		pid = (pid_t)val;

		if (trace_pid_list_set(pid_list, pid) < 0) {
			ret = -1;
			break;
		}
		nr_pids++;

		trace_parser_clear(&parser);
		ret = 0;
	}
 out:
	trace_parser_put(&parser);

	if (ret < 0) {
		trace_pid_list_free(pid_list);
		return ret;
	}

	if (!nr_pids) {
		/* Cleared the list of pids */
		trace_pid_list_free(pid_list);
		pid_list = NULL;
	}

	*new_pid_list = pid_list;

	return read;
}
@@ -376,6 +376,436 @@ static const struct file_operations ftrace_formats_fops = {
	.release = seq_release,
};

static __always_inline bool printk_binsafe(struct trace_array *tr)
{
	/*
	 * The binary format of traceprintk can cause a crash if used
	 * by a buffer from another boot. Force the use of the
	 * non binary version of trace_printk if the trace_printk
	 * buffer is a boot mapped ring buffer.
	 */
	return !(tr->flags & TRACE_ARRAY_FL_BOOT);
}

int __trace_array_puts(struct trace_array *tr, unsigned long ip,
		       const char *str, int size)
{
	struct ring_buffer_event *event;
	struct trace_buffer *buffer;
	struct print_entry *entry;
	unsigned int trace_ctx;
	int alloc;

	if (!(tr->trace_flags & TRACE_ITER(PRINTK)))
		return 0;

	if (unlikely(tracing_selftest_running &&
		     (tr->flags & TRACE_ARRAY_FL_GLOBAL)))
		return 0;

	if (unlikely(tracing_disabled))
		return 0;

	alloc = sizeof(*entry) + size + 2; /* possible \n added */

	trace_ctx = tracing_gen_ctx();
	buffer = tr->array_buffer.buffer;
	guard(ring_buffer_nest)(buffer);
	event = __trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc,
					    trace_ctx);
	if (!event)
		return 0;

	entry = ring_buffer_event_data(event);
	entry->ip = ip;

	memcpy(&entry->buf, str, size);

	/* Add a newline if necessary */
	if (entry->buf[size - 1] != '\n') {
		entry->buf[size] = '\n';
		entry->buf[size + 1] = '\0';
	} else
		entry->buf[size] = '\0';

	__buffer_unlock_commit(buffer, event);
	ftrace_trace_stack(tr, buffer, trace_ctx, 4, NULL);
	return size;
}
EXPORT_SYMBOL_GPL(__trace_array_puts);

/**
 * __trace_puts - write a constant string into the trace buffer.
 * @ip: The address of the caller
 * @str: The constant string to write
 */
int __trace_puts(unsigned long ip, const char *str)
{
	return __trace_array_puts(printk_trace, ip, str, strlen(str));
}
EXPORT_SYMBOL_GPL(__trace_puts);

/**
 * __trace_bputs - write the pointer to a constant string into trace buffer
 * @ip: The address of the caller
 * @str: The constant string to write to the buffer to
 */
int __trace_bputs(unsigned long ip, const char *str)
{
	struct trace_array *tr = READ_ONCE(printk_trace);
	struct ring_buffer_event *event;
	struct trace_buffer *buffer;
	struct bputs_entry *entry;
	unsigned int trace_ctx;
	int size = sizeof(struct bputs_entry);

	if (!printk_binsafe(tr))
		return __trace_puts(ip, str);

	if (!(tr->trace_flags & TRACE_ITER(PRINTK)))
		return 0;

	if (unlikely(tracing_selftest_running || tracing_disabled))
		return 0;

	trace_ctx = tracing_gen_ctx();
	buffer = tr->array_buffer.buffer;

	guard(ring_buffer_nest)(buffer);
	event = __trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size,
					    trace_ctx);
	if (!event)
		return 0;

	entry = ring_buffer_event_data(event);
	entry->ip = ip;
	entry->str = str;

	__buffer_unlock_commit(buffer, event);
	ftrace_trace_stack(tr, buffer, trace_ctx, 4, NULL);

	return 1;
}
EXPORT_SYMBOL_GPL(__trace_bputs);
/* created for use with alloc_percpu */
struct trace_buffer_struct {
	int nesting;
	char buffer[4][TRACE_BUF_SIZE];
};

static struct trace_buffer_struct __percpu *trace_percpu_buffer;

/*
 * This allows for lockless recording. If we're nested too deeply, then
 * this returns NULL.
 */
static char *get_trace_buf(void)
{
	struct trace_buffer_struct *buffer = this_cpu_ptr(trace_percpu_buffer);

	if (!trace_percpu_buffer || buffer->nesting >= 4)
		return NULL;

	buffer->nesting++;

	/* Interrupts must see nesting incremented before we use the buffer */
	barrier();
	return &buffer->buffer[buffer->nesting - 1][0];
}

static void put_trace_buf(void)
{
	/* Don't let the decrement of nesting leak before this */
	barrier();
	this_cpu_dec(trace_percpu_buffer->nesting);
}
static int alloc_percpu_trace_buffer(void)
{
	struct trace_buffer_struct __percpu *buffers;

	if (trace_percpu_buffer)
		return 0;

	buffers = alloc_percpu(struct trace_buffer_struct);
	if (MEM_FAIL(!buffers, "Could not allocate percpu trace_printk buffer"))
		return -ENOMEM;

	trace_percpu_buffer = buffers;
	return 0;
}

static int buffers_allocated;

void trace_printk_init_buffers(void)
{
	if (buffers_allocated)
		return;

	if (alloc_percpu_trace_buffer())
		return;

	/* trace_printk() is for debug use only. Don't use it in production. */

	pr_warn("\n");
	pr_warn("**********************************************************\n");
	pr_warn("**   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **\n");
	pr_warn("**                                                      **\n");
	pr_warn("** trace_printk() being used. Allocating extra memory.  **\n");
	pr_warn("**                                                      **\n");
	pr_warn("** This means that this is a DEBUG kernel and it is     **\n");
	pr_warn("** unsafe for production use.                           **\n");
	pr_warn("**                                                      **\n");
	pr_warn("** If you see this message and you are not debugging    **\n");
	pr_warn("** the kernel, report this immediately to your vendor!  **\n");
	pr_warn("**                                                      **\n");
	pr_warn("**   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **\n");
	pr_warn("**********************************************************\n");

	/* Expand the buffers to set size */
	if (tracing_update_buffers(NULL) < 0)
		pr_err("Failed to expand tracing buffers for trace_printk() calls\n");
	else
		buffers_allocated = 1;

	/*
	 * trace_printk_init_buffers() can be called by modules.
	 * If that happens, then we need to start cmdline recording
	 * directly here.
	 */
	if (system_state == SYSTEM_RUNNING)
		tracing_start_cmdline_record();
}
EXPORT_SYMBOL_GPL(trace_printk_init_buffers);

void trace_printk_start_comm(void)
{
	/* Start tracing comms if trace printk is set */
	if (!buffers_allocated)
		return;
	tracing_start_cmdline_record();
}

void trace_printk_start_stop_comm(int enabled)
{
	if (!buffers_allocated)
		return;

	if (enabled)
		tracing_start_cmdline_record();
	else
		tracing_stop_cmdline_record();
}

/**
 * trace_vbprintk - write binary msg to tracing buffer
 * @ip: The address of the caller
 * @fmt: The string format to write to the buffer
 * @args: Arguments for @fmt
 */
int trace_vbprintk(unsigned long ip, const char *fmt, va_list args)
{
	struct ring_buffer_event *event;
	struct trace_buffer *buffer;
	struct trace_array *tr = READ_ONCE(printk_trace);
	struct bprint_entry *entry;
	unsigned int trace_ctx;
	char *tbuffer;
	int len = 0, size;

	if (!printk_binsafe(tr))
		return trace_vprintk(ip, fmt, args);

	if (unlikely(tracing_selftest_running || tracing_disabled))
		return 0;

	/* Don't pollute graph traces with trace_vprintk internals */
	pause_graph_tracing();

	trace_ctx = tracing_gen_ctx();
	guard(preempt_notrace)();

	tbuffer = get_trace_buf();
	if (!tbuffer) {
		len = 0;
		goto out_nobuffer;
	}

	len = vbin_printf((u32 *)tbuffer, TRACE_BUF_SIZE/sizeof(int), fmt, args);

	if (len > TRACE_BUF_SIZE/sizeof(int) || len < 0)
		goto out_put;

	size = sizeof(*entry) + sizeof(u32) * len;
	buffer = tr->array_buffer.buffer;
	scoped_guard(ring_buffer_nest, buffer) {
		event = __trace_buffer_lock_reserve(buffer, TRACE_BPRINT, size,
						    trace_ctx);
		if (!event)
			goto out_put;
		entry = ring_buffer_event_data(event);
		entry->ip = ip;
		entry->fmt = fmt;

		memcpy(entry->buf, tbuffer, sizeof(u32) * len);
		__buffer_unlock_commit(buffer, event);
		ftrace_trace_stack(tr, buffer, trace_ctx, 6, NULL);
	}
out_put:
	put_trace_buf();

out_nobuffer:
	unpause_graph_tracing();

	return len;
}
EXPORT_SYMBOL_GPL(trace_vbprintk);

static __printf(3, 0)
int __trace_array_vprintk(struct trace_buffer *buffer,
			  unsigned long ip, const char *fmt, va_list args)
{
	struct ring_buffer_event *event;
	int len = 0, size;
	struct print_entry *entry;
	unsigned int trace_ctx;
	char *tbuffer;

	if (unlikely(tracing_disabled))
		return 0;

	/* Don't pollute graph traces with trace_vprintk internals */
	pause_graph_tracing();

	trace_ctx = tracing_gen_ctx();
	guard(preempt_notrace)();

	tbuffer = get_trace_buf();
	if (!tbuffer) {
		len = 0;
		goto out_nobuffer;
	}

	len = vscnprintf(tbuffer, TRACE_BUF_SIZE, fmt, args);

	size = sizeof(*entry) + len + 1;
	scoped_guard(ring_buffer_nest, buffer) {
		event = __trace_buffer_lock_reserve(buffer, TRACE_PRINT, size,
						    trace_ctx);
		if (!event)
			goto out;
		entry = ring_buffer_event_data(event);
		entry->ip = ip;

		memcpy(&entry->buf, tbuffer, len + 1);
		__buffer_unlock_commit(buffer, event);
		ftrace_trace_stack(printk_trace, buffer, trace_ctx, 6, NULL);
	}
out:
	put_trace_buf();

out_nobuffer:
	unpause_graph_tracing();

	return len;
}

int trace_array_vprintk(struct trace_array *tr,
			unsigned long ip, const char *fmt, va_list args)
{
	if (tracing_selftest_running && (tr->flags & TRACE_ARRAY_FL_GLOBAL))
		return 0;

	return __trace_array_vprintk(tr->array_buffer.buffer, ip, fmt, args);
}

/**
 * trace_array_printk - Print a message to a specific instance
 * @tr: The instance trace_array descriptor
 * @ip: The instruction pointer that this is called from.
 * @fmt: The format to print (printf format)
 *
 * If a subsystem sets up its own instance, they have the right to
 * printk strings into their tracing instance buffer using this
 * function. Note, this function will not write into the top level
 * buffer (use trace_printk() for that), as writing into the top level
 * buffer should only have events that can be individually disabled.
 * trace_printk() is only used for debugging a kernel, and should not
 * be ever incorporated in normal use.
 *
 * trace_array_printk() can be used, as it will not add noise to the
 * top level tracing buffer.
 *
 * Note, trace_array_init_printk() must be called on @tr before this
 * can be used.
 */
int trace_array_printk(struct trace_array *tr,
		       unsigned long ip, const char *fmt, ...)
{
	int ret;
	va_list ap;

	if (!tr)
		return -ENOENT;

	/* This is only allowed for created instances */
	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
		return 0;

	if (!(tr->trace_flags & TRACE_ITER(PRINTK)))
		return 0;

	va_start(ap, fmt);
	ret = trace_array_vprintk(tr, ip, fmt, ap);
	va_end(ap);
	return ret;
}
EXPORT_SYMBOL_GPL(trace_array_printk);

/**
 * trace_array_init_printk - Initialize buffers for trace_array_printk()
 * @tr: The trace array to initialize the buffers for
 *
 * As trace_array_printk() only writes into instances, they are OK to
 * have in the kernel (unlike trace_printk()). This needs to be called
 * before trace_array_printk() can be used on a trace_array.
 */
int trace_array_init_printk(struct trace_array *tr)
{
	if (!tr)
		return -ENOENT;

	/* This is only allowed for created instances */
	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
		return -EINVAL;

	return alloc_percpu_trace_buffer();
}
EXPORT_SYMBOL_GPL(trace_array_init_printk);

int trace_array_printk_buf(struct trace_buffer *buffer,
			   unsigned long ip, const char *fmt, ...)
{
	int ret;
	va_list ap;

	if (!(printk_trace->trace_flags & TRACE_ITER(PRINTK)))
		return 0;

	va_start(ap, fmt);
	ret = __trace_array_vprintk(buffer, ip, fmt, ap);
	va_end(ap);
	return ret;
}

int trace_vprintk(unsigned long ip, const char *fmt, va_list args)
{
	return trace_array_vprintk(printk_trace, ip, fmt, args);
}
EXPORT_SYMBOL_GPL(trace_vprintk);

static __init int init_trace_printk_function_export(void)
{
	int ret;
@@ -1225,7 +1225,7 @@ trace_selftest_startup_irqsoff(struct tracer *trace, struct trace_array *tr)
 	/* check both trace buffers */
 	ret = trace_test_buffer(&tr->array_buffer, NULL);
 	if (!ret)
-		ret = trace_test_buffer(&tr->max_buffer, &count);
+		ret = trace_test_buffer(&tr->snapshot_buffer, &count);
 	trace->reset(tr);
 	tracing_start();

@@ -1287,7 +1287,7 @@ trace_selftest_startup_preemptoff(struct tracer *trace, struct trace_array *tr)
 	/* check both trace buffers */
 	ret = trace_test_buffer(&tr->array_buffer, NULL);
 	if (!ret)
-		ret = trace_test_buffer(&tr->max_buffer, &count);
+		ret = trace_test_buffer(&tr->snapshot_buffer, &count);
 	trace->reset(tr);
 	tracing_start();

@@ -1355,7 +1355,7 @@ trace_selftest_startup_preemptirqsoff(struct tracer *trace, struct trace_array *
 	if (ret)
 		goto out;

-	ret = trace_test_buffer(&tr->max_buffer, &count);
+	ret = trace_test_buffer(&tr->snapshot_buffer, &count);
 	if (ret)
 		goto out;

@@ -1385,7 +1385,7 @@ trace_selftest_startup_preemptirqsoff(struct tracer *trace, struct trace_array *
 	if (ret)
 		goto out;

-	ret = trace_test_buffer(&tr->max_buffer, &count);
+	ret = trace_test_buffer(&tr->snapshot_buffer, &count);

 	if (!ret && !count) {
 		printk(KERN_CONT ".. no entries found ..");

@@ -1513,7 +1513,7 @@ trace_selftest_startup_wakeup(struct tracer *trace, struct trace_array *tr)
 	/* check both trace buffers */
 	ret = trace_test_buffer(&tr->array_buffer, NULL);
 	if (!ret)
-		ret = trace_test_buffer(&tr->max_buffer, &count);
+		ret = trace_test_buffer(&tr->snapshot_buffer, &count);

 	trace->reset(tr);
@@ -106,7 +106,7 @@ EXPORT_SYMBOL_GPL(trace_seq_printf);
 * Writes a ASCII representation of a bitmask string into @s.
 */
void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
-		int nmaskbits)
+		       int nmaskbits)
{
	unsigned int save_len = s->seq.len;

@@ -124,6 +124,33 @@ void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
}
EXPORT_SYMBOL_GPL(trace_seq_bitmask);

/**
 * trace_seq_bitmask_list - write a bitmask array in its list representation
 * @s: trace sequence descriptor
 * @maskp: points to an array of unsigned longs that represent a bitmask
 * @nmaskbits: The number of bits that are valid in @maskp
 *
 * Writes a list representation (e.g., 0-3,5-7) of a bitmask string into @s.
 */
void trace_seq_bitmask_list(struct trace_seq *s, const unsigned long *maskp,
			    int nmaskbits)
{
	unsigned int save_len = s->seq.len;

	if (s->full)
		return;

	__trace_seq_init(s);

	seq_buf_printf(&s->seq, "%*pbl", nmaskbits, maskp);

	if (unlikely(seq_buf_has_overflowed(&s->seq))) {
		s->seq.len = save_len;
		s->full = 1;
	}
}
EXPORT_SYMBOL_GPL(trace_seq_bitmask_list);

/**
 * trace_seq_vprintf - sequence printing of trace information
 * @s: trace sequence descriptor
@@ -34,9 +34,13 @@ enum tp_transition_sync {

 struct tp_transition_snapshot {
 	unsigned long rcu;
+	unsigned long srcu_gp;
 	bool ongoing;
 };

+DEFINE_SRCU_FAST(tracepoint_srcu);
+EXPORT_SYMBOL_GPL(tracepoint_srcu);
+
 /* Protected by tracepoints_mutex */
 static struct tp_transition_snapshot tp_transition_snapshot[_NR_TP_TRANSITION_SYNC];

@@ -46,6 +50,7 @@ static void tp_rcu_get_state(enum tp_transition_sync sync)

 	/* Keep the latest get_state snapshot. */
 	snapshot->rcu = get_state_synchronize_rcu();
+	snapshot->srcu_gp = start_poll_synchronize_srcu(&tracepoint_srcu);
 	snapshot->ongoing = true;
 }

@@ -56,6 +61,8 @@ static void tp_rcu_cond_sync(enum tp_transition_sync sync)
 	if (!snapshot->ongoing)
 		return;
 	cond_synchronize_rcu(snapshot->rcu);
+	if (!poll_state_synchronize_srcu(&tracepoint_srcu, snapshot->srcu_gp))
+		synchronize_srcu(&tracepoint_srcu);
 	snapshot->ongoing = false;
 }

@@ -112,10 +119,13 @@ static inline void release_probes(struct tracepoint *tp, struct tracepoint_func
 	struct tp_probes *tp_probes = container_of(old,
 			struct tp_probes, probes[0]);

-	if (tracepoint_is_faultable(tp))
-		call_rcu_tasks_trace(&tp_probes->rcu, rcu_free_old_probes);
-	else
-		call_rcu(&tp_probes->rcu, rcu_free_old_probes);
+	if (tracepoint_is_faultable(tp)) {
+		call_rcu_tasks_trace(&tp_probes->rcu,
+				     rcu_free_old_probes);
+	} else {
+		call_srcu(&tracepoint_srcu, &tp_probes->rcu,
+			  rcu_free_old_probes);
+	}
 }