Patches contributed by Eötvös Loránd University


commit 00a8bf859331e349713274825e6fbf20bf2ac15a
Author: Ingo Molnar <mingo@elte.hu>
Date:   Thu Feb 19 13:01:37 2009 +0100

    tracing/function-graph-tracer: fix merge
    
    Merge artifact: pid got changed to ent->pid meanwhile.
    
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 8f4004a00b4e..c009553a8e81 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -589,7 +589,7 @@ print_graph_entry(struct ftrace_graph_ent_entry *field, struct trace_seq *s,
 
 	/* Proc */
 	if (tracer_flags.val & TRACE_GRAPH_PRINT_PROC) {
-		ret = print_graph_proc(s, pid);
+		ret = print_graph_proc(s, ent->pid);
 		if (ret == TRACE_TYPE_PARTIAL_LINE)
 			return TRACE_TYPE_PARTIAL_LINE;
 
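
This is the classic merge-artifact pattern: a refactor had moved the
pid onto the traced entry, and the merged hunk still passed the old
local. A minimal, self-contained sketch of the corrected shape (plain
C with stand-in types and helpers, not the kernel code):

#include <stdio.h>

struct trace_entry {
	int pid;			/* stand-in for the kernel field */
};

static void print_graph_proc(int pid)	/* stand-in for the kernel helper */
{
	printf("proc pid: %d\n", pid);
}

static void print_graph_entry(const struct trace_entry *ent)
{
	/* was: print_graph_proc(pid) -- the pid now lives on the entry */
	print_graph_proc(ent->pid);
}

int main(void)
{
	struct trace_entry e = { .pid = 4251 };

	print_graph_entry(&e);
	return 0;
}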

commit 5752674e140db5bce08c6bc60021a9bc3b960800
Author: Ingo Molnar <mingo@elte.hu>
Date:   Thu Feb 19 12:54:10 2009 +0100

    Documentation/ftrace.txt: update
    
    - fix typos/grammos and clarify the text
    - prettify the document some more
    
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

diff --git a/Documentation/ftrace.txt b/Documentation/ftrace.txt
index 055bcd2992da..2041ee951c1a 100644
--- a/Documentation/ftrace.txt
+++ b/Documentation/ftrace.txt
@@ -15,31 +15,31 @@ Introduction
 
 Ftrace is an internal tracer designed to help out developers and
 designers of systems to find what is going on inside the kernel.
-It can be used for debugging or analyzing latencies and performance
-issues that take place outside of user-space.
+It can be used for debugging or analyzing latencies and
+performance issues that take place outside of user-space.
 
 Although ftrace is the function tracer, it also includes an
-infrastructure that allows for other types of tracing. Some of the
-tracers that are currently in ftrace include a tracer to trace
-context switches, the time it takes for a high priority task to
-run after it was woken up, the time interrupts are disabled, and
-more (ftrace allows for tracer plugins, which means that the list of
-tracers can always grow).
+infrastructure that allows for other types of tracing. Some of
+the tracers that are currently in ftrace include a tracer to
+trace context switches, the time it takes for a high priority
+task to run after it was woken up, the time interrupts are
+disabled, and more (ftrace allows for tracer plugins, which
+means that the list of tracers can always grow).
 
 
 The File System
 ---------------
 
-Ftrace uses the debugfs file system to hold the control files as well
-as the files to display output.
+Ftrace uses the debugfs file system to hold the control files as
+well as the files to display output.
 
 To mount the debugfs system:
 
   # mkdir /debug
   # mount -t debugfs nodev /debug
 
-(Note: it is more common to mount at /sys/kernel/debug, but for simplicity
- this document will use /debug)
+( Note: it is more common to mount at /sys/kernel/debug, but for
+  simplicity this document will use /debug)
 
 That's it! (assuming that you have ftrace configured into your kernel)
 
@@ -50,94 +50,124 @@ of ftrace. Here is a list of some of the key files:
 
  Note: all time values are in microseconds.
 
-  current_tracer: This is used to set or display the current tracer
-		that is configured.
-
-  available_tracers: This holds the different types of tracers that
-		have been compiled into the kernel. The tracers
-		listed here can be configured by echoing their name
-		into current_tracer.
-
-  tracing_enabled: This sets or displays whether the current_tracer
-		is activated and tracing or not. Echo 0 into this
-		file to disable the tracer or 1 to enable it.
-
-  trace: This file holds the output of the trace in a human readable
-		format (described below).
-
-  latency_trace: This file shows the same trace but the information
-		is organized more to display possible latencies
-		in the system (described below).
-
-  trace_pipe: The output is the same as the "trace" file but this
-		file is meant to be streamed with live tracing.
-		Reads from this file will block until new data
-		is retrieved. Unlike the "trace" and "latency_trace"
-		files, this file is a consumer. This means reading
-		from this file causes sequential reads to display
-		more current data. Once data is read from this
-		file, it is consumed, and will not be read
-		again with a sequential read. The "trace" and
-		"latency_trace" files are static, and if the
-		tracer is not adding more data, they will display
-		the same information every time they are read.
-
-  trace_options: This file lets the user control the amount of data
-		that is displayed in one of the above output
-		files.
-
-  trace_max_latency: Some of the tracers record the max latency.
-		For example, the time interrupts are disabled.
-		This time is saved in this file. The max trace
-		will also be stored, and displayed by either
-		"trace" or "latency_trace".  A new max trace will
-		only be recorded if the latency is greater than
-		the value in this file. (in microseconds)
-
-  buffer_size_kb: This sets or displays the number of kilobytes each CPU
-		buffer can hold. The tracer buffers are the same size
-		for each CPU. The displayed number is the size of the
-		CPU buffer and not total size of all buffers. The
-		trace buffers are allocated in pages (blocks of memory
-		that the kernel uses for allocation, usually 4 KB in size).
-		If the last page allocated has room for more bytes
-		than requested, the rest of the page will be used,
-		making the actual allocation bigger than requested.
-		(Note, the size may not be a multiple of the page size due
-		to buffer managment overhead.)
-
-		This can only be updated when the current_tracer
-		is set to "nop".
-
-  tracing_cpumask: This is a mask that lets the user only trace
-		on specified CPUS. The format is a hex string
-		representing the CPUS.
-
-  set_ftrace_filter: When dynamic ftrace is configured in (see the
-		section below "dynamic ftrace"), the code is dynamically
-		modified (code text rewrite) to disable calling of the
-		function profiler (mcount). This lets tracing be configured
-		in with practically no overhead in performance.  This also
-		has a side effect of enabling or disabling specific functions
-		to be traced. Echoing names of functions into this file
-		will limit the trace to only those functions.
-
-  set_ftrace_notrace: This has an effect opposite to that of
-		set_ftrace_filter. Any function that is added here will not
-		be traced. If a function exists in both set_ftrace_filter
-		and set_ftrace_notrace,	the function will _not_ be traced.
-
-  set_ftrace_pid: Have the function tracer only trace a single thread.
-
-  set_graph_function: Select the function where the trace have to start
-		with the function graph tracer (See the section
-		"dynamic ftrace" for more details).
-
-  available_filter_functions: This lists the functions that ftrace
-		has processed and can trace. These are the function
-		names that you can pass to "set_ftrace_filter" or
-		"set_ftrace_notrace". (See the section "dynamic ftrace"
-		below for more details.)
+  current_tracer:
+
+	This is used to set or display the current tracer
+	that is configured.
+
+  available_tracers:
+
+	This holds the different types of tracers that
+	have been compiled into the kernel. The
+	tracers listed here can be configured by
+	echoing their name into current_tracer.
+
+  tracing_enabled:
+
+	This sets or displays whether the current_tracer
+	is activated and tracing or not. Echo 0 into this
+	file to disable the tracer or 1 to enable it.
+
+  trace:
+
+	This file holds the output of the trace in a human
+	readable format (described below).
+
+  latency_trace:
+
+	This file shows the same trace but the information
+	is organized more to display possible latencies
+	in the system (described below).
+
+  trace_pipe:
+
+	The output is the same as the "trace" file but this
+	file is meant to be streamed with live tracing.
+	Reads from this file will block until new data
+	is retrieved. Unlike the "trace" and "latency_trace"
+	files, this file is a consumer. This means reading
+	from this file causes sequential reads to display
+	more current data. Once data is read from this
+	file, it is consumed, and will not be read
+	again with a sequential read. The "trace" and
+	"latency_trace" files are static, and if the
+	tracer is not adding more data, they will display
+	the same information every time they are read.
+
+  trace_options:
+
+	This file lets the user control the amount of data
+	that is displayed in one of the above output
+	files.
+
+  tracing_max_latency:
+
+	Some of the tracers record the max latency.
+	For example, the time interrupts are disabled.
+	This time is saved in this file. The max trace
+	will also be stored, and displayed by either
+	"trace" or "latency_trace".  A new max trace will
+	only be recorded if the latency is greater than
+	the value in this file (in microseconds).
+
+  buffer_size_kb:
+
+	This sets or displays the number of kilobytes each CPU
+	buffer can hold. The tracer buffers are the same size
+	for each CPU. The displayed number is the size of the
+	CPU buffer and not total size of all buffers. The
+	trace buffers are allocated in pages (blocks of memory
+	that the kernel uses for allocation, usually 4 KB in size).
+	If the last page allocated has room for more bytes
+	than requested, the rest of the page will be used,
+	making the actual allocation bigger than requested.
+	( Note, the size may not be a multiple of the page size
+	  due to buffer management overhead. )
+
+	This can only be updated when the current_tracer
+	is set to "nop".
+
+  tracing_cpumask:
+
+	This is a mask that lets the user only trace
+	on specified CPUS. The format is a hex string
+	representing the CPUS.
+
+  set_ftrace_filter:
+
+	When dynamic ftrace is configured in (see the
+	section below "dynamic ftrace"), the code is dynamically
+	modified (code text rewrite) to disable calling of the
+	function profiler (mcount). This lets tracing be configured
+	in with practically no overhead in performance.  This also
+	has a side effect of enabling or disabling specific functions
+	to be traced. Echoing names of functions into this file
+	will limit the trace to only those functions.
+
+  set_ftrace_notrace:
+
+	This has an effect opposite to that of
+	set_ftrace_filter. Any function that is added here will not
+	be traced. If a function exists in both set_ftrace_filter
+	and set_ftrace_notrace, the function will _not_ be traced.
+
+  set_ftrace_pid:
+
+	Have the function tracer only trace a single thread.
+
+  set_graph_function:
+
+	Set a "trigger" function where tracing should start
+	with the function graph tracer (See the section
+	"dynamic ftrace" for more details).
+
+  available_filter_functions:
+
+	This lists the functions that ftrace
+	has processed and can trace. These are the function
+	names that you can pass to "set_ftrace_filter" or
+	"set_ftrace_notrace". (See the section "dynamic ftrace"
+	below for more details.)
 
 
 The Tracers
@@ -145,44 +175,66 @@ The Tracers
 
 Here is the list of current tracers that may be configured.
 
-  function - function tracer that uses mcount to trace all functions.
+  "function"
+
+	Function call tracer to trace all kernel functions.
+
+  "function_graph_tracer"
+
+	Similar to the function tracer except that the
+	function tracer probes the functions on their entry
+	whereas the function graph tracer traces on both entry
+	and exit of the functions. It then provides the ability
+	to draw a graph of function calls similar to C code
+	source.
+
+  "sched_switch"
+
+	Traces the context switches and wakeups between tasks.
+
+  "irqsoff"
+
+	Traces the areas that disable interrupts and saves
+	the trace with the longest max latency.
+	See tracing_max_latency. When a new max is recorded,
+	it replaces the old trace. It is best to view this
+	trace via the latency_trace file.
 
-  function_graph_tracer - similar to the function tracer except that the
-		function tracer probes the functions on their entry whereas the
-		function graph tracer traces on both entry and exit of the
-		functions. It then provides the ability to draw a graph of
-		function calls like a primitive C code source.
+  "preemptoff"
 
-  sched_switch - traces the context switches between tasks.
+	Similar to irqsoff but traces and records the amount of
+	time for which preemption is disabled.
 
-  irqsoff - traces the areas that disable interrupts and saves
-  		the trace with the longest max latency.
-		See tracing_max_latency.  When a new max is recorded,
-		it replaces the old trace. It is best to view this
-		trace via the latency_trace file.
+  "preemptirqsoff"
 
-  preemptoff - Similar to irqsoff but traces and records the amount of
-		time for which preemption is disabled.
+	Similar to irqsoff and preemptoff, but traces and
+	records the largest time for which irqs and/or preemption
+	is disabled.
 
-  preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
-		 records the largest time for which irqs and/or preemption
-		 is disabled.
+  "wakeup"
 
-  wakeup - Traces and records the max latency that it takes for
-		the highest priority task to get scheduled after
-		it has been woken up.
+	Traces and records the max latency that it takes for
+	the highest priority task to get scheduled after
+	it has been woken up.
 
-  nop - This is not a tracer. To remove all tracers from tracing
-		simply echo "nop" into current_tracer.
+  "hw-branch-tracer"
 
-  hw-branch-tracer - traces branches on all cpu's in a circular buffer.
+	Uses the BTS CPU feature on x86 CPUs to trace all
+	branches executed.
+
+  "nop"
+
+	This is the "trace nothing" tracer. To remove all
+	tracers from tracing simply echo "nop" into
+	current_tracer.
 
 
 Examples of using the tracer
 ----------------------------
 
-Here are typical examples of using the tracers when controlling them only
-with the debugfs interface (without using any user-land utilities).
+Here are typical examples of using the tracers when controlling
+them only with the debugfs interface (without using any
+user-land utilities).
 
 Output format:
 --------------
@@ -199,16 +251,16 @@ Here is an example of the output format of the file "trace"
             bash-4251  [01] 10152.583855: _atomic_dec_and_lock <-dput
                              --------
 
-A header is printed with the tracer name that is represented by the trace.
-In this case the tracer is "function". Then a header showing the format. Task
-name "bash", the task PID "4251", the CPU that it was running on
-"01", the timestamp in <secs>.<usecs> format, the function name that was
-traced "path_put" and the parent function that called this function
-"path_walk". The timestamp is the time at which the function was
-entered.
+A header is printed with the tracer name that is represented by
+the trace. In this case the tracer is "function". Then a header
+showing the format. Task name "bash", the task PID "4251", the
+CPU that it was running on "01", the timestamp in <secs>.<usecs>
+format, the function name that was traced "path_put" and the
+parent function that called this function "path_walk". The
+timestamp is the time at which the function was entered.
 
-The sched_switch tracer also includes tracing of task wakeups and
-context switches.
+The sched_switch tracer also includes tracing of task wakeups
+and context switches.
 
      ksoftirqd/1-7     [01]  1453.070013:      7:115:R   +  2916:115:S
      ksoftirqd/1-7     [01]  1453.070013:      7:115:R   +    10:115:S
@@ -217,8 +269,8 @@ context switches.
      kondemand/1-2916  [01]  1453.070013:   2916:115:S ==>     7:115:R
      ksoftirqd/1-7     [01]  1453.070013:      7:115:S ==>     0:140:R
 
-Wake ups are represented by a "+" and the context switches are shown as
-"==>".  The format is:
+Wake ups are represented by a "+" and the context switches are
+shown as "==>".  The format is:
 
  Context switches:
 
@@ -232,19 +284,20 @@ Wake ups are represented by a "+" and the context switches are shown as
 
   <pid>:<prio>:<state>    +  <pid>:<prio>:<state>
 
-The prio is the internal kernel priority, which is the inverse of the
-priority that is usually displayed by user-space tools. Zero represents
-the highest priority (99). Prio 100 starts the "nice" priorities with
-100 being equal to nice -20 and 139 being nice 19. The prio "140" is
-reserved for the idle task which is the lowest priority thread (pid 0).
+The prio is the internal kernel priority, which is the inverse
+of the priority that is usually displayed by user-space tools.
+Zero represents the highest priority (99). Prio 100 starts the
+"nice" priorities with 100 being equal to nice -20 and 139 being
+nice 19. The prio "140" is reserved for the idle task which is
+the lowest priority thread (pid 0).
 
 
 Latency trace format
 --------------------
 
-For traces that display latency times, the latency_trace file gives
-somewhat more information to see why a latency happened. Here is a typical
-trace.
+For traces that display latency times, the latency_trace file
+gives somewhat more information to see why a latency happened.
+Here is a typical trace.
 
 # tracer: irqsoff
 #
@@ -271,20 +324,20 @@ irqsoff latency trace v1.1.5 on 2.6.26-rc8
   <idle>-0     0d.s1   98us : trace_hardirqs_on (do_softirq)
 
 
+This shows that the current tracer is "irqsoff" tracing the time
+for which interrupts were disabled. It gives the trace version
+and the version of the kernel on which this was executed
+(2.6.26-rc8). Then it displays the max latency in microsecs (97
+us). The number of trace entries displayed and the total number
+recorded (both are three: #3/3). The type of preemption that was
+used (PREEMPT). VP, KP, SP, and HP are always zero and are
+reserved for later use. #P is the number of online CPUS (#P:2).
 
-This shows that the current tracer is "irqsoff" tracing the time for which
-interrupts were disabled. It gives the trace version and the version
-of the kernel upon which this was executed on (2.6.26-rc8). Then it displays
-the max latency in microsecs (97 us). The number of trace entries displayed
-and the total number recorded (both are three: #3/3). The type of
-preemption that was used (PREEMPT). VP, KP, SP, and HP are always zero
-and are reserved for later use. #P is the number of online CPUS (#P:2).
-
-The task is the process that was running when the latency occurred.
-(swapper pid: 0).
+The task is the process that was running when the latency
+occurred. (swapper pid: 0).
 
-The start and stop (the functions in which the interrupts were disabled and
-enabled respectively) that caused the latencies:
+The start and stop (the functions in which the interrupts were
+disabled and enabled respectively) that caused the latencies:
 
   apic_timer_interrupt is where the interrupts were disabled.
   do_softirq is where they were enabled again.
@@ -320,12 +373,12 @@ The above is mostly meaningful for kernel developers.
 	latency_trace file is relative to the start of the trace.
 
   delay: This is just to help catch your eye a bit better. And
-	needs to be fixed to be only relative to the same CPU.
-	The marks are determined by the difference between this
-	current trace and the next trace.
-	 '!' - greater than preempt_mark_thresh (default 100)
-	 '+' - greater than 1 microsecond
-	 ' ' - less than or equal to 1 microsecond.
+	 needs to be fixed to be only relative to the same CPU.
+	 The marks are determined by the difference between this
+	 current trace and the next trace.
+	  '!' - greater than preempt_mark_thresh (default 100)
+	  '+' - greater than 1 microsecond
+	  ' ' - less than or equal to 1 microsecond.
 
   The rest is the same as the 'trace' file.
 
@@ -333,14 +386,15 @@ The above is mostly meaningful for kernel developers.
 trace_options
 -------------
 
-The trace_options file is used to control what gets printed in the trace
-output. To see what is available, simply cat the file:
+The trace_options file is used to control what gets printed in
+the trace output. To see what is available, simply cat the file:
 
   cat /debug/tracing/trace_options
   print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
- noblock nostacktrace nosched-tree nouserstacktrace nosym-userobj
+  noblock nostacktrace nosched-tree nouserstacktrace nosym-userobj
 
-To disable one of the options, echo in the option prepended with "no".
+To disable one of the options, echo in the option prepended with
+"no".
 
   echo noprint-parent > /debug/tracing/trace_options
 
@@ -350,8 +404,8 @@ To enable an option, leave off the "no".
 
 Here are the available options:
 
-  print-parent - On function traces, display the calling function
-		as well as the function being traced.
+  print-parent - On function traces, display the calling (parent)
+		 function as well as the function being traced.
 
   print-parent:
    bash-4000  [01]  1477.606694: simple_strtoul <-strict_strtoul
@@ -360,15 +414,16 @@ Here are the available options:
    bash-4000  [01]  1477.606694: simple_strtoul
 
 
-  sym-offset - Display not only the function name, but also the offset
-		in the function. For example, instead of seeing just
-		"ktime_get", you will see "ktime_get+0xb/0x20".
+  sym-offset - Display not only the function name, but also the
+	       offset in the function. For example, instead of
+	       seeing just "ktime_get", you will see
+	       "ktime_get+0xb/0x20".
 
   sym-offset:
    bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0
 
-  sym-addr - this will also display the function address as well as
-		the function name.
+  sym-addr - this will display the function address as well
+	     as the function name.
 
   sym-addr:
    bash-4000  [01]  1477.606694: simple_strtoul <c0339346>
@@ -378,35 +433,41 @@ Here are the available options:
     bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
     (+0.000ms): simple_strtoul (strict_strtoul)
 
-  raw - This will display raw numbers. This option is best for use with
-	user applications that can translate the raw numbers better than
-	having it done in the kernel.
+  raw - This will display raw numbers. This option is best for
+	use with user applications that can translate the raw
+	numbers better than having it done in the kernel.
 
-  hex - Similar to raw, but the numbers will be in a hexadecimal format.
+  hex - Similar to raw, but the numbers will be in a hexadecimal
+	format.
 
   bin - This will print out the formats in raw binary.
 
   block - TBD (needs update)
 
-  stacktrace - This is one of the options that changes the trace itself.
-		When a trace is recorded, so is the stack of functions.
-		This allows for back traces of trace sites.
+  stacktrace - This is one of the options that changes the trace
+	       itself. When a trace is recorded, so is the stack
+	       of functions. This allows for back traces of
+	       trace sites.
 
-  userstacktrace - This option changes the trace.
-		   It records a stacktrace of the current userspace thread.
+  userstacktrace - This option changes the trace. It records a
+		   stacktrace of the current userspace thread.
 
-  sym-userobj - when user stacktrace are enabled, look up which object the
-		address belongs to, and print a relative address
-		This is especially useful when ASLR is on, otherwise you don't
-		get a chance to resolve the address to object/file/line after the app is no
-		longer running
+  sym-userobj - when user stacktraces are enabled, look up which
+		object the address belongs to, and print a
+		relative address. This is especially useful when
+		ASLR is on; otherwise you don't get a chance to
+		resolve the address to object/file/line after
+		the app is no longer running.
 
-		The lookup is performed when you read trace,trace_pipe,latency_trace. Example:
+		The lookup is performed when you read
+		trace, trace_pipe or latency_trace. Example:
 
 		a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
 x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
 
-  sched-tree - TBD (any users??)
+  sched-tree - trace all tasks that are on the runqueue, at
+	       every scheduling event. Will add overhead if
+	       there are a lot of tasks running at once.
 
 
 sched_switch
@@ -443,18 +504,19 @@ of how to use it.
  [...]
 
 
-As we have discussed previously about this format, the header shows
-the name of the trace and points to the options. The "FUNCTION"
-is a misnomer since here it represents the wake ups and context
-switches.
+As we have discussed previously about this format, the header
+shows the name of the trace and points to the options. The
+"FUNCTION" is a misnomer since here it represents the wake ups
+and context switches.
 
-The sched_switch file only lists the wake ups (represented with '+')
-and context switches ('==>') with the previous task or current task
-first followed by the next task or task waking up. The format for both
-of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO
-is the inverse of the actual priority with zero (0) being the highest
-priority and the nice values starting at 100 (nice -20). Below is
-a quick chart to map the kernel priority to user land priorities.
+The sched_switch file only lists the wake ups (represented with
+'+') and context switches ('==>') with the previous task or
+current task first followed by the next task or task waking up.
+The format for both of these is PID:KERNEL-PRIO:TASK-STATE.
+Remember that the KERNEL-PRIO is the inverse of the actual
+priority with zero (0) being the highest priority and the nice
+values starting at 100 (nice -20). Below is a quick chart to map
+the kernel priority to user land priorities.
 
   Kernel priority: 0 to 99    ==> user RT priority 99 to 0
   Kernel priority: 100 to 139 ==> user nice -20 to 19
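
The chart is mechanical enough to put into code. Below is a small
helper (an illustration, not from the document) that converts the
internal kernel priority shown in these traces to the user-space view:

#include <stdio.h>

/* Map the kernel priority printed in the traces to user-space terms. */
static void print_user_prio(int kernel_prio)
{
	if (kernel_prio < 100)			/* 0..99: RT tasks */
		printf("RT priority %d\n", 99 - kernel_prio);
	else if (kernel_prio < 140)		/* 100..139: nice levels */
		printf("nice %d\n", kernel_prio - 120);
	else					/* 140: the idle task */
		printf("idle task\n");
}

int main(void)
{
	print_user_prio(115);	/* the 115 in the traces above: nice -5 */
	print_user_prio(0);	/* highest RT priority, 99 */
	return 0;
}
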
@@ -475,10 +537,10 @@ The task states are:
 ftrace_enabled
 --------------
 
-The following tracers (listed below) give different output depending
-on whether or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
-one can either use the sysctl function or set it via the proc
-file system interface.
+The following tracers (listed below) give different output
+depending on whether or not the sysctl ftrace_enabled is set. To
+set ftrace_enabled, one can either use the sysctl function or
+set it via the proc file system interface.
 
   sysctl kernel.ftrace_enabled=1
 
@@ -486,12 +548,12 @@ file system interface.
 
   echo 1 > /proc/sys/kernel/ftrace_enabled
 
-To disable ftrace_enabled simply replace the '1' with '0' in
-the above commands.
+To disable ftrace_enabled simply replace the '1' with '0' in the
+above commands.
 
-When ftrace_enabled is set the tracers will also record the functions
-that are within the trace. The descriptions of the tracers
-will also show an example with ftrace enabled.
+When ftrace_enabled is set the tracers will also record the
+functions that are within the trace. The descriptions of the
+tracers will also show an example with ftrace enabled.
 
 
 irqsoff
@@ -499,17 +561,18 @@ irqsoff
 
 When interrupts are disabled, the CPU can not react to any other
 external event (besides NMIs and SMIs). This prevents the timer
-interrupt from triggering or the mouse interrupt from letting the
-kernel know of a new mouse event. The result is a latency with the
-reaction time.
+interrupt from triggering or the mouse interrupt from letting
+the kernel know of a new mouse event. The result is a latency
+with the reaction time.
 
-The irqsoff tracer tracks the time for which interrupts are disabled.
-When a new maximum latency is hit, the tracer saves the trace leading up
-to that latency point so that every time a new maximum is reached, the old
-saved trace is discarded and the new trace is saved.
+The irqsoff tracer tracks the time for which interrupts are
+disabled. When a new maximum latency is hit, the tracer saves
+the trace leading up to that latency point so that every time a
+new maximum is reached, the old saved trace is discarded and the
+new trace is saved.
 
-To reset the maximum, echo 0 into tracing_max_latency. Here is an
-example:
+To reset the maximum, echo 0 into tracing_max_latency. Here is
+an example:
 
  # echo irqsoff > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
@@ -544,10 +607,11 @@ irqsoff latency trace v1.1.5 on 2.6.26
 
 
 Here we see that we had a latency of 12 microsecs (which is
-very good). The _write_lock_irq in sys_setpgid disabled interrupts.
-The difference between the 12 and the displayed timestamp 14us occurred
-because the clock was incremented between the time of recording the max
-latency and the time of recording the function that had that latency.
+very good). The _write_lock_irq in sys_setpgid disabled
+interrupts. The difference between the 12 and the displayed
+timestamp 14us occurred because the clock was incremented
+between the time of recording the max latency and the time of
+recording the function that had that latency.
 
 Note the above example had ftrace_enabled not set. If we set the
 ftrace_enabled, we get a much larger output:
@@ -598,24 +662,24 @@ irqsoff latency trace v1.1.5 on 2.6.26-rc8
 
 
 Here we traced a 50 microsecond latency. But we also see all the
-functions that were called during that time. Note that by enabling
-function tracing, we incur an added overhead. This overhead may
-extend the latency times. But nevertheless, this trace has provided
-some very helpful debugging information.
+functions that were called during that time. Note that by
+enabling function tracing, we incur an added overhead. This
+overhead may extend the latency times. But nevertheless, this
+trace has provided some very helpful debugging information.
 
 
 preemptoff
 ----------
 
-When preemption is disabled, we may be able to receive interrupts but
-the task cannot be preempted and a higher priority task must wait
-for preemption to be enabled again before it can preempt a lower
-priority task.
+When preemption is disabled, we may be able to receive
+interrupts but the task cannot be preempted and a higher
+priority task must wait for preemption to be enabled again
+before it can preempt a lower priority task.
 
 The preemptoff tracer traces the places that disable preemption.
-Like the irqsoff tracer, it records the maximum latency for which preemption
-was disabled. The control of preemptoff tracer is much like the irqsoff
-tracer.
+Like the irqsoff tracer, it records the maximum latency for
+which preemption was disabled. The control of preemptoff tracer
+is much like the irqsoff tracer.
 
  # echo preemptoff > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
@@ -649,11 +713,12 @@ preemptoff latency trace v1.1.5 on 2.6.26-rc8
     sshd-4261  0d.s1   30us : trace_preempt_on (__do_softirq)
 
 
-This has some more changes. Preemption was disabled when an interrupt
-came in (notice the 'h'), and was enabled while doing a softirq.
-(notice the 's'). But we also see that interrupts have been disabled
-when entering the preempt off section and leaving it (the 'd').
-We do not know if interrupts were enabled in the mean time.
+This has some more changes. Preemption was disabled when an
+interrupt came in (notice the 'h'), and was enabled while doing
+a softirq (notice the 's'). But we also see that interrupts
+have been disabled when entering the preempt off section and
+leaving it (the 'd'). We do not know if interrupts were enabled
+in the mean time.
 
 # tracer: preemptoff
 #
@@ -712,28 +777,30 @@ preemptoff latency trace v1.1.5 on 2.6.26-rc8
     sshd-4261  0d.s1   64us : trace_preempt_on (__do_softirq)
 
 
-The above is an example of the preemptoff trace with ftrace_enabled
-set. Here we see that interrupts were disabled the entire time.
-The irq_enter code lets us know that we entered an interrupt 'h'.
-Before that, the functions being traced still show that it is not
-in an interrupt, but we can see from the functions themselves that
-this is not the case.
+The above is an example of the preemptoff trace with
+ftrace_enabled set. Here we see that interrupts were disabled
+the entire time. The irq_enter code lets us know that we entered
+an interrupt 'h'. Before that, the functions being traced still
+show that it is not in an interrupt, but we can see from the
+functions themselves that this is not the case.
 
-Notice that __do_softirq when called does not have a preempt_count.
-It may seem that we missed a preempt enabling. What really happened
-is that the preempt count is held on the thread's stack and we
-switched to the softirq stack (4K stacks in effect). The code
-does not copy the preempt count, but because interrupts are disabled,
-we do not need to worry about it. Having a tracer like this is good
-for letting people know what really happens inside the kernel.
+Notice that __do_softirq when called does not have a
+preempt_count. It may seem that we missed a preempt enabling.
+What really happened is that the preempt count is held on the
+thread's stack and we switched to the softirq stack (4K stacks
+in effect). The code does not copy the preempt count, but
+because interrupts are disabled, we do not need to worry about
+it. Having a tracer like this is good for letting people know
+what really happens inside the kernel.
 
 
 preemptirqsoff
 --------------
 
-Knowing the locations that have interrupts disabled or preemption
-disabled for the longest times is helpful. But sometimes we would
-like to know when either preemption and/or interrupts are disabled.
+Knowing the locations that have interrupts disabled or
+preemption disabled for the longest times is helpful. But
+sometimes we would like to know when either preemption and/or
+interrupts are disabled.
 
 Consider the following code:
 
@@ -753,11 +820,13 @@ The preemptoff tracer will record the total length of
 call_function_with_irqs_and_preemption_off() and
 call_function_with_preemption_off().
 
-But neither will trace the time that interrupts and/or preemption
-is disabled. This total time is the time that we can not schedule.
-To record this time, use the preemptirqsoff tracer.
+But neither will trace the time that interrupts and/or
+preemption is disabled. This total time is the time that we can
+not schedule. To record this time, use the preemptirqsoff
+tracer.
 
-Again, using this trace is much like the irqsoff and preemptoff tracers.
+Again, using this trace is much like the irqsoff and preemptoff
+tracers.
 
  # echo preemptirqsoff > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
@@ -793,9 +862,10 @@ preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
 
 
 The trace_hardirqs_off_thunk is called from assembly on x86 when
-interrupts are disabled in the assembly code. Without the function
-tracing, we do not know if interrupts were enabled within the preemption
-points. We do see that it started with preemption enabled.
+interrupts are disabled in the assembly code. Without the
+function tracing, we do not know if interrupts were enabled
+within the preemption points. We do see that it started with
+preemption enabled.
 
 Here is a trace with ftrace_enabled set:
 
@@ -883,40 +953,42 @@ preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
     sshd-4261  0d.s1  105us : trace_preempt_on (__do_softirq)
 
 
-This is a very interesting trace. It started with the preemption of
-the ls task. We see that the task had the "need_resched" bit set
-via the 'N' in the trace.  Interrupts were disabled before the spin_lock
-at the beginning of the trace. We see that a schedule took place to run
-sshd.  When the interrupts were enabled, we took an interrupt.
-On return from the interrupt handler, the softirq ran. We took another
-interrupt while running the softirq as we see from the capital 'H'.
+This is a very interesting trace. It started with the preemption
+of the ls task. We see that the task had the "need_resched" bit
+set via the 'N' in the trace.  Interrupts were disabled before
+the spin_lock at the beginning of the trace. We see that a
+schedule took place to run sshd.  When the interrupts were
+enabled, we took an interrupt. On return from the interrupt
+handler, the softirq ran. We took another interrupt while
+running the softirq as we see from the capital 'H'.
 
 
 wakeup
 ------
 
-In a Real-Time environment it is very important to know the wakeup
-time it takes for the highest priority task that is woken up to the
-time that it executes. This is also known as "schedule latency".
-I stress the point that this is about RT tasks. It is also important
-to know the scheduling latency of non-RT tasks, but the average
-schedule latency is better for non-RT tasks. Tools like
-LatencyTop are more appropriate for such measurements.
+In a Real-Time environment it is very important to know the
+time it takes from the point where the highest priority task is
+woken up to the time that it executes. This is also known as "schedule
+latency". I stress the point that this is about RT tasks. It is
+also important to know the scheduling latency of non-RT tasks,
+but the average schedule latency is better for non-RT tasks.
+Tools like LatencyTop are more appropriate for such
+measurements.
 
 Real-Time environments are interested in the worst case latency.
-That is the longest latency it takes for something to happen, and
-not the average. We can have a very fast scheduler that may only
-have a large latency once in a while, but that would not work well
-with Real-Time tasks.  The wakeup tracer was designed to record
-the worst case wakeups of RT tasks. Non-RT tasks are not recorded
-because the tracer only records one worst case and tracing non-RT
-tasks that are unpredictable will overwrite the worst case latency
-of RT tasks.
-
-Since this tracer only deals with RT tasks, we will run this slightly
-differently than we did with the previous tracers. Instead of performing
-an 'ls', we will run 'sleep 1' under 'chrt' which changes the
-priority of the task.
+That is the longest latency it takes for something to happen,
+and not the average. We can have a very fast scheduler that may
+only have a large latency once in a while, but that would not
+work well with Real-Time tasks.  The wakeup tracer was designed
+to record the worst case wakeups of RT tasks. Non-RT tasks are
+not recorded because the tracer only records one worst case and
+tracing non-RT tasks that are unpredictable will overwrite the
+worst case latency of RT tasks.
+
+Since this tracer only deals with RT tasks, we will run this
+slightly differently than we did with the previous tracers.
+Instead of performing an 'ls', we will run 'sleep 1' under
+'chrt' which changes the priority of the task.
 
  # echo wakeup > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
@@ -946,17 +1018,16 @@ wakeup latency trace v1.1.5 on 2.6.26-rc8
   <idle>-0     1d..4    4us : schedule (cpu_idle)
 
 
+Running this on an idle system, we see that it only took 4
+microseconds to perform the task switch.  Note, since the trace
+marker in the schedule is before the actual "switch", we stop
+the tracing when the recorded task is about to schedule in. This
+may change if we add a new marker at the end of the scheduler.
 
-Running this on an idle system, we see that it only took 4 microseconds
-to perform the task switch.  Note, since the trace marker in the
-schedule is before the actual "switch", we stop the tracing when
-the recorded task is about to schedule in. This may change if
-we add a new marker at the end of the scheduler.
-
-Notice that the recorded task is 'sleep' with the PID of 4901 and it
-has an rt_prio of 5. This priority is user-space priority and not
-the internal kernel priority. The policy is 1 for SCHED_FIFO and 2
-for SCHED_RR.
+Notice that the recorded task is 'sleep' with the PID of 4901
+and it has an rt_prio of 5. This priority is user-space priority
+and not the internal kernel priority. The policy is 1 for
+SCHED_FIFO and 2 for SCHED_RR.
 
 Doing the same with chrt -r 5 and ftrace_enabled set.
 
@@ -1013,24 +1084,25 @@ ksoftirq-7     1d..6   49us : _spin_unlock (tracing_record_cmdline)
 ksoftirq-7     1d..6   49us : sub_preempt_count (_spin_unlock)
 ksoftirq-7     1d..4   50us : schedule (__cond_resched)
 
-The interrupt went off while running ksoftirqd. This task runs at
-SCHED_OTHER. Why did not we see the 'N' set early? This may be
-a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K stacks
-configured, the interrupt and softirq run with their own stack.
-Some information is held on the top of the task's stack (need_resched
-and preempt_count are both stored there). The setting of the NEED_RESCHED
-bit is done directly to the task's stack, but the reading of the
-NEED_RESCHED is done by looking at the current stack, which in this case
-is the stack for the hard interrupt. This hides the fact that NEED_RESCHED
-has been set. We do not see the 'N' until we switch back to the task's
+The interrupt went off while running ksoftirqd. This task runs
+at SCHED_OTHER. Why did we not see the 'N' set early? This may
+be a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K
+stacks configured, the interrupt and softirq run with their own
+stack. Some information is held on the top of the task's stack
+(need_resched and preempt_count are both stored there). The
+setting of the NEED_RESCHED bit is done directly to the task's
+stack, but the reading of the NEED_RESCHED is done by looking at
+the current stack, which in this case is the stack for the hard
+interrupt. This hides the fact that NEED_RESCHED has been set.
+We do not see the 'N' until we switch back to the task's
 assigned stack.
 
 function
 --------
 
 This tracer is the function tracer. Enabling the function tracer
-can be done from the debug file system. Make sure the ftrace_enabled is
-set; otherwise this tracer is a nop.
+can be done from the debug file system. Make sure that
+ftrace_enabled is set; otherwise this tracer is a nop.
 
  # sysctl kernel.ftrace_enabled=1
  # echo function > /debug/tracing/current_tracer
@@ -1060,14 +1132,15 @@ set; otherwise this tracer is a nop.
 [...]
 
 
-Note: function tracer uses ring buffers to store the above entries.
-The newest data may overwrite the oldest data. Sometimes using echo to
-stop the trace is not sufficient because the tracing could have overwritten
-the data that you wanted to record. For this reason, it is sometimes better to
-disable tracing directly from a program. This allows you to stop the
-tracing at the point that you hit the part that you are interested in.
-To disable the tracing directly from a C program, something like following
-code snippet can be used:
+Note: function tracer uses ring buffers to store the above
+entries. The newest data may overwrite the oldest data.
+Sometimes using echo to stop the trace is not sufficient because
+the tracing could have overwritten the data that you wanted to
+record. For this reason, it is sometimes better to disable
+tracing directly from a program. This allows you to stop the
+tracing at the point that you hit the part that you are
+interested in. To disable the tracing directly from a C program,
+something like the following code snippet can be used:
 
 int trace_fd;
 [...]
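
Spelled out completely, a hedged variant of this snippet could look
as follows (condition_hit() stands in for whatever event you are
watching for; the error handling is an illustrative assumption, not
the document's exact code):

#include <fcntl.h>
#include <unistd.h>

static int condition_hit(void)	/* stand-in for the event of interest */
{
	return 1;
}

int main(void)
{
	int trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY);

	if (trace_fd < 0)
		return 1;
	if (condition_hit())
		write(trace_fd, "0", 1);	/* stop tracing right here */
	close(trace_fd);
	return 0;
}
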
@@ -1082,10 +1155,10 @@ int main(int argc, char *argv[]) {
 }
 
 Note: Here we hard coded the path name. The debugfs mount is not
-guaranteed to be at /debug (and is more commonly at /sys/kernel/debug).
-For simple one time traces, the above is sufficent. For anything else,
-a search through /proc/mounts may be needed to find where the debugfs
-file-system is mounted.
+guaranteed to be at /debug (and is more commonly at
+/sys/kernel/debug). For simple one time traces, the above is
+sufficient. For anything else, a search through /proc/mounts may
+be needed to find where the debugfs file-system is mounted.
 
 
 Single thread tracing
@@ -1186,10 +1259,11 @@ following format:
    0  scheduler_tick+0x1b6/0x1bf	  <-  scheduler_tick+0x1aa/0x1bf
 
 
-The tracer may be used to dump the trace for the oops'ing cpu on a
-kernel oops into the system log. To enable this, ftrace_dump_on_oops
-must be set. To set ftrace_dump_on_oops, one can either use the sysctl
-function or set it via the proc system interface.
+The tracer may be used to dump the trace for the oops'ing cpu on
+a kernel oops into the system log. To enable this,
+ftrace_dump_on_oops must be set. To set ftrace_dump_on_oops, one
+can either use the sysctl function or set it via the proc system
+interface.
 
   sysctl kernel.ftrace_dump_on_oops=1
 
@@ -1198,8 +1272,8 @@ or
   echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
 
 
-Here's an example of such a dump after a null pointer dereference in a
-kernel module:
+Here's an example of such a dump after a null pointer
+dereference in a kernel module:
 
 [57848.105921] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
 [57848.106019] IP: [<ffffffffa0000006>] open+0x6/0x14 [oops]
@@ -1239,25 +1313,34 @@ kernel module:
 function graph tracer
 ---------------------------
 
-This tracer is similar to the function tracer except that it probes
-a function on its entry and its exit.
-This is done by setting a dynamically allocated stack of return addresses on each
-task_struct. Then the tracer overwrites the return address of each function traced
-to set a custom probe. Thus the original return address is stored on the stack of return
-address in the task_struct.
+This tracer is similar to the function tracer except that it
+probes a function on its entry and its exit. This is done by
+using a dynamically allocated stack of return addresses in each
+task_struct. On function entry the tracer overwrites the return
+address of each function traced to set a custom probe. Thus the
+original return address is stored on the stack of return addresses
+in the task_struct.
 
-Probing on both extremities of a function leads to special features such as
+Probing on both ends of a function leads to special features
+such as:
 
-_ measure of function's time execution
-_ having a reliable call stack to draw function calls graph
+- measuring a function's execution time
+- having a reliable call stack to draw a graph of function calls
 
 This tracer is useful in several situations:
 
-_ you want to find the reason of a strange kernel behavior and need to see
-  what happens in detail on any areas (or specific ones).
-_ you are experiencing weird latencies but it's difficult to find its origin.
-_ you want to find quickly which path is taken by a specific function
-_ you just want to see what happens inside your kernel
+- you want to find the reason for strange kernel behavior and
+  need to see what happens in detail in all areas (or specific
+  ones).
+
+- you are experiencing weird latencies but it's difficult to
+  find their origin.
+
+- you want to quickly find which path is taken by a specific
+  function.
+
+- you just want to peek inside a working kernel and want to see
+  what happens there.
 
 # tracer: function_graph
 #
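
The per-task return stack described above can be simulated in a few
lines of plain C; the names below loosely echo the kernel's but are
hypothetical, and the sketch does no depth checking:

#include <stdio.h>

#define RET_STACK_DEPTH 32

/* Hypothetical per-task state: the saved return addresses. */
struct task_sim {
	unsigned long ret_stack[RET_STACK_DEPTH];
	int depth;
};

/* On function entry: save the real return address and hand back
 * the probe address that replaces it. */
static unsigned long push_return_trace(struct task_sim *t,
				       unsigned long ret, unsigned long probe)
{
	t->ret_stack[t->depth++] = ret;
	return probe;
}

/* From the exit probe: restore the original return address. */
static unsigned long pop_return_trace(struct task_sim *t)
{
	return t->ret_stack[--t->depth];
}

int main(void)
{
	struct task_sim t = { .depth = 0 };
	unsigned long patched = push_return_trace(&t, 0x1000, 0x2000);

	printf("call now returns to the probe: %#lx\n", patched);
	printf("probe returns to the original: %#lx\n", pop_return_trace(&t));
	return 0;
}
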
@@ -1282,24 +1365,28 @@ _ you just want to see what happens inside your kernel
  0)   0.586 us    |        _spin_unlock();
 
 
-There are several columns that can be dynamically enabled/disabled.
-You can use every combination of options you want, depending on your needs.
+There are several columns that can be dynamically
+enabled/disabled. You can use every combination of options you
+want, depending on your needs.
 
-_ The cpu number on which the function executed is default enabled.
-  It is sometimes better to only trace one cpu (see tracing_cpu_mask file)
-  or you might sometimes see unordered function calls while cpu tracing switch.
+- The cpu number on which the function executed is default
+  enabled.  It is sometimes better to only trace one cpu (see
+  the tracing_cpumask file), or you might sometimes see
+  unordered function calls when tracing switches between cpus.
 
 	hide: echo nofuncgraph-cpu > /debug/tracing/trace_options
 	show: echo funcgraph-cpu > /debug/tracing/trace_options
 
-_ The duration (function's time of execution) is displayed on the closing bracket
-  line of a function or on the same line than the current function in case of a leaf
-  one. It is default enabled.
+- The duration (function's time of execution) is displayed on
+  the closing bracket line of a function, or on the same line
+  as the current function if it is a leaf function. It is
+  default enabled.
 
 	hide: echo nofuncgraph-duration > /debug/tracing/trace_options
 	show: echo funcgraph-duration > /debug/tracing/trace_options
 
-_ The overhead field precedes the duration one in case of reached duration thresholds.
+- The overhead field precedes the duration field when the
+  duration exceeds certain thresholds.
 
 	hide: echo nofuncgraph-overhead > /debug/tracing/trace_options
 	show: echo funcgraph-overhead > /debug/tracing/trace_options
@@ -1328,8 +1415,8 @@ _ The overhead field precedes the duration one in case of reached duration thres
   ! means that the function exceeded 100 usecs.
 
 
-_ The task/pid field displays the thread cmdline and pid which executed the function.
-  It is default disabled.
+- The task/pid field displays the thread cmdline and pid which
+  executed the function. It is default disabled.
 
 	hide: echo nofuncgraph-proc > /debug/tracing/trace_options
 	show: echo funcgraph-proc > /debug/tracing/trace_options
@@ -1351,8 +1438,9 @@ _ The task/pid field displays the thread cmdline and pid which executed the func
   0)    sh-4802     | + 49.370 us   |                }
 
 
-_ The absolute time field is an absolute timestamp given by the clock since
-  it started. A snapshot of this time is given on each entry/exit of functions
+- The absolute time field is an absolute timestamp given by the
+  system clock since it started. A snapshot of this time is
+  given on each entry/exit of functions.
 
 	hide: echo nofuncgraph-abstime > /debug/tracing/trace_options
 	show: echo funcgraph-abstime > /debug/tracing/trace_options
@@ -1377,9 +1465,10 @@ _ The absolute time field is an absolute timestamp given by the clock since
   360.774530 |   1)   0.594 us    |                                          __phys_addr();
 
 
-You can put some comments on specific functions by using ftrace_printk()
-For example, if you want to put a comment inside the __might_sleep() function,
-you just have to include <linux/ftrace.h> and call ftrace_printk() inside __might_sleep()
+You can put some comments on specific functions by using
+ftrace_printk(). For example, if you want to put a comment inside
+the __might_sleep() function, you just have to include
+<linux/ftrace.h> and call ftrace_printk() inside __might_sleep():
 
 ftrace_printk("I'm a comment!\n")
 
@@ -1390,8 +1479,9 @@ will produce:
  1)   1.449 us    |             }
 
 
-You might find other useful features for this tracer on the "dynamic ftrace"
-section such as tracing only specific functions or tasks.
+You might find other useful features for this tracer in the
+following "dynamic ftrace" section such as tracing only specific
+functions or tasks.
 
 dynamic ftrace
 --------------
@@ -1399,43 +1489,45 @@ dynamic ftrace
 If CONFIG_DYNAMIC_FTRACE is set, the system will run with
 virtually no overhead when function tracing is disabled. The way
 this works is the mcount function call (placed at the start of
-every kernel function, produced by the -pg switch in gcc), starts
-of pointing to a simple return. (Enabling FTRACE will include the
--pg switch in the compiling of the kernel.)
+every kernel function, produced by the -pg switch in gcc),
+starts off pointing to a simple return. (Enabling FTRACE will
+include the -pg switch in the compiling of the kernel.)
 
 At compile time every C file object is run through the
 recordmcount.pl script (located in the scripts directory). This
 script will process the C object using objdump to find all the
-locations in the .text section that call mcount. (Note, only
-the .text section is processed, since processing other sections
-like .init.text may cause races due to those sections being freed).
+locations in the .text section that call mcount. (Note, only the
+.text section is processed, since processing other sections like
+.init.text may cause races due to those sections being freed).
 
-A new section called "__mcount_loc" is created that holds references
-to all the mcount call sites in the .text section. This section is
-compiled back into the original object. The final linker will add
-all these references into a single table.
+A new section called "__mcount_loc" is created that holds
+references to all the mcount call sites in the .text section.
+This section is compiled back into the original object. The
+final linker will add all these references into a single table.
 
 On boot up, before SMP is initialized, the dynamic ftrace code
-scans this table and updates all the locations into nops. It also
-records the locations, which are added to the available_filter_functions
-list.  Modules are processed as they are loaded and before they are
-executed.  When a module is unloaded, it also removes its functions from
-the ftrace function list. This is automatic in the module unload
-code, and the module author does not need to worry about it.
-
-When tracing is enabled, kstop_machine is called to prevent races
-with the CPUS executing code being modified (which can cause the
-CPU to do undesireable things), and the nops are patched back
-to calls. But this time, they do not call mcount (which is just
-a function stub). They now call into the ftrace infrastructure.
+scans this table and updates all the locations into nops. It
+also records the locations, which are added to the
+available_filter_functions list.  Modules are processed as they
+are loaded and before they are executed.  When a module is
+unloaded, it also removes its functions from the ftrace function
+list. This is automatic in the module unload code, and the
+module author does not need to worry about it.
+
+When tracing is enabled, kstop_machine is called to prevent
+races with the CPUS executing code being modified (which can
+cause the CPU to do undesirable things), and the nops are
+patched back to calls. But this time, they do not call mcount
+(which is just a function stub). They now call into the ftrace
+infrastructure.
 
 One special side-effect to the recording of the functions being
 traced is that we can now selectively choose which functions we
-wish to trace and which ones we want the mcount calls to remain as
-nops.
+wish to trace and which ones we want the mcount calls to remain
+as nops.
 
-Two files are used, one for enabling and one for disabling the tracing
-of specified functions. They are:
+Two files are used, one for enabling and one for disabling the
+tracing of specified functions. They are:
 
   set_ftrace_filter
 
@@ -1443,8 +1535,8 @@ and
 
   set_ftrace_notrace
 
-A list of available functions that you can add to these files is listed
-in:
+A list of available functions that you can add to these files is
+listed in:
 
    available_filter_functions
 
@@ -1481,8 +1573,8 @@ hrtimer_interrupt
 sys_nanosleep
 
 
-Perhaps this is not enough. The filters also allow simple wild cards.
-Only the following are currently available
+Perhaps this is not enough. The filters also allow simple wild
+cards. Only the following are currently available:
 
   <match>*  - will match functions that begin with <match>
   *<match>  - will match functions that end with <match>
@@ -1492,9 +1584,9 @@ These are the only wild cards which are supported.
 
   <match>*<match> will not work.
 
-Note: It is better to use quotes to enclose the wild cards, otherwise
-  the shell may expand the parameters into names of files in the local
-  directory.
+Note: It is better to use quotes to enclose the wild cards,
+      otherwise the shell may expand the parameters into names
+      of files in the local directory.
 
  # echo 'hrtimer_*' > /debug/tracing/set_ftrace_filter
 
@@ -1540,7 +1632,8 @@ This is because the '>' and '>>' act just like they do in bash.
 To rewrite the filters, use '>'
 To append to the filters, use '>>'
 
-To clear out a filter so that all functions will be recorded again:
+To clear out a filter so that all functions will be recorded
+again:
 
  # echo > /debug/tracing/set_ftrace_filter
  # cat /debug/tracing/set_ftrace_filter
@@ -1572,7 +1665,8 @@ hrtimer_get_res
 hrtimer_init_sleeper
 
 
-The set_ftrace_notrace prevents those functions from being traced.
+The set_ftrace_notrace prevents those functions from being
+traced.
 
  # echo '*preempt*' '*lock*' > /debug/tracing/set_ftrace_notrace
 
@@ -1595,18 +1689,20 @@ Produces:
 We can see that there's no more lock or preempt tracing.
 
 
-* Dynamic ftrace with the function graph tracer *
+Dynamic ftrace with the function graph tracer
+---------------------------------------------
 
+Although what has been explained above concerns both the
+function tracer and the function-graph tracer, there are some
+special features only available in the function-graph tracer.
 
-Although what has been explained above concerns both the function tracer and
-the function_graph_tracer, the following concerns only the latter.
+If you want to trace only one function and all of its children,
+you just have to echo its name into set_graph_function:
 
-If you want to trace only one function and all of its childs, you just have
-to echo its name on set_graph_function:
+ echo __do_fault > set_graph_function
 
-echo __do_fault > set_graph_function
-
-will produce the following:
+will produce the following "expanded" trace of the __do_fault()
+function:
 
  0)               |  __do_fault() {
  0)               |    filemap_fault() {
@@ -1643,23 +1739,24 @@ will produce the following:
  0)   2.793 us    |    }
  0) + 14.012 us   |  }
 
-You can also select several functions:
+You can also expand several functions at once:
 
-echo sys_open > set_graph_function
-echo sys_close >> set_graph_function
+ echo sys_open > set_graph_function
+ echo sys_close >> set_graph_function
 
-Now if you want to go back to trace all functions
+Now if you want to go back to tracing all functions, you can
+clear this special filter via:
 
-echo > set_graph_function
+ echo > set_graph_function
 
 
 trace_pipe
 ----------
 
-The trace_pipe outputs the same content as the trace file, but the effect
-on the tracing is different. Every read from trace_pipe is consumed.
-This means that subsequent reads will be different. The trace
-is live.
+The trace_pipe outputs the same content as the trace file, but
+the effect on the tracing is different. Every read from
+trace_pipe is consumed. This means that subsequent reads will be
+different. The trace is live.
 
  # echo function > /debug/tracing/current_tracer
  # cat /debug/tracing/trace_pipe > /tmp/trace.out &
@@ -1687,38 +1784,45 @@ is live.
             bash-4043  [00] 41.267111: select_task_rq_rt <-try_to_wake_up
 
 
-Note, reading the trace_pipe file will block until more input is added.
-By changing the tracer, trace_pipe will issue an EOF. We needed
-to set the function tracer _before_ we "cat" the trace_pipe file.
+Note, reading the trace_pipe file will block until more input is
+added. If the tracer is changed, reads from trace_pipe will
+return an EOF. This is why we needed to set the function tracer
+_before_ we "cat" the trace_pipe file.
 
 
 trace entries
 -------------
 
-Having too much or not enough data can be troublesome in diagnosing
-an issue in the kernel. The file buffer_size_kb is used to modify
-the size of the internal trace buffers. The number listed
-is the number of entries that can be recorded per CPU. To know
-the full size, multiply the number of possible CPUS with the
-number of entries.
+Having too much or not enough data can be troublesome in
+diagnosing an issue in the kernel. The file buffer_size_kb is
+used to modify the size of the internal trace buffers. The
+number listed is the size, in kilobytes, of each per-CPU
+buffer. To know the full size, multiply this by the number of
+possible CPUs.
 
  # cat /debug/tracing/buffer_size_kb
 1408 (units kilobytes)
 
-Note, to modify this, you must have tracing completely disabled. To do that,
-echo "nop" into the current_tracer. If the current_tracer is not set
-to "nop", an EINVAL error will be returned.
+Note, to modify this, you must have tracing completely disabled.
+To do that, echo "nop" into the current_tracer. If the
+current_tracer is not set to "nop", an EINVAL error will be
+returned.
 
  # echo nop > /debug/tracing/current_tracer
  # echo 10000 > /debug/tracing/buffer_size_kb
  # cat /debug/tracing/buffer_size_kb
 10000 (units kilobytes)
 
-The number of pages which will be allocated is limited to a percentage
-of available memory. Allocating too much will produce an error.
+The number of pages which will be allocated is limited to a
+percentage of available memory. Allocating too much will produce
+an error.
 
  # echo 1000000000000 > /debug/tracing/buffer_size_kb
 -bash: echo: write error: Cannot allocate memory
  # cat /debug/tracing/buffer_size_kb
 85
 
+-----------
+
+More details can be found in the source code, in the
+kernel/trace/*.c files.

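( Note: the trace_pipe semantics documented above - reads block
  until data arrives, each read consumes what it returns, and
  switching tracers EOFs the stream - can also be exercised from
  a small user-space C program instead of "cat". The sketch below
  is illustrative only, not part of the patch, and assumes
  debugfs is mounted at /debug as in the document. )

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd;

	/* Set the function tracer _before_ reading trace_pipe,
	 * as recommended above. */
	fd = open("/debug/tracing/current_tracer", O_WRONLY);
	if (fd < 0 || write(fd, "function", 8) != 8)
		return 1;
	close(fd);

	fd = open("/debug/tracing/trace_pipe", O_RDONLY);
	if (fd < 0)
		return 1;

	/* read() blocks while the buffer is empty; entries returned
	 * here are consumed and will not be seen by any other
	 * reader of trace_pipe. */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);

	close(fd);
	return 0;
}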
commit 4cd0332db7e8f57cc082bab11d82c064a9721737
Merge: 40999096e8b9 712406a6bf59
Author: Ingo Molnar <mingo@elte.hu>
Date:   Thu Feb 19 12:13:33 2009 +0100

    Merge branch 'mainline/function-graph' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into tracing/function-graph-tracer

diff --cc arch/x86/kernel/ftrace.c
index 2f9c0c8cb4c7,76f7141e0f91..c2e057d9f88c
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@@ -367,81 -368,27 +367,8 @@@ int ftrace_disable_ftrace_graph_caller(
  	return ftrace_mod_jmp(ip, old_offset, new_offset);
  }
  
 -#else /* CONFIG_DYNAMIC_FTRACE */
 -
 -/*
 - * These functions are picked from those used on
 - * this page for dynamic ftrace. They have been
 - * simplified to ignore all traces in NMI context.
 - */
 -static atomic_t in_nmi;
 -
 -void ftrace_nmi_enter(void)
 -{
 -	atomic_inc(&in_nmi);
 -}
 -
 -void ftrace_nmi_exit(void)
 -{
 -	atomic_dec(&in_nmi);
 -}
 -
  #endif /* !CONFIG_DYNAMIC_FTRACE */
  
- /* Add a function return address to the trace stack on thread info.*/
- static int push_return_trace(unsigned long ret, unsigned long long time,
- 				unsigned long func, int *depth)
- {
- 	int index;
- 
- 	if (!current->ret_stack)
- 		return -EBUSY;
- 
- 	/* The return trace stack is full */
- 	if (current->curr_ret_stack == FTRACE_RETFUNC_DEPTH - 1) {
- 		atomic_inc(&current->trace_overrun);
- 		return -EBUSY;
- 	}
- 
- 	index = ++current->curr_ret_stack;
- 	barrier();
- 	current->ret_stack[index].ret = ret;
- 	current->ret_stack[index].func = func;
- 	current->ret_stack[index].calltime = time;
- 	*depth = index;
- 
- 	return 0;
- }
- 
- /* Retrieve a function return address to the trace stack on thread info.*/
- static void pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret)
- {
- 	int index;
- 
- 	index = current->curr_ret_stack;
- 
- 	if (unlikely(index < 0)) {
- 		ftrace_graph_stop();
- 		WARN_ON(1);
- 		/* Might as well panic, otherwise we have no where to go */
- 		*ret = (unsigned long)panic;
- 		return;
- 	}
- 
- 	*ret = current->ret_stack[index].ret;
- 	trace->func = current->ret_stack[index].func;
- 	trace->calltime = current->ret_stack[index].calltime;
- 	trace->overrun = atomic_read(&current->trace_overrun);
- 	trace->depth = index;
- 	barrier();
- 	current->curr_ret_stack--;
- 
- }
- 
- /*
-  * Send the trace to the ring-buffer.
-  * @return the original return address.
-  */
- unsigned long ftrace_return_to_handler(void)
- {
- 	struct ftrace_graph_ret trace;
- 	unsigned long ret;
- 
- 	pop_return_trace(&trace, &ret);
- 	trace.rettime = cpu_clock(raw_smp_processor_id());
- 	ftrace_graph_return(&trace);
- 
- 	if (unlikely(!ret)) {
- 		ftrace_graph_stop();
- 		WARN_ON(1);
- 		/* Might as well panic. What else to do? */
- 		ret = (unsigned long)panic;
- 	}
- 
- 	return ret;
- }
- 
  /*
   * Hook the return address and push it in the stack of return addrs
   * in current thread info.
@@@ -492,9 -439,16 +419,9 @@@ void prepare_ftrace_return(unsigned lon
  		return;
  	}
  
 -	if (unlikely(!__kernel_text_address(old))) {
 -		ftrace_graph_stop();
 -		*parent = old;
 -		WARN_ON(1);
 -		return;
 -	}
 -
  	calltime = cpu_clock(raw_smp_processor_id());
  
- 	if (push_return_trace(old, calltime,
+ 	if (ftrace_push_return_trace(old, calltime,
  				self_addr, &trace.depth) == -EBUSY) {
  		*parent = old;
  		return;
diff --cc kernel/trace/trace_functions_graph.c
index 0ff5cb661900,dce71a5b51bc..6c7738e4f98b
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@@ -48,11 -40,91 +48,86 @@@ static struct tracer_flags tracer_flag
  };
  
  /* pid on the last trace processed */
 -static pid_t last_pid[NR_CPUS] = { [0 ... NR_CPUS-1] = -1 };
 +
  
+ /* Add a function return address to the trace stack on thread info.*/
+ int
+ ftrace_push_return_trace(unsigned long ret, unsigned long long time,
+ 			 unsigned long func, int *depth)
+ {
+ 	int index;
+ 
+ 	if (!current->ret_stack)
+ 		return -EBUSY;
+ 
+ 	/* The return trace stack is full */
+ 	if (current->curr_ret_stack == FTRACE_RETFUNC_DEPTH - 1) {
+ 		atomic_inc(&current->trace_overrun);
+ 		return -EBUSY;
+ 	}
+ 
+ 	index = ++current->curr_ret_stack;
+ 	barrier();
+ 	current->ret_stack[index].ret = ret;
+ 	current->ret_stack[index].func = func;
+ 	current->ret_stack[index].calltime = time;
+ 	*depth = index;
+ 
+ 	return 0;
+ }
+ 
+ /* Retrieve a function return address to the trace stack on thread info.*/
+ void
+ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret)
+ {
+ 	int index;
+ 
+ 	index = current->curr_ret_stack;
+ 
+ 	if (unlikely(index < 0)) {
+ 		ftrace_graph_stop();
+ 		WARN_ON(1);
+ 		/* Might as well panic, otherwise we have no where to go */
+ 		*ret = (unsigned long)panic;
+ 		return;
+ 	}
+ 
+ 	*ret = current->ret_stack[index].ret;
+ 	trace->func = current->ret_stack[index].func;
+ 	trace->calltime = current->ret_stack[index].calltime;
+ 	trace->overrun = atomic_read(&current->trace_overrun);
+ 	trace->depth = index;
+ 	barrier();
+ 	current->curr_ret_stack--;
+ 
+ }
+ 
+ /*
+  * Send the trace to the ring-buffer.
+  * @return the original return address.
+  */
+ unsigned long ftrace_return_to_handler(void)
+ {
+ 	struct ftrace_graph_ret trace;
+ 	unsigned long ret;
+ 
+ 	ftrace_pop_return_trace(&trace, &ret);
+ 	trace.rettime = cpu_clock(raw_smp_processor_id());
+ 	ftrace_graph_return(&trace);
+ 
+ 	if (unlikely(!ret)) {
+ 		ftrace_graph_stop();
+ 		WARN_ON(1);
+ 		/* Might as well panic. What else to do? */
+ 		ret = (unsigned long)panic;
+ 	}
+ 
+ 	return ret;
+ }
+ 
  static int graph_trace_init(struct trace_array *tr)
  {
 -	int cpu, ret;
 -
 -	for_each_online_cpu(cpu)
 -		tracing_reset(tr, cpu);
 -
 -	ret = register_ftrace_graph(&trace_graph_return,
 +	int ret = register_ftrace_graph(&trace_graph_return,
  					&trace_graph_entry);
  	if (ret)
  		return ret;

commit 40999096e8b9872199bf56ecd0c4d98397ccaf2f
Merge: 72c26c9a26ea fa7c7f6e11f7 ed4a2f374d71
Author: Ingo Molnar <mingo@elte.hu>
Date:   Thu Feb 19 10:20:17 2009 +0100

    Merge branches 'tracing/blktrace', 'tracing/ftrace' and 'tracing/urgent' into tracing/core

diff --cc kernel/trace/trace_selftest.c
index 0c9aa1457e51,01415f4edaa5,bc8e80a86bca..7238646b8723
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@@@ -619,10 -619,10 -589,10 +638,10 @@@@ trace_selftest_startup_sysprof(struct t
   	int ret;
   
   	/* start the tracing */
  -	ret = trace->init(tr);
  +	ret = tracer_init(trace, tr);
   	if (ret) {
   		warn_failed_init_tracer(trace, ret);
- -		return 0;
+ +		return ret;
   	}
   
   	/* Sleep for a 1/10 of a second */

commit ed4a2f374d71770796789ed559c35a36bab82f1a
Merge: 5b058bcde961 4b3e3d228429
Author: Ingo Molnar <mingo@elte.hu>
Date:   Thu Feb 19 10:16:45 2009 +0100

    Merge branch 'tip/tracing/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into tracing/urgent

commit 72c26c9a26ea7f2f3d14f162c2ebb07805f724ea
Merge: 37bd824a35a6 ba95fd47d177
Author: Ingo Molnar <mingo@elte.hu>
Date:   Thu Feb 19 09:00:35 2009 +0100

    Merge branch 'linus' into tracing/blktrace
    
    Conflicts:
            block/blktrace.c
    
    Semantic merge:
            kernel/trace/blktrace.c
    
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

commit f04b30de3c82528f1ab4c58b3dd4c975f5341901
Author: Ingo Molnar <mingo@elte.hu>
Date:   Wed Feb 18 14:48:43 2009 -0800

    inotify: fix GFP_KERNEL related deadlock
    
    Enhanced lockdep coverage of __GFP_NOFS turned up this new lockdep
    assert:
    
    [ 1093.677775]
    [ 1093.677781] =================================
    [ 1093.680031] [ INFO: inconsistent lock state ]
    [ 1093.680031] 2.6.29-rc5-tip-01504-gb49eca1-dirty #1
    [ 1093.680031] ---------------------------------
    [ 1093.680031] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
    [ 1093.680031] kswapd0/308 [HC0[0]:SC0[0]:HE1:SE1] takes:
    [ 1093.680031]  (&inode->inotify_mutex){+.+.?.}, at: [<c0205942>] inotify_inode_is_dead+0x20/0x80
    [ 1093.680031] {RECLAIM_FS-ON-W} state was registered at:
    [ 1093.680031]   [<c01696b9>] mark_held_locks+0x43/0x5b
    [ 1093.680031]   [<c016baa4>] lockdep_trace_alloc+0x6c/0x6e
    [ 1093.680031]   [<c01cf8b0>] kmem_cache_alloc+0x20/0x150
    [ 1093.680031]   [<c040d0ec>] idr_pre_get+0x27/0x6c
    [ 1093.680031]   [<c02056e3>] inotify_handle_get_wd+0x25/0xad
    [ 1093.680031]   [<c0205f43>] inotify_add_watch+0x7a/0x129
    [ 1093.680031]   [<c020679e>] sys_inotify_add_watch+0x20f/0x250
    [ 1093.680031]   [<c010389e>] sysenter_do_call+0x12/0x35
    [ 1093.680031]   [<ffffffff>] 0xffffffff
    [ 1093.680031] irq event stamp: 60417
    [ 1093.680031] hardirqs last  enabled at (60417): [<c018d5f5>] call_rcu+0x53/0x59
    [ 1093.680031] hardirqs last disabled at (60416): [<c018d5b9>] call_rcu+0x17/0x59
    [ 1093.680031] softirqs last  enabled at (59656): [<c0146229>] __do_softirq+0x157/0x16b
    [ 1093.680031] softirqs last disabled at (59651): [<c0106293>] do_softirq+0x74/0x15d
    [ 1093.680031]
    [ 1093.680031] other info that might help us debug this:
    [ 1093.680031] 2 locks held by kswapd0/308:
    [ 1093.680031]  #0:  (shrinker_rwsem){++++..}, at: [<c01b0502>] shrink_slab+0x36/0x189
    [ 1093.680031]  #1:  (&type->s_umount_key#4){+++++.}, at: [<c01e6d77>] shrink_dcache_memory+0x110/0x1fb
    [ 1093.680031]
    [ 1093.680031] stack backtrace:
    [ 1093.680031] Pid: 308, comm: kswapd0 Not tainted 2.6.29-rc5-tip-01504-gb49eca1-dirty #1
    [ 1093.680031] Call Trace:
    [ 1093.680031]  [<c016947a>] valid_state+0x12a/0x13d
    [ 1093.680031]  [<c016954e>] mark_lock+0xc1/0x1e9
    [ 1093.680031]  [<c016a5b4>] ? check_usage_forwards+0x0/0x3f
    [ 1093.680031]  [<c016ab74>] __lock_acquire+0x2c6/0xac8
    [ 1093.680031]  [<c01688d9>] ? register_lock_class+0x17/0x228
    [ 1093.680031]  [<c016b3d3>] lock_acquire+0x5d/0x7a
    [ 1093.680031]  [<c0205942>] ? inotify_inode_is_dead+0x20/0x80
    [ 1093.680031]  [<c08824c4>] __mutex_lock_common+0x3a/0x4cb
    [ 1093.680031]  [<c0205942>] ? inotify_inode_is_dead+0x20/0x80
    [ 1093.680031]  [<c08829ed>] mutex_lock_nested+0x2e/0x36
    [ 1093.680031]  [<c0205942>] ? inotify_inode_is_dead+0x20/0x80
    [ 1093.680031]  [<c0205942>] inotify_inode_is_dead+0x20/0x80
    [ 1093.680031]  [<c01e6672>] dentry_iput+0x90/0xc2
    [ 1093.680031]  [<c01e67a3>] d_kill+0x21/0x45
    [ 1093.680031]  [<c01e6a46>] __shrink_dcache_sb+0x27f/0x355
    [ 1093.680031]  [<c01e6dc5>] shrink_dcache_memory+0x15e/0x1fb
    [ 1093.680031]  [<c01b05ed>] shrink_slab+0x121/0x189
    [ 1093.680031]  [<c01b0d12>] kswapd+0x39f/0x561
    [ 1093.680031]  [<c01ae499>] ? isolate_pages_global+0x0/0x233
    [ 1093.680031]  [<c0157eae>] ? autoremove_wake_function+0x0/0x43
    [ 1093.680031]  [<c01b0973>] ? kswapd+0x0/0x561
    [ 1093.680031]  [<c0157daf>] kthread+0x41/0x82
    [ 1093.680031]  [<c0157d6e>] ? kthread+0x0/0x82
    [ 1093.680031]  [<c01043ab>] kernel_thread_helper+0x7/0x10
    
    inotify_handle_get_wd() does idr_pre_get() which does a
    kmem_cache_alloc() with __GFP_FS set - and is hence deadlockable
    under extreme MM pressure.
    
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: MinChan Kim <minchan.kim@gmail.com>
    Cc: Nick Piggin <nickpiggin@yahoo.com.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

diff --git a/fs/notify/inotify/inotify.c b/fs/notify/inotify/inotify.c
index dae3f28f30d4..331f2e88e284 100644
--- a/fs/notify/inotify/inotify.c
+++ b/fs/notify/inotify/inotify.c
@@ -156,7 +156,7 @@ static int inotify_handle_get_wd(struct inotify_handle *ih,
 	int ret;
 
 	do {
-		if (unlikely(!idr_pre_get(&ih->idr, GFP_KERNEL)))
+		if (unlikely(!idr_pre_get(&ih->idr, GFP_NOFS)))
 			return -ENOSPC;
 		ret = idr_get_new_above(&ih->idr, watch, ih->last_wd+1, &watch->wd);
 	} while (ret == -EAGAIN);

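( Note: the following is an illustrative sketch, not part of the
  fix above; the names watch_mutex, watch_cache and watch_alloc
  are hypothetical. It restates the rule the patch applies: do
  not allocate with __GFP_FS while holding a lock that filesystem
  reclaim can also take. )

#include <linux/mutex.h>
#include <linux/slab.h>

static DEFINE_MUTEX(watch_mutex);
static struct kmem_cache *watch_cache;

static void *watch_alloc(void)
{
	void *w;

	mutex_lock(&watch_mutex);
	/*
	 * GFP_KERNEL includes __GFP_FS, so an allocation here may
	 * recurse into fs reclaim (e.g. the dcache shrinker), which
	 * can try to take watch_mutex again: the inconsistent lock
	 * state lockdep reported above. GFP_NOFS keeps reclaim out
	 * of filesystem code and breaks the cycle.
	 */
	w = kmem_cache_alloc(watch_cache, GFP_NOFS);
	mutex_unlock(&watch_mutex);
	return w;
}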
commit 74019224ac34b044b44a31dd89a54e3477db4896
Author: Ingo Molnar <mingo@elte.hu>
Date:   Wed Feb 18 12:23:29 2009 +0100

    timers: add mod_timer_pending()
    
    Impact: new timer API
    
    Based on an idea from Martin Josefsson with the help of
    Patrick McHardy and Stephen Hemminger:
    
    introduce the mod_timer_pending() API which is a mod_timer()
    offspring that is a no-op on already-removed timers.
    
    (regular mod_timer() re-activates non-pending timers.)
    
    This is useful for the networking code in that it can
    allow unserialized mod_timer_pending() timer-forwarding
    calls, but a single del_timer*() will stop the timer
    from being reactivated again.
    
    Also while at it:
    
    - optimize the regular mod_timer() path some more, the
      timer-stat and a debug check were needlessly duplicated
      in __mod_timer().
    
    - make the exports come straight after the function, as
      most other exports in timer.c already did.
    
    - eliminate __mod_timer() as an external API, change the
      users to mod_timer().
    
    The regular mod_timer() code path is not impacted
    significantly, due to inlining optimizations and due to
    the simplifications.
    
    Based-on-patch-from: Stephen Hemminger <shemminger@vyatta.com>
    Acked-by: Stephen Hemminger <shemminger@vyatta.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Patrick McHardy <kaber@trash.net>
    Cc: netdev@vger.kernel.org
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c
index 6a0ad196aeb3..f085369301b1 100644
--- a/arch/powerpc/platforms/cell/spufs/sched.c
+++ b/arch/powerpc/platforms/cell/spufs/sched.c
@@ -508,7 +508,7 @@ static void __spu_add_to_rq(struct spu_context *ctx)
 		list_add_tail(&ctx->rq, &spu_prio->runq[ctx->prio]);
 		set_bit(ctx->prio, spu_prio->bitmap);
 		if (!spu_prio->nr_waiting++)
-			__mod_timer(&spusched_timer, jiffies + SPUSCHED_TICK);
+			mod_timer(&spusched_timer, jiffies + SPUSCHED_TICK);
 	}
 }
 
diff --git a/drivers/infiniband/hw/ipath/ipath_driver.c b/drivers/infiniband/hw/ipath/ipath_driver.c
index 69c0ce321b4e..cb9daa6ac029 100644
--- a/drivers/infiniband/hw/ipath/ipath_driver.c
+++ b/drivers/infiniband/hw/ipath/ipath_driver.c
@@ -2715,7 +2715,7 @@ static void ipath_hol_signal_up(struct ipath_devdata *dd)
  * to prevent HoL blocking, then start the HoL timer that
  * periodically continues, then stop procs, so they can detect
  * link down if they want, and do something about it.
- * Timer may already be running, so use __mod_timer, not add_timer.
+ * Timer may already be running, so use mod_timer, not add_timer.
  */
 void ipath_hol_down(struct ipath_devdata *dd)
 {
@@ -2724,7 +2724,7 @@ void ipath_hol_down(struct ipath_devdata *dd)
 	dd->ipath_hol_next = IPATH_HOL_DOWNCONT;
 	dd->ipath_hol_timer.expires = jiffies +
 		msecs_to_jiffies(ipath_hol_timeout_ms);
-	__mod_timer(&dd->ipath_hol_timer, dd->ipath_hol_timer.expires);
+	mod_timer(&dd->ipath_hol_timer, dd->ipath_hol_timer.expires);
 }
 
 /*
@@ -2763,7 +2763,7 @@ void ipath_hol_event(unsigned long opaque)
 	else {
 		dd->ipath_hol_timer.expires = jiffies +
 			msecs_to_jiffies(ipath_hol_timeout_ms);
-		__mod_timer(&dd->ipath_hol_timer,
+		mod_timer(&dd->ipath_hol_timer,
 			dd->ipath_hol_timer.expires);
 	}
 }
diff --git a/include/linux/timer.h b/include/linux/timer.h
index daf9685b861c..e2d662e3416e 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -86,8 +86,8 @@ static inline int timer_pending(const struct timer_list * timer)
 
 extern void add_timer_on(struct timer_list *timer, int cpu);
 extern int del_timer(struct timer_list * timer);
-extern int __mod_timer(struct timer_list *timer, unsigned long expires);
 extern int mod_timer(struct timer_list *timer, unsigned long expires);
+extern int mod_timer_pending(struct timer_list *timer, unsigned long expires);
 
 /*
  * The jiffies value which is added to now, when there is no timer
@@ -146,25 +146,7 @@ static inline void timer_stats_timer_clear_start_info(struct timer_list *timer)
 }
 #endif
 
-/**
- * add_timer - start a timer
- * @timer: the timer to be added
- *
- * The kernel will do a ->function(->data) callback from the
- * timer interrupt at the ->expires point in the future. The
- * current time is 'jiffies'.
- *
- * The timer's ->expires, ->function (and if the handler uses it, ->data)
- * fields must be set prior calling this function.
- *
- * Timers with an ->expires field in the past will be executed in the next
- * timer tick.
- */
-static inline void add_timer(struct timer_list *timer)
-{
-	BUG_ON(timer_pending(timer));
-	__mod_timer(timer, timer->expires);
-}
+extern void add_timer(struct timer_list *timer);
 
 #ifdef CONFIG_SMP
   extern int try_to_del_timer_sync(struct timer_list *timer);
diff --git a/kernel/relay.c b/kernel/relay.c
index 9d79b7854fa6..8f2179c8056f 100644
--- a/kernel/relay.c
+++ b/kernel/relay.c
@@ -750,7 +750,7 @@ size_t relay_switch_subbuf(struct rchan_buf *buf, size_t length)
 			 * from the scheduler (trying to re-grab
 			 * rq->lock), so defer it.
 			 */
-			__mod_timer(&buf->timer, jiffies + 1);
+			mod_timer(&buf->timer, jiffies + 1);
 	}
 
 	old = buf->data;
diff --git a/kernel/timer.c b/kernel/timer.c
index 13dd64fe143d..9b77fc9a9ac8 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -589,11 +589,14 @@ static struct tvec_base *lock_timer_base(struct timer_list *timer,
 	}
 }
 
-int __mod_timer(struct timer_list *timer, unsigned long expires)
+static inline int
+__mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only)
 {
 	struct tvec_base *base, *new_base;
 	unsigned long flags;
-	int ret = 0;
+	int ret;
+
+	ret = 0;
 
 	timer_stats_timer_set_start_info(timer);
 	BUG_ON(!timer->function);
@@ -603,6 +606,9 @@ int __mod_timer(struct timer_list *timer, unsigned long expires)
 	if (timer_pending(timer)) {
 		detach_timer(timer, 0);
 		ret = 1;
+	} else {
+		if (pending_only)
+			goto out_unlock;
 	}
 
 	debug_timer_activate(timer);
@@ -629,42 +635,28 @@ int __mod_timer(struct timer_list *timer, unsigned long expires)
 
 	timer->expires = expires;
 	internal_add_timer(base, timer);
+
+out_unlock:
 	spin_unlock_irqrestore(&base->lock, flags);
 
 	return ret;
 }
 
-EXPORT_SYMBOL(__mod_timer);
-
 /**
- * add_timer_on - start a timer on a particular CPU
- * @timer: the timer to be added
- * @cpu: the CPU to start it on
+ * mod_timer_pending - modify a pending timer's timeout
+ * @timer: the pending timer to be modified
+ * @expires: new timeout in jiffies
  *
- * This is not very scalable on SMP. Double adds are not possible.
+ * mod_timer_pending() is the same for pending timers as mod_timer(),
+ * but will not re-activate and modify already deleted timers.
+ *
+ * It is useful for unserialized use of timers.
  */
-void add_timer_on(struct timer_list *timer, int cpu)
+int mod_timer_pending(struct timer_list *timer, unsigned long expires)
 {
-	struct tvec_base *base = per_cpu(tvec_bases, cpu);
-	unsigned long flags;
-
-	timer_stats_timer_set_start_info(timer);
-	BUG_ON(timer_pending(timer) || !timer->function);
-	spin_lock_irqsave(&base->lock, flags);
-	timer_set_base(timer, base);
-	debug_timer_activate(timer);
-	internal_add_timer(base, timer);
-	/*
-	 * Check whether the other CPU is idle and needs to be
-	 * triggered to reevaluate the timer wheel when nohz is
-	 * active. We are protected against the other CPU fiddling
-	 * with the timer by holding the timer base lock. This also
-	 * makes sure that a CPU on the way to idle can not evaluate
-	 * the timer wheel.
-	 */
-	wake_up_idle_cpu(cpu);
-	spin_unlock_irqrestore(&base->lock, flags);
+	return __mod_timer(timer, expires, true);
 }
+EXPORT_SYMBOL(mod_timer_pending);
 
 /**
  * mod_timer - modify a timer's timeout
@@ -688,9 +680,6 @@ void add_timer_on(struct timer_list *timer, int cpu)
  */
 int mod_timer(struct timer_list *timer, unsigned long expires)
 {
-	BUG_ON(!timer->function);
-
-	timer_stats_timer_set_start_info(timer);
 	/*
 	 * This is a common optimization triggered by the
 	 * networking code - if the timer is re-modified
@@ -699,11 +688,61 @@ int mod_timer(struct timer_list *timer, unsigned long expires)
 	if (timer->expires == expires && timer_pending(timer))
 		return 1;
 
-	return __mod_timer(timer, expires);
+	return __mod_timer(timer, expires, false);
 }
-
 EXPORT_SYMBOL(mod_timer);
 
+/**
+ * add_timer - start a timer
+ * @timer: the timer to be added
+ *
+ * The kernel will do a ->function(->data) callback from the
+ * timer interrupt at the ->expires point in the future. The
+ * current time is 'jiffies'.
+ *
+ * The timer's ->expires, ->function (and if the handler uses it, ->data)
+ * fields must be set prior calling this function.
+ *
+ * Timers with an ->expires field in the past will be executed in the next
+ * timer tick.
+ */
+void add_timer(struct timer_list *timer)
+{
+	BUG_ON(timer_pending(timer));
+	mod_timer(timer, timer->expires);
+}
+EXPORT_SYMBOL(add_timer);
+
+/**
+ * add_timer_on - start a timer on a particular CPU
+ * @timer: the timer to be added
+ * @cpu: the CPU to start it on
+ *
+ * This is not very scalable on SMP. Double adds are not possible.
+ */
+void add_timer_on(struct timer_list *timer, int cpu)
+{
+	struct tvec_base *base = per_cpu(tvec_bases, cpu);
+	unsigned long flags;
+
+	timer_stats_timer_set_start_info(timer);
+	BUG_ON(timer_pending(timer) || !timer->function);
+	spin_lock_irqsave(&base->lock, flags);
+	timer_set_base(timer, base);
+	debug_timer_activate(timer);
+	internal_add_timer(base, timer);
+	/*
+	 * Check whether the other CPU is idle and needs to be
+	 * triggered to reevaluate the timer wheel when nohz is
+	 * active. We are protected against the other CPU fiddling
+	 * with the timer by holding the timer base lock. This also
+	 * makes sure that a CPU on the way to idle can not evaluate
+	 * the timer wheel.
+	 */
+	wake_up_idle_cpu(cpu);
+	spin_unlock_irqrestore(&base->lock, flags);
+}
+
 /**
  * del_timer - deactive a timer.
  * @timer: the timer to be deactivated
@@ -733,7 +772,6 @@ int del_timer(struct timer_list *timer)
 
 	return ret;
 }
-
 EXPORT_SYMBOL(del_timer);
 
 #ifdef CONFIG_SMP
@@ -767,7 +805,6 @@ int try_to_del_timer_sync(struct timer_list *timer)
 
 	return ret;
 }
-
 EXPORT_SYMBOL(try_to_del_timer_sync);
 
 /**
@@ -796,7 +833,6 @@ int del_timer_sync(struct timer_list *timer)
 		cpu_relax();
 	}
 }
-
 EXPORT_SYMBOL(del_timer_sync);
 #endif
 
@@ -1268,7 +1304,7 @@ signed long __sched schedule_timeout(signed long timeout)
 	expire = timeout + jiffies;
 
 	setup_timer_on_stack(&timer, process_timeout, (unsigned long)current);
-	__mod_timer(&timer, expire);
+	__mod_timer(&timer, expire, false);
 	schedule();
 	del_singleshot_timer_sync(&timer);
 

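( Note: a hedged usage sketch of the new API, not taken from the
  patch; conn_timer, conn_saw_traffic and conn_close are
  hypothetical names illustrating the networking pattern the
  changelog describes. The timer is assumed to have been
  initialized elsewhere, e.g. via setup_timer(). )

#include <linux/jiffies.h>
#include <linux/timer.h>

static struct timer_list conn_timer;

/* Possibly unserialized fast path: push the timeout forward.
 * Unlike mod_timer(), mod_timer_pending() will not re-activate
 * a timer that a concurrent del_timer*() already removed. */
static void conn_saw_traffic(void)
{
	mod_timer_pending(&conn_timer, jiffies + 30 * HZ);
}

/* Teardown: a single deletion is final, even against racing
 * conn_saw_traffic() callers. */
static void conn_close(void)
{
	del_timer_sync(&conn_timer);
}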
commit ac07bcaa8259841905ead3f8cd60b1923ca6c0e5
Merge: 37bd824a35a6 35ebf1caa485
Author: Ingo Molnar <mingo@elte.hu>
Date:   Wed Feb 18 01:09:07 2009 +0100

    Merge branch 'tip/tracing/ftrace' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into tracing/ftrace

commit 37bd824a35a60abc73e5fa8816bd5f50c913d69b
Merge: 73d3fd96e777 5b058bcde961
Author: Ingo Molnar <mingo@elte.hu>
Date:   Wed Feb 18 01:08:13 2009 +0100

    Merge branches 'tracing/ftrace' and 'tracing/urgent' into tracing/core