From: Michael Jeanson
Date: Wed, 8 Jun 2022 16:56:36 +0000 (-0400)
Subject: fix: mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked() (v5.19)
X-Git-Url: https://git.lttng.org/?p=lttng-modules.git;a=commitdiff_plain;h=6229bbaa423832f6b7c7a658ad11e1d4242752ff

fix: mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked() (v5.19)

See upstream commit :

  commit 10e0f7530205799e7e971aba699a7cb3a47456de
  Author: Wonhyuk Yang
  Date:   Thu May 19 14:08:54 2022 -0700

    mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked()

    Currently, the tracepoint mm_page_alloc_zone_locked() doesn't show
    correct information.

    First, when alloc_flags has ALLOC_HARDER/ALLOC_CMA, a page can be
    allocated from MIGRATE_HIGHATOMIC/MIGRATE_CMA. Nevertheless, the
    tracepoint reports the requested migration type, not
    MIGRATE_HIGHATOMIC or MIGRATE_CMA.

    Second, since commit 44042b4498728 ("mm/page_alloc: allow high-order
    pages to be stored on the per-cpu lists"), the per-cpu lists can store
    high-order pages, but the tracepoint decides whether an allocation is
    a refill of the per-cpu list by comparing the requested order with 0.

    To handle these problems, make mm_page_alloc_zone_locked() only be
    called by __rmqueue_smallest() with the correct migration type. With
    a new argument called percpu_refill, it can show roughly whether the
    allocation is a refill of the per-cpu list.

Change-Id: I2e4a57393757f12b9c5a4566c4d1102ee2474a09
Signed-off-by: Michael Jeanson
Signed-off-by: Mathieu Desnoyers
---

diff --git a/include/instrumentation/events/kmem.h b/include/instrumentation/events/kmem.h
index 29c0fb7f..8c19e962 100644
--- a/include/instrumentation/events/kmem.h
+++ b/include/instrumentation/events/kmem.h
@@ -218,6 +218,50 @@ LTTNG_TRACEPOINT_EVENT_MAP(mm_page_alloc, kmem_mm_page_alloc,
 	)
 )
 
+#if (LTTNG_LINUX_VERSION_CODE >= LTTNG_KERNEL_VERSION(5,19,0))
+LTTNG_TRACEPOINT_EVENT_CLASS(kmem_mm_page,
+
+	TP_PROTO(struct page *page, unsigned int order, int migratetype,
+		int percpu_refill),
+
+	TP_ARGS(page, order, migratetype, percpu_refill),
+
+	TP_FIELDS(
+		ctf_integer_hex(struct page *, page, page)
+		ctf_integer(unsigned long, pfn,
+			page ? page_to_pfn(page) : -1UL)
+		ctf_integer(unsigned int, order, order)
+		ctf_integer(int, migratetype, migratetype)
+		ctf_integer(int, percpu_refill, percpu_refill)
+	)
+)
+
+LTTNG_TRACEPOINT_EVENT_INSTANCE_MAP(kmem_mm_page, mm_page_alloc_zone_locked,
+
+	kmem_mm_page_alloc_zone_locked,
+
+	TP_PROTO(struct page *page, unsigned int order, int migratetype,
+		int percpu_refill),
+
+	TP_ARGS(page, order, migratetype, percpu_refill)
+)
+
+LTTNG_TRACEPOINT_EVENT_MAP(mm_page_pcpu_drain,
+
+	kmem_mm_page_pcpu_drain,
+
+	TP_PROTO(struct page *page, unsigned int order, int migratetype),
+
+	TP_ARGS(page, order, migratetype),
+
+	TP_FIELDS(
+		ctf_integer(unsigned long, pfn,
+			page ? page_to_pfn(page) : -1UL)
+		ctf_integer(unsigned int, order, order)
+		ctf_integer(int, migratetype, migratetype)
+	)
+)
+#else
 LTTNG_TRACEPOINT_EVENT_CLASS(kmem_mm_page,
 
 	TP_PROTO(struct page *page, unsigned int order, int migratetype),
@@ -250,6 +294,7 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE_MAP(kmem_mm_page, mm_page_pcpu_drain,
 
 	TP_ARGS(page, order, migratetype)
 )
+#endif
 
 #if (LTTNG_LINUX_VERSION_CODE >= LTTNG_KERNEL_VERSION(3,19,2) \
 	|| LTTNG_KERNEL_RANGE(3,14,36, 3,15,0) \
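
For reference, the upstream change means the 4-argument event class guarded by
the LTTNG_KERNEL_VERSION(5,19,0) check above is only ever reached from
__rmqueue_smallest(). Below is a rough sketch of that v5.19+ call site,
reconstructed from the upstream commit description rather than copied from
mm/page_alloc.c; in particular, the exact percpu_refill expression is an
assumption.

/*
 * Sketch (assumed, simplified) of the post-5.19 caller in mm/page_alloc.c.
 * The page is taken from the free list of 'migratetype', so the event now
 * records the migration type the page actually came from, plus a hint that
 * the allocation refills a per-cpu list.
 */
static __always_inline
struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
				int migratetype)
{
	unsigned int current_order;
	struct free_area *area;
	struct page *page;

	/* Find a page of the appropriate size in the preferred list. */
	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
		area = &(zone->free_area[current_order]);
		page = get_page_from_free_area(area, migratetype);
		if (!page)
			continue;
		del_page_from_free_list(page, zone, current_order);
		expand(zone, page, order, current_order, migratetype);
		set_pcppage_migratetype(page, migratetype);
		/*
		 * percpu_refill: true roughly when this order is eligible for
		 * the per-cpu lists, letting the trace distinguish pcp refills
		 * from direct high-order allocations (assumed expression).
		 */
		trace_mm_page_alloc_zone_locked(page, order, migratetype,
				pcp_allowed_order(order) &&
				migratetype < MIGRATE_PCPTYPES);
		return page;
	}

	return NULL;
}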