==============
Page migration
==============

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

Also see Documentation/mm/hmm.rst for migrating pages to or from device
private memory.

The main intent of page migration is to reduce the latency of memory accesses
by moving pages near to the processor where the process accessing that memory
is running.

Page migration allows a process to manually relocate the node on which its
pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
setting a new memory policy via mbind(). The pages of a process can also be
relocated from another process using the sys_migrate_pages() function call.
The migrate_pages() function call takes two sets of nodes and moves pages of a
process that are located on the from nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which provides an interface similar to other NUMA functionality for page
migration.  ``cat /proc/<pid>/numa_maps`` allows an easy review of where the
pages of a process are located. See also the numa_maps documentation in the
proc(5) man page.
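
A minimal userspace sketch, assuming libnuma from the numactl package is
installed (compile with ``-lnuma``); it asks the kernel to move all pages of
the calling process from node 0 to node 1::

    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            struct bitmask *from, *to;

            if (numa_available() < 0) {
                    fprintf(stderr, "no NUMA support on this system\n");
                    return EXIT_FAILURE;
            }

            from = numa_parse_nodestring("0");  /* source nodes */
            to = numa_parse_nodestring("1");    /* destination nodes */

            /* pid 0 means the calling process */
            if (numa_migrate_pages(0, from, to) < 0)
                    perror("numa_migrate_pages");

            numa_free_nodemask(from);
            numa_free_nodemask(to);
            return EXIT_SUCCESS;
    }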

Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. A special function call
"move_pages" allows the moving of individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.
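
A minimal sketch of move_pages(2), using the wrapper that libnuma declares in
``<numaif.h>`` (a page size of 4096 bytes is assumed here for brevity)::

    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            void *pages[1];
            int nodes[1] = { 1 };   /* desired destination node */
            int status[1];

            /* any page-aligned address owned by this process works */
            pages[0] = aligned_alloc(4096, 4096);
            *(char *)pages[0] = 1;  /* touch it so the page exists */

            /* pid 0 means the calling process */
            if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0)
                    perror("move_pages");
            else
                    printf("page is now on node %d\n", status[0]);

            free(pages[0]);
            return 0;
    }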

Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see
:ref:`CPUSETS <cpusets>`).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset, then all of its pages are moved with it so that the
performance of the process does not sink dramatically. The pages of
processes in a cpuset are also moved if the allowed memory nodes of a
cpuset are changed.

Page migration preserves the relative location of pages within a group of
nodes for all migration techniques, so that a particular memory allocation
pattern generated by a process is kept even after migrating the process.
This is necessary in order to preserve the memory latencies: processes
will run with similar performance after migration.

Page migration occurs in several steps. First comes a high level
description for those trying to use migrate_pages() from the kernel
(for userspace usage see Andi Kleen's numactl package mentioned above),
followed by a description of how the low level details work.

In kernel use of migrate_pages()
================================

1. Remove folios from the LRU.

   Lists of folios to be migrated are generated by scanning over
   folios and moving them into lists. This is done by
   calling folio_isolate_lru().
   Calling folio_isolate_lru() increases the references to the folio
   so that it cannot vanish while the folio migration occurs.
   It also prevents the swapper or other scans from encountering
   the folio.

2. We need to have a function of type new_folio_t that can be
   passed to migrate_pages(). This function should figure out
   how to allocate the correct new folio given the old folio.

3. The migrate_pages() function is called which attempts
   to do the migration. It will call the function to allocate
   the new folio for each folio that is considered for moving
   (see the sketch after this list).
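
Taken together, a rough in-kernel sketch of these three steps (hypothetical
caller code; the signatures shown match recent kernels but change between
versions, and error handling is omitted)::

    #include <linux/migrate.h>

    /* new_folio_t callback: allocate a same-order folio on the
     * node passed in 'private' */
    static struct folio *alloc_dst_folio(struct folio *src,
                                         unsigned long private)
    {
            int nid = (int)private;

            return __folio_alloc(GFP_HIGHUSER_MOVABLE,
                                 folio_order(src), nid, NULL);
    }

            /* ... in the caller: isolate candidates, then migrate ... */
            LIST_HEAD(folio_list);

            if (folio_isolate_lru(folio))           /* step 1 */
                    list_add(&folio->lru, &folio_list);

            migrate_pages(&folio_list, alloc_dst_folio, NULL,
                          (unsigned long)target_nid, MIGRATE_SYNC,
                          MR_NUMA_MISPLACED, NULL);  /* step 3 */
            putback_movable_pages(&folio_list);      /* undo failures */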
 
How migrate_pages() works
=========================

migrate_pages() does several passes over its list of folios. A folio is moved
if all references to it are removable at the time. The folio has
already been removed from the LRU via folio_isolate_lru() and the refcount
is increased so that the folio cannot be freed while folio migration occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses to
   this (not yet up-to-date) page immediately block while the move is in progress.

4. All the page table references to the page are converted to migration
   entries. This decreases the mapcount of a page. If the resulting
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock
   or wait for the migration page table entry to be removed.

5. The i_pages lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain.
   Otherwise, we know that we are the only one referencing this page.

7. The radix tree is checked and if it does not contain the pointer to this
   page then we back out because someone else modified the radix tree.

8. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

9. The radix tree is changed to point to the new page.

10. The reference count of the old page is dropped because the address space
    reference is gone. A reference to the new page is established because
    the new page is referenced by the address space.

11. The i_pages lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page does
    not provide any information anymore.

15. Queued up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace them
    with real ptes. Doing so will enable access for user space processes not
    already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper,
    etc. again.
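
For pages in a mapping, steps 5 through 11 are carried out atomically under
the i_pages lock. A simplified conceptual sketch, not the exact kernel code
(in current kernels the mapping is an XArray, the successor of the radix
tree)::

    xas_lock_irq(&xas);                     /* step 5 */
    if (!folio_ref_freeze(src, expected_count)) {
            /* step 6: someone else still holds a reference */
            xas_unlock_irq(&xas);
            return -EAGAIN;
    }
    xas_store(&xas, dst);                   /* step 9 */
    folio_ref_unfreeze(src, expected_count - 1);    /* step 10 */
    xas_unlock_irq(&xas);                   /* step 11 */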

Non-LRU page migration
======================

Although migration originally aimed at reducing the latency of memory
accesses for NUMA, compaction also uses migration to create high-order
pages.  For compaction purposes, it is also useful to be able to move
non-LRU pages, such as zsmalloc and virtio-balloon pages.

If a driver wants to make its pages movable, it should define a struct
movable_operations.  It then needs to call __SetPageMovable() on each
page that it may be able to move.  This uses the ``page->mapping`` field,
so this field is not available for the driver to use for other purposes.
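
A hypothetical driver sketch, assuming the movable_operations interface in
include/linux/migrate.h (the mydrv_* names are made up)::

    static bool mydrv_isolate_page(struct page *page, isolate_mode_t mode)
    {
            /* stop using the page internally so it can be migrated */
            return true;
    }

    static int mydrv_migrate_page(struct page *dst, struct page *src,
                                  enum migrate_mode mode)
    {
            /* copy the contents and switch internal references to dst */
            return 0;
    }

    static void mydrv_putback_page(struct page *page)
    {
            /* migration failed or was aborted; resume using the page */
    }

    static const struct movable_operations mydrv_mops = {
            .isolate_page   = mydrv_isolate_page,
            .migrate_page   = mydrv_migrate_page,
            .putback_page   = mydrv_putback_page,
    };

    /* at allocation time, for each page the driver may be able to move */
    __SetPageMovable(page, &mydrv_mops);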

Monitoring Migration
====================

The following events (counters) can be used to monitor page migration.

1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a
   page was migrated. If the page was a non-THP and non-hugetlb page, then
   this counter is increased by one. If the page was a THP or hugetlb, then
   this counter is increased by the number of THP or hugetlb subpages.
   For example, migration of a single 2MB THP that has 4KB-size base pages
   (subpages) will cause this counter to increase by 512.

2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
   PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
   if it was a THP or hugetlb.

3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.

4. THP_MIGRATION_FAIL: A THP could not be migrated, nor could it be split.

5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP
   had to be split. After splitting, a migration retry was used for its
   sub-pages.

THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.
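
These counters appear with lowercase names in ``/proc/vmstat``. A minimal
sketch that prints just the migration counters::

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[128];
            FILE *f = fopen("/proc/vmstat", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "pgmigrate_", 10) ||
                        !strncmp(line, "thp_migration_", 14))
                            fputs(line, stdout);
            fclose(f);
            return 0;
    }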

Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.

.. kernel-doc:: include/linux/migrate.h