powerpc/book3s64/memhotplug: enable memmap on memory for radix
Radix vmemmap mapping can map things correctly at the PMD level or PTE
level based on different device boundary checks.  Hence we skip the
restriction that the vmemmap size be a multiple of PMD_SIZE.  This also
makes the feature more widely useful, because using a PMD_SIZE vmemmap
area requires a memory block size of 2GiB.

We can also use MHP_RESERVE_PAGES_MEMMAP_ON_MEMORY so that the feature can
work with a memory block size of 256MB, using the altmap.reserve feature to
align things correctly at pageblock granularity.  We can end up losing
some pages in memory with this.  For example, with a 256MiB memory block
size, we require 4 pages to map the vmemmap pages; in order to align things
correctly we end up adding a reserve of 28 pages.  That is, for every 4096
pages, 28 pages get reserved.
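
[Editor's note: to make that accounting concrete, here is a standalone
sketch of the arithmetic.  This is an illustration, not kernel code; it
assumes 64K base pages, a 64-byte struct page, and a 32-page pageblock,
which are the values the 4096/28 figures above imply.]

#include <stdio.h>

/* Assumed values, inferred from the commit message figures
 * (not taken from kernel headers). */
#define PAGE_SIZE          (64 * 1024UL)  /* 64K base pages */
#define STRUCT_PAGE_SIZE   64UL           /* bytes per struct page */
#define PAGEBLOCK_PAGES    32UL           /* pageblock granularity in pages */

int main(void)
{
	unsigned long block_sz = 256UL * 1024 * 1024;            /* 256MiB block */
	unsigned long nr_pages = block_sz / PAGE_SIZE;           /* 4096 pages */
	unsigned long memmap_pages =
		nr_pages * STRUCT_PAGE_SIZE / PAGE_SIZE;         /* 4 vmemmap pages */
	/* Pad the memmap up to a whole pageblock; the padding is the
	 * altmap.reserve the commit message describes. */
	unsigned long reserve = PAGEBLOCK_PAGES - memmap_pages;  /* 28 pages */

	printf("per %lu pages: %lu memmap + %lu reserved\n",
	       nr_pages, memmap_pages, reserve);
	return 0;
}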

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Vishal Verma <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
kvaneesh authored and akpm00 committed Aug 21, 2023
1 parent 2d1f649 commit 603fd64
Showing 3 changed files with 23 additions and 1 deletion.
1 change: 1 addition & 0 deletions arch/powerpc/Kconfig
@@ -157,6 +157,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_KEEP_MEMBLOCK
+	select ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE if PPC_RADIX_MMU
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
21 changes: 21 additions & 0 deletions arch/powerpc/include/asm/pgtable.h
@@ -161,6 +161,27 @@ static inline pgtable_t pmd_pgtable(pmd_t pmd)
 int __meminit vmemmap_populated(unsigned long vmemmap_addr, int vmemmap_map_size);
 bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long start,
 			   unsigned long page_size);
+/*
+ * mm/memory_hotplug.c:mhp_supports_memmap_on_memory goes into details
+ * of some of the restrictions. We don't check for PMD_SIZE because our
+ * vmemmap allocation code can fall back correctly. The pageblock
+ * alignment requirement is met using altmap->reserve blocks.
+ */
+#define arch_supports_memmap_on_memory arch_supports_memmap_on_memory
+static inline bool arch_supports_memmap_on_memory(unsigned long vmemmap_size)
+{
+	if (!radix_enabled())
+		return false;
+	/*
+	 * With a 4K page size and 2M PMD_SIZE, memory block sizes from
+	 * 128MB up yield a PMD_SIZE-aligned vmemmap. Hence require
+	 * PMD_SIZE alignment here.
+	 */
+	if (IS_ENABLED(CONFIG_PPC_4K_PAGES))
+		return IS_ALIGNED(vmemmap_size, PMD_SIZE);
+	return true;
+}
+
 #endif /* CONFIG_PPC64 */
 
 #endif /* __ASSEMBLY__ */
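
[Editor's note: for intuition about the CONFIG_PPC_4K_PAGES branch above,
here is a small userspace re-creation of the check.  It is an illustration
only, with assumed constants (4K pages, a 64-byte struct page, a 2M
PMD_SIZE); the real vmemmap_size is supplied by the caller named in the
comment above.]

#include <stdbool.h>
#include <stdio.h>

/* Assumed constants for illustration; not taken from kernel headers. */
#define PAGE_SIZE        4096UL               /* 4K base pages */
#define STRUCT_PAGE_SIZE 64UL                 /* bytes per struct page */
#define PMD_SIZE         (2UL * 1024 * 1024)  /* 2M */

#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* Mirrors the CONFIG_PPC_4K_PAGES branch of arch_supports_memmap_on_memory(). */
static bool supports_4k(unsigned long block_size)
{
	unsigned long vmemmap_size = block_size / PAGE_SIZE * STRUCT_PAGE_SIZE;

	return IS_ALIGNED(vmemmap_size, PMD_SIZE);
}

int main(void)
{
	unsigned long mb = 1024UL * 1024;
	unsigned long sizes[] = { 64 * mb, 128 * mb, 256 * mb, 1024 * mb };

	for (int i = 0; i < 4; i++)
		printf("%4lu MB block: %s\n", sizes[i] / mb,
		       supports_4k(sizes[i]) ? "supported" : "rejected");
	return 0;
}

A 128MB block is the first size whose vmemmap (32768 struct pages, i.e.
2MB) lands exactly on a PMD boundary, which is what the in-code comment
refers to.
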
2 changes: 1 addition & 1 deletion arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -637,7 +637,7 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
 		nid = first_online_node;
 
 	/* Add the memory */
-	rc = __add_memory(nid, lmb->base_addr, block_sz, MHP_NONE);
+	rc = __add_memory(nid, lmb->base_addr, block_sz, MHP_MEMMAP_ON_MEMORY);
 	if (rc) {
 		invalidate_lmb_associativity_index(lmb);
 		return rc;
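
[Editor's note: MHP_MEMMAP_ON_MEMORY here reads as a request rather than a
guarantee.  On a reading of the generic hotplug path, if
mhp_supports_memmap_on_memory() (and with it the new
arch_supports_memmap_on_memory() above) rejects the configuration, the flag
is simply ignored and the memmap is allocated from existing memory, as with
MHP_NONE.  Below is a toy model of that decision, with illustrative
constants and names, not the actual mm/memory_hotplug.c code.]

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long mhp_t;
#define MHP_NONE             0UL
#define MHP_MEMMAP_ON_MEMORY (1UL << 1)  /* bit value chosen for illustration */

/* Stand-in for mhp_supports_memmap_on_memory(); the real check also
 * consults arch_supports_memmap_on_memory() from the hunk above. */
static bool supports_memmap_on_memory(unsigned long size)
{
	return size == (256UL << 20);  /* pretend only 256MB blocks qualify */
}

/* Toy model of the decision: the flag is honored only when the
 * configuration supports it; otherwise behavior matches MHP_NONE. */
static void add_memory_sketch(unsigned long size, mhp_t flags)
{
	if ((flags & MHP_MEMMAP_ON_MEMORY) && supports_memmap_on_memory(size))
		printf("%lu MB: memmap carved out of the new block (altmap)\n",
		       size >> 20);
	else
		printf("%lu MB: memmap allocated from existing memory\n",
		       size >> 20);
}

int main(void)
{
	add_memory_sketch(256UL << 20, MHP_MEMMAP_ON_MEMORY);
	add_memory_sketch(16UL << 20, MHP_MEMMAP_ON_MEMORY);  /* falls back */
	add_memory_sketch(256UL << 20, MHP_NONE);
	return 0;
}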
