
Commit

Merge branch 'akpm' (patches from Andrew)
Merge more updates from Andrew Morton:
 "Most of the rest of MM and various other things. Some Kconfig rework
  still awaits merges of dependent trees from linux-next.

  Subsystems affected by this patch series: mm/hotfixes, mm/memcg,
  mm/vmstat, mm/thp, procfs, sysctl, misc, notifiers, core-kernel,
  bitops, lib, checkpatch, epoll, binfmt, init, rapidio, uaccess, kcov,
  ubsan, ipc, bitmap, mm/pagemap"

* akpm: (86 commits)
  mm: remove __ARCH_HAS_4LEVEL_HACK and include/asm-generic/4level-fixup.h
  um: add support for folded p4d page tables
  um: remove unused pxx_offset_proc() and addr_pte() functions
  sparc32: use pgtable-nopud instead of 4level-fixup
  parisc/hugetlb: use pgtable-nopXd instead of 4level-fixup
  parisc: use pgtable-nopXd instead of 4level-fixup
  nds32: use pgtable-nopmd instead of 4level-fixup
  microblaze: use pgtable-nopmd instead of 4level-fixup
  m68k: mm: use pgtable-nopXd instead of 4level-fixup
  m68k: nommu: use pgtable-nopud instead of 4level-fixup
  c6x: use pgtable-nopud instead of 4level-fixup
  arm: nommu: use pgtable-nopud instead of 4level-fixup
  alpha: use pgtable-nopud instead of 4level-fixup
  gpio: pca953x: tighten up indentation
  gpio: pca953x: convert to use bitmap API
  gpio: pca953x: use input from regs structure in pca953x_irq_pending()
  gpio: pca953x: remove redundant variable and check in IRQ handler
  lib/bitmap: introduce bitmap_replace() helper
  lib/test_bitmap: fix comment about this file
  lib/test_bitmap: move exp1 and exp2 upper for others to use
  ...
torvalds committed Dec 5, 2019
2 parents 2f13437 + f949286 commit 5ecc9d1
Showing 154 changed files with 5,252 additions and 1,342 deletions.
2 changes: 2 additions & 0 deletions .gitattributes
@@ -1,2 +1,4 @@
*.c diff=cpp
*.h diff=cpp
*.dtsi diff=dts
*.dts diff=dts
2 changes: 1 addition & 1 deletion Documentation/core-api/genalloc.rst
@@ -129,7 +129,7 @@ writing of special-purpose memory allocators in the future.
:functions: gen_pool_for_each_chunk

.. kernel-doc:: lib/genalloc.c
:functions: addr_in_gen_pool
:functions: gen_pool_has_addr

.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_avail
129 changes: 129 additions & 0 deletions Documentation/dev-tools/kcov.rst
@@ -34,6 +34,7 @@ Profiling data will only become accessible once debugfs has been mounted::

Coverage collection
-------------------

The following program demonstrates coverage collection from within a test
program using kcov:

@@ -128,6 +129,7 @@ only need to enable coverage (disable happens automatically on thread end).

Comparison operands collection
------------------------------

Comparison operands collection is similar to coverage collection:

.. code-block:: c
@@ -202,3 +204,130 @@ Comparison operands collection is similar to coverage collection:
Note that the kcov modes (coverage collection or comparison operands) are
mutually exclusive.

Remote coverage collection
--------------------------

With KCOV_ENABLE, coverage is collected only for syscalls issued from the
current process. With KCOV_REMOTE_ENABLE, it is possible to collect coverage
for arbitrary parts of the kernel code, provided that those parts are
annotated with kcov_remote_start()/kcov_remote_stop().

This allows coverage to be collected from two types of kernel background
threads: global ones, which are spawned during kernel boot in a limited
number of instances (e.g. one USB hub_event() worker thread is spawned per
USB HCD); and local ones, which are spawned when a user interacts with
some kernel interface (e.g. vhost workers).

To enable collecting coverage from a global background thread, a unique
global handle must be assigned and passed to the corresponding
kcov_remote_start() call. A userspace process can then pass a list of such
handles to the KCOV_REMOTE_ENABLE ioctl in the handles array field of the
kcov_remote_arg struct. This attaches the kcov device in use to the code
sections that are referenced by those handles.

Since there might be many local background threads spawned from different
userspace processes, we can't use a single global handle per annotation.
Instead, the userspace process passes a non-zero handle through the
common_handle field of the kcov_remote_arg struct. This common handle gets
saved to the kcov_handle field in the current task_struct and needs to be
passed to the newly spawned threads via custom annotations. Those threads
should in turn be annotated with kcov_remote_start()/kcov_remote_stop().
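
A minimal kernel-side sketch of this wiring (the struct and function names
here are hypothetical and chosen only for illustration; kcov_remote_start(),
kcov_remote_stop() and kcov_common_handle() are the kcov annotations referred
to above):

.. code-block:: c

    #include <linux/kcov.h>
    #include <linux/kthread.h>
    #include <linux/types.h>

    /* Hypothetical context handed from the spawning process to its worker. */
    struct my_ctx {
        u64 kcov_handle;
    };

    static int my_worker(void *data)
    {
        struct my_ctx *ctx = data;

        /* Attribute this section's coverage to the spawning process. */
        kcov_remote_start(ctx->kcov_handle);
        /* ... servicing work ... */
        kcov_remote_stop();
        return 0;
    }

    static void my_spawn(struct my_ctx *ctx)
    {
        /* Read the common handle saved in the current task_struct. */
        ctx->kcov_handle = kcov_common_handle();
        kthread_run(my_worker, ctx, "my_worker");
    }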

Internally kcov stores handles as u64 integers. The top byte of a handle
denotes the id of the subsystem that the handle belongs to, and the lower
4 bytes denote the id of a thread instance within that subsystem. The
reserved value 0 is used as the subsystem id for common handles, as they
don't belong to a particular subsystem. The middle bytes 4-6 are currently
reserved and must be zero. In the future the number of bytes used for the
subsystem or handle ids might be increased.
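
As a worked check of this layout (an illustrative addition; the masks and ids
mirror the ones used in the example program below):

.. code-block:: c

    #include <assert.h>
    #include <stdint.h>

    /* Worked check of the handle layout described above. */
    int main(void)
    {
        uint64_t usb_bus1 = (0x01ull << 56) | 1;    /* subsystem USB, instance 1 */
        uint64_t common42 = (0x00ull << 56) | 0x42; /* common, instance 0x42 */

        assert(usb_bus1 == 0x0100000000000001ull);
        assert(common42 == 0x42ull);
        return 0;
    }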

When a particular userspace process collects coverage via a common handle,
kcov will collect coverage for each code section that is annotated to use
the common handle obtained as kcov_handle from the current task_struct.
Non-common handles, in contrast, allow coverage to be collected selectively
from different subsystems.

.. code-block:: c

    struct kcov_remote_arg {
        unsigned trace_mode;
        unsigned area_size;
        unsigned num_handles;
        uint64_t common_handle;
        uint64_t handles[0];
    };

    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_DISABLE _IO('c', 101)
    #define KCOV_REMOTE_ENABLE _IOW('c', 102, struct kcov_remote_arg)
    #define COVER_SIZE (64 << 10)
    #define KCOV_TRACE_PC 0
    #define KCOV_SUBSYSTEM_COMMON (0x00ull << 56)
    #define KCOV_SUBSYSTEM_USB (0x01ull << 56)
    #define KCOV_SUBSYSTEM_MASK (0xffull << 56)
    #define KCOV_INSTANCE_MASK (0xffffffffull)

    static inline __u64 kcov_remote_handle(__u64 subsys, __u64 inst)
    {
        if (subsys & ~KCOV_SUBSYSTEM_MASK || inst & ~KCOV_INSTANCE_MASK)
            return 0;
        return subsys | inst;
    }

    #define KCOV_COMMON_ID 0x42
    #define KCOV_USB_BUS_NUM 1

    int main(int argc, char **argv)
    {
        int fd;
        unsigned long *cover, n, i;
        struct kcov_remote_arg *arg;

        fd = open("/sys/kernel/debug/kcov", O_RDWR);
        if (fd == -1)
            perror("open"), exit(1);
        if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
            perror("ioctl"), exit(1);
        cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if ((void*)cover == MAP_FAILED)
            perror("mmap"), exit(1);

        /* Enable coverage collection via common handle and from USB bus #1. */
        arg = calloc(1, sizeof(*arg) + sizeof(uint64_t));
        if (!arg)
            perror("calloc"), exit(1);
        arg->trace_mode = KCOV_TRACE_PC;
        arg->area_size = COVER_SIZE;
        arg->num_handles = 1;
        arg->common_handle = kcov_remote_handle(KCOV_SUBSYSTEM_COMMON,
                                                KCOV_COMMON_ID);
        arg->handles[0] = kcov_remote_handle(KCOV_SUBSYSTEM_USB,
                                             KCOV_USB_BUS_NUM);
        if (ioctl(fd, KCOV_REMOTE_ENABLE, arg))
            perror("ioctl"), free(arg), exit(1);
        free(arg);

        /*
         * Here the user needs to either trigger execution of a kernel code
         * section that is annotated with the common handle, or trigger some
         * activity on USB bus #1.
         */
        sleep(2);

        n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
        for (i = 0; i < n; i++)
            printf("0x%lx\n", cover[i + 1]);
        if (ioctl(fd, KCOV_DISABLE, 0))
            perror("ioctl"), exit(1);
        if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
            perror("munmap"), exit(1);
        if (close(fd))
            perror("close"), exit(1);
        return 0;
    }
22 changes: 11 additions & 11 deletions arch/Kconfig
@@ -72,19 +72,19 @@ config KPROBES
If in doubt, say "N".

config JUMP_LABEL
bool "Optimize very unlikely/likely branches"
depends on HAVE_ARCH_JUMP_LABEL
depends on CC_HAS_ASM_GOTO
help
This option enables a transparent branch optimization that
bool "Optimize very unlikely/likely branches"
depends on HAVE_ARCH_JUMP_LABEL
depends on CC_HAS_ASM_GOTO
help
This option enables a transparent branch optimization that
makes certain almost-always-true or almost-always-false branch
conditions even cheaper to execute within the kernel.

Certain performance-sensitive kernel code, such as trace points,
scheduler functionality, networking code and KVM have such
branches and include support for this optimization technique.

If it is detected that the compiler has support for "asm goto",
If it is detected that the compiler has support for "asm goto",
the kernel will compile such branches with just a nop
instruction. When the condition flag is toggled to true, the
nop will be converted to a jump instruction to execute the
@@ -151,8 +151,8 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
bool
help
bool
help
Modern versions of GCC (since 4.4) have builtin functions
for handling byte-swapping. Using these, instead of the old
inline assembler that the architecture code provides in the
@@ -221,10 +221,10 @@ config HAVE_DMA_CONTIGUOUS
bool

config GENERIC_SMP_IDLE_THREAD
bool
bool

config GENERIC_IDLE_POLL_SETUP
bool
bool

config ARCH_HAS_FORTIFY_SOURCE
bool
@@ -257,7 +257,7 @@ config ARCH_HAS_UNCACHED_SEGMENT

# Select if arch init_task must go in the __init_task_data section
config ARCH_TASK_STRUCT_ON_STACK
bool
bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
1 change: 0 additions & 1 deletion arch/alpha/include/asm/mmzone.h
@@ -73,7 +73,6 @@ PLAT_NODE_DATA_LOCALNR(unsigned long p, int n)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)

#define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> 32))
#define pgd_page(pgd) (pfn_to_page(pgd_val(pgd) >> 32))
#define pte_pfn(pte) (pte_val(pte) >> 32)

#define mk_pte(page, pgprot) \
4 changes: 2 additions & 2 deletions arch/alpha/include/asm/pgalloc.h
@@ -27,9 +27,9 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
}

static inline void
pgd_populate(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmd)
pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
{
pgd_set(pgd, pmd);
pud_set(pud, pmd);
}

extern pgd_t *pgd_alloc(struct mm_struct *mm);
24 changes: 12 additions & 12 deletions arch/alpha/include/asm/pgtable.h
@@ -2,7 +2,7 @@
#ifndef _ALPHA_PGTABLE_H
#define _ALPHA_PGTABLE_H

#include <asm-generic/4level-fixup.h>
#include <asm-generic/pgtable-nopud.h>

/*
* This file contains the functions and defines necessary to modify and use
@@ -226,8 +226,8 @@ extern inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
extern inline void pmd_set(pmd_t * pmdp, pte_t * ptep)
{ pmd_val(*pmdp) = _PAGE_TABLE | ((((unsigned long) ptep) - PAGE_OFFSET) << (32-PAGE_SHIFT)); }

extern inline void pgd_set(pgd_t * pgdp, pmd_t * pmdp)
{ pgd_val(*pgdp) = _PAGE_TABLE | ((((unsigned long) pmdp) - PAGE_OFFSET) << (32-PAGE_SHIFT)); }
extern inline void pud_set(pud_t * pudp, pmd_t * pmdp)
{ pud_val(*pudp) = _PAGE_TABLE | ((((unsigned long) pmdp) - PAGE_OFFSET) << (32-PAGE_SHIFT)); }


extern inline unsigned long
@@ -238,11 +238,11 @@ pmd_page_vaddr(pmd_t pmd)

#ifndef CONFIG_DISCONTIGMEM
#define pmd_page(pmd) (mem_map + ((pmd_val(pmd) & _PFN_MASK) >> 32))
#define pgd_page(pgd) (mem_map + ((pgd_val(pgd) & _PFN_MASK) >> 32))
#define pud_page(pud) (mem_map + ((pud_val(pud) & _PFN_MASK) >> 32))
#endif

extern inline unsigned long pgd_page_vaddr(pgd_t pgd)
{ return PAGE_OFFSET + ((pgd_val(pgd) & _PFN_MASK) >> (32-PAGE_SHIFT)); }
extern inline unsigned long pud_page_vaddr(pud_t pgd)
{ return PAGE_OFFSET + ((pud_val(pgd) & _PFN_MASK) >> (32-PAGE_SHIFT)); }

extern inline int pte_none(pte_t pte) { return !pte_val(pte); }
extern inline int pte_present(pte_t pte) { return pte_val(pte) & _PAGE_VALID; }
@@ -256,10 +256,10 @@ extern inline int pmd_bad(pmd_t pmd) { return (pmd_val(pmd) & ~_PFN_MASK) != _P
extern inline int pmd_present(pmd_t pmd) { return pmd_val(pmd) & _PAGE_VALID; }
extern inline void pmd_clear(pmd_t * pmdp) { pmd_val(*pmdp) = 0; }

extern inline int pgd_none(pgd_t pgd) { return !pgd_val(pgd); }
extern inline int pgd_bad(pgd_t pgd) { return (pgd_val(pgd) & ~_PFN_MASK) != _PAGE_TABLE; }
extern inline int pgd_present(pgd_t pgd) { return pgd_val(pgd) & _PAGE_VALID; }
extern inline void pgd_clear(pgd_t * pgdp) { pgd_val(*pgdp) = 0; }
extern inline int pud_none(pud_t pud) { return !pud_val(pud); }
extern inline int pud_bad(pud_t pud) { return (pud_val(pud) & ~_PFN_MASK) != _PAGE_TABLE; }
extern inline int pud_present(pud_t pud) { return pud_val(pud) & _PAGE_VALID; }
extern inline void pud_clear(pud_t * pudp) { pud_val(*pudp) = 0; }

/*
* The following only work if pte_present() is true.
@@ -301,9 +301,9 @@ extern inline pte_t pte_mkspecial(pte_t pte) { return pte; }
*/

/* Find an entry in the second-level page table.. */
extern inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
extern inline pmd_t * pmd_offset(pud_t * dir, unsigned long address)
{
pmd_t *ret = (pmd_t *) pgd_page_vaddr(*dir) + ((address >> PMD_SHIFT) & (PTRS_PER_PAGE - 1));
pmd_t *ret = (pmd_t *) pud_page_vaddr(*dir) + ((address >> PMD_SHIFT) & (PTRS_PER_PAGE - 1));
smp_read_barrier_depends(); /* see above */
return ret;
}
12 changes: 8 additions & 4 deletions arch/alpha/mm/init.c
@@ -146,6 +146,8 @@ callback_init(void * kernel_end)
{
struct crb_struct * crb;
pgd_t *pgd;
p4d_t *p4d;
pud_t *pud;
pmd_t *pmd;
void *two_pages;

@@ -184,8 +186,10 @@ callback_init(void * kernel_end)
memset(two_pages, 0, 2*PAGE_SIZE);

pgd = pgd_offset_k(VMALLOC_START);
pgd_set(pgd, (pmd_t *)two_pages);
pmd = pmd_offset(pgd, VMALLOC_START);
p4d = p4d_offset(pgd, VMALLOC_START);
pud = pud_offset(p4d, VMALLOC_START);
pud_set(pud, (pmd_t *)two_pages);
pmd = pmd_offset(pud, VMALLOC_START);
pmd_set(pmd, (pte_t *)(two_pages + PAGE_SIZE));

if (alpha_using_srm) {
@@ -214,9 +218,9 @@ callback_init(void * kernel_end)
/* Newer consoles (especially on larger
systems) may require more pages of
PTEs. Grab additional pages as needed. */
if (pmd != pmd_offset(pgd, vaddr)) {
if (pmd != pmd_offset(pud, vaddr)) {
memset(kernel_end, 0, PAGE_SIZE);
pmd = pmd_offset(pgd, vaddr);
pmd = pmd_offset(pud, vaddr);
pmd_set(pmd, (pte_t *)kernel_end);
kernel_end += PAGE_SIZE;
}
2 changes: 1 addition & 1 deletion arch/arm/include/asm/pgtable.h
@@ -12,7 +12,7 @@

#ifndef CONFIG_MMU

#include <asm-generic/4level-fixup.h>
#include <asm-generic/pgtable-nopud.h>
#include <asm/pgtable-nommu.h>

#else
2 changes: 1 addition & 1 deletion arch/arm/mm/dma-mapping.c
@@ -529,7 +529,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page)

static bool __in_atomic_pool(void *start, size_t size)
{
return addr_in_gen_pool(atomic_pool, (unsigned long)start, size);
return gen_pool_has_addr(atomic_pool, (unsigned long)start, size);
}

static int __free_from_pool(void *start, size_t size)
2 changes: 1 addition & 1 deletion arch/c6x/include/asm/pgtable.h
@@ -8,7 +8,7 @@
#ifndef _ASM_C6X_PGTABLE_H
#define _ASM_C6X_PGTABLE_H

#include <asm-generic/4level-fixup.h>
#include <asm-generic/pgtable-nopud.h>

#include <asm/setup.h>
#include <asm/page.h>
7 changes: 0 additions & 7 deletions arch/m68k/include/asm/mcf_pgalloc.h
@@ -28,9 +28,6 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
return (pmd_t *) pgd;
}

#define pmd_alloc_one_fast(mm, address) ({ BUG(); ((pmd_t *)1); })
#define pmd_alloc_one(mm, address) ({ BUG(); ((pmd_t *)2); })

#define pmd_populate(mm, pmd, page) (pmd_val(*pmd) = \
(unsigned long)(page_address(page)))

@@ -45,8 +42,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t page,
__free_page(page);
}

#define __pmd_free_tlb(tlb, pmd, address) do { } while (0)

static inline struct page *pte_alloc_one(struct mm_struct *mm)
{
struct page *page = alloc_pages(GFP_DMA, 0);
@@ -100,6 +95,4 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
return new_pgd;
}

#define pgd_populate(mm, pmd, pte) BUG()

#endif /* M68K_MCF_PGALLOC_H */