/* SPDX-License-Identifier: GPL-2.0
 *
 * This file contains the functions and defines necessary to modify and
 * use the SuperH page table tree.
 *
 * Copyright (C) 1999 Niibe Yutaka
 * Copyright (C) 2002 - 2007 Paul Mundt
 */
#ifndef __ASM_SH_PGTABLE_H
#define __ASM_SH_PGTABLE_H

#ifdef CONFIG_X2TLB
#include <asm/pgtable-3level.h>
#else
#include <asm/pgtable-2level.h>
#endif
#include <asm/page.h>
#include <asm/mmu.h>

#ifndef __ASSEMBLY__
#include <asm/addrspace.h>
#include <asm/fixmap.h>

/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
 */
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))

#endif /* !__ASSEMBLY__ */

/*
 * Effective and physical address definitions, to aid with sign
 * extension.
 */
#define NEFF		32
#define NEFF_SIGN	(1LL << (NEFF - 1))
#define NEFF_MASK	(-1LL << NEFF)

static inline unsigned long long neff_sign_extend(unsigned long val)
{
	unsigned long long extended = val;
	return (extended & NEFF_SIGN) ? (extended | NEFF_MASK) : extended;
}
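/*
 * Worked example (illustrative only, derived from the NEFF definitions
 * above): with NEFF = 32, any value with bit 31 set is ORed with
 * NEFF_MASK, so
 *
 *	neff_sign_extend(0x8c000000) == 0xffffffff8c000000ULL
 *	neff_sign_extend(0x20000000) == 0x0000000020000000ULL
 *
 * i.e. a 32-bit effective address is sign-extended into the 64-bit
 * space, while addresses below the sign bit pass through unchanged.
 */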
#ifdef CONFIG_29BIT
#define NPHYS		29
#else
#define NPHYS		32
#endif

#define NPHYS_SIGN	(1LL << (NPHYS - 1))
#define NPHYS_MASK	(-1LL << NPHYS)

#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

/* Entries per level */
#define PTRS_PER_PTE	(PAGE_SIZE / (1 << PTE_MAGNITUDE))

#define PHYS_ADDR_MASK29	0x1fffffff
#define PHYS_ADDR_MASK32	0xffffffff

static inline unsigned long phys_addr_mask(void)
{
	/* Is the MMU in 29bit mode? */
	if (__in_29bit_mode())
		return PHYS_ADDR_MASK29;

	return PHYS_ADDR_MASK32;
}

#define PTE_PHYS_MASK		(phys_addr_mask() & PAGE_MASK)
#define PTE_FLAGS_MASK		(~(PTE_PHYS_MASK) << PAGE_SHIFT)

#define VMALLOC_START	(P3SEG)
#define VMALLOC_END	(FIXADDR_START-2*PAGE_SIZE)

#include <asm/pgtable_32.h>

/*
 * SH-X and lower (legacy) SuperH parts (SH-3, SH-4, some SH-4A) can't do page
 * protection for execute, and consider it the same as a read. Also, write
 * permission implies read permission. This is the closest we can get..
 *
 * SH-X2 (SH7785) and later parts take this to the opposite end of the extreme,
 * not only supporting separate execute, read, and write bits, but having
 * completely separate permission bits for user and kernel space.
 */
	 /*xwr*/

typedef pte_t *pte_addr_t;

#define pte_pfn(x)		((unsigned long)(((x).pte_low >> PAGE_SHIFT)))

struct vm_area_struct;
struct mm_struct;

extern void __update_cache(struct vm_area_struct *vma,
			   unsigned long address, pte_t pte);
extern void __update_tlb(struct vm_area_struct *vma,
			 unsigned long address, pte_t pte);

static inline void update_mmu_cache_range(struct vm_fault *vmf,
		struct vm_area_struct *vma, unsigned long address,
		pte_t *ptep, unsigned int nr)
{
	pte_t pte = *ptep;

	__update_cache(vma, address, pte);
	__update_tlb(vma, address, pte);
}

#define update_mmu_cache(vma, addr, ptep) \
	update_mmu_cache_range(NULL, vma, addr, ptep, 1)

extern pgd_t swapper_pg_dir[PTRS_PER_PGD];

extern void paging_init(void);
extern void page_table_range_init(unsigned long start, unsigned long end,
				  pgd_t *pgd);

static inline bool __pte_access_permitted(pte_t pte, u64 prot)
{
	return (pte_val(pte) & (prot | _PAGE_SPECIAL)) == prot;
}

#ifdef CONFIG_X2TLB
static inline bool pte_access_permitted(pte_t pte, bool write)
{
	u64 prot = _PAGE_PRESENT;

	prot |= _PAGE_EXT(_PAGE_EXT_KERN_READ | _PAGE_EXT_USER_READ);
	if (write)
		prot |= _PAGE_EXT(_PAGE_EXT_KERN_WRITE |
				  _PAGE_EXT_USER_WRITE);
	return __pte_access_permitted(pte, prot);
}
#else
static inline bool pte_access_permitted(pte_t pte, bool write)
{
	u64 prot = _PAGE_PRESENT | _PAGE_USER;

	if (write)
		prot |= _PAGE_RW;
	return __pte_access_permitted(pte, prot);
}
#endif

#define pte_access_permitted pte_access_permitted

/* arch/sh/mm/mmap.c */
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN

#endif /* __ASM_SH_PGTABLE_H */
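/*
 * Worked example (illustrative only; the bit values follow directly from
 * __pte_access_permitted() above): on non-X2TLB parts a read access is
 * permitted only when
 *
 *	(pte_val(pte) & (_PAGE_PRESENT | _PAGE_USER | _PAGE_SPECIAL))
 *		== (_PAGE_PRESENT | _PAGE_USER)
 *
 * Folding _PAGE_SPECIAL into the mask but not into the expected value
 * means a special mapping can never satisfy the equality; a write access
 * adds _PAGE_RW to both sides in the same way.
 */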