commit 224d99f50f25ec3234b99556c0076a7130e230c6
Author: Greg Kroah-Hartman
Date:   Sat Jan 29 10:15:58 2022 +0100

    Linux 4.9.299

    Link: https://lore.kernel.org/r/20220127180257.225641300@linuxfoundation.org
    Tested-by: Florian Fainelli
    Tested-by: Shuah Khan
    Tested-by: Jon Hunter
    Tested-by: Guenter Roeck
    Signed-off-by: Greg Kroah-Hartman

commit c47385c73fced27375559d1a2eb10f165a0869b0
Author: Lee Jones
Date:   Tue Jan 25 14:18:08 2022 +0000

    ion: Do not 'put' ION handle until after its final use

    pass_to_user() eventually calls kref_put() on an ION handle which is
    still live, potentially allowing it to be legitimately freed by the
    client. Prevent this from happening before its final use in both
    ION_IOC_ALLOC and ION_IOC_IMPORT.

    Signed-off-by: Lee Jones
    Signed-off-by: Greg Kroah-Hartman

commit a8200613c8c9fbaf7b55d4d438376ebaf0c4ce7e
Author: Daniel Rosenberg
Date:   Tue Jan 25 14:18:07 2022 +0000

    ion: Protect kref from userspace manipulation

    This separates the kref for ION handles into two components.
    Userspace requests through the ioctl will hold at most one reference
    to the internally used kref. All additional requests will increment
    a separate counter, and the original reference is only put once that
    counter hits 0. This protects the kernel from a poorly behaving
    userspace.

    Signed-off-by: Daniel Rosenberg
    [d-cagle@codeaurora.org: Resolve style issues]
    Signed-off-by: Dennis Cagle
    Signed-off-by: Lee Jones
    Signed-off-by: Greg Kroah-Hartman

commit 504e1d6ee65d5b5a053253ae62f46035d774353c
Author: Daniel Rosenberg
Date:   Tue Jan 25 14:18:06 2022 +0000

    ion: Fix use after free during ION_IOC_ALLOC

    If a user happens to call ION_IOC_FREE during an ION_IOC_ALLOC on
    the just-allocated id, and the copy_to_user fails, the cleanup code
    will attempt to free an already freed handle. This adds a wrapper
    for ion_alloc that adds an ion_handle_get to avoid this.

    Signed-off-by: Daniel Rosenberg
    Signed-off-by: Dennis Cagle
    Signed-off-by: Patrick Daly
    Signed-off-by: Lee Jones
    Signed-off-by: Greg Kroah-Hartman
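The two-counter scheme in the "ion: Protect kref from userspace manipulation" commit above is easiest to see in isolation. Below is a minimal userspace C model of the idea, not the actual ION code; the toy_* names and plain integer counters (standing in for the kref) are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Toy model: userspace holds at most one reference to the internal
     * refcount; extra userspace requests only bump a separate counter,
     * and the internal reference is dropped when that counter hits 0.
     */
    struct toy_handle {
        int kref;       /* internal refcount (a kref in the real driver) */
        int user_refs;  /* references handed out to userspace */
    };

    static void toy_kref_put(struct toy_handle *h)
    {
        if (--h->kref == 0) {
            printf("handle freed\n");
            free(h);
        }
    }

    static void toy_user_get(struct toy_handle *h)
    {
        /* Only the first userspace reference pins the internal kref. */
        if (h->user_refs++ == 0)
            h->kref++;
    }

    static int toy_user_put(struct toy_handle *h)
    {
        /* A misbehaving client cannot put more than it got. */
        if (h->user_refs == 0)
            return -1;
        if (--h->user_refs == 0)
            toy_kref_put(h);
        return 0;
    }

    int main(void)
    {
        struct toy_handle *h = calloc(1, sizeof(*h));

        h->kref = 1;         /* kernel-internal reference */
        toy_user_get(h);     /* e.g. ION_IOC_ALLOC */
        toy_user_get(h);     /* e.g. a second userspace request */
        toy_user_put(h);
        toy_user_put(h);     /* user counter hits 0: one kref dropped */
        if (toy_user_put(h)) /* extra put is rejected, kref untouched */
            printf("extra put rejected\n");
        toy_kref_put(h);     /* kernel drops its own ref: freed here */
        return 0;
    }

The key property is that toy_user_put() can never drive the internal count below the kernel's own reference, however many times a client calls it.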
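Similarly, the ION_IOC_ALLOC use-after-free fix above boils down to holding one extra reference across the copy_to_user() window, during which userspace already knows the id and could call ION_IOC_FREE. A minimal sketch of that pattern, again with illustrative toy_* names and a simulated copy failure:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct toy_handle {
        int refs;
    };

    static void toy_get(struct toy_handle *h) { h->refs++; }

    static void toy_put(struct toy_handle *h)
    {
        if (--h->refs == 0) {
            printf("handle freed\n");
            free(h);
        }
    }

    /* Wrapper mirroring the fix: allocate, then take an extra reference. */
    static struct toy_handle *toy_alloc_pinned(void)
    {
        struct toy_handle *h = calloc(1, sizeof(*h));

        h->refs = 1;  /* reference owned by the published id */
        toy_get(h);   /* extra reference held by the ioctl itself */
        return h;
    }

    /* Stand-in for a copy_to_user() that fails. */
    static bool toy_copy_to_user(void) { return false; }

    int main(void)
    {
        struct toy_handle *h = toy_alloc_pinned();

        /* A racing ION_IOC_FREE drops the id's reference... */
        toy_put(h);

        /*
         * ...and the failed copy triggers cleanup, which is now safe:
         * it drops the ioctl's own reference instead of touching an
         * already-freed handle.
         */
        if (!toy_copy_to_user())
            toy_put(h);  /* frees here, exactly once */
        return 0;
    }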
commit d47e16bb32239e4aecb3bb04bd9117016e4884fb
Author: Stefan Agner
Date:   Sun Sep 30 23:02:33 2018 +0100

    ARM: 8800/1: use choice for kernel unwinders

    commit f9b58e8c7d031b0daa5c9a9ee27f5a4028ba53ac upstream.

    While in theory multiple unwinders could be compiled in, it does not
    make sense in practice. Use a choice to make the unwinder selection
    mutually exclusive and mandatory. Even before this commit it was not
    possible to deselect FRAME_POINTER; remove the obsolete comment.

    Furthermore, to produce a meaningful backtrace with FRAME_POINTER
    enabled the kernel needs a specific function prologue:

        mov    ip, sp
        stmfd  sp!, {fp, ip, lr, pc}
        sub    fp, ip, #4

    To get the required prologue gcc uses -mapcs and -mno-sched-prolog.
    These compiler options are not available in clang, and clang is not
    able to generate the required prologue. Make the FRAME_POINTER
    config symbol depend on !clang.

    Suggested-by: Arnd Bergmann
    Signed-off-by: Stefan Agner
    Reviewed-by: Arnd Bergmann
    Signed-off-by: Russell King
    Signed-off-by: Anders Roxell
    Signed-off-by: Greg Kroah-Hartman

commit e262acbda232b6a2a9adb53f5d2b2065f7626625
Author: Lai Jiangshan
Date:   Mon Jan 24 19:33:29 2022 +0100

    KVM: X86: MMU: Use the correct inherited permissions to get shadow page

    commit b1bd5cba3306691c771d558e94baa73e8b0b96b7 upstream.

    When computing the access permissions of a shadow page, use the
    effective permissions of the walk up to that point, i.e. the logical
    AND of its parents' permissions.

    Two guest PxE entries that point at the same table gfn need to be
    shadowed with different shadow pages if their parents' permissions
    are different. KVM currently uses the effective permissions of the
    last non-leaf entry for all non-leaf entries. Because all non-leaf
    SPTEs have full ("uwx") permissions, and the effective permissions
    are recorded only in role.access and merged into the leaves, this
    can lead to incorrect reuse of a shadow page and eventually to a
    missing guest protection page fault.

    For example, here is a shared pagetable:

       pgd[]   pud[]        pmd[]            virtual address pointers
                         /->pmd1(u--)->pte1(uw-)->page1 <- ptr1 (u--)
            /->pud1(uw-)--->pmd2(uw-)->pte2(uw-)->page2 <- ptr2 (uw-)
       pgd-|           (shared pmd[] as above)
            \->pud2(u--)--->pmd1(u--)->pte1(uw-)->page1 <- ptr3 (u--)
                         \->pmd2(uw-)->pte2(uw-)->page2 <- ptr4 (u--)

    pud1 and pud2 point to the same pmd table, so:
    - ptr1 and ptr3 point to the same page.
    - ptr2 and ptr4 point to the same page.
    (pud1 and pud2 here are pud entries, while pmd1 and pmd2 here are
    pmd entries)

    - First, the guest reads from ptr1 and KVM prepares a shadow page
      table with role.access=u--, from ptr1's pud1 and ptr1's pmd1.
      "u--" comes from the effective permissions of pgd, pud1 and pmd1,
      which are stored in pt->access. "u--" is used also to get the
      pagetable for pud1, instead of "uw-".

    - Then the guest writes to ptr2 and KVM reuses pud1, which is
      present. The hypervisor sets up a shadow page for ptr2 with
      pt->access "uw-" even though the pud1 pmd (because of the
      incorrect argument to kvm_mmu_get_page in the previous step) has
      role.access="u--".

    - Then the guest reads from ptr3. The hypervisor reuses pud1's
      shadow pmd for pud2, because both use "u--" for their
      permissions. Thus, the shadow pmd already includes entries for
      both pmd1 and pmd2.

    - At last, the guest writes to ptr4. This causes no vmexit or page
      fault, because pud1's shadow page structures included an "uw-"
      page even though its role.access was "u--".

    Any kind of shared pagetable might have a similar problem in a
    virtual machine without TDP enabled, if the permissions are
    different from different ancestors.

    To fix the problem, pt->access is changed to an array, and any
    access in it will not include permissions ANDed from child ptes.

    The test code is:
    https://lore.kernel.org/kvm/20210603050537.19605-1-jiangshanlai@gmail.com/
    Remember to test it with TDP disabled.

    The problem had existed long before commit 41074d07c78b ("KVM: MMU:
    Fix inherited permissions for emulated guest pte updates"), and it
    is hard to find which commit is the culprit, so there is no fixes
    tag here.

    Signed-off-by: Lai Jiangshan
    Message-Id: <20210603052455.21023-1-jiangshanlai@gmail.com>
    Cc: stable@vger.kernel.org
    Fixes: cea0f0e7ea54 ("[PATCH] KVM: MMU: Shadow page table caching")
    Signed-off-by: Paolo Bonzini
    [bwh: Backported to 4.9:
     - Keep passing vcpu argument to gpte_access functions
     - Adjust filenames, context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 9c9b899bfc53fdffa97efeffea5a0c870b4493b5
Author: Paolo Bonzini
Date:   Mon Jan 24 19:32:46 2022 +0100

    KVM: nVMX: fix EPT permissions as reported in exit qualification

    commit 0780516a18f87e881e42ed815f189279b0a1743c upstream.

    This fixes the new ept_access_test_read_only and
    ept_access_test_read_write testcases from vmx.flat.

    The problem is that gpte_access moves bits around to switch from EPT
    bit order (XWR) to ACC_*_MASK bit order (RWX). This results in an
    incorrect exit qualification.
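The rule enforced by the "KVM: X86: MMU: Use the correct inherited permissions" commit above can be shown in a few lines of arithmetic. This is a minimal userspace C sketch, assuming a walk over three non-leaf levels and illustrative ACC_* constants, not KVM's real walker or symbols:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Toy model of the rule: the access used to look up the shadow page
     * at level N must be the AND of the guest permissions of every
     * entry *above* level N, never including anything at or below it.
     */
    #define ACC_USER  0x1
    #define ACC_WRITE 0x2
    #define ACC_EXEC  0x4

    int main(void)
    {
        /* Guest permissions of each non-leaf entry on one walk. */
        uint8_t entry_acc[3] = {
            ACC_USER | ACC_WRITE | ACC_EXEC,  /* pgd: uwx */
            ACC_USER | ACC_WRITE,             /* pud: uw- */
            ACC_USER,                         /* pmd: u-- */
        };
        uint8_t inherited[4];  /* inherited[n] = AND of all parents of level n */

        inherited[0] = ACC_USER | ACC_WRITE | ACC_EXEC;  /* root: no parents */
        for (int lvl = 0; lvl < 3; lvl++)
            inherited[lvl + 1] = inherited[lvl] & entry_acc[lvl];

        /*
         * The shadow page for the pmd must be keyed on inherited[2]
         * ("uw-" here), not on the deepest value inherited[3] ("u--"):
         * two walks that share a pmd table but differ in pud
         * permissions must get different shadow pages. Keeping the
         * whole inherited[] array is the commit's pt->access change.
         */
        for (int lvl = 0; lvl <= 3; lvl++)
            printf("level %d inherited access: %c%c%c\n", lvl,
                   inherited[lvl] & ACC_USER  ? 'u' : '-',
                   inherited[lvl] & ACC_WRITE ? 'w' : '-',
                   inherited[lvl] & ACC_EXEC  ? 'x' : '-');
        return 0;
    }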
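The bit-order mismatch described in the nVMX commit directly above is likewise small enough to demonstrate. A sketch with made-up EPT_* and ACC_* constants rather than the kernel's definitions:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Toy model of the reordering: EPT entries keep permissions as
     * R=bit0, W=bit1, X=bit2, while the ACC_*_MASK format keeps
     * X=bit0, W=bit1, U=bit2 (effectively a reversal of the field).
     */
    #define EPT_READ   0x1
    #define EPT_WRITE  0x2
    #define EPT_EXEC   0x4

    #define ACC_EXEC   0x1
    #define ACC_WRITE  0x2
    #define ACC_USER   0x4

    static uint8_t toy_gpte_access(uint8_t ept)
    {
        return (ept & EPT_WRITE ? ACC_WRITE : 0) |
               (ept & EPT_EXEC  ? ACC_EXEC  : 0) |
               (ept & EPT_READ  ? ACC_USER  : 0);
    }

    int main(void)
    {
        uint8_t raw = EPT_READ | EPT_WRITE;  /* read+write in EPT order */

        /*
         * An EPT-violation exit qualification must report permissions
         * in EPT (XWR) order, so it has to be derived from the raw
         * value; converting too early, as the old walker did, reports
         * reversed bits. Hence the fix keeps raw values through the
         * walk and converts only at the end.
         */
        printf("raw (XWR order):       0x%x\n", raw);         /* 0x3 */
        printf("converted (ACC order): 0x%x\n", toy_gpte_access(raw)); /* 0x6 */
        return 0;
    }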
    To fix this, make pt_access and pte_access operate on raw PTE
    values (only with NX flipped to mean "can execute") and call
    gpte_access at the end of the walk. This lets us use pte_access to
    compute the exit qualification with XWR bit order.

    Signed-off-by: Paolo Bonzini
    Reviewed-by: Xiao Guangrong
    Signed-off-by: Radim Krčmář
    [bwh: Backported to 4.9:
     - There's no support for EPT accessed/dirty bits, so do not use
       have_ad flag
     - Adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 993892ed82350d0b4eb7d321d2bb225219bd1cfc
Author: Trond Myklebust
Date:   Mon Jan 24 19:34:30 2022 +0100

    NFSv4: Initialise connection to the server in nfs4_alloc_client()

    commit dd99e9f98fbf423ff6d365b37a98e8879170f17c upstream.

    Set up the connection to the NFSv4 server in nfs4_alloc_client(),
    before we've added the struct nfs_client to the net-namespace's
    nfs_client_list, so that a downed server won't cause other mounts to
    hang in the trunking detection code.

    Reported-by: Michael Wakabayashi
    Fixes: 5c6e5b60aae4 ("NFS: Fix an Oops in the pNFS files and
    flexfiles connection setup to the DS")
    Signed-off-by: Trond Myklebust
    [bwh: Backported to 4.9: adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 1795af6435fa5f17ced2d34854fd4871e0780092
Author: Dan Carpenter
Date:   Mon Jan 24 19:35:29 2022 +0100

    media: firewire: firedtv-avc: fix a buffer overflow in avc_ca_pmt()

    commit 35d2969ea3c7d32aee78066b1f3cf61a0d935a4e upstream.

    The bounds checking in avc_ca_pmt() is not strict enough. It should
    be checking "read_pos + 4" because it's reading 5 bytes. If
    "es_info_length" is non-zero then it reads a 6th byte, so there
    needs to be an additional check for that.

    I also added checks for "write_pos". I don't think these are
    required, because "read_pos" and "write_pos" are tied together so
    checking one ought to be enough, but they make the code easier to
    understand for me. The check on write_pos is:

        if (write_pos + 4 >= sizeof(c->operand) - 4) {

    The first "+ 4" is because we're writing 5 bytes and the last "- 4"
    is to leave space for the CRC.

    The other problem is that "length" can be invalid. It comes from
    "data_length" in fdtv_ca_pmt().

    Cc: stable@vger.kernel.org
    Reported-by: Luo Likang
    Signed-off-by: Dan Carpenter
    Signed-off-by: Hans Verkuil
    Signed-off-by: Mauro Carvalho Chehab
    [bwh: Backported to 4.9: adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman
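The shape of the avc_ca_pmt() checks above can be mirrored in a toy parser. This is a sketch under assumptions: a simplified stand-in loop, a made-up buffer size, and a single extra byte where the real function copies full descriptor bodies:

    #include <stdint.h>
    #include <stdio.h>

    #define OPERAND_SIZE 128  /* made-up output buffer size */

    static int toy_ca_pmt(uint8_t *out, const uint8_t *msg, int length)
    {
        int read_pos = 0, write_pos = 0;

        /* Reading 5 bytes per entry, so "read_pos + 4" must be in bounds. */
        while (read_pos + 4 < length) {
            int es_info_length;

            /* Writing 5 bytes; the "- 4" leaves room for the CRC. */
            if (write_pos + 4 >= OPERAND_SIZE - 4)
                return -1;

            out[write_pos++] = msg[read_pos++];  /* stream type */
            out[write_pos++] = msg[read_pos++];  /* PID, high byte */
            out[write_pos++] = msg[read_pos++];  /* PID, low byte */
            out[write_pos++] = msg[read_pos++];  /* info length, high */
            es_info_length = msg[read_pos];      /* simplified: low byte only */
            out[write_pos++] = msg[read_pos++];  /* info length, low */

            if (es_info_length > 0) {
                /* A 6th byte is read, so it needs its own check. */
                if (read_pos >= length ||
                    write_pos >= OPERAND_SIZE - 4)
                    return -1;
                out[write_pos++] = msg[read_pos++];
                /* (the real loop then copies the descriptor body) */
            }
        }
        return write_pos;
    }

    int main(void)
    {
        uint8_t msg[5] = { 0x02, 0x10, 0x01, 0x00, 0x00 };
        uint8_t out[OPERAND_SIZE];

        printf("wrote %d bytes\n", toy_ca_pmt(out, msg, 5));
        return 0;
    }

A caller-supplied length (like "data_length" in fdtv_ca_pmt()) must itself be validated before reaching such a loop; the loop's checks only protect against overruns relative to whatever length it is given.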
commit 84f4ab5b47d955ad2bb30115d7841d3e8f0994f4
Author: Tvrtko Ursulin
Date:   Tue Oct 19 13:27:10 2021 +0100

    drm/i915: Flush TLBs before releasing backing store

    commit 7938d61591d33394a21bdd7797a245b65428f44c upstream.

    We need to flush TLBs before releasing backing store, otherwise
    userspace is able to encounter stale entries if a) it is not
    declaring access to certain buffers and b) it races with the backing
    store release from such an undeclared execution already running on
    the GPU in parallel.

    The approach taken is to mark any buffer objects which were ever
    bound to the GPU and to trigger a serialized TLB flush when their
    backing store is released.

    Alternatively the flushing could be done on VMA unbind, at which
    point we would be able to ascertain whether there is potentially a
    parallel GPU execution (which could race), but essentially it boils
    down to paying the cost of TLB flushes potentially needlessly at
    VMA unbind time (when the backing store is not known to be going
    away, so the flush is not needed for safety), versus potentially
    needlessly at backing store release time (since at that point we
    cannot tell whether anything executing on the GPU uses that
    object). Therefore simplicity of implementation has been chosen for
    now, with scope to benchmark and refine later as required.

    Signed-off-by: Tvrtko Ursulin
    Reported-by: Sushma Venkatesh Reddy
    Reviewed-by: Daniel Vetter
    Acked-by: Dave Airlie
    Cc: Daniel Vetter
    Cc: Jon Bloomfield
    Cc: Joonas Lahtinen
    Cc: Jani Nikula
    Cc: stable@vger.kernel.org
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman
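The "mark once, flush on release" scheme in the commit above reduces to a sticky flag plus a serialized flush. A minimal userspace C model with illustrative names; the real driver ties this into i915's object binding and page-release paths:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct toy_object {
        bool ever_bound_to_gpu;
    };

    static pthread_mutex_t tlb_invalidate_lock = PTHREAD_MUTEX_INITIALIZER;

    static void toy_bind_to_gpu(struct toy_object *obj)
    {
        obj->ever_bound_to_gpu = true;  /* sticky: never cleared on unbind */
    }

    static void toy_flush_tlbs(void)
    {
        printf("TLB flush issued\n");   /* stand-in for the HW sequence */
    }

    static void toy_release_backing_store(struct toy_object *obj)
    {
        /*
         * Flush before the pages can be reused, so no stale TLB entry
         * can let in-flight GPU work touch the old backing store. The
         * lock serializes flushes from concurrent releases.
         */
        if (obj->ever_bound_to_gpu) {
            pthread_mutex_lock(&tlb_invalidate_lock);
            toy_flush_tlbs();
            pthread_mutex_unlock(&tlb_invalidate_lock);
        }
        printf("backing store released\n");
    }

    int main(void)
    {
        struct toy_object obj = { 0 };

        toy_bind_to_gpu(&obj);
        toy_release_backing_store(&obj);  /* flush happens first */
        return 0;
    }

The sticky flag trades extra flushes for never-bound objects being skipped cheaply against never missing a flush for an object the GPU might still reference, which matches the commit's stated preference for simplicity over precision.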