lib: Fix __sbi_hfence_gvma_vmid_gpa() and __sbi_hfence_vvma_asid_va()

The parameters of the __sbi_hfence_gvma_vmid_gpa() and
__sbi_hfence_vvma_asid_va() functions are swapped, so we fix them:
the address is passed first and the VMID/ASID second.

So far we have not faced any issues because QEMU does a full
TLB flush for all HFENCE instructions, which hides the swapped operands.
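
To illustrate why the compiler never complained: both parameters are plain
unsigned long, so a swapped call builds cleanly and only the HFENCE operands
end up wrong at run time. A minimal caller-side sketch (the function
example_flush_guest_page() below is hypothetical; only the two fixed
__sbi_hfence_*() prototypes come from this patch):

  /* Prototypes as fixed by this patch: address first, then VMID/ASID. */
  void __sbi_hfence_gvma_vmid_gpa(unsigned long gpa, unsigned long vmid);
  void __sbi_hfence_vvma_asid_va(unsigned long va, unsigned long asid);

  /* Hypothetical caller: nothing in the type system distinguishes the two
   * arguments, so passing (vmid, gpa) would compile just as cleanly. */
  static void example_flush_guest_page(unsigned long gpa, unsigned long vmid,
                                       unsigned long gva, unsigned long asid)
  {
          __sbi_hfence_gvma_vmid_gpa(gpa, vmid);
          __sbi_hfence_vvma_asid_va(gva, asid);
  }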

We also improve the documentation of the HFENCE.GVMA and HFENCE.VVMA
instruction encodings.
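
For reference, the encodings being documented come from the RISC-V hypervisor
extension: both instructions use the SYSTEM opcode (1110011) with funct3=000
and rd=00000, and funct7 selects 0110001 for HFENCE.GVMA and 0010001 for
HFENCE.VVMA. A minimal sketch (illustration only, not the OpenSBI
implementation) that emits HFENCE.GVMA a0, a1 as a raw .word for assemblers
without hypervisor-extension support:

  /*
   * hfence_gvma_sketch() is hypothetical. Per the spec, rs1 (a0) carries
   * the guest physical address shifted right by 2 and rs2 (a1) the VMID.
   *
   * HFENCE.GVMA a0, a1:
   *   0110001 01011 01010 000 00000 1110011 == 0x62b50073
   */
  static inline void hfence_gvma_sketch(unsigned long gpa, unsigned long vmid)
  {
          register unsigned long a0 asm("a0") = gpa >> 2;
          register unsigned long a1 asm("a1") = vmid;

          asm volatile(".word 0x62b50073" : : "r"(a0), "r"(a1) : "memory");
  }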

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>

@@ -10,8 +10,9 @@
 #ifndef __SBI_FENCE_H__
 #define __SBI_FENCE_H__
 /** Invalidate Stage2 TLBs for given VMID and guest physical address */
-void __sbi_hfence_gvma_vmid_gpa(unsigned long vmid, unsigned long gpa);
+void __sbi_hfence_gvma_vmid_gpa(unsigned long gpa, unsigned long vmid);
 /** Invalidate Stage2 TLBs for given VMID */
 void __sbi_hfence_gvma_vmid(unsigned long vmid);
@@ -23,7 +24,7 @@ void __sbi_hfence_gvma_gpa(unsigned long gpa);
 void __sbi_hfence_gvma_all(void);
 /** Invalidate unified TLB entries for given asid and guest virtual address */
-void __sbi_hfence_vvma_asid_va(unsigned long asid, unsigned long va);
+void __sbi_hfence_vvma_asid_va(unsigned long va, unsigned long asid);
 /** Invalidate unified TLB entries for given ASID for a guest*/
 void __sbi_hfence_vvma_asid(unsigned long asid);
@@ -33,4 +34,5 @@ void __sbi_hfence_vvma_va(unsigned long va);
 /** Invalidate all possible Stage2 TLBs */
 void __sbi_hfence_vvma_all(void);
 #endif