Commit Graph

529 Commits

Samuel Holland
a894187e28 lib: sbi_ipi: Do not ignore errors from sbi_ipi_send()
Currently, failures in sbi_ipi_send() are silently ignored, which makes
them difficult to debug. Instead, abort sending the IPI and pass back
the error, but still synchronize any IPIs already sent.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-18 19:26:11 +05:30
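
As a rough illustration of the new behavior, here is a minimal sketch of the send loop; the helper names and signatures (sbi_ipi_send_one(), sbi_ipi_sync()) are placeholders, not the actual OpenSBI internals:

    /* Sketch: abort on the first failure, remember the error, but still
     * synchronize whatever IPIs were already sent successfully. */
    static int sbi_ipi_send_many_sketch(unsigned long hmask, unsigned int event)
    {
        int rc, ret = 0;
        unsigned int i;

        for (i = 0; i < 8 * sizeof(hmask); i++) {
            if (!(hmask & (1UL << i)))
                continue;
            rc = sbi_ipi_send_one(i, event); /* placeholder helper */
            if (rc) {
                ret = rc; /* pass the error back ... */
                break;    /* ... and stop sending further IPIs */
            }
        }

        sbi_ipi_sync(event); /* placeholder: wait for IPIs already sent */
        return ret;
    }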
Samuel Holland
35cba92655 lib: sbi_tlb: Check tlb_range_flush_limit only once per request
The tlb_update() callback is called for each destination hart.
Move the size check earlier, so it is executed only once.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-11 10:41:48 +05:30
Xiang W
a2e254e881 lib: sbi: skip wait_for_coldboot when coldboot done
When warm-booting via HSM, coldboot has already completed, so
wait_for_coldboot can be skipped to speed up the boot.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-11 09:36:57 +05:30
Inochi Amaoto
87aa3069d1 platform: recalculate heap size to support new tlb entry number
A previous patch made the hart count the default number of TLB entries
in the fifo. This makes the default TLB fifo size grow quadratically
with the number of harts, so the default heap size is not enough to
allocate the TLB fifos when the hart count is large.

Fixes: 52fd64b ("platform: Uses hart count as the default size of tlb info")
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-11 09:23:24 +05:30
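
The quadratic growth is straightforward arithmetic: with the hart count as the per-hart fifo depth, total fifo memory is roughly entry_size × harts × harts. A standalone sketch with an assumed entry size:

    #include <stdio.h>

    /* Sketch: each hart gets a fifo of `harts` entries, so the total
     * TLB fifo memory grows with the square of the hart count. */
    int main(void)
    {
        const unsigned long entry_size = 32; /* assumed bytes per entry */
        for (unsigned long harts = 4; harts <= 128; harts *= 2)
            printf("%3lu harts -> %8lu bytes of tlb fifo\n",
                   harts, harts * harts * entry_size);
        return 0;
    }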
Nick Hu
a25fc74699 lib: sbi_hsm: Put the resume_pending hart in the interruptible hart mask
The current interruptible hart mask doesn't include harts whose HSM
state is SBI_HSM_STATE_RESUME_PENDING. So when there is a request to
send an IPI to a hart that is still in the resume process, that hart
would miss the IPI forever. Put SBI_HSM_STATE_RESUME_PENDING harts in
the interruptible hart mask to fix the issue.

Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-10 13:24:13 +05:30
Atish Patra
11a0ba5d4b lib: sbi_pmu: Fix the counter info function
The counter info function should only return valid hardware counters
for the ones set in the counter mask. Otherwise, it will report an
incorrect number of hardware counters to the supervisor if the
platform has discontiguous counters.

Fixes: c744ed77b1 ("lib: sbi_pmu: Enable noncontigous hpm event and counters")
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-08 22:50:23 +05:30
Atish Patra
ee725174ba lib: sbi_pmu: Add PMU snapshot definitions
OpenSBI doesn't support the SBI PMU snapshot yet, as there is not much
benefit unless multiple counters overflow at the same time.

Just add the definitions and return a not-supported error for now. The
default returned error is also not-supported, so no functional change
is intended.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
2023-12-08 22:50:21 +05:30
Samuel Holland
93da66b7d4 lib: sbi_hart: Store PMP granularity as log base 2
This minimizes the need to call log2roundup() to recover the log value.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-08 22:43:12 +05:30
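
A sketch of the idea; the field and helper names here are illustrative:

    /* Sketch: store log2(granularity) once instead of calling
     * log2roundup() every time the log value is needed. */
    struct hart_pmp_sketch {
        unsigned int pmp_log2gran; /* log2 of the PMP granularity */
    };

    static inline unsigned long pmp_gran_bytes(const struct hart_pmp_sketch *h)
    {
        return 1UL << h->pmp_log2gran; /* recover bytes when needed */
    }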
Xiang W
07419ec84b lib: sbi: Prevent redundant sbi_ipi_process
Multiple harts may try to send an IPI to a particular target hart A,
in which case send_ipi() should be called only when the old value of
hart A's ipi_type is zero.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-08 17:07:03 +05:30
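
The "old value is zero" condition maps naturally onto an atomic fetch-OR. A sketch using the GCC __atomic builtins (names are illustrative):

    #include <stdbool.h>

    /* Sketch: set the event bit in the target hart's ipi_type and report
     * whether the word was previously zero, i.e. whether no IPI was
     * already pending and send_ipi() actually needs to be called. */
    static bool ipi_mark_event(unsigned long *ipi_type, unsigned int event)
    {
        unsigned long old = __atomic_fetch_or(ipi_type, 1UL << event,
                                              __ATOMIC_RELAXED);
        return old == 0;
    }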
Anup Patel
88398696c8 lib: sbi: Replace __atomic_op_bit_ord with __atomic intrinsics
Simplify atomic-related bit operations through __atomic intrinsics.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-08 14:06:40 +05:30
Xiang W
11bf49b444 lib: sbi: Fix __atomic_op_bit_ord and comments
The original code returned the value of the whole word before
modification. When modifying the upper 32 bits on RV64, a value
returned as an int is meaningless. Correct the code to return the
value of the bit, and update the function description.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-08 13:47:31 +05:30
Xiang W
6b9a849482 lib: sbi: Remove xchg/cmpxchg implemented via lr/sc
lr/sc is part of the A extension. If the A extension is not supported,
lr/sc cannot be used, so remove the lr/sc-based xchg/cmpxchg
implementations.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-08 10:54:35 +05:30
Yu Chien Peter Lin
a48f2cfd94 sbi: sbi_pmu: Add hw_counter_filter_mode() to pmu device
Add support for custom PMU extensions to set inhibit bits
on custom CSRs by introducing the PMU device callback
hw_counter_filter_mode(). This allows the perf tool to
restrict event counting to a specified privilege mode by
appending a modifier, e.g. perf record -e event:k to count
events only happening in kernel mode.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Leo Yu-Chi Liang <ycliang@andestech.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-06 17:30:01 +05:30
Yu Chien Peter Lin
090fa99d7c lib: sbi: Add XAndesPMU in hart extensions
Add the custom extension to the hart extension list.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-12-06 17:27:22 +05:30
Yu Chien Peter Lin
291403f6f2 sbi: sbi_pmu: Improve sbi_pmu_init() error handling
This patch makes the following changes:

- As sbi_platform_pmu_init() returns a negative error code on
  failure, let sbi_pmu_init() print out the error code with
  sbi_dprintf().

- In order to distinguish it from the SBI_EFAIL error returned by
  sbi_pmu_add_*_counter_map(), return SBI_ENOENT to indicate that
  fdt_pmu_setup() failed to locate the "riscv,pmu" node, and let
  generic_pmu_init() ignore that case.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2023-12-06 17:24:38 +05:30
Anup Patel
b70d6285f0 lib: sbi: Allow relaxed MMIO writes in device ipi_clear() callback
Currently, there are no barriers before or after the ipi_clear()
device callback, which forces the ipi_clear() device callback to
always use non-relaxed MMIO writes.

Instead of the above, use a wmb() after the ipi_clear() device
callback, which pairs with the wmb() done before the ipi_send()
device callback. This also allows the device ipi_clear() callback
to use relaxed MMIO writes.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reported-by: Bo Gan <ganboing@gmail.com>
2023-11-26 18:45:08 +05:30
Anup Patel
f520256d03 lib: sbi: Allow relaxed MMIO writes in device ipi_send() callback
Currently, we have a smp_wmb() between atomic_raw_set_bit() and the
ipi_send() device callback, whereas the MMIO writes done by the
device ipi_send() callback also include a barrier.

We can avoid the unnecessary/redundant barriers described above by
allowing relaxed MMIO writes in the device ipi_send() callback. To
achieve this, we simply use wmb() instead of smp_wmb() before
calling device ipi_send().

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reported-by: Bo Gan <ganboing@gmail.com>
2023-11-26 18:45:06 +05:30
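
Taken together with the previous commit, the barrier scheme looks roughly like this sketch; the device struct shape and wmb() stand in for OpenSBI's actual definitions:

    /* Sketch of the pairing: one wmb() before ipi_send() on the sender,
     * one wmb() after ipi_clear() on the receiver, so both callbacks may
     * use relaxed MMIO writes. */
    struct ipi_device_sketch {
        void (*ipi_send)(unsigned int hartindex);  /* relaxed MMIO write */
        void (*ipi_clear)(unsigned int hartindex); /* relaxed MMIO write */
    };

    static void send_side(struct ipi_device_sketch *dev, unsigned int hart)
    {
        /* ipi_type bit is set atomically by the caller beforehand */
        wmb();               /* order the bit update before the doorbell */
        dev->ipi_send(hart);
    }

    static void receive_side(struct ipi_device_sketch *dev, unsigned int hart)
    {
        dev->ipi_clear(hart);
        wmb();               /* pairs with the sender's wmb() */
        /* process pending ipi_type events afterwards */
    }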
Heinrich Schuchardt
574b9c8ec2 lib: sbi_pmu: avoid buffer overflow
total_ctrs is bounded by

    (SBI_PMU_FW_CTR_MAX + SBI_PMU_HW_CTR_MAX) == 48

which exceeds BITS_PER_LONG on 32-bit systems.

Iterating over the bits of &cmask results in a buffer overflow when looking
for a bit >= BITS_PER_LONG.

Adjust the iterators in sbi_pmu_ctr_start() and sbi_pmu_ctr_stop()
accordingly.

Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-22 20:55:22 +05:30
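
A sketch of the bounded iteration; the helper shape is illustrative, and the clamp is the essential part:

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Sketch: total_ctrs may be up to 48, but cmask is a single unsigned
     * long, which has only 32 bits on RV32. Clamp the scan so no bit
     * position >= BITS_PER_LONG is ever tested. */
    static void for_each_set_ctr(unsigned long cmask, unsigned long cbase,
                                 unsigned long total_ctrs,
                                 void (*fn)(unsigned long ctr_idx))
    {
        unsigned long i, limit = total_ctrs - cbase;

        if (limit > BITS_PER_LONG)
            limit = BITS_PER_LONG; /* cmask simply has no more bits */

        for (i = 0; i < limit; i++)
            if (cmask & (1UL << i))
                fn(cbase + i);
    }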
Anup Patel
16bb930533 lib: sbi: Fix PMP granularity handling in sbi_hart_map_saddr()
The sbi_hart_map_saddr() function must create a PMP mapping of size
greater than or equal to the PMP granularity, otherwise the PMP
mapping does not work when the size parameter is less than
sbi_hart_pmp_granularity(scratch).

Fixes: 6e44ef686a ("lib: sbi: Add functions to map/unmap shared memory")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2023-11-22 20:42:24 +05:30
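
In essence the fix rounds the mapping size up to the PMP granularity, roughly as in this sketch:

    /* Sketch: a PMP region smaller than the PMP granularity cannot be
     * programmed, so never map less than one granule. */
    static unsigned long saddr_map_size(unsigned long size,
                                        unsigned long pmp_gran)
    {
        return (size < pmp_gran) ? pmp_gran : size;
    }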
Xiang W
3aaed4fadf lib: sbi: Make console_puts/console_putc interchangeable
console_puts and console_putc should be interchangeable, but the
previous sbi_putc could only use console_putc. This patch addresses
that problem.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-17 16:03:21 +05:30
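
A sketch of the mutual fallback; the device struct is an assumed shape, not the actual sbi_console_device:

    /* Sketch: either callback may be absent, so each path can fall back
     * to the other one. */
    struct console_dev_sketch {
        void (*console_putc)(char ch);
        unsigned long (*console_puts)(const char *str, unsigned long len);
    };

    static void sketch_putc(struct console_dev_sketch *dev, char ch)
    {
        if (dev->console_putc)
            dev->console_putc(ch);
        else if (dev->console_puts)
            dev->console_puts(&ch, 1); /* putc in terms of puts */
    }

    static void sketch_puts(struct console_dev_sketch *dev,
                            const char *str, unsigned long len)
    {
        unsigned long i;

        if (dev->console_puts) {
            dev->console_puts(str, len);
        } else if (dev->console_putc) {
            for (i = 0; i < len; i++)
                dev->console_putc(str[i]); /* puts in terms of putc */
        }
    }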
Xiang W
6602e11de3 lib: sbi: change sbi_hart_features.extensions as an array
In the future there may be many ISA extensions, more than a single
'long' can accommodate, so change the field to an array to be
future-proof.

Addresses-Coverity-ID: 1568357 Out-of-bounds access
Fixes: 6259b2ec2d ("lib: utils/fdt: Fix fdt_parse_isa_extensions() implementation")
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-17 13:23:49 +05:30
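
A sketch of the array-backed extension bitmap; the bound and names are assumptions:

    #define BITS_PER_LONG    (8 * sizeof(unsigned long))
    #define SBI_HART_EXT_MAX 128 /* assumed upper bound on extensions */
    #define EXT_WORDS        ((SBI_HART_EXT_MAX + BITS_PER_LONG - 1) / \
                              BITS_PER_LONG)

    /* Sketch: one bit per ISA extension, spread over an array of longs
     * instead of a single long. */
    struct hart_ext_sketch {
        unsigned long extensions[EXT_WORDS];
    };

    static inline void ext_set(struct hart_ext_sketch *h, unsigned int id)
    {
        h->extensions[id / BITS_PER_LONG] |= 1UL << (id % BITS_PER_LONG);
    }

    static inline int ext_test(const struct hart_ext_sketch *h, unsigned int id)
    {
        return (h->extensions[id / BITS_PER_LONG] >> (id % BITS_PER_LONG)) & 1;
    }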
Heinrich Schuchardt
6e5b0cfb45 lib: sbi: enable seed access in S-mode
If ISA extension Zkr is available, set

    mseccfg.sseed=1
    mseccfg.useed=0

This enables access to the seed CSR in S-mode but not in U-mode.

Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-17 12:26:22 +05:30
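
A sketch of the resulting mseccfg update. The SSEED/USEED bit positions follow the Zkr entropy-source spec, and csr_set()/csr_clear() stand in for the firmware's CSR set/clear-bits helpers:

    #define CSR_MSECCFG   0x747
    #define MSECCFG_USEED (1UL << 8) /* U-mode access to the seed CSR */
    #define MSECCFG_SSEED (1UL << 9) /* S-mode access to the seed CSR */

    /* Sketch: allow seed CSR reads from S-mode but not from U-mode. */
    static void enable_smode_seed(void)
    {
        csr_set(CSR_MSECCFG, MSECCFG_SSEED);
        csr_clear(CSR_MSECCFG, MSECCFG_USEED);
    }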
Heinrich Schuchardt
efcac338bd lib: sbi: Add Zkr in hart extensions
- Add Zkr as extension in sbi_hart_extensions enum
- Return "zkr" string for Zkr extension from sbi_hart_extension_id2string

Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-17 12:04:18 +05:30
Inochi Amaoto
3b03cdd60c lib: sbi: Add regions merging when sanitizing domain region
The domain will reject a new memory region that covers a sub-region
already in the domain, even when the new region is bigger and has the
same flags. This problem can be solved by relaxing the region
restriction and rechecking the regions when adding and sanitizing
domains.

Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-16 21:03:26 +05:30
Inochi Amaoto
5b2f55d65a lib: sbi: separate the swap operation of domain region
Swapping domain regions is a common operation when sorting them, so
separate it out as a function to keep the code clean.

Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-16 20:58:56 +05:30
Samuel Holland
a140a4e862 lib: sbi: Correctly limit flushes to a single ASID/VMID
Per the SBI specification, the effects of these functions are limited to
a specific ASID and/or VMID. This applies even when flushing the entire
address space.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-16 09:21:47 +05:30
Heinrich Schuchardt
5d0ed1bfb8 lib: sbi: simplify sanitize_domain()
Since commit 112daa2e64 ("lib: sbi: Maximize the use of HART index in
sbi_domain") the platform parameter is unused.

Fixes: 112daa2e64 ("lib: sbi: Maximize the use of HART index in sbi_domain")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-11-14 17:57:54 +05:30
Amanieu d'Antras
ec0559eb31 lib: sbi_misaligned_ldst: Fix handling of C.SWSP and C.SDSP
Unlike C.LWSP/C.LDSP, these encodings can be used with the zero
register, so checking that the rs2 field is non-zero is unnecessary.

Additionally, the previous check was incorrect since it was checking
the immediate field of the instruction instead of the rs2 field.

Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-10-09 13:53:20 +05:30
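
For reference, a sketch of the relevant C.SWSP field extraction (RVC quadrant 2, funct3=110); the rs2 register number lives in insn[6:2] while the immediate occupies insn[12:7]:

    #include <stdint.h>

    /* Sketch: rs2 is insn[6:2]; x0 (zero) is legal here, so no non-zero
     * check is needed. The old code tested immediate bits instead. */
    static inline unsigned int c_swsp_rs2(uint16_t insn)
    {
        return (insn >> 2) & 0x1f;
    }

    static inline unsigned int c_swsp_offset(uint16_t insn)
    {
        return ((insn >> 7) & 0x3c) |  /* offset[5:2] <- insn[12:9] */
               ((insn >> 1) & 0xc0);   /* offset[7:6] <- insn[8:7]  */
    }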
Heinrich Schuchardt
f831b93357 lib: sbi_pmu: check for index overflows
sbi_pmu_ctr_cfg_match() receives data from a lower privilege level mode.
We must catch maliciously wrong values.

We already check against total_ctrs. But we do not check that total_ctrs is
less than SBI_PMU_HW_CTR_MAX + SBI_PMU_FW_CTR_MAX.

Check that the number of hardware counters is in the valid range.

Addresses-Coverity-ID: 1566114 Out-of-bounds write
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2023-10-06 17:29:07 +05:30
Heinrich Schuchardt
8197c2f1ec lib: sbi: fix sbi_domain_get_assigned_hartmask()
'1' is a 32-bit integer. When shifting it by more than 31 bits it
becomes zero and we get an incorrect return value.

Addresses-Coverity-ID: 1568356 Bad bit shift operation
Fixes: 296e70d69d ("lib: sbi: Extend sbi_hartmask to support both hartid and hartindex")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-10-06 17:06:09 +05:30
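
The classic fix, in miniature:

    /* Sketch: '1' is a 32-bit int, so shifting it by 32 or more is
     * undefined and typically yields 0. Use an unsigned long literal. */
    unsigned long mask_bit_buggy(unsigned int bit) { return 1 << bit; }
    unsigned long mask_bit_fixed(unsigned int bit) { return 1UL << bit; }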
Anup Patel
73aea28264 lib: sbi: Populate M-only Smepmp entries before setting mseccfg.MML
Based on sections 4.c and 4.d in Ch.2 of the Smepmp spec, the PMP
entries must be programmed as below:
1. Program M-only entries
2. Enable mseccfg.MML
3. Program shared-region entries
4. Program SU-only entries

Co-developed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 16:45:15 +05:30
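
A sketch of the required ordering; all the program_*() helpers are placeholders, and csr_set() stands in for the firmware's CSR helper:

    #define CSR_MSECCFG 0x747
    #define MSECCFG_MML (1UL << 0) /* Machine-Mode Lockdown */

    /* Sketch: mseccfg.MML changes how PMP permissions are interpreted,
     * so M-only entries must already be in place when it is set. */
    static void smepmp_configure_sketch(void)
    {
        program_m_only_entries();          /* 1. M-mode-only entries   */
        csr_set(CSR_MSECCFG, MSECCFG_MML); /* 2. enable mseccfg.MML    */
        program_shared_entries();          /* 3. shared-region entries */
        program_su_only_entries();         /* 4. SU-only entries       */
    }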
Anup Patel
2b51a9dd9c lib: sbi: Fix pmp_flags for Smepmp read-only shared region
The Smepmp read-only shared region must have pmpcfg.L, pmpcfg.R,
pmpcfg.W, and pmpcfg.X bits set so sbi_hart_get_smepmp_flags()
must return pmp_flags accordingly.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
2023-09-24 16:40:52 +05:30
Anup Patel
5240d312d3 lib: sbi: Don't clear mseccfg.MML bit in sbi_hart_smepmp_configure()
The mseccfg.MML bit is a sticky bit which remains unchanged once set,
so there is no need to clear it in sbi_hart_smepmp_configure().

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
2023-09-24 16:28:16 +05:30
Anup Patel
bff27c1fb4 lib: sbi: Factor-out Smepmp configuration as separate function
Let us factor out the Smepmp configuration as a separate function so
that the code is more readable.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
2023-09-24 16:27:40 +05:30
Anup Patel
9560fb38fe include: sbi: Remove sbi_hartmask_for_each_hart() macro
The sbi_hartmask_for_each_hart() macro is slow and has only one user,
so let us remove it completely.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:48:21 +05:30
Anup Patel
112daa2e64 lib: sbi: Maximize the use of HART index in sbi_domain
Let us maximize the use of HART index in sbi_domain because
hartindex-based hartmask accesses and sbi_scratch lookups are faster.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:48:17 +05:30
Anup Patel
22d6ff8675 lib: sbi: Remove sbi_scratch_last_hartid() macro
The sbi_scratch_last_hartid() macro is not of much use on platforms
with really sparse hartids, so let us replace uses of this macro with
other approaches.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:41:54 +05:30
Anup Patel
78c667b6fc lib: sbi: Prefer hartindex over hartid in IPI framework
Let us prefer hartindex over hartid in the IPI framework, which in
turn forces IPI users to also prefer hartindex.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:39:38 +05:30
Anup Patel
e632cd7c81 lib: sbi: Use sbi_scratch_last_hartindex() in remote TLB management
The sbi_hartid_to_scratch() function involves translating hartid to
hartindex, which is expensive, so let's use sbi_hartindex_to_scratch()
instead.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:39:35 +05:30
Xiang W
296e70d69d lib: sbi: Extend sbi_hartmask to support both hartid and hartindex
Currently, the sbi_hartmask is indexed by hartid, which requires
hartid to be less than SBI_HARTMASK_MAX_BITS.

We extend the sbi_hartmask implementation to use hartindex and
support updating sbi_hartmask using hartid. This removes the
limit on hartid and existing code works largely unmodified.

Signed-off-by: Xiang W <wxjstz@126.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:39:32 +05:30
Anup Patel
e6125c3c4f lib: sbi: Remove sbi_platform_hart_index/invalid() functions
The hartid to hartindex mapping is now tracked in sbi_scratch, so we
don't need the sbi_platform_hart_index() and sbi_platform_hart_invalid()
functions; let us remove them.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:39:30 +05:30
Anup Patel
d1e4dff45b lib: sbi: Introduce HART index in sbi_scratch
We introduce HART index and related helper functions in sbi_scratch
where HART index is contiguous and each HART index maps to a physical
HART id such that 0 <= HART index and HART index < SBI_HARTMASK_MAX_BITS.

The HART index to HART id mapping follows the index2id mapping provided
by the platform. If the platform does not provide index2id mapping then
identity mapping is assumed.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-09-24 11:39:21 +05:30
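
A sketch of the mapping; the bound and table names are illustrative:

    #define SBI_HARTMASK_MAX_BITS 128 /* assumed bound on hart indexes */

    /* Sketch: hart index is contiguous in [0, hart_count) and maps to a
     * physical hart id; identity mapping if no index2id is provided. */
    static unsigned int hartindex_to_hartid_tbl[SBI_HARTMASK_MAX_BITS];

    static void init_hartindex_map(const unsigned int *index2id,
                                   unsigned int hart_count)
    {
        for (unsigned int i = 0;
             i < hart_count && i < SBI_HARTMASK_MAX_BITS; i++)
            hartindex_to_hartid_tbl[i] = index2id ? index2id[i] : i;
    }

    static unsigned int hartindex_to_hartid(unsigned int hartindex)
    {
        return hartindex_to_hartid_tbl[hartindex];
    }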
Xiang W
5bd969477f lib: sbi: alloc tlb fifo by sbi_malloc
If the platform defines tlb_fifo_num_entries, the scratch area may be
too small to hold the fifo, so allocate it through the heap instead.

Signed-off-by: Xiang W <wxjstz@126.com>
Signed-off-by: Xing Xiaoguang <xiaoguang.xing@sophgo.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-09-10 11:46:44 +05:30
Xiang W
cacfba32cc platform: Allow platforms to specify the size of tlb fifo
For platforms with a particularly high number of harts, a tlb fifo
that is too small causes harts to wait. Platforms should be allowed
to specify the size of the tlb fifo.

Signed-off-by: Xiang W <wxjstz@126.com>
Signed-off-by: Xing Xiaoguang <xiaoguang.xing@sophgo.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-09-10 11:21:05 +05:30
Inochi Amaoto
901d3d7bff lib: sbi_pmu: keep overflow interrupt of stopped hpm counter disabled
After a hardware hpm counter is stopped, it should not raise any new
interrupts, as it is already stopped. So add the hw_counter_disable_irq
callback to allow a custom PMU device to control this behavior.

Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Samuel Holland <samuel@sholland.org>
2023-09-10 11:05:01 +05:30
Inochi Amaoto
664692f507 lib: sbi_pmu: ensure update hpm counter before starting counting
While detecting PMU features, the hpm counter may be written with some
value, which can cause unexpected behavior in some cases. So ensure the
hpm counter is updated before starting the counter and enabling the
related interrupt.

Signed-off-by: Haijiao Liu <haijiao.liu@sophgo.com>
Co-authored-by: Inochi Amaoto <inochiama@outlook.com>
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Samuel Holland <samuel@sholland.org>
2023-09-10 11:04:57 +05:30
Xiang W
b20bd479ef lib: sbi: improve the definition of SBI_IPI_EVENT_MAX
The previous definition assumed that the machine word length is equal
to the width of 'long'. Remove this assumption and add a static check
to prevent errors in subsequent modifications.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-09-06 17:06:12 +05:30
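
A sketch of such a static check; the struct shape and event budget are illustrative:

    /* Sketch: bound the number of IPI events by the actual bit width of
     * the per-hart ipi_type word rather than assuming it matches 'long'. */
    #define SBI_IPI_EVENT_MAX 32 /* assumed event budget */

    struct ipi_data_sketch {
        unsigned long ipi_type; /* one bit per pending IPI event */
    };

    _Static_assert(SBI_IPI_EVENT_MAX <=
                   8 * sizeof(((struct ipi_data_sketch *)0)->ipi_type),
                   "ipi_type is too narrow for SBI_IPI_EVENT_MAX events");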
Inochi Amaoto
ee1f83ca84 lib: sbi_pmu: remove mhpm_count field in hart feature
After adding support for noncontiguous hpm events and counters in
OpenSBI, the number of hpm counters can be calculated from mhpm_mask.
So this field is unnecessary and can be removed to save some space.

Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-08-22 13:26:09 +05:30
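
A sketch of deriving the count from the mask, using the GCC builtin popcount:

    /* Sketch: the number of implemented hpm counters equals the number
     * of set bits in mhpm_mask, so no separate count field is needed. */
    static inline unsigned int num_hpm_counters(unsigned long mhpm_mask)
    {
        return __builtin_popcountl(mhpm_mask);
    }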
Kaiwen Xue
c104c60912 lib: sbi: Add support for smcntrpmf
This adds support for the ISA extension Smcntrpmf. When some inhibit
flags are set by a lower privilege mode for the new CSRs added by
Smcntrpmf, OpenSBI sets the appropriate values correspondingly.

Signed-off-by: Kaiwen Xue <kaiwenx@andrew.cmu.edu>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2023-08-18 14:50:38 +05:30
Kaiwen Xue
f46a5643bc lib: sbi: Fix typo for finding fixed event counter
Cycle and instructions are hardware events instead of firmware ones. Fix
the typo in the name of this function.

Signed-off-by: Kaiwen Xue <kaiwenx@andrew.cmu.edu>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-08-06 10:22:58 +05:30