Add BIT_ULL and GENMASK_ULL for dealing with 64-bit data on
32-bit CPUs, so we no longer need to split an operation into a
low part and a high part, for instance when an MMIO register is
64 bits wide.
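A minimal sketch of what such macros typically look like, following the
Linux-kernel style; the exact definitions in OpenSBI's headers may differ:

  #define BITS_PER_LONG_LONG  64
  #define BIT_ULL(nr)         (1ULL << (nr))
  #define GENMASK_ULL(h, l) \
          (((~0ULL) - (1ULL << (l)) + 1) & \
           (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))

  /* Example: bits [63:48] of a 64-bit MMIO value, usable even on RV32. */
  #define MMIO_REG_HIGH_MASK  GENMASK_ULL(63, 48)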
Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add support for controlling the pointer masking mode on harts which
support the Smnpm extension. This extension can only exist on harts
where XLEN >= 64 bits. This implementation selects the mode with the
smallest PMLEN that satisfies the caller's requested lower bound.
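A hedged sketch of the selection logic, assuming the two non-zero PMLEN
values (7 and 16) defined by the pointer masking specification for RV64;
the helper name and the actual CSR programming are illustrative only:

  /* Hypothetical helper: return the smallest supported PMLEN that is
   * >= the caller's requested lower bound, or a negative value if
   * no supported mode is large enough. */
  static int select_pmlen(unsigned long requested)
  {
          if (!requested)
                  return 0;       /* pointer masking disabled */
          if (requested <= 7)
                  return 7;       /* PMM = 0b10 on RV64 */
          if (requested <= 16)
                  return 16;      /* PMM = 0b11 on RV64 */
          return -1;
  }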
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Writes to the low-half CSR should not affect the high half of the value.
Make this separation explicit by representing the delta in memory as two
adjacent XLEN-sized values.
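A hedged sketch of the idea on RV32 (names are illustrative only): keep
the 64-bit delta as two adjacent XLEN-sized words so a write to
HTIMEDELTA touches only the low word and a write to HTIMEDELTAH touches
only the high word.

  /* Illustrative only: two adjacent XLEN-sized words instead of one u64. */
  static unsigned long htimedelta[2];   /* [0] = low XLEN bits, [1] = high */

  static void emulated_htimedelta_write(bool high_half, unsigned long val)
  {
          htimedelta[high_half ? 1 : 0] = val;  /* other half untouched */
  }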
Fixes: 1e9f88889f ("lib: Emulate HTIMEDELTA CSR for platforms not having TIME CSR")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This removes the compile-time limit on the number of domains. It also
reduces firmware size by about 200 bytes by removing the lookup table.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
These comments are inaccurate as of commit db56341dfa ("lib: sbi:
Allow platforms to provide root domain memory regions"), which modified
root domain registration to go through sbi_domain_register() like other
domains.
Fixes: db56341dfa ("lib: sbi: Allow platforms to provide root domain memory regions")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Help track the lifecycle of the FDT blob by indicating which parts of
the firmware modify it and thus invalidate any previously obtained
offsets or pointers to data inside the blob.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Distinguish between functions which modify the devicetree and those
which only extract information from it. Other than the iterators in
fdt_domain.c, this is a mechanical conversion.
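A hedged illustration of the convention (the prototypes below are
examples, not the exact patch): functions that only read the devicetree
take a const pointer, while functions that modify it take a mutable one.

  /* Read-only helper: cannot invalidate offsets into the blob. */
  int fdt_parse_example(const void *fdt, unsigned long *out_val);

  /* Fixup helper: modifies the blob, so callers must re-query offsets. */
  int fdt_fixup_example(void *fdt);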
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The function prototype should use the same parameter name as the
documentation and the function definition.
Fixes: 33bf917460 ("lib: utils: Add fdt_add_cpu_idle_states() helper function")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
- Completed TODO in `system_opcode_insn` to ensure CSR read/write
instruction handling.
- Refactored to use new macros `GET_RS1_NUM` and `GET_CSR_NUM`.
- Updated `GET_RM` macro and replaced hardcoded funct3 values with
constants (`CSRRW`, `CSRRS`, `CSRRC`, etc.).
- Removed redundant `GET_RM` from `riscv_fp.h`.
- Improved validation and error handling for CSR instructions.
This patch enhances the clarity and correctness of CSR handling
in `system_opcode_insn`.
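A hedged sketch of the decode helpers (the patch's exact macro
definitions may differ): for Zicsr instructions the CSR number is
encoded in insn[31:20], rs1 (or the 5-bit immediate) in insn[19:15],
and funct3 selects the operation.

  #define GET_CSR_NUM(insn)   (((insn) >> 20) & 0xfff)
  #define GET_RS1_NUM(insn)   (((insn) >> 15) & 0x1f)

  /* funct3 encodings of the Zicsr instructions */
  #define CSRRW   0x1
  #define CSRRS   0x2
  #define CSRRC   0x3
  #define CSRRWI  0x5
  #define CSRRSI  0x6
  #define CSRRCI  0x7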
Signed-off-by: Dongdong Zhang <zhangdongdong@eswincomputing.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This change adds a simple implementation of sbi_aligned_alloc(), for future use
in allocating aligned memory for SMMTT tables.
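A hedged usage example, assuming a sbi_aligned_alloc(alignment, size)
style prototype; the actual argument order may differ.

  /* Allocate one 4 KiB table aligned to a 4 KiB boundary. */
  void *table = sbi_aligned_alloc(0x1000, 0x1000);
  if (!table)
          return SBI_ENOMEM;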
Signed-off-by: Gregor Haas <gregorhaas1997@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The upcoming SMMTT implementation will require some larger contiguous memory
regions for the memory tracking tables. We plan to specify the memory region
for these tables as a reserved-memory node in the device tree, and then
dynamically allocate individual tables out of this region. These changes to the
SBI heap allocator will allow us to explicitly create and allocate from a
dedicated heap tied to the table memory region.
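A hedged sketch of the intended usage; every name below (the control
structure, the init and alloc helpers) is hypothetical and only
illustrates the idea of a second, dedicated heap.

  /* Hypothetical API: a heap tied to the reserved-memory region. */
  static struct sbi_heap_control smmtt_heap;

  sbi_heap_init_new(&smmtt_heap, region_base, region_size);
  void *table = sbi_malloc_from(&smmtt_heap, table_size);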
Signed-off-by: Gregor Haas <gregorhaas1997@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Currently, OpenSBI reserves the upper 16 bits in mhpmevent for
the Sscofpmf extension.
However, according to the Sscofpmf extension specification[1],
it only defines the upper 8 bits in mhpmevent for privilege mode
inhibit and counter overflow disable. Other bits are defined by
the platform for event selection.
Since vendors might define raw event encoding exceeding 48 bits in
mhpmevent, we should adjust the MHPMEVENT_SSCOF_MASK to support it.
Link: https://github.com/riscvarchive/riscv-count-overflow [1]
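A hedged sketch of the adjustment, reusing GENMASK_ULL; the exact
constant name and value should be taken from the patch itself.

  /* Sscofpmf only defines mhpmevent[63:56] (OF plus the *INH bits),
   * so reserve only the top byte instead of the top 16 bits. */
  #define MHPMEVENT_SSCOF_MASK  GENMASK_ULL(63, 56)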
Signed-off-by: Eric Lin <eric.lin@sifive.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The FIFO data structure is quite handy for a variety of use cases, so add
SBI_FIFO_INITIALIZER() and SBI_FIFO_DEFINE() helper macros to create a
FIFO as a local or global variable.
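A hedged usage sketch; the macro argument order shown here is an
assumption, not the exact definition from the patch.

  /* Hypothetical usage: a global FIFO of 16 four-byte entries. */
  static u32 ipi_queue_mem[16];
  SBI_FIFO_DEFINE(ipi_queue, ipi_queue_mem, 16, sizeof(u32));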
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-By: Himanshu Chauhan <hchauhan@ventanamicro.com>
Extend sbi_fifo_enqueue() to allow forceful queueing by dropping
data from the tail.
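A hedged usage sketch, assuming the forceful behaviour is selected by an
extra boolean argument; the actual parameter may differ.

  /* If the FIFO is full, drop the entry at the tail and enqueue anyway. */
  sbi_fifo_enqueue(&ipi_queue, &data, true);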
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-By: Himanshu Chauhan <hchauhan@ventanamicro.com>
Now that all platforms have been updated to initialize the serial console
device in early_init(), the sbi_console_init() function and the
console_init() platform callback are redundant, so remove them.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-By: Himanshu Chauhan <hchauhan@ventanamicro.com>
This patch updates OpenSBI version to 1.5 as part of
release preparation.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Add support for Svade and Svadu extensions. When both are present in the
device tree, the M-mode firmware should select the Svade extension
to comply with the RVA23 profile, which mandates Svade and lists Svadu as
an optional extension.
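A hedged sketch of the effect, assuming the menvcfg.ADUE bit position
(bit 61) from the Svadu specification; the actual code may key off the
parsed ISA strings and program the CSR elsewhere.

  /* Keep hardware A/D-bit updating disabled so S-mode observes Svade
   * behaviour even when the hart also implements Svadu. */
  #define ENVCFG_ADUE  BIT_ULL(61)
  csr_clear(CSR_MENVCFG, ENVCFG_ADUE);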
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add spinlock protection to avoid race condition on assigned_harts
during domain context switching. Also, rename/add variables for
accessing the corresponding domain of target/current context.
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Allow defining an init function for a test suite. It can help us
initialize global variables once and use them in multiple test cases
after the initialization.
For instance, if multiple test cases use the same atomic_t variable, it
could be helpful to call ATOMIC_INIT once during the suite
initialization.
Signed-off-by: Ivan Orlov <ivan.orlov0322@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
On QEMU virt machine with large number of HARTs, some of the HARTs
randomly fail to come out of wait_for_coldboot() due to one of the
following race-conditions:
1) Failing HARTs are not able to acquire the coldboot_lock and
   update the coldboot_hartmask in wait_for_coldboot() before
   the coldboot HART acquires the coldboot_lock and sends the IPI
   in wake_coldboot_harts(), hence the failing HARTs never
   receive the IPI from the coldboot HART.
2) Failing HARTs acquire the coldboot_lock and update the
   coldboot_hartmask before the coldboot HART does sbi_scratch_init(),
   so sbi_hartmask_set_hartid() does not update the
   coldboot_hartmask for the failing HARTs, hence they never
   receive the IPI from the coldboot HART.
To address this, use a simple busy-loop in wait_for_coldboot() for
polling on coldboot_done flag.
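A hedged sketch of the simplified wait; the primitives shown are generic
illustrations, and the real code uses the firmware's own barrier helpers.

  /* Spin until the coldboot HART publishes the coldboot_done flag. */
  while (!__atomic_load_n(&coldboot_done, __ATOMIC_ACQUIRE))
          ;       /* busy-wait; no lock or IPI needed any more */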
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Avoid using C types and casts when sbi/sbi_byteorder.h is included from
assembly code.
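A hedged sketch of the usual pattern (the macro name is illustrative):
only apply C casts when the header is not being preprocessed for assembly.

  #ifdef __ASSEMBLER__
  #define _conv_cast(type, val)   (val)           /* no casts in .S files */
  #else
  #define _conv_cast(type, val)   ((type)(val))
  #endif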
Signed-off-by: Vivian Wang <dramforever@live.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add SSE callback registration to the PMU driver in order to disable
interrupt delegation for PMU interrupts. When interrupts are
undelegated, send the PMU SSE event upon an LCOFIP IRQ.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This extension [1] allows delivering events from the SBI to the
supervisor via a software mechanism. It defines events (either local or
global) which are signaled by the SBI on specific event sources (IRQs,
exceptions, etc.) and are injected for execution in supervisor mode.
[1] https://lists.riscv.org/g/tech-prs/message/798
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The struct sbi_trap_context already has the information needed by
sbi_illegal_insn_handler() so directly pass struct sbi_trap_context
pointer to this function.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
The struct sbi_trap_context already has the information needed by
misaligned load/store and access fault load/store handlers so directly
pass struct sbi_trap_context pointer to these functions.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Clément Léger <cleger@rivosinc.com>
Combine struct sbi_trap_regs and struct sbi_trap_info into a new
struct sbi_trap_context (aka trap context), which must be saved
by the low-level trap handler before calling sbi_trap_handler().
To track nested traps, struct sbi_scratch points to the current
trap context and each trap context has a pointer to the previous
trap context.
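A hedged sketch of the resulting layout; field names follow the commit
text but may not match the patch exactly.

  struct sbi_trap_context {
          struct sbi_trap_regs regs;              /* saved GPRs, mepc, mstatus, ... */
          struct sbi_trap_info trap;              /* mcause, mtval, ... */
          struct sbi_trap_context *prev_context;  /* previous (outer) trap, if nested */
  };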
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
To track nested traps, struct sbi_scratch needs a pointer to the
current trap context, so add a trap_context pointer in struct sbi_scratch.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
Over the years, no uses of sbi_trap_exit() have been found so remove
it and also remove related code from fw_base.S and sbi_scratch.h.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
In the only places this value is used, it duplicates mepc from
struct sbi_trap_regs.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
sbi_load/store_access_handler now tries to call platform emulators
if defined; otherwise, it redirects the fault. If the platform code
returns failure, this means H/S/U-mode software has accessed the
emulated devices in an unexpected manner, which is very likely caused
by buggy code in H/S/U mode. We redirect the fault so the lower
privilege level gets notified and can act accordingly (e.g., an oops
in Linux). We let the handler truly fail if the trap originated from
M-mode; in this case, something must be very wrong and we should just fail.
Signed-off-by: Bo Gan <ganboing@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This patch allows the platform to define load/store emulators. This
enables a platform to trap-and-emulate special devices or filter
access to existing physical devices.
Signed-off-by: Bo Gan <ganboing@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This patch abstracts out the instruction decoding part of the misaligned
ld/st fault handlers, so it can be reused by the ld/st access fault
handlers. Also add lb/lbu/sb decoding (previously unreachable via
misaligned faults).
sbi_trap_emulate_load/store is now the common handler; it takes an `emu`
parameter that is responsible for emulating the misaligned or faulting
access. The `emu` callback is expected to fix up the fault, and based on
its return code, sbi_trap_emulate_load/store will act as follows:
  r/wlen => the fixup succeeded and regs/mepc need to be updated.
  0      => the fixup succeeded, but regs/mepc should be left untouched
            (this is usually the case when `emu` does `sbi_trap_redirect`).
  -err   => the fixup failed and sbi_trap_error() will be called.
For now, load/store access faults are blindly redirected. This will be
enhanced in the following patches.
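A hedged sketch of an `emu` callback following this contract; every name
below (signature, device helpers, constants) is hypothetical and only
illustrates the return-code convention.

  static int demo_emulate_load(int rlen, unsigned long addr, u64 *out_val)
  {
          if (addr == DEMO_REG_ADDR && rlen == 4) {
                  *out_val = demo_device_read32();
                  return rlen;    /* fixed up: caller updates regs/mepc */
          }

          /* Not an emulated device: let the caller redirect or fail. */
          return SBI_EINVAL;
  }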
Signed-off-by: Bo Gan <ganboing@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>