The RISC-V Debug specification includes the Sdtrig ISA extension,
which describes the Trigger Module. Triggers can cause
a breakpoint exception or a trace action without execution
of a special instruction. They can be used to implement
hardware breakpoints and watchpoints for native debugging.
The SBI Debug Trigger extension (Draft v6) can be found at:
https://lists.riscv.org/g/tech-debug/topic/99825362#1302
This patch is an initial implementation of the SBI Debug
Trigger extension (Draft v6) in OpenSBI.
The following features are supported:
* mcontrol, mcontrol6 triggers
* Breakpoint and trace actions
NOTE: Chained triggers are not supported
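For context, a minimal sketch of arming one trigger as an
execute-address hardware breakpoint (assuming RV64 and the Debug
spec's mcontrol6 layout; the constants and helper below are
illustrative, not part of this patch):

  #define MCONTROL6_TYPE        (6UL << 60)  /* tdata1.type = mcontrol6 */
  #define MCONTROL6_EXECUTE     (1UL << 2)   /* match on instruction fetch */
  #define MCONTROL6_U           (1UL << 3)   /* match in U-mode */
  #define MCONTROL6_S           (1UL << 4)   /* match in S-mode */
  /* action field (bits 15:12) == 0 => breakpoint exception on match */

  static void arm_exec_breakpoint(unsigned long index, unsigned long addr)
  {
          csr_write(CSR_TSELECT, index);  /* select the trigger */
          csr_write(CSR_TDATA2, addr);    /* address to match */
          csr_write(CSR_TDATA1, MCONTROL6_TYPE | MCONTROL6_S |
                                MCONTROL6_U | MCONTROL6_EXECUTE);
  }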
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Detect whether debug triggers (the Sdtrig extension) are supported
by the CPU. The support is detected via access traps and
ISA string parsing.
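The trap-based probe looks roughly like the sketch below (assuming
OpenSBI's csr_read_allowed()/struct sbi_trap_info style helpers):

  static bool hart_has_sdtrig(void)
  {
          struct sbi_trap_info trap = { 0 };

          /* Reading tselect traps with an illegal-instruction
           * exception when the Trigger Module is absent. */
          csr_read_allowed(CSR_TSELECT, (unsigned long)&trap);
          return !trap.cause;
  }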
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
OpenSBI uses the time CSR if the Zicntr extension is present, which
causes it to crash on older QEMU versions because QEMU generates
Zicntr in the ISA string for the unleashed machine, which only has
the CYCLE and INSTRET counters.
Fixes: 776770d2ad ("lib: sbi: Using one array to define the name of extensions")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
There is a problem with judging whether the current hart belongs to
hmask. If cur_hartid minus hbase is greater than or equal to
BITS_PER_LONG, the previous hmask will also have a bit cleared
incorrectly, which causes some harts to lose the IPI.
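A sketch of the fix (names illustrative): clear the current hart's
bit only when it actually falls inside the window covered by this
hmask:

  if (hbase <= cur_hartid &&
      cur_hartid < hbase + BITS_PER_LONG)
          hmask &= ~(1UL << (cur_hartid - hbase));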
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
On platforms with Smepmp, the previous booting stage must enter
OpenSBI with mseccfg.MML == 0. This allows OpenSBI to configure
its own M-mode-only regions without depending on the previous
booting stage.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
The SBI_ETRAP error code was introduced only for doing trap
redirection in the generic sbi_ecall_handler(). Now that trap
redirection has moved into sbi_ecall_legacy.c, the SBI_ETRAP
error code is only used in that source file, so let us remove
it.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Some of the upcoming SBI extensions (such as SSE) will directly
update register state, so improve the prototype of the ecall
handler to accommodate this. This flexibility also allows us to
push trap redirection from sbi_ecall_handler() down into
sbi_ecall_legacy_handler().
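The new handler shape is roughly as below (the out-structure field
names here are an assumption, not the verbatim code):

  struct sbi_ecall_return {
          bool skip_regs_update;    /* handler updated regs itself */
          unsigned long out_val;    /* value to return in a1 */
  };

  typedef int (*sbi_ecall_handle_t)(unsigned long extid,
                                    unsigned long funcid,
                                    struct sbi_trap_regs *regs,
                                    struct sbi_ecall_return *out);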
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Let us reduce the size of struct sbi_tlb_info by doing the
following:
1) Change the data type of asid and vmid fields to uint16_t
2) Replace local_fn() function pointer with an enum
Based on the above, the size of struct sbi_tlb_info is reduced
by 16 bytes on RV64 and 4 bytes on RV32.
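The resulting layout is roughly the following sketch (field order
and the enum values shown are illustrative, not the exact code):

  enum sbi_tlb_type {
          SBI_TLB_FENCE_I,
          SBI_TLB_SFENCE_VMA,
          SBI_TLB_SFENCE_VMA_ASID,
          /* ... */
          SBI_TLB_TYPE_MAX,
  };

  struct sbi_tlb_info {
          unsigned long start;
          unsigned long size;
          uint16_t asid;              /* was unsigned long */
          uint16_t vmid;              /* was unsigned long */
          enum sbi_tlb_type type;     /* was void (*local_fn)(...) */
          struct sbi_hartmask smask;
  };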
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Enable access to some extensions through menvcfg and show them in "Boot
HART ISA Extensions" if they are present in the device tree.
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Define an array sbi_hart_ext to map extension IDs to names, and use
it for ISA parsing and for printing out the supported extensions.
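A sketch of the mapping table (the entries shown are illustrative):

  struct sbi_hart_ext_data {
          const unsigned int id;
          const char *name;
  };

  static const struct sbi_hart_ext_data sbi_hart_ext[] = {
          { SBI_HART_EXT_SMAIA,  "smaia" },
          { SBI_HART_EXT_SMEPMP, "smepmp" },
          { SBI_HART_EXT_SSTC,   "sstc" },
          /* ... */
  };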
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Enhance the code by creating two unified, macro-based interfaces
for privilege mode and extension detection, which rely on the
supported privilege modes and CSRs.
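The unified interface has roughly the following shape (the macro
name and details below are assumptions, not the exact code):

  /* Probe a CSR by access trap; mark the extension if no trap. */
  #define __check_csr_existence(__csr, __ext)                       \
          do {                                                      \
                  struct sbi_trap_info trap = { 0 };                \
                  csr_read_allowed(__csr, (unsigned long)&trap);    \
                  if (!trap.cause)                                  \
                          __sbi_hart_update_extension(hfeatures,    \
                                                      __ext, true); \
          } while (0)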
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The original code performs multiple conversions between hartid and
hartindex. Call sbi_hartmask_set_hartindex() directly to avoid
the conversions.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
sbi_ipi_event_create() disallows registering an IPI event with a NULL
.process callback, so the function pointer will never be NULL here.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Xiang W <wxjstz@126.com>
An IPI sent to the local hart can be processed directly instead of
triggering the IPI device. This is more efficient, and it avoids a
deadlock when the .sync callback is defined. Since interrupts are
disabled while handling an ecall, the IPI would not get delivered
until the next mret, but sbi_ipi_sync() is called before then.
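A sketch of the short-circuit (names illustrative):

  if (target_hartid == current_hartid()) {
          /* Handle the IPI right here; no doorbell needed. */
          ipi_ops->process(scratch);
  } else {
          /* Remote hart: ring the IPI device as before. */
          ret = sbi_ipi_raw_send(target_hartindex);
  }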
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Xiang W <wxjstz@126.com>
Currently, failures in sbi_ipi_send() are silently ignored, which makes
them difficult to debug. Instead, abort sending the IPI and pass back
the error, but still synchronize any IPIs already sent.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
The tlb_update() callback is called for each destination hart.
Move the size check earlier, so it is executed only once.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
When warm-booting via HSM, coldboot has already completed, so
wait_for_coldboot() can be skipped to speed things up.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
A previous patch introduced a change that uses the hart count as the
default number of TLB entries in the FIFO. This makes the default
TLB FIFO size grow quadratically with the number of harts (each of
N harts gets a FIFO of N entries), so the default heap size is not
enough to allocate the TLB FIFOs when the hart count is large.
Fixes: 52fd64b ("platform: Uses hart count as the default size of tlb info")
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The current interruptible hart mask doesn't include harts whose HSM
state is SBI_HSM_STATE_RESUME_PENDING. So when there is a request to
send an IPI to a hart which is in the resume process, that hart would
miss the IPI forever. Put SBI_HSM_STATE_RESUME_PENDING harts in the
interruptible hart mask to fix the issue.
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The counter info should only return valid hardware counters for the
ones set in the counter mask. Otherwise, it will report an incorrect
number of hardware counters to the supervisor if the platform has
discontiguous counters.
Fixes: c744ed77b1 ("lib: sbi_pmu: Enable noncontigous hpm event and counters")
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
OpenSBI doesn't support the SBI PMU snapshot yet, as there is not
much benefit unless multiple counters overflow at the same time.
Just add the definitions and return a not-supported error at this
moment. The default returned error is also not-supported, thus no
functional change intended.
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Multiple harts may try to send an IPI to a particular target hart A,
in which case send_ipi() should be called only when the old
value of hart A's ipi_type is zero.
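The first-sender-rings-the-doorbell pattern, sketched with a generic
atomic (the exact helper used in the tree differs):

  unsigned long old;

  old = __atomic_fetch_or(&ipi_data->ipi_type,
                          1UL << event, __ATOMIC_RELAXED);
  smp_wmb();
  if (!old)       /* ipi_type was 0: this sender triggers the device */
          ret = sbi_ipi_raw_send(target_hartindex);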
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Simplify atomic-related bit operations through __atomic intrinsics.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The original code returned the value of the whole word before
modification. When modifying the upper 32 bits on RV64, the value
returned through the int return type is meaningless. Correct the
functions to return the value of the targeted bit, and update the
function descriptions accordingly.
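Combined with the previous patch, the corrected helper looks roughly
like this sketch (not the verbatim code):

  /* Returns the previous value of bit nr, not the whole word. */
  static inline int atomic_raw_set_bit(int nr, volatile unsigned long *p)
  {
          unsigned long mask = 1UL << (nr % BITS_PER_LONG);

          p += nr / BITS_PER_LONG;
          return (__atomic_fetch_or(p, mask, __ATOMIC_RELAXED) & mask) != 0;
  }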
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
lr/sc is part of the A extension. If the A extension is not
supported, lr/sc cannot be used, so remove xchg/cmpxchg.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add support for custom PMU extensions to set inhibit bits
on custom CSRs by introducing the PMU device callback
hw_counter_filter_mode(). This allows the perf tool to
restrict event counting to a specified privilege mode by
appending a modifier, e.g. perf record -e event:k
to count events only happening in kernel mode.
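The new callback slot in struct sbi_pmu_device looks roughly like
this (a sketch; the CSR writes behind it are platform-specific):

  struct sbi_pmu_device {
          /* ... existing callbacks ... */

          /*
           * Set inhibit bits on custom CSRs so that the given
           * hardware counter only counts in the privilege modes
           * permitted by the SBI_PMU_CFG_FLAG_SET_*INH flags.
           */
          void (*hw_counter_filter_mode)(unsigned long flags,
                                         int counter_index);
  };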
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Leo Yu-Chi Liang <ycliang@andestech.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This patch makes the following changes:
- As sbi_platform_pmu_init() returns a negative error code on
  failure, let sbi_pmu_init() print out the error code with
  sbi_dprintf().
- In order to distinguish the SBI_EFAIL error returned by
  sbi_pmu_add_*_counter_map(), return SBI_ENOENT to indicate
  that fdt_pmu_setup() failed to locate the "riscv,pmu" node, and
  let generic_pmu_init() ignore such a case.
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Currently, there are no barriers before or after the ipi_clear()
device callback, which forces the ipi_clear() device callback to
always use non-relaxed MMIO writes.
Instead of the above, use wmb() after the ipi_clear() device
callback, which pairs with the wmb() done before the ipi_send()
device callback. This also allows the device ipi_clear() callback
to use relaxed MMIO writes.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reported-by: Bo Gan <ganboing@gmail.com>
Currently, we have an smp_wmb() between atomic_raw_set_bit() and the
ipi_send() device callback, whereas the MMIO writes done by the
device ipi_send() callback also include a barrier.
We can avoid the unnecessary/redundant barriers described above by
allowing relaxed MMIO writes in the device ipi_send() callback. To
achieve this, we simply use wmb() instead of smp_wmb() before
calling device ipi_send().
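Putting this and the previous patch together, the barrier pairing is
roughly the following sketch (names illustrative):

  /* Sender */
  atomic_raw_set_bit(event, &ipi_data->ipi_type);
  wmb();                          /* order ipi_type before doorbell */
  ipi_dev->ipi_send(target);      /* relaxed MMIO write is now fine */

  /* Receiver */
  ipi_dev->ipi_clear(me);         /* relaxed MMIO write is now fine */
  wmb();                          /* order doorbell clear before data */
  /* ... read and process ipi_type events ... */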
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reported-by: Bo Gan <ganboing@gmail.com>
total_ctrs is bounded by
(SBI_PMU_FW_CTR_MAX + SBI_PMU_HW_CTR_MAX) == 48,
which exceeds BITS_PER_LONG on 32-bit systems.
Iterating over the bits of &cmask results in a buffer overflow when
looking for a bit >= BITS_PER_LONG.
Adjust the iterators in sbi_pmu_ctr_start() and sbi_pmu_ctr_stop()
accordingly.
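The adjusted loop bound, sketched (variable names illustrative):

  /* cmask is a single unsigned long, so never look past its bits. */
  for (i = 0; i < BITS_PER_LONG; i++) {
          cidx = cbase + i;
          if (cidx >= total_ctrs)
                  break;
          if (!(cmask & (1UL << i)))
                  continue;
          /* ... start/stop counter cidx ... */
  }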
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
sbi_hart_map_saddr() must create a PMP mapping of size greater
than or equal to the PMP granularity, otherwise the PMP mapping does
not work when the size parameter is less than
sbi_hart_pmp_granularity(scratch).
Fixes: 6e44ef686a ("lib: sbi: Add functions to map/unmap shared memory")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
console_puts() and console_putc() should be interchangeable, but the
previous sbi_putc() could only use console_putc(). This patch
addresses the problem.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
In the future there may be many ISA extensions, more than a single
'long' can accommodate, so change the bitmap to an array to be
future-proof.
Addresses-Coverity-ID: 1568357 Out-of-bounds access
Fixes: 6259b2ec2d ("lib: utils/fdt: Fix fdt_parse_isa_extensions() implementation")
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
If the ISA extension Zkr is available, set
mseccfg.sseed=1
mseccfg.useed=0
This enables access to the seed CSR in S-mode but not in U-mode.
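Sketched below (mseccfg bit positions per the entropy-source spec:
USEED is bit 8, SSEED is bit 9; helper names follow OpenSBI style):

  #define MSECCFG_USEED   (1UL << 8)   /* U-mode seed access */
  #define MSECCFG_SSEED   (1UL << 9)   /* S-mode seed access */

  if (sbi_hart_has_extension(scratch, SBI_HART_EXT_ZKR)) {
          csr_set(CSR_MSECCFG, MSECCFG_SSEED);
          csr_clear(CSR_MSECCFG, MSECCFG_USEED);
  }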
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
- Add Zkr as extension in sbi_hart_extensions enum
- Return "zkr" string for Zkr extension from sbi_hart_extension_id2string
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The domain will reject a new memory region when one of its
sub-regions is already present in the domain, even if the new region
is bigger and has the same flags. Solve this problem by relaxing the
region restriction and rechecking when adding and sanitizing domains.
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Swapping domain regions is a common operation when sorting domain
regions, so factor it out as a separate function to keep the code
clean.
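The factored-out helper is essentially:

  static void swap_region(struct sbi_domain_memregion *reg1,
                          struct sbi_domain_memregion *reg2)
  {
          struct sbi_domain_memregion tmp = *reg1;

          *reg1 = *reg2;
          *reg2 = tmp;
  }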
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Per the SBI specification, the effects of these functions are limited to
a specific ASID and/or VMID. This applies even when flushing the entire
address space.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Since commit 112daa2e64 ("lib: sbi: Maximize the use of HART index in
sbi_domain") the platform parameter is unused.
Fixes: 112daa2e64 ("lib: sbi: Maximize the use of HART index in sbi_domain")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Unlike C.LWSP/C.LDSP, these encodings can be used with the zero
register, so checking that the rs2 field is non-zero is unnecessary.
Additionally, the previous check was incorrect since it was checking
the immediate field of the instruction instead of the rs2 field.
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
sbi_pmu_ctr_cfg_match() receives data from a lower privilege mode.
We must catch maliciously wrong values.
We already check against total_ctrs. But we do not check that total_ctrs is
less than SBI_PMU_HW_CTR_MAX + SBI_PMU_FW_CTR_MAX.
Check that the number of hardware counters is in the valid range.
Addresses-Coverity-ID: 1566114 Out-of-bounds write
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
'1' is a 32-bit integer. When shifting it by more than 31 bits, the
result becomes zero and we get an incorrect return value. Use an
unsigned long constant instead.
Addresses-Coverity-ID: 1568356 Bad bit shift operation
Fixes: 296e70d69d ("lib: sbi: Extend sbi_hartmask to support both hartid and hartindex")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Based on sections 4.c and 4.d in Ch.2 of the Smepmp spec, the PMP
entries must be programmed as below (see the sketch after the list):
1. Program M-only entries
2. Enable mseccfg.MML
3. Program shared-region entries
4. Program SU-only entries
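A sketch of that ordering (the program_* helper names here are
hypothetical, not functions in the tree):

  static void smepmp_configure(struct sbi_scratch *scratch)
  {
          /* 1. Program M-mode-only entries while mseccfg.MML == 0 */
          program_mmode_entries(scratch);

          /* 2. Enable Machine Mode Lockdown */
          csr_set(CSR_MSECCFG, MSECCFG_MML);

          /* 3. Program shared-region entries */
          program_shared_entries(scratch);

          /* 4. Program SU-only entries */
          program_sumode_entries(scratch);
  }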
Co-developed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
The Smepmp read-only shared region must have the pmpcfg.L, pmpcfg.R,
pmpcfg.W, and pmpcfg.X bits set, so sbi_hart_get_smepmp_flags()
must return pmp_flags accordingly.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
The mseccfg.MML bit is a sticky bit which remains unchanged once set,
so there is no need to clear it in sbi_hart_smepmp_configure().
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Let us factor out the Smepmp configuration as a separate function so
that the code is more readable.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
The sbi_hartmask_for_each_hart() macro is slow and has only one user,
so let us completely remove it.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>