sbi_hart_map_saddr() must create a PMP mapping whose size is greater
than or equal to the PMP granularity; otherwise, the mapping does not
work when the size parameter is less than sbi_hart_pmp_granularity(scratch).
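A minimal sketch of the intended rounding (the helper name is illustrative
only; "gran" stands in for the value returned by sbi_hart_pmp_granularity()):

  /* Illustrative only: round the requested size up to the PMP granularity. */
  static unsigned long pmp_map_size(unsigned long size, unsigned long gran)
  {
          if (size < gran)
                  size = gran;
          return size;
  }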
Fixes: 6e44ef686a ("lib: sbi: Add functions to map/unmap shared memory")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
console_puts and console_putc are meant to be interchangeable, but the
previous sbi_putc() could only use console_putc. This patch addresses
that problem.
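A hedged sketch of the intended fallback (the struct and member layout are
illustrative, not the exact OpenSBI definitions):

  struct console_dev {
          void (*console_putc)(char ch);
          void (*console_puts)(const char *str, unsigned long len);
  };

  static void dev_putc(struct console_dev *dev, char ch)
  {
          if (dev->console_putc)
                  dev->console_putc(ch);
          else if (dev->console_puts)
                  dev->console_puts(&ch, 1);
  }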
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
In the future there may be many ISA extensions, more than a single
'long' can accommodate, so change the bitmap to an array to allow for
that.
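A hedged sketch of the idea (macro names and the bound are illustrative):
store the extension bitmap in an array of unsigned long instead of a single
long so more than one word's worth of extension IDs can be represented.

  #define EXT_MAX     128                 /* illustrative upper bound */
  #define BITS_PER_UL (8 * sizeof(unsigned long))

  static unsigned long extensions[(EXT_MAX + BITS_PER_UL - 1) / BITS_PER_UL];

  static void ext_set(unsigned int id)
  {
          extensions[id / BITS_PER_UL] |= 1UL << (id % BITS_PER_UL);
  }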
Addresses-Coverity-ID: 1568357 Out-of-bounds access
Fixes: 6259b2ec2d ("lib: utils/fdt: Fix fdt_parse_isa_extensions() implementation")
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
If ISA extension Zkr is available, set
mseccfg.sseed=1
mseccfg.useed=0
This enables access to the seed CSR in S-mode but not in U-mode.
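A hedged sketch, assuming the csr_set()/csr_clear() helpers from
<sbi/riscv_asm.h> and the USEED/SSEED bit positions from the entropy-source
spec (bits 8 and 9 respectively); verify against the OpenSBI headers:

  #include <sbi/riscv_asm.h>
  #include <sbi/riscv_encoding.h>

  #ifndef MSECCFG_USEED
  #define MSECCFG_USEED (1UL << 8)        /* U-mode seed CSR access */
  #endif
  #ifndef MSECCFG_SSEED
  #define MSECCFG_SSEED (1UL << 9)        /* S-mode seed CSR access */
  #endif

  static void seed_access_smode_only(void)
  {
          csr_set(CSR_MSECCFG, MSECCFG_SSEED);    /* allow S-mode */
          csr_clear(CSR_MSECCFG, MSECCFG_USEED);  /* deny U-mode  */
  }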
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
- Add Zkr as extension in sbi_hart_extensions enum
- Return "zkr" string for Zkr extension from sbi_hart_extension_id2string
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The domain rejects a new memory region when a sub-region of it is
already present in the domain, even if the new region is bigger and has
the same flags. Solve this by relaxing the region restriction and
rechecking regions when adding and sanitizing domains.
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Swapping domain regions is a common operation when sorting them, so
split it out into a separate function to keep the code clean.
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Per the SBI specification, the effects of these functions are limited to
a specific ASID and/or VMID. This applies even when flushing the entire
address space.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Since commit 112daa2e64 ("lib: sbi: Maximize the use of HART index in
sbi_domain") the platform parameter is unused.
Fixes: 112daa2e64 ("lib: sbi: Maximize the use of HART index in sbi_domain")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Unlike C.LWSP/C.LDSP, these encodings can be used with the zero
register, so checking that the rs2 field is non-zero is unnecessary.
Additionally, the previous check was incorrect since it was checking
the immediate field of the instruction instead of the rs2 field.
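For reference, a hedged sketch of the field layout (the helper is
illustrative): in C.SWSP/C.SDSP the rs2 register number sits in instruction
bits [6:2], while bits [12:7] hold the immediate.

  static unsigned int c_swsp_sdsp_rs2(unsigned int insn)
  {
          return (insn >> 2) & 0x1f;  /* rs2 = insn[6:2]; rs2 == 0 is legal */
  }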
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
sbi_pmu_ctr_cfg_match() receives data from a lower privilege level.
We must catch maliciously wrong values.
We already check against total_ctrs. But we do not check that total_ctrs is
less than SBI_PMU_HW_CTR_MAX + SBI_PMU_FW_CTR_MAX.
Check that the number of hardware counters is in the valid range.
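A hedged sketch of the added check (SBI_PMU_HW_CTR_MAX and SBI_EINVAL are
the existing OpenSBI macros; the helper name is illustrative):

  #include <sbi/sbi_error.h>
  #include <sbi/sbi_pmu.h>

  static int sanitize_num_hw_ctrs(unsigned long num_hw_ctrs)
  {
          if (num_hw_ctrs > SBI_PMU_HW_CTR_MAX)
                  return SBI_EINVAL;
          return 0;
  }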
Addresses-Coverity-ID: 1566114 Out-of-bounds write
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
'1' is a 32-bit integer. Shifting it by more than 31 bits is undefined
and in practice yields zero, so we get an incorrect return value.
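A minimal illustration of the fix:

  /* Buggy: '1' is a 32-bit int, so bit positions >= 32 misbehave.       */
  #define BROKEN_MASK(bit) (1 << (bit))

  /* Fixed: promote to unsigned long so the full XLEN bit range works.   */
  #define FIXED_MASK(bit)  (1UL << (bit))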
Addresses-Coverity-ID: 1568356 Bad bit shift operation
Fixes: 296e70d69d ("lib: sbi: Extend sbi_hartmask to support both hartid and hartindex")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Based on sections 4.c and 4.d in Ch. 2 of the Smepmp spec, the PMP
entries must be programmed as follows:
1. Program M-only entries
2. Enable mseccfg.MML
3. Program shared-region entries
4. Program SU-only entries
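A hedged outline of this ordering (the program_*() helpers are purely
hypothetical names shown empty for illustration; csr_set(), CSR_MSECCFG and
MSECCFG_MML come from the OpenSBI headers):

  #include <sbi/riscv_asm.h>
  #include <sbi/riscv_encoding.h>

  static void program_m_only_entries(void) { }
  static void program_shared_region_entries(void) { }
  static void program_su_only_entries(void) { }

  static void smepmp_program_order(void)
  {
          program_m_only_entries();               /* step 1 */
          csr_set(CSR_MSECCFG, MSECCFG_MML);      /* step 2 */
          program_shared_region_entries();        /* step 3 */
          program_su_only_entries();              /* step 4 */
  }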
Co-developed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
The Smepmp read-only shared region must have pmpcfg.L, pmpcfg.R,
pmpcfg.W, and pmpcfg.X bits set so sbi_hart_get_smepmp_flags()
must return pmp_flags accordingly.
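For illustration, using the pmpcfg bit macros from <sbi/riscv_encoding.h>,
the read-only shared-region encoding is the combination of all four bits:

  #include <sbi/riscv_encoding.h>

  static const unsigned long smepmp_ro_share_flags =
          PMP_L | PMP_R | PMP_W | PMP_X;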
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
The mseccfg.MML bit is a sticky bit which remains unchanged once set,
so there is no need to clear it in sbi_hart_smepmp_configure().
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Let us factor out the Smepmp configuration as a separate function so
that the code is more readable.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
The sbi_hartmask_for_each_hart() macro is slow and has only one user,
so let us remove it completely.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Let us maximize the use of HART index in sbi_domain because
hartindex-based hartmask access and sbi_scratch lookup are faster.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
The sbi_scratch_last_hartid() macro is not of much use on platforms
with really sparse hartids, so let us replace the use of this macro with
other approaches.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Let us prefer hartindex over hartid in the IPI framework, which in
turn forces IPI users to also prefer hartindex.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
sbi_hartid_to_scratch() involves translating hartid to hartindex,
which is expensive, so let's use sbi_hartindex_to_scratch() instead.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Currently, the sbi_hartmask is indexed by hartid which puts a
limit on hartid to be less than SBI_HARTMASK_MAX_BITS.
We extend the sbi_hartmask implementation to use hartindex and
support updating sbi_hartmask using hartid. This removes the
limit on hartid and existing code works largely unmodified.
Signed-off-by: Xiang W <wxjstz@126.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
The hartid to hartindex mapping is now tracked in sbi_scratch, so we
no longer need the sbi_platform_hart_index() and
sbi_platform_hart_invalid() functions; let us remove them.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
We introduce HART index and related helper functions in sbi_scratch
where HART index is contiguous and each HART index maps to a physical
HART id such that 0 <= HART index and HART index < SBI_HARTMASK_MAX_BITS.
The HART index to HART id mapping follows the index2id mapping provided
by the platform. If the platform does not provide index2id mapping then
identity mapping is assumed.
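A hedged sketch of the mapping (the array name and bound are illustrative;
SBI_HARTMASK_MAX_BITS is the real OpenSBI limit):

  #define MAX_HARTS 128   /* stand-in for SBI_HARTMASK_MAX_BITS */

  static unsigned int hartindex_to_hartid[MAX_HARTS];

  static void setup_hartindex_map(const unsigned int *index2id,
                                  unsigned int count)
  {
          /* Use the platform's index2id table, or identity if absent. */
          for (unsigned int i = 0; i < count && i < MAX_HARTS; i++)
                  hartindex_to_hartid[i] = index2id ? index2id[i] : i;
  }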
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
If the platform defines tlb_fifo_num_entries, the scratch area may be
too small to hold the TLB FIFO, so allocate the FIFO from the heap
instead.
Signed-off-by: Xiang W <wxjstz@126.com>
Signed-off-by: Xing Xiaoguang <xiaoguang.xing@sophgo.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
On platforms with a particularly high number of harts, a TLB FIFO that
is too small causes harts to wait. Platforms should be allowed to
specify the size of the TLB FIFO.
Signed-off-by: Xiang W <wxjstz@126.com>
Signed-off-by: Xing Xiaoguang <xiaoguang.xing@sophgo.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Once a hardware hpm counter is stopped, it should not raise any new
interrupt. Add the hw_counter_disable_irq callback to allow a custom
PMU device to control this behavior.
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Samuel Holland <samuel@sholland.org>
When detecting PMU features, an hpm counter may be written with some
value, which can cause unexpected behavior in some cases. So ensure the
hpm counter is updated before starting the counter and enabling the
related interrupt.
Signed-off-by: Haijiao Liu <haijiao.liu@sophgo.com>
Co-authored-by: Inochi Amaoto <inochiama@outlook.com>
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Samuel Holland <samuel@sholland.org>
The previous definition assumed that the machine word length is equal
to the word length of 'long'. Remove this assumption and add a static
check to prevent errors in subsequent modifications.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
After adding support for non-contiguous hpm events and counters in
OpenSBI, the number of hpm counters can be calculated from mhpm_mask.
So this field is unnecessary and can be removed to save some space.
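A hedged sketch of the derivation (using the compiler's popcount builtin
for brevity): the counter count is just the population count of the mask.

  static unsigned int num_hpm_ctrs(unsigned long mhpm_mask)
  {
          return (unsigned int)__builtin_popcountl(mhpm_mask);
  }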
Signed-off-by: Inochi Amaoto <inochiama@outlook.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This adds support for the ISA extension Smcntrpmf. When a lower
privilege mode sets inhibit flags for the new CSRs added by Smcntrpmf,
OpenSBI sets the appropriate values correspondingly.
Signed-off-by: Kaiwen Xue <kaiwenx@andrew.cmu.edu>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Cycle and instructions are hardware events instead of firmware ones. Fix
the typo in the name of this function.
Signed-off-by: Kaiwen Xue <kaiwenx@andrew.cmu.edu>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Platforms may implement hpm events/counters non-contiguously, but the
current implementation assumes they are always contiguous. Add a bitmap
that captures the hpm events/counters as implemented in the hardware
and use it to set the maximum number of hardware counters visible to
the OS. Counters not implemented in the hardware can't be used by the
OS because they won't be described in the DT.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Commit 68e66106120f ("SUSP: Add SBI_ERR_DENIED") of the SBI spec adds
a new error code, SBI_ERR_DENIED, which is returned when entry criteria
have not been met. Update the system suspend implementation to return
this error when it detects that not all harts are in the STOPPED state.
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
With Smepmp enabled, shared memory from S/U-mode must be mapped before
and unmapped after any read/write of the memory region. This patch maps
the debug console shared memory before accessing it.
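A hedged sketch of the access pattern, assuming the sbi_hart_map_saddr() /
sbi_hart_unmap_saddr() prototypes in sbi_hart.h take (base, size) and (void)
respectively:

  #include <sbi/sbi_hart.h>

  static void access_su_shmem(unsigned long base, unsigned long size)
  {
          sbi_hart_map_saddr(base, size);   /* open a temporary PMP window */

          /* ... read from or write to the S/U-mode shared buffer ... */

          sbi_hart_unmap_saddr();           /* close the window again */
  }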
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
When Smepmp is enabled, M-mode will need to map/unmap the
shared memory before it can read/write to it. This patch
adds functions to create dynamic short-lived mappings.
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
- If Smepmp is enabled, the access flags of an entry are determined based on
the truth table defined in the Smepmp spec.
- The first PMP entry (index 0) is reserved.
- Existing boot PMP entries start from index 1.
- Since enabling Smepmp revokes the access privileges of the M-mode software
on S/U-mode regions, the first PMP entry is used to map/unmap the shared
memory between M-mode and S/U-mode. This provides a temporary access window
for the M-mode software to read/write S/U-mode memory regions.
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Configure the PMP last, when all other initialization has been done,
because if Smepmp is detected, M-mode access to the S/U-mode address
space will be rescinded.
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Since PMP entries have implicit priority by index, the earlier entries
will deny S/U-mode access to M-mode regions. Also, M-mode will not have
access to S/U-mode regions, while the earlier entries will still allow
access to M-mode regions.
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
- Add a function to disable a given PMP entry.
- Add a function to check if a given entry is disabled.
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Fix a special case: sbi_snprintf(out, out_len, ...) with out_len equal
to 1. The previous code would not fill the buffer with any character.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
A single scan of a format specifier may add multiple characters to
tbuf, causing a buffer overflow. Check whether tbuf is full in printc()
so that it cannot overflow.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Because *out needs to reserve a byte to hold '\0', no more characters
should be added to the buffer when *out has one byte left, and the
buffer size *out_len should not be modified. This patch skips the
adjustment of *out_len when *out_len is 1.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
When doing width = width - strlen(string) in prints(), there is no
need to consider the case where width may be less than 0, because the
padding code only runs when width > 0.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The sg, b, and letbase values can be derived from the type character,
so simplify the parameters by passing the type directly.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The space flag is used to add a space before positive numbers, and the
apostrophe flag is used to print the thousands separator. Add code to
ignore these two flags.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add the '+' flag for print, prefixing positive numbers with '+' when
this flag is present.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>