227 Commits
v1.6 ... master

Andrew Waterman
111738090c lib: sbi: Flush TLBs upon FWFT ADUE change
A clarification has been added to the RISC-V privileged specification
regarding synchronization requirements when xenvcfg.ADUE changes
(see commit 4e540263db in the RISC-V Privileged ISA spec).

Per these requirements, the SBI FWFT ADUE implementation must
flush TLBs whenever the ADUE state changes on a hart.
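
A minimal sketch of the intended behavior (ENVCFG_ADUE and the exact
call site are illustrative, following OpenSBI's CSR bit conventions):

	/* Toggle ADUE, then flush so stale address-translation state
	 * that predates the change cannot be observed. */
	csr_clear(CSR_MENVCFG, ENVCFG_ADUE);	/* or csr_set() to enable */
	__sbi_sfence_vma_all();		/* flush all TLB entries on this hart */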

Signed-off-by: Andrew Waterman <andrew@sifive.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251127112121.334023-3-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:19:21 +05:30
Andrew Waterman
843e916dca lib: sbi: Expose __sbi_sfence_vma_all() function
__sbi_sfence_vma_all() can be shared by different parts of OpenSBI,
so rename __tlb_flush_all() to __sbi_sfence_vma_all() and make it a
global function.
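
For reference, a full-TLB flush of this kind is typically a single
fence with no address or ASID operands; a minimal sketch of what the
renamed function wraps:

	void __sbi_sfence_vma_all(void)
	{
		__asm__ __volatile__("sfence.vma" : : : "memory");
	}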

Signed-off-by: Andrew Waterman <andrew@sifive.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251127112121.334023-2-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:19:21 +05:30
Anup Patel
5eec86eec8 lib: sbi: Factor-out PMP programming into separate sources
PMP programming is a significant part of sbi_hart.c, so factor it out
into separate sources, sbi_hart_pmp.c and sbi_hart_pmp.h, for better
maintainability.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-6-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:16:47 +05:30
Anup Patel
42139bb9b7 lib: sbi: Replace sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr()
The sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr() functions can
now be replaced by various sbi_hart_protection_xyz() functions.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-5-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:16:47 +05:30
Anup Patel
b6da690ffb lib: sbi: Implement hart protection for PMP and ePMP
Implement PMP and ePMP based hart protection abstraction so
that usage of sbi_hart_pmp_xyz() functions can be replaced
with sbi_hart_protection_xyz() functions.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-4-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:16:47 +05:30
Anup Patel
809df05c35 lib: sbi: Introduce hart protection abstraction
Currently, PMP and ePMP are the only hart protection mechanisms
available in OpenSBI but new protection mechanisms (such as Smmpt)
will be added in the near future.

To allow multiple hart protection mechanisms, introduce hart
protection abstraction and related APIs.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-3-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:16:47 +05:30
Anup Patel
644a344226 lib: sbi: Introduce sbi_hart_pmp_unconfigure() function
Currently, PMP unconfiguration is implemented directly inside
switch_to_next_domain_context(), whereas the rest of the PMP
programming is done via functions implemented in sbi_hart.c.

Introduce a separate sbi_hart_pmp_unconfigure() function so that
all PMP programming is in one place.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209135235.423391-2-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:16:47 +05:30
Anup Patel
4339e85794 platform: generic: Keep some empty space in FDT passed to next stage
Leaving no empty space in the FDT passed to the next booting stage
causes the following U-Boot crash on Ventana internal platforms:

Unhandled exception: Load access fault
EPC: 00000000fffa6372 RA: 00000000fffa7418 TVAL: 0001746174730068
EPC: 0000000080245372 RA: 0000000080246418 reloc adjusted

SP:  00000000fef38440 GP:  00000000fef40e60 TP:  0000000000000000
T0:  00000000fef40a70 T1:  000000000000ff00 T2:  0000000000000000
S0:  00000000fffc17a8 S1:  00000000fef38d40 A0:  7375746174730068
A1:  00000000fffc17a8 A2:  0000000000000010 A3:  0000000000000010
A4:  0000000000000000 A5:  00000000fffc17b8 A6:  0000000000ff0000
A7:  000000000000b100 S2:  0000000000000000 S3:  0000000000000001
S4:  00000000fef38d40 S5:  7375746174730068 S6:  0000000000000000
S7:  00000000fef4eef0 S8:  00000000fef4ef90 S9:  0000000000000000
S10: 0000000000000000 S11: 00000000fef4efc0 T3:  00000000fef40ea8
T4:  0000000000ff0000 T5:  00000000fef40a60 T6:  00000000fef40a6c

To address the above issue, keep a minimal amount of empty space in
the FDT instead of none.
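
A minimal sketch of the approach, assuming libfdt and an illustrative
pad size (the amount actually chosen by the patch may differ):

	#define FDT_PAD_SIZE	1024	/* illustrative: keep some free space */

	fdt_pack(fdt);
	/* Re-open the packed FDT in place with extra room so the next
	 * stage can apply its own fixups without overflowing the blob. */
	fdt_open_into(fdt, fdt, fdt_totalsize(fdt) + FDT_PAD_SIZE);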

Fixes: bbe9a23060 ("platform: generic: Pack the FDT after applying fixups")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20251209053130.407935-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-16 20:14:08 +05:30
Benedikt Freisen
afc24152bb include: sbi: Ignore rs1 and rd fields in FENCE.TSO.
While FENCE.TSO is only specified with rs1 and rd set to zero, it is
a special case of FENCE, which must ignore these otherwise reserved
fields. Some implementations, namely the XuanTie C906 and C910,
apparently do not. See the RISCVuzz paper by Thomas et al. for details.
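
A minimal decode sketch: mask out rs1 (bits 19:15) and rd (bits 11:7)
before matching, so non-zero values in those reserved fields are
tolerated (the constants come from the base ISA encoding of FENCE.TSO):

	#include <stdbool.h>
	#include <stdint.h>

	/* FENCE.TSO: fm=1000, pred=RW, succ=RW, funct3=000, opcode=0001111 */
	static bool is_fence_tso(uint32_t insn)
	{
		/* 0xfff0707f keeps fm/pred/succ, funct3 and the opcode,
		 * and ignores the otherwise-reserved rs1 and rd fields. */
		return (insn & 0xfff0707f) == 0x8330000f;
	}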

Signed-off-by: Benedikt Freisen <b.freisen@gmx.net>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251114203842.13396-5-b.freisen@gmx.net
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-15 18:42:00 +05:30
Benedikt Freisen
dffa24b7f5 include: sbi: Fix tab alignment.
A previous editor or formatter script appears to have been confused
by a diff view, where the prepended + or - changes the way tabs are
displayed. Since it is the file itself that matters, adjust the
alignment accordingly.

Signed-off-by: Benedikt Freisen <b.freisen@gmx.net>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251114203842.13396-4-b.freisen@gmx.net
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-15 18:42:00 +05:30
Benedikt Freisen
6a20872c91 Makefile: sensible default value for OPENSBI_CC_XLEN.
If guessing the compiler's XLEN fails, use 64 rather than garbage.
The previous behavior could silently break e.g. OPENSBI_CC_SUPPORT_VECTOR
when cross-compiling with a system's native clang.

Signed-off-by: Benedikt Freisen <b.freisen@gmx.net>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251114203842.13396-3-b.freisen@gmx.net
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-15 18:42:00 +05:30
Benedikt Freisen
d65c1e95a7 include: sbi: Make "s8" actually signed.
Since plain "char" is implicitly unsigned on RISC-V, "s8" should be an alias for "signed char".

Signed-off-by: Benedikt Freisen <b.freisen@gmx.net>
Reviewed-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251114203842.13396-2-b.freisen@gmx.net
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-15 18:42:00 +05:30
Samuel Holland
51fe6a8bc9 lib: utils: Use SBI_DOMAIN_MMIO to check MMIO device permissions
Drivers or platforms may create memory regions with the MMIO flag set
that contain S-mode-accessible MMIO devices. This is strictly correct
and should be allowed, along with the existing default case of
S-mode-accessible MMIO devices appearing in non-MMIO memory regions.
When passed SBI_DOMAIN_MMIO, sbi_domain_check_addr() will perform the
correct set of permission checks.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251121193808.1528050-3-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-08 16:47:55 +05:30
Samuel Holland
1f9677582a lib: sbi_domain: Allow MMIO access to non-MMIO ranges
Currently, platforms do not provide complete memory region information
to OpenSBI. Generally, memory regions are only created for the few MMIO
devices that have M-mode drivers. As a result, most MMIO devices fall
inside the default S-mode RWX memory region, which does _not_ have the
MMIO flag set.

In fact, OpenSBI relies on certain S-mode MMIO devices being inside
non-MMIO memory regions. Both fdt_domain_based_fixup_one() and
mpxy_rpmi_sysmis_xfer() call sbi_domain_check_addr() with the MMIO flag
cleared, and that function currently requires an exact flag match. Those
access checks will thus erroneously fail if the platform creates memory
regions with the correct flags for these devices (or for a larger MMIO
region containing these devices).

We should not ignore the MMIO flag entirely, because
sbi_domain_check_addr() is also used to check the permissions of S-mode
shared memory buffers, and S-mode should not be using MMIO device
addresses as memory buffers. But when checking if S-mode is allowed to
do MMIO accesses, we need to recognize that MMIO devices appear in
memory regions both with and without the MMIO flag set.
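
A sketch of the intended check; the variable names and the exact shape
of the helper in sbi_domain.c are illustrative:

	bool mmio_access = access_flags & SBI_DOMAIN_MMIO;
	bool mmio_region = reg->flags & SBI_DOMAIN_MEMREGION_MMIO;

	/* Shared-memory buffers must not be MMIO device addresses... */
	if (!mmio_access && mmio_region)
		return false;
	/* ...but an MMIO access is acceptable in either kind of region,
	 * since MMIO devices appear with and without the MMIO flag. */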

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251121193808.1528050-2-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-08 16:47:55 +05:30
Junhui Liu
126c9d34d2 platform: generic: spacemit: add missing objects.mk
Add the missing objects.mk for the SpacemiT platform, required for the
K1 platform to be included in the build.

Fixes: 1f84ec2a ("platform: generic: spacemit: add K1")
Signed-off-by: Junhui Liu <junhui.liu@pigmoral.tech>
Acked-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
Link: https://lore.kernel.org/r/20251124-k1-fix-v1-1-8d7e7a29379e@pigmoral.tech
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-08 11:56:37 +05:30
Chen Pei
b8b26fe121 lib: sbi: Enable Ssqosid Ext using mstateen0
The QoS Identifiers extension (Ssqosid) introduces the srmcfg register,
which configures a hart with two identifiers: a Resource Control ID
(RCID) and a Monitoring Counter ID (MCID). These identifiers accompany
each request issued by the hart to shared resource controllers.

If extension Smstateen is implemented together with Ssqosid, then
Ssqosid also requires the SRMCFG bit in mstateen0 to be implemented. If
mstateen0.SRMCFG is 0, attempts to access srmcfg in privilege modes less
privileged than M-mode raise an illegal-instruction exception. If
mstateen0.SRMCFG is 1 or if extension Smstateen is not implemented,
attempts to access srmcfg when V=1 raise a virtual-instruction exception.

This extension can be found in the RISC-V Instruction Set Manual:
https://github.com/riscv/riscv-isa-manual
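
A minimal sketch of the enable path, assuming the SRMCFG bit position
(bit 55 of mstateen0, RV64 shown) from the Ssqosid/Smstateen specs:

	#define SMSTATEEN0_SRMCFG	(1ULL << 55)	/* assumed bit position */

	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMSTATEEN) &&
	    sbi_hart_has_extension(scratch, SBI_HART_EXT_SSQOSID))
		csr_set(CSR_MSTATEEN0, SMSTATEEN0_SRMCFG);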

Changes in v5:
 - Remove SBI_HART_EXT_SSQOSID dependency SBI_HART_PRIV_VER_1_12

Changes in v4:
 - Remove extraneous parentheses around SMSTATEEN0_SRMCFG

Changes in v3:
 - Check SBI_HART_EXT_SSQOSID when swapping SRMCFG

Changes in v2:
 - Remove trap-n-detect
 - Context switch CSR_SRMCFG

Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
Reviewed-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20251114115722.1831-1-cp0613@linux.alibaba.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-08 11:04:06 +05:30
Nick Hu
f71bb323f4 lib: utils/cache: Add SiFive Extensible Cache (EC) driver
Add support for the SiFive Extensible Cache (EC) controller with a
multi-slice architecture. The driver implements cache maintenance
operations through an MMIO register interface.

Co-developed-by: Vincent Chen <vincent.chen@sifive.com>
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Co-developed-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Co-developed-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251114-sifive-cache-drivers-v1-3-8423a721924c@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-08 10:01:20 +05:30
Nick Hu
ec51e91eaa lib: utils/cache: Add SiFive PL2 controller
The SiFive Private L2 (PL2) cache is a private cache owned by each
hart. Add this driver to support private cache flush operations via
the MMIO registers.

Co-developed-by: Eric Lin <eric.lin@sifive.com>
Signed-off-by: Eric Lin <eric.lin@sifive.com>
Co-developed-by: Zong Li <zong.li@sifive.com>
Signed-off-by: Zong Li <zong.li@sifive.com>
Co-developed-by: Vincent Chen <vincent.chen@sifive.com>
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Co-developed-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251114-sifive-cache-drivers-v1-2-8423a721924c@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-08 10:01:05 +05:30
Nick Hu
35aece218a lib: utils/cache: Handle last-level cache correctly in fdt_cache_add()
The fdt_cache_add() helper attempts to retrieve the next-level cache and
returns SBI_ENOENT when there is none. Since this condition only indicates
that the current cache is the last-level cache, the helper should not
treat it as an error.

Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251114-sifive-cache-drivers-v1-1-8423a721924c@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-08 09:51:53 +05:30
Vladimir Kondratiev
de376252f4 lib: sbi: Remove static variable root_memregs_count
Calculate the number of used memory regions using a helper function
when needed.

Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251111104327.1170919-3-vladimir.kondratiev@mobileye.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-02 10:52:52 +05:30
Vladimir Kondratiev
4997eb28da lib: sbi: fix covered regions handling in sanitize_domain()
In sanitize_domain(), the code that checks for the case where one
memory region is covered by another was never executed. Quote:

	/* Sort the memory regions */
	for (i = 0; i < (count - 1); i++) {
<snip>
	}

	/* Remove covered regions */
	while(i < (count - 1)) {

Here "while" loop never executed because condition "i < (count - 1)"
is always false after the "for" loop just above.

In addition, when clearing a region, "root_memregs_count" should be
adjusted as well; otherwise the code that adds a memory region in
root_add_memregion() will use the wrong position:

	/* Append the memregion to root memregions */
	nreg = &root.regions[root_memregs_count];

An empty entry would be created in the middle of the regions array;
new regions would then be added after this empty entry, while the
sanitizing code stops when it reaches an empty entry.
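
A sketch of the corrected flow (is_region_subset() and reg[] are
illustrative names):

	/* Rescan from the start: the sort loop above leaves i at (count - 1). */
	i = 0;
	while (i < (count - 1)) {
		if (is_region_subset(&reg[i], &reg[i + 1])) {
			memmove(&reg[i], &reg[i + 1],
				(count - i - 1) * sizeof(*reg));
			count--;	/* keep the region count in sync */
		} else {
			i++;
		}
	}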

Fixes: 3b03cdd60c ("lib: sbi: Add regions merging when sanitizing domain region")
Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251111104327.1170919-2-vladimir.kondratiev@mobileye.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-02 10:52:51 +05:30
Vladimir Kondratiev
825d0e918a Makefile: define C language standard to "gnu11"
The C language standard was not specified, implying a default that
depends on the compiler version. Force "gnu11", the same as the
Linux kernel.

Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251113081648.2708990-1-vladimir.kondratiev@mobileye.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-01 11:52:55 +05:30
Rahul Pathak
d28e2fa9cc Makefile: Only enable --print-gc-section for verbose (V=1) build
Earlier, this option was enabled for debug builds, where it only
prints linker logs about the removal of unused sections. Instead,
enable it for V=1 builds and keep the debug build clean.

Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251110164352.163801-1-rpathak@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-01 11:27:50 +05:30
Shifrin Dmitry
c9f856e23f lib: sbi_pmu: Fix xINH bits configuring
Before this patch, sbi_pmu_ctr_start() ignored the flags received in
sbi_pmu_ctr_cfg_match(), including the inhibit ones. To fix this,
save the flags together with event_data and use them both in
sbi_pmu_ctr_start().
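
A sketch of applying the saved flags when the counter is started,
assuming the Sscofpmf inhibit-bit layout in mhpmeventX:

	/* Flags saved from sbi_pmu_ctr_cfg_match(), replayed here */
	if (flags & SBI_PMU_CFG_FLAG_SET_UINH)
		mhpmevent_val |= MHPMEVENT_UINH;
	if (flags & SBI_PMU_CFG_FLAG_SET_SINH)
		mhpmevent_val |= MHPMEVENT_SINH;
	if (flags & SBI_PMU_CFG_FLAG_SET_MINH)
		mhpmevent_val |= MHPMEVENT_MINH;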

Fixes: 1db95da299 ("lib: sbi: sbi_pmu: fixed hw counters start for hart")
Signed-off-by: Shifrin Dmitry <dmitry.shifrin@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251110113140.80561-1-dmitry.shifrin@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-01 11:06:36 +05:30
Manuel Hernández Méndez
da05980de6 platform: openpiton: use generic early init
Add code using generic_early_init() so that the UART parameters are
parsed from the DTB.

Signed-off-by: Manuel Hernández Méndez <manuel.hernandez@openchip.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251107075429.1382-1-manuel.hernandez@openchip.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-01 10:41:11 +05:30
Manuel Hernández Méndez
c75f468ad5 platform: ariane: parse dtb for getting some initial parameters
Add code for getting some UART, CLINT, and PLIC parameters from the
device tree.

Signed-off-by: Manuel Hernández Méndez <manuel.hernandez@openchip.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251107075412.1350-1-manuel.hernandez@openchip.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-12-01 10:33:31 +05:30
Heinrich Schuchardt
fade4399d2 lib: utils/irqchip: plic: context_id is signed
Array context_id in struct plic_data has elements of type s16.
A negative value indicates an invalid entry. Copying the array
element to a u32 scalar hides the sign.

Use s16 as the target type when copying an array element to a scalar.
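
A minimal sketch of the fix:

	s16 context_id = plic->context_id[i];	/* not u32: keep the sign */

	if (context_id < 0)
		continue;	/* negative marks an invalid entry */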

Addresses-Coverity-ID: 1667176 Unsigned compared against 0
Addresses-Coverity-ID: 1667178 Logically dead code
Addresses-Coverity-ID: 1667179 Unsigned compared against 0
Addresses-Coverity-ID: 1667182 Logically dead code
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251105110121.47130-1-heinrich.schuchardt@canonical.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-10 13:44:17 +05:30
Heinrich Schuchardt
976a6a8612 lib: utils/serial: typo Recieve
%s/Recieve/Receive/

Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251105011648.9413-1-heinrich.schuchardt@canonical.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-10 13:43:27 +05:30
Benoît Monin
2e9dc3b430 lib: utils/timer: mtimer: add MIPS P8700 compatible
The MTIMER of the MIPS P8700 is compliant with the ACLINT specification,
so add a compatible string for it.

Signed-off-by: Benoît Monin <benoit.monin@bootlin.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251107-p8700-aclint-v3-2-93eabb17d54e@bootlin.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-10 13:41:53 +05:30
Benoît Monin
5de1d3240f lib: utils/timer: mtimer: Select the reference mtimer from a DT property
The current selection of the reference MTIMER may fail in some
setups. In a multi-cluster configuration, there is one MTIMER per
cluster, each associated with the harts of that cluster, so there is
no MTIMER with no associated harts to use as our reference.

To be able to select a reference MTIMER in that case, look up an
optional device tree property named "riscv,reference-mtimer" that
indicates which MTIMER is the reference.
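
A sketch of the lookup, assuming the property holds a phandle to the
reference MTIMER node (variable names are illustrative):

	int len, ref_off = -1;
	const fdt32_t *val;

	val = fdt_getprop(fdt, mtimer_off, "riscv,reference-mtimer", &len);
	if (val && len >= sizeof(*val))
		ref_off = fdt_node_offset_by_phandle(fdt, fdt32_to_cpu(*val));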

Signed-off-by: Benoît Monin <benoit.monin@bootlin.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251107-p8700-aclint-v3-1-93eabb17d54e@bootlin.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-10 13:41:53 +05:30
Benoît Monin
38a6106b10 lib: utils/ipi: mswi: add MIPS P8700 compatible
The MSWI present in the MIPS P8700 is compliant with the ACLINT
specification, so add a dedicated compatible string for it.

Signed-off-by: Benoît Monin <benoit.monin@bootlin.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251027-p8700-aclint-v2-1-f10cbfb66e92@bootlin.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-05 21:37:37 +05:30
Manuel Hernández Méndez
e8dfa55f3d platform: ariane: Move ariane platform from fpga to generic
The Ariane framework has a generic PMU that is not used by OpenSBI.
Due to OpenSBI’s build system we cannot directly reuse the generic
platform functions, so move the Ariane platform to generic. Also,
the generic platform is where new features are added.

Signed-off-by: Manuel Hernández Méndez <maherme.dev@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251023090347.30746-1-maherme.dev@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-05 21:22:41 +05:30
Joshua Yeong
834d0d9f26 lib: utils: Add MPXY RPMI mailbox driver for performance
Add MPXY RPMI mailbox driver for performance.

Signed-off-by: Joshua Yeong <joshua.yeong@starfivetech.com>
Reviewed-by: Rahul Pathak <rpathak@ventanamicro.com>
Link: https://lore.kernel.org/r/20251013153138.1574512-4-joshua.yeong@starfivetech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-04 10:35:59 +05:30
Joshua Yeong
a28e51016e lib: utils: Add MPXY RPMI mailbox driver for device power
Add MPXY RPMI mailbox driver for device power.

Signed-off-by: Joshua Yeong <joshua.yeong@starfivetech.com>
Reviewed-by: Rahul Pathak <rpathak@ventanamicro.com>
Link: https://lore.kernel.org/r/20251013153138.1574512-3-joshua.yeong@starfivetech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-04 10:34:01 +05:30
Joshua Yeong
fa911ebe72 lib: utils: Add MPXY RPMI mailbox driver for voltage
Add the voltage service group for RPMI/MPXY support.

Signed-off-by: Joshua Yeong <joshua.yeong@starfivetech.com>
Reviewed-by: Rahul Pathak <rpathak@ventanamicro.com>
Link: https://lore.kernel.org/r/20251013153138.1574512-2-joshua.yeong@starfivetech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-04 10:32:57 +05:30
Yu-Chien Peter Lin
0250db4dad lib: sbi_domain_context: preserve firmware PMP entries during domain context switch
When SmePMP is enabled, clearing firmware PMP entries during a domain
context switch can temporarily revoke access to OpenSBI’s own code and
data, leading to faults.

Keep firmware PMP entries enabled across switches so firmware regions
remain accessible and executable.

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-9-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 17:00:49 +05:30
Yu-Chien Peter Lin
b210376fe2 lib: sbi: sbi_hart: track firmware PMP entries for SmePMP
Add a fw_smepmp_ids bitmap to track PMP entries that protect firmware
regions. This allows us to preserve these critical entries across
domain transitions and to check for inconsistent firmware entry
allocation.

Also add sbi_hart_smepmp_is_fw_region() helper function to query
whether a given SmePMP entry protects firmware regions.

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-8-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 16:49:47 +05:30
Yu-Chien Peter Lin
631efeeb49 lib: sbi_domain: ensure consistent firmware PMP entries
During domain context switches, all PMP entries are reconfigured
which can clear firmware access permissions, causing M-mode access
faults under SmePMP.

Sort domain regions to place firmware regions first, ensuring
consistent firmware PMP entries so they won't be revoked during
domain context switches.

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-7-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 16:35:19 +05:30
Yu-Chien Peter Lin
b34caeef81 lib: sbi_domain: add SBI_DOMAIN_MEMREGION_FW memregion flag
Add a new memregion flag, SBI_DOMAIN_MEMREGION_FW, and use it to
mark the OpenSBI code and data regions.

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-6-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 16:26:19 +05:30
Yu-Chien Peter Lin
34657b377f lib: sbi_hart: return error when insufficient PMP entries available
Previously, when memory regions exceeded the available PMP entries,
some regions were silently ignored. If the last entry, which covers
the full 64-bit address space, is not added to a domain, the
next-stage S-mode software won't have permission to access and fetch
instructions from its memory. So return early with an error message
to catch this situation.
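
A sketch of the early return (the exact error code and message are
illustrative):

	if (pmp_idx >= pmp_count) {
		sbi_printf("%s: out of PMP entries for domain regions\n",
			   __func__);
		return SBI_ENOSPC;
	}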

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-5-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 16:21:07 +05:30
Yu-Chien Peter Lin
90c3b94094 lib: sbi_domain: print unsupported SmePMP permissions
The reg->flag is encoded with 6 bits to specify RWX
permissions for M-mode and S-/U-mode. However, only
16 of the possible encodings are valid on SmePMP.

Add a warning message when an unsupported permission
encoding is detected.

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-4-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 16:06:51 +05:30
Yu-Chien Peter Lin
667eed2266 lib: sbi_domain: allow specifying inaccessible region
According to the RISC‑V Privileged Specification, SmePMP
regions that grant no access in any privilege mode are
valid. Allow such regions to be specified.

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-3-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 16:03:35 +05:30
Yu-Chien Peter Lin
32c1d38dcf lib: sbi_hart: move sbi_hart_get_smepmp_flags() to sbi_domain
Move sbi_hart_get_smepmp_flags() from sbi_hart.c to sbi_domain.c and
rename it to sbi_domain_get_smepmp_flags() to better reflect its
purpose of converting domain memory region flags to PMP configuration.

Also remove the unused parameters (scratch and dom).

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251008084444.3525615-2-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-11-02 15:55:57 +05:30
Nick Hu
37b72cb575 lib: utils/suspend: Add SiFive SMC0 driver
The SiFive SMC0 controls the clock and power domains of the core
complex on SiFive platforms. The core complex enters the low power
state after the secondary cores enter tile power gating and the last
core executes the `CEASE` instruction with the corresponding SMC0
configuration. The devices inside both the tile power domain and the
core complex power domain will be powered off, including caches and
the timer. Therefore we need to flush the last-level cache before
entering core complex power gating and update the timer after waking
up.

Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-12-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:28:10 +05:30
Nick Hu
ab23d8a392 lib: sbi: Add system_resume callback for restoring the system
The last core that performs the system suspend is responsible for
restoring the system after waking up. Add the system_resume callback
for restoring the system from suspend.

Suggested-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-11-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:28:09 +05:30
Nick Hu
8f8c393155 lib: utils/timer: Expose timer update function
Expose the ACLINT timer update APIs so the user can update the
mtimer after waking up from non-retentive suspend.

Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-10-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:28:06 +05:30
Nick Hu
1514a32730 lib: utils/hsm: Add SiFive TMC0 driver
The SiFive TMC0 controls the tile power domains on SiFive platforms.
The CPU enters the low power state via the `CEASE` instruction after
configuring the TMC0. Any devices inside the tile power domain will
be power gated, including the private cache. Therefore, flush the
private cache before entering the low power state.

Co-developed-by: Vincent Chen <vincent.chen@sifive.com>
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-9-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:28:03 +05:30
Nick Hu
94f0f84656 lib: sbi: Extends sbi_ipi_raw_send() to use all available IPI devices
A platform may contain multiple IPI devices. In certain use cases,
such as power management, it may be necessary to send an IPI through a
specific device to wake up a CPU. For example, if an IMSIC is powered
down and reset, the core cannot receive IPIs from it, so the wake-up must
instead be triggered through the CLINT.

Suggested-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-8-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:28:01 +05:30
Nick Hu
c2d2b9140a lib: utils/irqchip: Add APLIC restore function
Since the APLIC may enter a reset state upon system wake-up from a
platform low power state, add a restore function to reinitialize
the APLIC.

Reviewed-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-7-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:27:59 +05:30
Nick Hu
64904e5d5c lib: sbi: Add SiFive proprietary xsfcease
Using ISA string "xsfcease" to detect the support of the custom
instruction "CEASE".

Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-6-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:27:57 +05:30
Nick Hu
8752c809b3 lib: sbi: Add SiFive proprietary xsfcflushdlone
Using ISA string "xsfcflushdlone" to detect the support of the
SiFive L1D cache flush custom instruction.

Reviewed-by: Cyan Yang <cyan.yang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-5-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:27:55 +05:30
Nick Hu
ce4dc7649e lib: utils/cache: Add fdt cmo helpers
Add helpers to build up the cache hierarchy via the FDT and provide
some CMO functions for users who want to flush the entire cache.

Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-4-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:27:53 +05:30
Vincent Chen
8ea972838c utils: cache: Add SiFive ccache controller
The SiFive Composable Cache is an L3 shared cache of the core
complex. Add this driver to support shared cache maintenance
operations via the MMIO registers.

Co-developed-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Co-developed-by: Nick Hu <nick.hu@sifive.com>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-3-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:27:51 +05:30
Nick Hu
d6b684ec86 lib: utils: Add FDT cache library
Add the FDT cache library so we can build up the cache topology via the
'next-level-cache' DT property.

Co-developed-by: Vincent Chen <vincent.chen@sifive.com>
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Co-developed-by: Andy Chiu <andy.chiu@sifive.com>
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-2-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 11:27:10 +05:30
Nick Hu
1207c7568f lib: utils: Add cache flush library
The current RISC-V CMO only defines how to flush a cache block. However,
certain use cases, such as power management, may require flushing the
entire cache. Therefore, a framework is being introduced to allow vendors
to flush the entire cache using their own methods.

Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20251020-cache-upstream-v7-1-69a132447d8a@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-28 10:39:59 +05:30
Alexander Chuprunov
ac16c6b604 lib: sbi: sbi_pmu: added checks for ctr_idx in match
Previously, in the sbi_pmu_ctr_cfg_match() function, ctr_idx was used
immediately after the pmu_ctr_find_fw() or pmu_ctr_find_hw() calls.
In the first case, the array index was (ctr_idx - num_hw_ctrs); in
the second, ctr_idx. But pmu_ctr_find_fw() and pmu_ctr_find_hw() can
return a negative value, in which case writing to arrays with such
indexes would corrupt the sbi_pmu_hart_state structure. To avoid this
situation, add a direct check of the ctr_idx value.

Signed-off-by: Alexander Chuprunov <alexander.chuprunov@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250918090706.2217603-4-alexander.chuprunov@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-27 16:59:42 +05:30
Alexander Chuprunov
63aacbd782 lib: sbi: sbi_pmu: fixed alignment
Delete spaces before the brace in pmu_ctr_start_fw() for correct alignment.

Signed-off-by: Alexander Chuprunov <alexander.chuprunov@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250918090706.2217603-3-alexander.chuprunov@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-27 16:56:20 +05:30
Alexander Chuprunov
1db95da299 lib: sbi: sbi_pmu: fixed hw counters start for hart
Generally, hardware performance counters can only be started, stopped,
or configured from machine mode using the mcountinhibit and mhpmeventX
CSRs. Also, in OpenSBI only sbi_pmu_ctr_cfg_match() managed mhpmeventX.
But with the generic Linux driver, when perf starts, Linux calls both
sbi_pmu_ctr_cfg_match() and sbi_pmu_ctr_start(), while after hart
suspend only sbi_pmu_ctr_start() is called through the SBI interface.
This doesn't work properly when the suspend state resets the HPM
registers. In order to keep counter integrity, modify
sbi_pmu_ctr_start(): first save hw_counters_data, and after hart
suspend restore this value if the event is currently active.

Signed-off-by: Alexander Chuprunov <alexander.chuprunov@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250918090706.2217603-2-alexander.chuprunov@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-27 16:56:16 +05:30
Anup Patel
55296fd27c lib: Allow custom CSRs in csr_read_num() and csr_write_num()
Some of the platforms use platform specific CSR access functions for
configuring implementation specific CSRs (such as PMA registers).

Extend the common csr_read_num() and csr_write_num() to allow custom
CSRs so that platform specific CSR access functions are not needed.
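
A sketch of the switch-based dispatch being extended; the default arm
handing off to a platform hook is illustrative, not the actual API:

	unsigned long csr_read_num(int csr_num)
	{
		switch (csr_num) {
		case CSR_MHPMCOUNTER3:
			return csr_read(CSR_MHPMCOUNTER3);
		/* ...one case per known CSR number... */
		default:
			/* illustrative: let the platform handle custom CSRs */
			return platform_custom_csr_read(csr_num);
		}
	}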

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20250930153216.89853-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-21 19:34:09 +05:30
Yong-Xuan Wang
3990c8ee07 lib: utils/timer: mtimer: Add SiFive CLINT v2 support
The SiFive CLINT v2 is the HRT that supports the Zicntr extension. It
is incompatible with the SiFive CLINT v0 due to differences in their
control methods.

Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Co-developed-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250917105224.78291-1-yongxuan.wang@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-21 19:32:08 +05:30
Xianbin Zhu
ca380bcb10 platform: generic: Add SpacemiT K1 platform support
Enable CONFIG_PLATFORM_SPACEMIT_K1 in the defconfig for SpacemiT K1 SoC.

Co-authored-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
Signed-off-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
Signed-off-by: Xianbin Zhu <xianbin.zhu@linux.spacemit.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250925-smt-k1-8-cores-v3-3-0885a8a70f8e@linux.spacemit.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-20 10:29:51 +05:30
Xianbin Zhu
fb70fe8b98 platform: spacemit: Add HSM driver
Add code to bring up all 8 cores during OpenSBI initialization so
that the Linux kernel can detect and use all cores properly.

Co-authored-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
Signed-off-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
Signed-off-by: Xianbin Zhu <xianbin.zhu@linux.spacemit.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250925-smt-k1-8-cores-v3-2-0885a8a70f8e@linux.spacemit.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-20 10:29:50 +05:30
Xianbin Zhu
1f84ec2ac2 platform: generic: spacemit: add K1
Add initial platform support for the SpacemiT K1 SoC, including
early/final init hooks, cold boot handling, and CCI-550 snoop/DVM
enablement.

Co-authored-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
Signed-off-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
Signed-off-by: Xianbin Zhu <xianbin.zhu@linux.spacemit.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/all/15169E392597D319+aOcKujCl8mz4XK4L@kernel.org/ [1]
Link: https://lore.kernel.org/r/20250925-smt-k1-8-cores-v3-1-0885a8a70f8e@linux.spacemit.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-20 10:29:44 +05:30
Xiang W
e3eb59a396 lib: sbi: Prevent target domain same as the current
Add error handling code to sbi_domain_context_enter to prevent the
target domain from being the same as the current domain.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250903044619.394019-4-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-06 14:31:52 +05:30
Xiang W
38c31ffb8f lib: sbi: Add hart context init when first call enter
When entering sbi_domain_context_enter for the first time, the hart
context may not be initialized. Add initialization code.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250903044619.394019-3-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-06 14:31:48 +05:30
Xiang W
f7d060c26a lib: sbi: Add error handling to switch_to_next_domain_context
Add error handling to switch_to_next_domain_context() to ensure
legal input. When switching contexts, ensure that the target being
switched to is different from the current one.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250903044619.394019-2-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-06 14:28:38 +05:30
Yu-Chien Peter Lin
5de8c1d499 lib: serial: sifive-uart: add shared memory region for SiFive UART
Add a shared memory region so the driver has permission to access it
in OpenSBI.

Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250814111012.20151-1-peter.lin@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-10-06 11:13:20 +05:30
Hal Feng
040f3100a9 platform: starfive: jh7110: Add starfive,jh7110s compatible
Add support for VisionFive 2 Lite board.

Link: b7e46979a4
Signed-off-by: Hal Feng <hal.feng@starfivetech.com>
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250906053638.69671-1-heinrich.schuchardt@canonical.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-30 20:45:20 +05:30
Ben Zong-You Xie
8408845cc9 platform: generic: Add Andes QiLai SoC support
Extend generic platform to support Andes QiLai SoC.

Signed-off-by: Ben Zong-You Xie <ben717@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250814104024.3374698-1-ben717@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-30 17:48:11 +05:30
Yang Jialong
944db4eced lib: utils/irqchip: fix aplic lock mechanism in xmsiaddrcfg(h)
The section 4.5.4 "Supervisor MSI address configuration (smsiaddrcfg
and smsiaddrcfgh)" of the AIA specification states that:

"If register mmsiaddrcfgh of the domain has bit L set to one, then
smsiaddrcfg and smsiaddrcfgh are locked as read-only alongside
mmsiaddrcfg and mmsiaddrcfgh."

In other words, the L bit is not defined for smsiaddrcfg[h] registers
so fix aplic_writel_msicfg() accordingly.

Signed-off-by: Yang Jialong <z_bajeer@yeah.net>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250806032924.3532975-1-z_bajeer@yeah.net
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-30 17:24:13 +05:30
Samuel Holland
d9afef57b7 lib: sbi_hsm: Use 64-bit CSR macro for menvcfg
Simplify the code and remove preprocessor checks by treating menvcfg and
menvcfgh together as one 64-bit value.
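
Roughly what such a 64-bit CSR macro hides on RV32, where menvcfg is
split across two CSRs (a hedged sketch):

	#if __riscv_xlen == 32
		menvcfg = csr_read(CSR_MENVCFG) |
			  ((uint64_t)csr_read(CSR_MENVCFGH) << 32);
	#else
		menvcfg = csr_read(CSR_MENVCFG);
	#endif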

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250908055646.2391370-3-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-16 10:03:53 +05:30
Samuel Holland
f04ae48263 lib: sbi_hart: Do not call delegate_traps() in the resume flow
The only purpose of this function is to program the initial values of
mideleg and medeleg. However, both of these CSRs are now saved/restored
across non-retentive suspend, so the values from this function are
always overwritten by the restored values.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250908055646.2391370-2-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-16 10:03:53 +05:30
Samuel Holland
55135abcd5 lib: sbi_hsm: Save mideleg across non-retentive suspend
OpenSBI updates mideleg when registering or unregistering the PMU SSE
event. The updated CSR value must be saved across non-retentive suspend,
or PMU SSE events will not be delivered after the hart is resumed.

Fixes: b31a0a2427 ("lib: sbi: pmu: Add SSE register/unregister() callbacks")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250908055646.2391370-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-16 10:03:53 +05:30
Anup Patel
cb70dffa0a lib: utils/ipi: Convert IPI drivers as early drivers
fdt_ipi_init() is already called from generic_early_init(), so let's
convert the IPI drivers into early drivers.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20250904052410.546818-4-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-16 09:56:31 +05:30
Anup Patel
85f22b38c8 include: sbi: Remove platform specific IPI init
The platform specific IPI init is not needed anymore because, with
IPI device rating, multiple IPI devices can be registered in any
order as part of the platform specific early init.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20250904052410.546818-3-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-16 09:56:31 +05:30
Anup Patel
ee92afa638 lib: sbi: Introduce IPI device rating
A platform can have multiple IPI devices (such as ACLINT MSWI,
AIA IMSIC, etc). Currently, OpenSBI relies on the platform calling
the sbi_ipi_set_device() function in the correct order and prefers
the first available IPI device, which is fragile.

Instead, introduce IPI device rating and prefer the highest rated
IPI device. This further allows extending sbi_ipi_raw_clear() to
clear all available IPI devices.
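
A sketch of the idea; the rating field and the selection logic below
are illustrative rather than the exact structure layout:

	struct sbi_ipi_device {
		char name[32];
		int rating;			/* higher value preferred */
		void (*ipi_send)(u32 hart_index);
		void (*ipi_clear)(u32 hart_index);
	};

	/* On registration, keep the highest rated device as the default. */
	if (!best_dev || dev->rating > best_dev->rating)
		best_dev = dev;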

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Tested-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20250904052410.546818-2-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-16 09:56:31 +05:30
Rahul Pathak
17b8d1900d lib: utils/reset: Hang the hart after RPMI system reset message
RPMI system reset is a posted message, which does not wait for an
acknowledgement after sending the RPMI message to the PuC. Call
sbi_hart_hang() to hang the hart after performing the system reset
via the RPMI message.

Fixes: 6a26726e08 ("lib/utils: reset: Add RPMI System Reset driver")
Reported-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Link: https://lore.kernel.org/r/20250903144323.251270-1-rpathak@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-16 09:53:04 +05:30
Samuel Holland
153cdeea53 lib: sbi_heap: Simplify allocation algorithm
Now that the allocator cannot run out of nodes in the middle of an
allocation, the code can be simplified greatly. First it moves bytes
from the beginning and/or end of the node to new nodes in the free
list as necessary. These new nodes are inserted into the free list
in address order. Then it moves the original node to the used list.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250617032306.1494528-4-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-01 10:39:11 +05:30
Samuel Holland
8dcd1448e7 lib: sbi_heap: Allocate list nodes dynamically
Currently the heap has a fixed housekeeping factor of 16, which means
1/16 of the heap is reserved for list nodes. But this is not enough when
there are many small allocations; in the worst case, 1/3 of the heap is
needed for list nodes (32 byte heap_node for each 64 byte allocation).
This has caused allocation failures on some platforms.

Let's avoid trying to guess the best ratio. Instead, allocate more nodes
as needed. To avoid recursion, the nodes are permanent allocations. So
to minimize fragmentation, allocate them in small batches from the end
of the last free space node. Bootstrap the free space list by embedding
one node in the heap control struct.

Some error paths are avoided because the nodes are allocated up front.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250617032306.1494528-3-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-01 10:39:11 +05:30
Samuel Holland
64a38525e6 lib: sbi_list: Add a helper for reverse list iteration
Some use cases require iterating through a list in both directions.
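
A sketch of such a helper in the Linux-style list idiom that
sbi_list.h follows:

	#define sbi_list_for_each_entry_reverse(pos, head, member)	\
		for (pos = sbi_list_entry((head)->prev,			\
					  typeof(*pos), member);	\
		     &pos->member != (head);				\
		     pos = sbi_list_entry(pos->member.prev,		\
					  typeof(*pos), member))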

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250617032306.1494528-2-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-09-01 10:39:11 +05:30
Chao-ying Fu
1ffbd063c4 generic: mips: support harts to boot from mips_warm_boot
We program the reset base for harts (other than hart 0) to boot at
mips_warm_boot, which jumps to _start_warm. This helps skip some of
the boot code sequence and speeds things up.

Signed-off-by: Chao-ying Fu <cfu@mips.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250723204010.9927-1-cfu@mips.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-08-28 11:00:16 +05:30
Jesse Taube
6a1f53bc2d dbtr: Fix sbi_dbtr_read_trig to read from CSRs
sbi_dbtr_read_trig() returned the saved state of tdata{1-3}, when it
should have returned the updated state read from the CSRs.

Update sbi_dbtr_read_trig() to return the updated state read from
the CSRs.

Signed-off-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Jesse Taube <jesse@rivosinc.com>
Link: https://lore.kernel.org/r/20250811152947.851208-1-jesse@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-08-28 10:50:14 +05:30
Jesse Taube
4b687e3669 dbtr: Add support for icount trigger type
The Linux kernel needs icount to implement hardware breakpoints.

Signed-off-by: Jesse Taube <jesse@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250724183120.1822667-1-jesse@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-08-28 10:36:52 +05:30
Xiang W
6068efc7f5 Fix license to compatible BSD-2-Clause
OpenSBI is a BSD project. We need to modify some code to be
compatible with the BSD-2-Clause license.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Ben Zong-You Xie <ben717@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250728074334.372355-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-08-28 10:32:46 +05:30
Samuel Holland
bbe9a23060 platform: generic: Pack the FDT after applying fixups
This minimizes the size that will be reserved by the OS for the FDT, and
it prevents the FDT buffer from containing uninitialized memory, which
can be important for some simulation platforms and for attestation.

Closes: https://github.com/riscv-software-src/opensbi/issues/388
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250722233923.1356605-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-08-28 10:26:25 +05:30
Manuel Hernández Méndez
525ac970b3 platform: openpiton: Move openpiton platform from fpga to generic
The OpenPiton framework has a generic PMU that is not used by
OpenSBI. Due to OpenSBI’s build system we cannot directly reuse the
generic platform functions, so move the OpenPiton platform to
generic. Also, the generic platform is where new features are added.

Signed-off-by: Manuel Hernández Méndez <maherme.dev@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250813104759.33276-1-maherme.dev@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-08-26 17:08:00 +05:30
Manuel Hernández Méndez
3204d74486 lib: sbi: pmu: Improve loop in pmu_ctr_find_hw
We do not need to iterate over all values in the loop; we can break
out of the loop when we find a valid counter that is not started
yet.
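
The shape of the change, sketched with illustrative predicate names:

	for (i = 0; i < num_hw_ctrs; i++) {
		if (ctr_is_valid(i) && !ctr_is_started(i)) {
			ctr_idx = i;
			break;	/* the first free counter is good enough */
		}
	}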

Signed-off-by: Manuel Hernández Méndez <maherme.dev@gmail.com>
Link: https://lore.kernel.org/r/20250721160712.8766-1-maherme.dev@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-08-26 16:37:47 +05:30
Max Hsu
84044ee83c lib: utils: fdt: fix "ranges" translation
According to the Device Tree Spec, Chapter 2.3.8 "ranges" [1]:
The parent address size will be determined from the #address-cells
property of the node that defines the parent’s address space.

fdt_translate_address() assumed the parent address size was the same
as the child address size; this commit fixes the two address sizes
and parses each address independently.

Signed-off-by: Max Hsu <max.hsu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250711-dev-maxh-master_fdt_helper-v2-1-9579e1f02ee1@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-23 10:32:14 +05:30
Jessica Clarke
cc546e1a06 include: sbi: Remove unused (LOG_)REGBYTES
These are no longer used, so remove them.

Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250709232932.37622-3-jrtc27@jrtc27.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-22 15:54:27 +05:30
Jessica Clarke
079bf6f0f9 firmware: Replace sole uses of REGBYTES with __SIZEOF_LONG__
This code has nothing to do with the ISA's registers, it's about the
format of ELF relocations. As such, __SIZEOF_LONG__, being a language /
ABI-level property, is a more appropriate constant to use. This also
makes it easier to support CHERI, where general-purpose registers are
extended to be capabilities, not just integers, and so the register size
is not the same as the machine word size. This also happens to make it
more correct for RV64ILP32, where the registers are 64-bit integers but
the ABI is 32-bit (both for long and for the ELF format), though
properly supporting that ABI is not part of the motivation here, just a
consequence of improving the code for CHERI.

Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250709232932.37622-2-jrtc27@jrtc27.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-22 15:54:27 +05:30
Jessica Clarke
ffd3ed976d include: sbi: Use array for struct sbi_trap_regs and GET/SET macros
Rather than hand-rolling scaled pointer arithmetic with casts and
shifts, let the compiler do so by indexing an array of GPRs, taking
advantage of the language's type system to scale based on whatever type
the register happens to be. This makes it easier to support CHERI where
the registers are capabilities, not plain integers, and so this pointer
arithmetic would need to change (and currently REGBYTES is both the size
of a register and the size of an integer word upstream).

Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250709232932.37622-1-jrtc27@jrtc27.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-22 15:54:27 +05:30
Manuel Hernández Méndez
0b7c2e0d60 platform: openpiton: fix uninitialized plic_data struct
The plic_data struct was uninitialized. This led to malfunctioning
behavior, since it was subsequently assigned to the global plic
struct, and some struct fields, such as flags and irqchip, contained
random values. The fix initializes plic_data from the global plic
struct, so after parsing the FDT the fields of the struct are set to
the default values from the global plic struct definition, the
values parsed from the FDT, or zero.

Fixes: 4c37451 ("platform: openpiton: Read the device configurations from device tree")
Signed-off-by: Manuel Hernández Méndez <maherme.dev@gmail.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250708180914.1131-1-maherme.dev@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-22 15:26:44 +05:30
Jessica Clarke
e10a45752f firmware: Rename __rel_dyn_start/end to __rela_dyn_start/end
We are using and expecting the RELA format, not the REL format, and this
is the conventional linker-generated name for the start/end symbols, so
use it rather than confusing things by making it look like we're
accessing .rel.dyn, which would be in the REL format with no explicit
addend.
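
For reference, the RELA format carries an explicit addend field that
REL lacks (standard ELF definitions):

  typedef struct {
      Elf64_Addr   r_offset;  /* where to apply the relocation */
      Elf64_Xword  r_info;    /* symbol index and relocation type */
      Elf64_Sxword r_addend;  /* explicit addend: the "A" in RELA */
  } Elf64_Rela;

  /* Elf64_Rel is the same minus r_addend; the addend is instead
   * read from the location being relocated. */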

Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250710002937.44307-1-jrtc27@jrtc27.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-21 16:39:49 +05:30
Jessica Clarke
4825a3f87f include: sbi: Don't use #pragma when preprocessing device tree sources
Since this persists in the preprocessed output (so that it can affect
the subsequent compilation), it ends up in the input to dtc and is a
syntax error, breaking the k210 build. Ideally we wouldn't add the
-include flag to DTSCPPFLAGS in the first place as this header is wholly
pointless there, but that's a more invasive build system change compared
to just making this header safe to include there.

Fixes: 86c01a73ff ("lib: sbi: Avoid GOT indirection for global symbol references")
Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Tested-by: Xiang W <wxjstz@126.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Link: https://lore.kernel.org/r/20250709232840.37551-1-jrtc27@jrtc27.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-21 16:37:22 +05:30
Xiang W
3876f8cd1e firmware: payload: test: Add SBI shutdown call after test message
Previously, 'make run' would hang in WFI after printing the test message.
This commit adds an SBI ecall to ensure QEMU exits cleanly after the test
payload runs.
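
A minimal sketch of such a shutdown call via the SBI System Reset
(SRST) extension; the function name is illustrative:

  static void sbi_shutdown(void)
  {
      register unsigned long a0 asm("a0") = 0;          /* reset_type: shutdown */
      register unsigned long a1 asm("a1") = 0;          /* reset_reason: none */
      register unsigned long a6 asm("a6") = 0;          /* FID: sbi_system_reset */
      register unsigned long a7 asm("a7") = 0x53525354; /* EID: "SRST" */

      asm volatile("ecall"
                   : "+r"(a0), "+r"(a1)
                   : "r"(a6), "r"(a7)
                   : "memory");
  }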

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Xiang W <wxjstz@126.com>
Link: https://lore.kernel.org/r/20250721010807.460788-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-21 16:34:03 +05:30
Atish Patra
5b305e30a5 lib: sbi: Only enable TM bit in scounteren
S-mode should disable the cycle and instruction counters for user space
to avoid side-channel attacks. The Linux kernel already does this so that
arbitrary user-space code cannot monitor cycle/instruction counts without
higher-privilege-mode involvement.

Remove the CY/IR bits from scounteren in OpenSBI.
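
A hedged sketch of the resulting initialization, assuming OpenSBI's
csr_write() helper (bit positions per the privileged spec: CY=0, TM=1,
IR=2):

  /* Expose only the time CSR (TM, bit 1) to lower privilege levels;
   * cycle (CY) and instret (IR) stay disabled until S-mode opts in. */
  csr_write(CSR_SCOUNTEREN, 1UL << 1);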

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Link: https://lore.kernel.org/r/20250513-fix_scounteren-v1-1-01018e0c0b0a@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-21 16:33:03 +05:30
Ben Dooks
663b05a5f7 include: sbi: fix swap errors with newer gcc -Werror=sequence-point
The BSWAPxx() macros are now throwing the following warnings with
newer gcc versions. This is because a macro argument may be evaluated
more than once, so expressions like the example below should be
avoided.

Fix by making a set of BSWAPxx() wrappers which specifically only
evaluate 'x' once.

In file included from lib/sbi/sbi_mpxy.c:21:
lib/sbi/sbi_mpxy.c: In function ‘sbi_mpxy_write_attrs’:
lib/sbi/sbi_mpxy.c:632:63: error: operation on ‘mem_idx’ may be undefined [-Werror=sequence-point]
  632 |                         attr_val = le32_to_cpu(mem_ptr[mem_idx++]);
      |                                                        ~~~~~~~^~
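
A hedged sketch of the single-evaluation wrapper idea (one width
shown; the real set covers 16/32/64 bits, and the type names here are
illustrative):

  static inline uint32_t bswap32(uint32_t x)
  {
      /* 'x' is a local copy, so any side effects in the macro
       * argument (e.g. mem_idx++) happen exactly once. */
      return ((x & 0x000000ffUL) << 24) |
             ((x & 0x0000ff00UL) <<  8) |
             ((x & 0x00ff0000UL) >>  8) |
             ((x & 0xff000000UL) >> 24);
  }
  #define BSWAP32(x) bswap32(x)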

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Rahul Pathak <rahul@summations.net>
Reviewed-by: Xiang W <wxjstz@126.com>
Link: https://lore.kernel.org/r/20250704122938.897832-1-ben.dooks@codethink.co.uk
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-20 21:15:10 +05:30
Alvin Chang
edfbc1285d firmware: Initial compiler built-in stack protector support
Add the __stack_chk_fail() function and __stack_chk_guard variable
which are used by the compiler's built-in stack protector.

This patch only adds basic stack-protector support, so the value of the
stack guard variable is simply fixed for now. It could be improved by
deriving the value from a random number generator, such as the Zkr
extension or any platform-specific random number source.

Introduce three configurations for the stack protector:
1. CONFIG_STACK_PROTECTOR to enable the stack protector feature by
   providing "-fstack-protector" compiler flag
2. CONFIG_STACK_PROTECTOR_STRONG to provide "-fstack-protector-strong"
3. CONFIG_STACK_PROTECTOR_ALL to provide "-fstack-protector-all"

Instead of hard-coding the stack-protector compiler flag as
"-fstack-protector", we derive it from the introduced Kconfig
configurations. The compiler flag "stack-protector-cflags-y" is defined
as a Makefile "immediately expanded variable" with ":=". Thus, a
stronger stack protector configuration can overwrite the preceding one.
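
A hedged sketch of the two symbols the compiler's instrumentation
expects; the guard value is a placeholder as described above, and
sbi_panic()/__noreturn are assumed OpenSBI-style helpers:

  unsigned long __stack_chk_guard = 0x5afe5afe; /* fixed for now; ideally random */

  void __noreturn __stack_chk_fail(void)
  {
      /* Reached when a function epilogue finds the guard corrupted. */
      sbi_panic("stack smashing detected");
  }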

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250703151957.2545958-3-alvinga@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-20 20:54:34 +05:30
Alvin Chang
ea5abd1f5e lib: sbi: Remove redundant call to sbi_hart_expected_trap_addr()
The variable "sbi_hart_expected_trap" has already been extern variable.
Therefore, the program can directly refer to it instead of calling
sbi_hart_expected_trap_addr().

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250703151957.2545958-2-alvinga@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-20 20:54:34 +05:30
Yong-Xuan Wang
61083eb504 lib: sbi_list: add a helper for safe list iteration
Some use cases require iteration that is safe against removal of the
current list entry.
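
A hedged sketch of such a helper in the Linux style (the macro name is
illustrative): a second cursor caches the next node so the current one
may be removed mid-loop.

  #define sbi_list_for_each_safe(pos, tmp, head)                   \
      for (pos = (head)->next, tmp = pos->next; pos != (head);     \
           pos = tmp, tmp = pos->next)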

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250618025416.5331-1-yongxuan.wang@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-20 20:45:59 +05:30
Yi Pei
b8f370aa37 lib: utils/serial: Clear LSR status and check RBR status
On some platforms, reading RBR while it is empty may result in an error.
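
A hedged sketch of the guarded read (the get_reg() accessor and the
register offset names are illustrative):

  #define UART_LSR_DR  0x01  /* line status: data ready */

  static int uart8250_getc(void)
  {
      if (get_reg(UART_LSR_OFFSET) & UART_LSR_DR)
          return get_reg(UART_RBR_OFFSET);
      return -1;  /* nothing received; don't touch RBR */
  }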

Signed-off-by: Yi Pei <neopimail@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/CAFPVDjQZ1gpf8-u--RBbAL1Y0FfDN2vZ3g=wBw+Bp-8ppuz3HA@mail.gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-07-20 20:37:18 +05:30
Anup Patel
a32a910691 include: Bump-up version to 1.7
Update the OpenSBI version to 1.7 as part of release preparation.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-30 08:37:30 +05:30
Rahul Pathak
c2671bb69f lib: rpmi: Make RPMI drivers as non-experimental
Now that the RPMI v1.0 specification is frozen, drop the
experimental tag from these RPMI drivers.

Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250618053854.2577299-2-rpathak@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-24 08:42:27 +05:30
Rahul Pathak
a5fdef45db lib: utils: Add Implementation ID and Version as RPMI MPXY attributes
The latest frozen RPMI spec has added Implementation ID
and Implementation Version as message protocol specific
mpxy attributes. Add support for these.

Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250618053854.2577299-1-rpathak@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-24 08:42:27 +05:30
Chao-ying Fu
13abda5169 lib: sbi_platform: Add platform specific pmp_set() and pmp_disable()
Allow platforms to implement platform specific PMP setup and
PMP disable functions which are called before actual PMP CSRs
are configured.

Also, implement pmp_set() and pmp_disable() for MIPS P8700.

Signed-off-by: Chao-ying Fu <cfu@mips.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250614172756.153902-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-17 09:34:01 +05:30
Jesse Taube
324021423d lib: sbi: dbtr: Fix update_triggers to match SBI
OpenSBI implements sbi_dbtr_update_trig as
`sbi_dbtr_update_trig(unsigned long trig_idx_base,
                      unsigned long trig_idx_mask)`
yet SBI v3.0-rc7 Chapter 19. Debug Triggers Extension [0] declares it as
`sbi_debug_update_triggers(unsigned long trig_count)`

Change update_triggers to match SBI.

[0] https://github.com/riscv-non-isa/riscv-sbi-doc/tree/v3.0-rc7/src/ext-debug-triggers.adoc

Fixes: 97f234f15c ("lib: sbi: Introduce the SBI debug triggers extension support")
Signed-off-by: Jesse Taube <jesse@rivosinc.com>
Reviewed-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Tested-by: Charlie Jenkins <charlie@rivosinc.com>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20250528154604.571815-1-jesse@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-16 17:01:52 +05:30
Xiang W
03f44e6b82 lib: sbi: Optimize saddr mapping in sbi_dbtr.c
The original implementation mapped saddr individually for each entry.
The updated code now maps saddr for all entries in a single operation.
This change reduces the number of PMP (Physical Memory Protection)
operations, improving efficiency and performance.

Tested-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Signed-off-by: Xiang W <wxjstz@126.com>
Link: https://lore.kernel.org/r/20250514052422.575551-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-16 16:53:50 +05:30
Jesse Taube
033e0e2353 lib: sbi: dbtr: Fix shared memory layout
The existing sbi_dbtr_shmem_entry has a size of 5 * XLEN with the final
entry being idx. This is in contrast to the SBI v3.0-rc7 Chapter 19.
Debug Triggers Extension [0] where idx and trig_state share the same
offset (0) in shared memory, with a total size of 4 * XLEN for all the
SBI calls.

Replace struct with union to match memory layout described in SBI.
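
A hedged sketch of the union layout (field names illustrative): idx
overlays trig_state at offset 0 and the total size is 4 * XLEN.

  union sbi_dbtr_shmem_entry {
      struct {
          unsigned long trig_state;
          unsigned long tdata1;
          unsigned long tdata2;
          unsigned long tdata3;
      };
      unsigned long idx;  /* shares offset 0 with trig_state */
  };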

[0] https://github.com/riscv-non-isa/riscv-sbi-doc/tree/v3.0-rc7/src/ext-debug-triggers.adoc

Fixes: 97f234f15c ("lib: sbi: Introduce the SBI debug triggers extension support")
Signed-off-by: Jesse Taube <jesse@rivosinc.com>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Tested-by: Charlie Jenkins <charlie@rivosinc.com>
Reviewed-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Tested-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Link: https://lore.kernel.org/r/20250604135225.842241-1-jesse@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-16 10:14:20 +05:30
Anup Patel
9f64f06193 lib: utils: Fix fdt_parse_aclint_node() for non-contiguous hartid
Currently, fdt_parse_aclint_node() does not handle non-contiguous
hartids correctly and returns an incorrect first_hartid and hart_count.
This is because the for-loop in fdt_parse_aclint_node() skips a hartid
for which no hartindex is available (i.e. the corresponding CPU DT node
is disabled).

For example, on a platform with 4 HARTs (hartid 0, 1, 2, and 3) where
CPU DT nodes with hartid 0 and 2 are disabled, the fdt_parse_aclint_node()
returns first_hartid = 1 and hart_count = 3 which is incorrect.

To address the above issue, drop the sbi_hartid_to_hartindex() check
from the for-loop of fdt_parse_aclint_node().

Fixes: 5e90e54a1a ("lib: utils:Check that hartid is valid")
Reported-by: Maria Mbaye <MameMaria.Mbaye@microchip.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250606055810.237441-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-16 09:41:03 +05:30
Anup Patel
7dd09bfeca lib: sbi: Revert entry_count before doing hsm stop in hsm wait loop
Using hsm stop in hsm wait loop causes secondary harts to be stuck
forever in OpenSBI on RISC-V platforms where HSM hart hotplug is
available and all harts come-up at the same time during system
power-on.

For example, lets say we have two harts A and B on a RISC-V platform
with HSM hart hotplug which come-up at the same time during system
power-on. The hart A enters OpenSBI before hart B hence it becomes
the primary (or cold-boot) hart whereas hart B becomes the secondary
(or warm-boot) hart. The hart A follows the OpenSBI cold-boot path
and registers hsm device before hart B enters OpenSBI. The hart B
eventually enters OpenSBI and follows the OpenSBI warm-boot path
so it will increment its own entry_count before entering the hsm wait
loop where it sees hsm device and stops itself. Later as part of
the Linux boot-up sequence, hart A issues SBI HSM start call to
bring-up hart B but OpenSBI sees entry_count != init_count for
hart B in sbi_hsm_hart_start() hence hsm_device_hart_start() is
not called for hart B resulting in hart B stuck forever in OpenSBI.

To fix the above issue, revert entry_count before doing hsm stop
in hsm wait loop.

Fixes: d844deadec ("lib: sbi: Use hsm stop for hsm wait")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Nick Hu <nick.hu@sifive.com>
Link: https://lore.kernel.org/r/20250527124821.2113467-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-16 09:40:28 +05:30
Inochi Amaoto
6f8bcae4cb lib: utils/irqchip: always parse msi information for each aplic device
OpenSBI currently parses the MSI information of only the first
next-level subdomain, which leaves the root domain misconfigured in
some cases:
1. MSI is not enabled on the first subdomain of the root domain,
   but other subdomains enable MSI.
2. the root domain is set to direct mode, but its subdomains enable MSI.

So all children of the root domain need to be parsed; otherwise some
non-root domains are broken. As the specification says, it is safe to
parse the MSI information of all subdomains and write the msiaddrcfg
registers of the non-root domains, as they are read-only.

Parse the APLIC MSI information recursively for all APLIC devices.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Inochi Amaoto <inochiama@gmail.com>
Link: https://lore.kernel.org/r/20250523085348.1690368-1-inochiama@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-16 09:17:28 +05:30
Samuel Holland
771c656181 lib: sbi: fwft: Use only the provided PMLEN value
As of riscv-sbi-doc commit c7d3d1f7dcaa ("ext-fwft: use the provided
value in fwft_set(POINTER_MASKING_PMLEN)"), the SBI implementation must
use only the provided PMLEN value or else fail. It may not fall back to
a larger PMLEN value.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250522013503.2556053-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-15 18:56:33 +05:30
Clément Léger
f30a54f3b3 lib: sbi: pmu: Remove MIP clearing from pmu_sse_enable()
Clearing MIP at that point means that we can probably lose a pending
interrupt. This should not happen, remove MIP clearing from there.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250519083950.739044-3-cleger@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-15 18:44:51 +05:30
Clément Léger
b31a0a2427 lib: sbi: pmu: Add SSE register/unregister() callbacks
As soon as the SSE event is registered, there is no reason not to
delegate the interrupt. Split the PMU SSE enable/disable()
callbacks by moving MIDELEG setting to register/unregister().

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250519083950.739044-2-cleger@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-15 18:44:51 +05:30
Khem Raj
6d23a9c570 Makefile: Add flag for reproducibility compiler flags
Provide a mechanism to remove absolute paths from binaries using
-ffile-prefix-map.

This helps distros (e.g. Yocto-based ones) which want to ship
the .elf files but need to scrub absolute paths from objects.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Link: https://lore.kernel.org/r/20250515025931.3383142-1-raj.khem@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-15 18:28:55 +05:30
Chao-ying Fu
66ab965e54 platform: generic: mips: add P8700
Extend generic platform to support MIPS P8700.

Signed-off-by: Chao-ying Fu <cfu@mips.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250522212141.3198-2-cfu@mips.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-14 21:44:11 +05:30
Ziang Wang
3f8159aa06 lib: utils: hsm: Do not fail on EALREADY in rpmi-hsm fixup.
In case harts are divided into groups that use different
rpmi-hsm channels in different mailboxes, the suspend
state fixup function will return EALREADY on secondary
entry; simply skip this error.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Ziang Wang <wangziang.ok@bytedance.com>
Link: https://lore.kernel.org/r/20250507074620.3162747-1-wangziang.ok@bytedance.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-14 10:31:32 +05:30
Charlie Jenkins
27347f0902 Makefile: Make $(LLVM) more flexible
Introduce a way for developers to easily switch between LLVM versions
with LLVM=/path/to/llvm/ and LLVM=-version. This is a useful
addition to the existing LLVM=1 variable which will select the first
clang and llvm binutils available on the path.

Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20250430-improve_llvm_building-v1-1-caae96cc6be6@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-06-14 10:11:11 +05:30
James Raphael Tiovalen
69a0f0245f lib: sbi: pmu: Return SBI_EINVAL if cidx_mask is 0
Currently, when configuring a matching programmable HPM counter with
Sscofpmf being present, cidx_base > 2, and cidx_mask == 0 to monitor
either the CPU_CYCLES or INSTRUCTIONS hardware event,
sbi_pmu_ctr_cfg_match will succeed but it will configure the
corresponding fixed counter instead of the counter specified by the
cidx_base parameter.

During counter configuration, the following issues may arise:
- If the SKIP_MATCH flag is set, an out-of-bounds memory read of the
phs->active_events array would occur, which could lead to undefined
behavior.

- If the CLEAR_VALUE flag is set, the corresponding fixed counter will
be reset, which could be considered unexpected behavior.

- If the AUTO_START flag is set, pmu_ctr_start_hw will silently start
the fixed counter, even though it has already started. From the
supervisor's perspective, nothing has changed, which could be confusing.
The supervisor will not see the SBI_ERR_ALREADY_STARTED error code since
sbi_pmu_ctr_cfg_match does not return the error code of
pmu_ctr_start_hw.

The only way to detect these issues is to check the ctr_idx return value
of sbi_pmu_ctr_cfg_match and compare it with cidx_base.

Fix these issues by returning the SBI_ERR_INVALID_PARAM error code if
the cidx_mask parameter value being passed in is 0 since an invalid
parameter should not lead to a successful sbi_pmu_ctr_cfg_match but with
unexpected side effects.

Following a similar rationale, add the validation check to
sbi_pmu_ctr_start and sbi_pmu_ctr_stop as well since sbi_fls is
undefined when the mask is 0.

This also aligns OpenSBI's behavior with KVM's.
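
A minimal sketch of the added validation (placement and names per the
description above, not exact upstream code):

  /* An empty counter mask can never match a counter; fail fast
   * instead of silently falling back to a fixed counter. */
  if (!cidx_mask)
      return SBI_EINVAL;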

Signed-off-by: James Raphael Tiovalen <jamestiotio@gmail.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Link: https://lore.kernel.org/r/20250520132533.30974-1-jamestiotio@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 21:01:14 +05:30
Anup Patel
d4f5a16598 include: sbi: Change SBI spec version to 3.0
Now that SBI v3.0 specification is frozen, change runtime SBI version
implemented by OpenSBI to v3.0. Also, mark extensions defined by the
SBI v3.0 specification as non-experimental.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Link: https://lore.kernel.org/r/20250516122844.113423-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 13:47:34 +05:30
Yao Zi
60c3f97de8 lib: utils: fdt: Claim Zicntr if time CSR emulation is possible
OpenSBI is capable of emulating the time CSR through an external timer
for HARTs that don't implement the full Zicntr extension. Let's add the
Zicntr extension to the FDT if CSR emulation is active.

This avoids hardcoding the extension in the devicetree, which may
confuse pre-SBI bootloaders.

Signed-off-by: Yao Zi <ziyao@disroot.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250516133352.36617-4-ziyao@disroot.org
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 13:25:53 +05:30
Yao Zi
7e31dc8052 lib: sbi: hart: Detect existence of cycle and instret CSRs for Zicntr
The Zicntr extension specifies three read-only CSRs: time, cycle and
instret. It isn't sufficient to report Zicntr as fully supported with
only the time CSR detected.

This patch introduces a bitmap to sbi_hart_features to record
availability of these CSRs, which are detected using traps. Zicntr is
reported as present if and only if three CSRs are all available on the
HARTs.

Sites originally depending on SBI_HART_EXT_ZICNTR for detecting
existence of time CSR are switched to detect SBI_HART_CSR_TIME instead.

Suggested-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Yao Zi <ziyao@disroot.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250516133352.36617-3-ziyao@disroot.org
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 13:25:53 +05:30
Alvin Chang
2bb7632649 lib: utils: Fix fdt_mpxy_init() not returning error code
It seems that the current implementation doesn't fail on
fdt_mpxy_init() because platforms might not have any MPXY devices. In
fact, if there are no MPXY devices, fdt_driver_init_all() will return
SBI_OK.

More importantly, if there is any MPXY device which fails the
initialization, OpenSBI must check the error code and stop the booting.
Thus, this commit adds the return value for fdt_mpxy_init().

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250430091007.3768180-1-alvinga@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 11:20:57 +05:30
Anup Patel
f3cce5b97f lib: utils/mpxy: Remove p2a_db_index from RPMI system MSI attributes
The discovery of P2A doorbell system MSI index is now through RPMI
shared memory DT node so remove p2a_db_index from RPMI system MSI
attributes and access it as a mailbox channel attribute.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250512083827.804151-5-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 11:10:35 +05:30
Anup Patel
8fadfebdd1 lib: utils/mailbox: Parse A2P doorbell value from DT
The value written to the 32-bit A2P doorbell register must be
discovered from the device tree instead of always using the
default value 1.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250512083827.804151-4-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 11:10:35 +05:30
Anup Patel
a79566175c lib: utils/mailbox: Parse P2A doorbell system MSI index from DT
The P2A doorbell system MSI index is expected to be discovered from
device tree instead of RPMI system MSI service group attribute. This
is based on ARC feedback before RPMI spec was frozen.

Let's parse P2A doorbell system MSI index from device tree and also
expose it as rpmi channel attribute to RPMI client drivers.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250512083827.804151-3-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 11:10:35 +05:30
Anup Patel
8ca08044c2 lib: utils/mailbox: Update DT register name of A2P doorbell
The latest device tree bindings define A2P doorbell register name as
"a2p-doorbell" so update rpmi_shmem_transport_init() accordingly.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250512083827.804151-2-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 11:10:35 +05:30
Chao-ying Fu
8a3071222a lib: Emulate AMO instructions when Zaamo is not available
The AMO instructions are very critical for Linux so allow low-end
RISC-V implementations without Zaamo to boot Linux by emulating AMO
instructions using Zalrsc when OpenSBI is compiled without Zaamo.
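
A hedged sketch of emulating one AMO (amoadd.w) with a Zalrsc retry
loop; the real trap handler decodes the faulting instruction and
dispatches to helpers along these lines:

  static int32_t emulate_amoadd_w(volatile int32_t *addr, int32_t val)
  {
      int32_t old;
      long rc;

      asm volatile("1: lr.w %0, (%2)\n"
                   "   add  %1, %0, %3\n"
                   "   sc.w %1, %1, (%2)\n"
                   "   bnez %1, 1b\n"
                   : "=&r"(old), "=&r"(rc)
                   : "r"(addr), "r"(val)
                   : "memory");

      return old;  /* an AMO returns the original memory value */
  }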

Signed-off-by: Chao-ying Fu <cfu@mips.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250519121207.976949-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-20 09:18:03 +05:30
Parshintsev Anatoly
017a161788 Makefile: fix missing .debug_frame DWARF section for GCC
When OpenSBI is built with a relatively new compiler (gcc-13 and greater)
I observed that GDB is unable to produce proper backtraces and some
variable values appear corrupted (even if the associated DWARF
location descriptor is correct).

Turns out that to properly work with debug information, debuggers often
need to unwind the stack. They generally rely on Call Frame Information
(CFI) records provided by the compiler to facilitate this task.
Currently, the GCC compiler offers two mechanisms:

- `.debug_frame` section (as described in the DWARF specification).
- `.eh_frame` sections (as described in LSB documents).

The latter (`.eh_frame`) supports stack unwinding at runtime, providing
a framework for C++ exceptions or enabling backtrace generation using
libraries like libunwind. However, the downside of this approach is that
these sections should be part of loadable segments.

The former (`.debug_frame`) is simply an ordinary debug section.

Starting from GCC 13, Linux targets enable the `-fasynchronous-unwind-tables`
and `-funwind-tables` flags by default. Relevant commit:
https://github.com/gcc-mirror/gcc/commit/3cd08f7168

When these flags are active, the compiler generates `.eh_frame` sections
instead of `.debug_frame`. Since OpenSBI is built using the **Linux
toolchain**, this behavior applies to OpenSBI as well.

The problem arises because the SBI build system uses `-Wl,--gc-sections`,
which discards the `.eh_frame` section.

Possible Fixes:

1. Enforce `.debug_frame` generation – Modify compiler flags to generate
`.debug_frame` instead of `.eh_frame`.
2. Preserve `.eh_frame` in the linker script – Add `KEEP(*(.eh_frame))`
to ensure the section is not discarded.

I chose Option 1 because it avoids any runtime overhead.

Signed-off-by: Parshintsev Anatoly <anatoly.parshintsev@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250421124729.36364-1-anatoly.parshintsev@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-15 18:52:38 +05:30
Nick Hu
d844deadec lib: sbi: Use hsm stop for hsm wait
If we hotplug a core and then perform a suspend-to-RAM operation on a
multi-core platform, the hotplugged CPU may be woken up along with the rest
of the system, particularly on platforms that wake all cores from the
deepest sleep state. When this happens, the hotplugged CPU enters the
sbi_hsm_wait WFI wait loop instead of transitioning into a
platform-specific low-power state. To address this, we add a HSM stop call
within the wait loop. This allows platforms that support HSM stop to enter
a low-power state when the CPU is unexpectedly woken up.

Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250418064506.15771-1-nick.hu@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-05-15 17:31:11 +05:30
Radim Krčmář
316daaf1c2 lib: sbi_hart: properly reset Ssstateen
sstateen* and hstateen* CSRs must be zeroed by M-mode if the mstateen*
registers are missing, to avoid security issues.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-10-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:26 +05:30
Radim Krčmář
937118ca65 lib: sbi_hart: add Ssstateen extension
We already detect Smstateen, but Ssstateen exists as well and it doesn't
have the M-state CSRs.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-9-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:26 +05:30
Radim Krčmář
dac15cb910 lib: sbi_hart: reset mstateen0
The current logic clears some bits based on extensions known to SBI.
To be safe, do not leave anything enabled that SBI doesn't control.

This is not a breaking change, because the register must be initialized
to 0 by the ISA on reset, but it is better to not depend on it when we
don't need to.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-8-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:26 +05:30
Radim Krčmář
8c814b5c9b lib: sbi_hart: fix sstateen emulation
The Sstateen extension defines 4 sstateen registers, but SBI currently
configures the execution environment to throw illegal instruction
exception when accessing sstateen1-3.

SBI should implement all sstateen registers, so delegate the
implementation to hardware by setting the SE bit.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-7-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:26 +05:30
Radim Krčmář
6b877fb53b lib: sbi_hart: reset sstateen and hstateen
Not resetting sstateen is a potential security hole, because U might be
able to access state that S does not properly context-switch.
Similar for hstateen with VS and HS.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-6-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:26 +05:30
Radim Krčmář
009f77a9f0 lib: sbi_hart: reset hstatus
hstatus.HU must be cleared, because U-mode could otherwise use the
HLS/HSV instructions.  This would allow U-mode to read physical memory
directly if vgatp and vsatp was 0.

The remaining fields don't seem like a security vulnerability now, but
clearing the whole CSR is not an issue, so do that to be safe.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-5-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:26 +05:30
Radim Krčmář
65e8be4fe8 lib: sbi: use 64 bit csr macros
Switch the most obvious cases to new macros.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-4-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:25 +05:30
Radim Krčmář
f82c4bdf8c lib: sbi: add 64 bit csr macros
Most CSRs are XLEN bits wide, but some are 64 bit, so rv32 needs two
accesses, plaguing the code with ifdefs.

Add new helpers that split 64 bit operation into two operations on rv32.

The helpers don't use "csr + 0x10", but append "H" at the end of the csr
name to get a compile-time error when accessing a non 64 bit register.
This has the downside that you have to use the name when accessing them.
e.g. csr_read64(0x1234) or csr_read64(CSR_SATP) won't compile and the
error messages you get for these bugs are not straightforward.
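
A hedged sketch of the rv32 read helper (the in-tree macros may
differ): token-pasting "H" means only CSR names with a high half
compile, and the hi/lo re-read guards against a carry between the two
accesses.

  #if __riscv_xlen == 32
  #define csr_read64(csr)                          \
  ({                                               \
      u32 __hi, __lo;                              \
      do {                                         \
          __hi = csr_read(csr##H);                 \
          __lo = csr_read(csr);                    \
      } while (__hi != csr_read(csr##H));          \
      (((u64)__hi << 32) | __lo);                  \
  })
  #else
  #define csr_read64(csr) csr_read(csr)
  #endif

For example, csr_read64(CSR_MCYCLE) expands to reads of CSR_MCYCLE and
CSR_MCYCLEH, while csr_read64(CSR_SATP) fails to compile because
CSR_SATPH does not exist.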

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-3-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-30 10:14:25 +05:30
Raj Vishwanathan
99aabc6b84 lib: sbi: Set the scratch allocation to alignment to cacheline size
Set the scratch allocation alignment to the cacheline size specified by
riscv,cbom-block-size in the DTS file, to avoid two atomic variables on
the same cache line causing livelock on some platforms. If the cacheline
size is not specified, we set it to a default value.

Signed-off-by: Raj Vishwanathan <Raj.Vishwanathan@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250423225045.267983-1-Raj.Vishwanathan@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-24 09:23:47 +05:30
Alvin Chang
4d0128ec58 lib: sbi_domain: Reduce memory usage of per-domain hart context
In the current implementation, the length of the
hartindex_to_context_table[] array is fixed at SBI_HARTMASK_MAX_BITS.
However, the number of harts supported by the platform might not be
SBI_HARTMASK_MAX_BITS and is usually smaller. This means it is
unnecessary to allocate such a fixed-length array here.

Precisely, the current implementation always allocates 1024 bytes for
hartindex_to_context_table[128] on an RV64 platform. However, a platform
that supports only two harts needs just hartindex_to_context_table[2],
which is only 16 bytes.

This commit calculates needed size of hartindex_to_context_table[]
according to supported number of harts on the platform when registering
per-domain data, so that memory usage of per-domain context data can be
reduced.

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250326062051.3763530-1-alvinga@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 17:51:01 +05:30
Samuel Holland
2b09a98701 lib: sbi_platform: Remove the vendor_ext_check hook
Now that the generic platform only sets .vendor_ext_provider if the
function is really implemented, there is no need for a separate hook to
check if a vendor extension is implemented.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-11-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:51 +05:30
Samuel Holland
0dd8a26f1f lib: utils/fdt: Remove fdt_match_node()
This function has been obsoleted by the fdt_driver library and is no
longer used.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-10-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:51 +05:30
Samuel Holland
1c579675be platform: generic: Initialize overrides with fdt_driver
In addition to deduplicating the code, this also improves the match
selection logic to respect the priority order of the compatible strings,
as implemented in commit 0ffe265fd9 ("lib: utils/fdt: Respect
compatible string fallback priority").

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-9-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:51 +05:30
Samuel Holland
b80ded7756 platform: generic: Remove platform override hooks
Now that all of the overrides are modifying generic_platform_ops
directly, remove the unused hooks and forwarding functions. The
remaining members of struct platform_override match struct fdt_driver,
so use that type instead. This allows a future commit to reuse the
fdt_driver-based init function.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-8-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:51 +05:30
Samuel Holland
b353af63e2 platform: generic: Modify platform ops instead of using hooks
Switch all existing platform overrides to use the helper pattern instead
of the platform hooks. After this commit, only the .match_table and
.init members of struct platform_override are used.

There are two minor behavioral differences:
 - For Allwinner D1, fdt_add_cpu_idle_states() is now called before the
   body of generic_final_init(). This should have no functional impact.
 - For StarFive JH7110, if the /chosen/starfive,boot-hart-id property is
   missing, the code now falls back to using generic_coldboot_harts,
   instead of accepting any hart.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-7-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:51 +05:30
Samuel Holland
2489e1421d platform: generic: Allow replacing platform operations
Currently the generic platform follows the middleware pattern: it
implements the sbi_platform hooks, while providing its own set of hooks
for further customization. This has a few disadvantages: each location
where customization is needed requires a separate platform_override
hook, including places where the generic function does nothing except
forward to a platform_override hook, and the extra layer of function
pointers adds runtime overhead.

Let's restructure the generic platform to follow the helper pattern.
Allow platform overrides to treat the generic platform as a template,
adding or replacing the sbi_platform_operations as needed. Export the
generic implementations, so they can be called as helpers from inside
the override functions. With this pattern, the platform_override
function pointers are replaced by direct calls, and the forwarding
functions can be removed.

The forwarding functions are not exported, since there is no reason for
an override to call them. generic_vendor_ext_check() must be rewritten,
since now there is a new way to override vendor_ext_provider.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-6-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:50 +05:30
Samuel Holland
e78a0ebdc4 platform: generic: Add an init hook matching fdt_driver
In preparation for reusing the fdt_driver code to match platform
overrides, add a new .init hook matching the type signature from
fdt_driver. This hook replaces the existing .fw_init hook, since
it is called at roughly the same place in the init process.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-5-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:50 +05:30
Samuel Holland
de777cc633 platform: generic: thead: Avoid casting away const
struct fdt_match expects match data to be const. Follow this expectation
so that no type casting is needed.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-4-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:50 +05:30
Samuel Holland
ac2e428c4b platform: rzfive: Call andes_pma_setup_regions() only during cold boot
This function accesses the FDT blob, which means it is only valid to
call during cold boot, before a lower privilege level has an opportunity
to clobber that memory.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-3-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:50 +05:30
Samuel Holland
2a6f7ddf87 platform: generic: andes: Remove inline definitions
The addresses of these functions are used to set function pointers in
struct platform_override, so it is not valid for them to be inline.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-2-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-23 12:32:50 +05:30
Alvin Chang
fda0742e76 lib: sbi_mpxy: Change MPXY state as per-domain data
OpenSBI supports multiple supervisor domains running on the same
platform. When
these supervisor domains want to communicate with OpenSBI through MPXY
channels, they will allocate MPXY shared memory from their own memory
regions. Therefore, the MPXY state data structure must be per-domain and
per-hart data structure.

This commit registers per-domain MPXY state data in sbi_mpxy_init(). The
original MPXY state allocated in the scratch region is also removed. We
also replace the sbi_scratch_thishart_offset_ptr() macro with a new
sbi_domain_mpxy_state_thishart_ptr() macro which gets the MPXY state
from the per-domain data.

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250325071314.3113941-1-alvinga@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-21 11:35:44 +05:30
Jimmy Ho
d2166a9d40 lib: sbi: Handle length of extension name string exceed buffer size error
Print an error message and truncate the string when the length
of the extension name exceeds the buffer size.

Signed-off-by: Jimmy Ho <jimmy.ho@sifive.com>
Reviewed-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Zong Li <zong.li@sifive.com>
Link: https://lore.kernel.org/r/20250321001450.11189-1-jimmy.ho@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-21 08:42:01 +05:30
Xiang W
190979b4fc lib: sbi: Remove unnecessary SBI_INIT_LIST_HEAD
There is no need to initialise nodes before adding them to a linked
list.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250319123944.505756-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-15 11:38:20 +05:30
Xiang W
169b4b8ae2 lib: sbi: Simplify structure member offset checking
Add a macro assert_member_offset() to perform structure member offset
checking.
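
A hedged sketch of what such a macro can look like with C11
_Static_assert (the in-tree version may differ in name and message):

  #define assert_member_offset(type, member, offset)           \
      _Static_assert(offsetof(type, member) == (offset),       \
                     "invalid offset of " #member " in " #type)

  /* Example use: catch ABI drift in a register layout at build time,
   * e.g. assert_member_offset(struct sbi_trap_regs, mepc, 32 * 8); */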

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250319123919.505443-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-15 11:26:36 +05:30
Xiang W
8b026abc5a lib: sbi: Fix SHMEM_PHYS_ADDR for RV32
Obtaining a 64-bit address under rv32 does not require combining two
32-bit registers, because we ignore the upper 32 bits on rv32.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250319123832.505033-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-15 11:22:00 +05:30
Xiang W
ce57cb572e lib: sbi: Add parameter check in sbi_mpxy_set_shmem()
Shared memory needs to be accessed in M-mode, so for now the high
address of the shared memory can't be non-zero.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250319123719.504622-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-15 10:19:13 +05:30
Leo Yu-Chi Liang
0442f1318e lib: sbi: Allow programmable counters to monitor cycle/instret events for Andes PMU
Following commit 0c304b6619
("lib: sbi: Allow programmable counters to monitor cycle/instret events"),
support the same functionality for the Andes PMU.

Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Link: https://lore.kernel.org/r/20250328084142.540807-1-ycliang@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-14 17:28:59 +05:30
Leo Yu-Chi Liang
5ab908d622 docs: pmu_support: fix example typos
The (event ID & "second column mask") should equal
the "first column match value". Modify the example
to fit the description.

Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250324043943.2513070-1-ycliang@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-14 17:25:54 +05:30
Andrew Jones
37eaca4ab3 lib: sbi_ipi: Return error for invalid hartids
sbi_send_ipi() should return SBI_ERR_INVALID_PARAM if even one hartid
constructed from hart_mask_base and hart_mask is not valid.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250314163021.154530-6-ajones@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-14 15:29:36 +05:30
Andrew Jones
a6e5f8878c sbi: Introduce sbi_hartmask_weight
Provide a function to count the number of set bits in a hartmask,
which builds on a new function for the same that operates on a
bitmask. While at it, improve the performance of sbi_popcount()
which is used in the implementation.
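
A hedged sketch of one simple counting approach (Kernighan's loop); the
improved sbi_popcount() mentioned above likely uses a branch-free bit
trick instead:

  static int popcount(unsigned long x)
  {
      int n = 0;

      while (x) {
          x &= x - 1;  /* clear the lowest set bit */
          n++;
      }
      return n;
  }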

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250314163021.154530-5-ajones@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-14 15:29:19 +05:30
Samuel Holland
2142618f12 Makefile: Avoid repeated evaluation of shell commands
Recursively expanded variables (defined with '=') are expanded at
evaluation time. These version information variables are evaluated
inside a recipe as part of GENFLAGS. As a result, the shell commands
are executed separately for each compiler invocation. Convert the
version information variables to be simply expanded, so the shell
commands are executed only once, at Makefile evaluation time. This
speeds up the build by as much as 75%.

A separate check is needed to maintain the behavior of preferring the
value of OPENSBI_BUILD_TIME_STAMP from the environment.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250313035755.3796610-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-13 21:49:14 +05:30
Rajnesh Kanwal
aa40c53ce4 lib: sbi: Enable Control Transfer Records (CTR) Ext using xstateen.
The Control Transfer Records (CTR) extension provides a method to
record a limited branch history in register-accessible internal chip
storage.

This extension is similar to Arch LBR in x86 and BRBE in ARM.
The Extension has been stable and the latest release can be found here
https://github.com/riscv/riscv-control-transfer-records/release

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250307124451.122828-1-rkanwal@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-13 06:11:43 +05:30
Samuel Holland
afa0e3091b lib: sbi_trap: Add support for vectored interrupts
When redirecting an exception to S-mode, transform the (v)stvec CSR
value as described in the privileged spec to derive the S-mode PC.
Since OpenSBI never redirects interrupts, only synchronous exceptions,
the only action needed is to mask out the (v)stvec.MODE field.
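
A minimal sketch of the transformation (per the privileged spec,
(v)stvec[1:0] holds the MODE field and BASE is the entry point for
synchronous exceptions in both direct and vectored modes; next_pc is an
illustrative name):

  /* Synchronous exceptions always enter at BASE, so deriving the
   * S-mode PC only needs the MODE bits masked out. */
  next_pc = csr_read(CSR_STVEC) & ~0x3UL;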

Reported-by: Jan Reinhard <jan.reinhard@sysgo.com>
Closes: https://github.com/riscv-software-src/opensbi/issues/391
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250305014729.3143535-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-04-13 05:51:17 +05:30
Chao-ying Fu
995f226f3f lib: Emit lr and sc instructions based on -march flags
When -march=rv64im_zalrsc_zicsr is used, provide atomic operations
and locks using lr and sc instructions only.

Signed-off-by: Chao-ying Fu <cfu@mips.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250226014727.19710-1-cfu@mips.com
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-03-28 18:52:05 +05:30
Junhui Liu
8fe835303c lib: utils/serial: Add PXA UARTs support
The PXA variant of the uart8250 adds the UART Unit Enable bit (UUE) that
needs to be set to enable the XScale PXA UART. And it is required for
some RISC-V SoCs like the Spacemit K1 that implement the PXA UART.

This introduces the "intel,xscale-uart" compatible to handle setting the
UUE bit.

Signed-off-by: Junhui Liu <junhui.liu@pigmoral.tech>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250327-pxa-uart-support-v2-1-c4400c1fcd0b@pigmoral.tech
Signed-off-by: Anup Patel <anup@brainfault.org>
2025-03-27 20:20:05 +05:30
Clément Léger
3ac49712e3 lib: sbi: sse: Add support for SSTATUS.SDT
Similarly to what is done for SPELP, handle SSTATUS.SDT upon event
injection. In order to mimic an interrupt, set SDT to 1 for injection and
save its previous value in interrupted_flags[5:5]. Restore it upon
completion.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2025-03-27 18:16:44 +05:30
Clément Léger
b4464b22e4 lib: sbi: sse: Add support for SSTATUS.SPELP
As raised during the ARC review, SPELP was not handled during the event
injection process. Save it as part of the interrupted flags, clear it
before injecting the event and restore it after completion.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2025-03-27 18:16:28 +05:30
Clément Léger
53d322f8ae lib: sbi: sse: Remove superfluous parenthesis around MSTATUS_* values
For some reason, there was a pair of useless parentheses around
MSTATUS_* value usage. Remove them.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2025-03-27 18:16:19 +05:30
Clément Léger
41fb89cb29 lib: sbi: sse: Rename STATUS* interrupted flags to SSTATUS*
As raised by Andrew on the kvm-unit-test review, these flags are meant
to hold SSTATUS bits in the specification. Rename them to match that.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2025-03-27 18:15:44 +05:30
Clément Léger
1e7258d6a8 lib: sbi: sse: Return SBI_EDENIED for read only parameters.
The SSE specification originally said that read-only parameters should
return SBI_EBADRANGE, but it was recently modified to return SBI_EDENIED.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2025-03-27 18:15:25 +05:30
Clément Léger
5dc7a6db6f lib: sbi: sse: Remove printf from sbi_sse_exit()
This printf is mainly useful for debugging, remove it.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2025-03-27 18:15:10 +05:30
Clément Léger
601bea45c5 lib: sbi: sse: Update SSE event ids
The latest specification added new high priority RAS events and renamed
the PMU to PMU_OVERFLOW.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2025-03-27 18:03:41 +05:30
Raj Vishwanathan
321ca8063b lib: utils: Make sure that hartid and the scratch are aligned
Harts associated with an ACLINT_MSWI need not have sequential hartids,
so it is insufficient to use first_hartid and hart_count. To account for
non-sequential hart ids, include the empty hart ids when generating the
hart count.

Signed-off-by: Raj Vishwanathan <Raj.Vishwanathan@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-26 19:11:10 +05:30
Samuel Holland
949c83a799 lib: sbi: Use sbi_hart_count() and sbi_for_each_hartindex()
Simplify the code and improve consistency by using the new macros where
possible. sbi_hart_count() obsoletes sbi_scratch_last_hartindex().

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-24 17:57:20 +05:30
Samuel Holland
757f7acafd lib: sbi_scratch: Add sbi_hart_count() and for_each_hartindex()
There is currently no helper for iterating through the harts in a
system, and code must choose between sbi_scratch_last_hartindex() and
sbi_platform_hart_count() for the loop condition.

sbi_scratch_last_hartindex() has unusual semantics, leading to the
likelihood of off-by-one errors, and sbi_platform_hart_count() is
provided by the platform and so may not be properly bounded.

Add a new helper which definitively reports the number of harts managed
by this OpenSBI instance, i.e. the number of valid hart indexes, and a
convenient iterator macro.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-24 17:56:08 +05:30
Samuel Holland
6b97950cf5 lib: sbi_scratch: Optimize hartid and scratch lookup
The compiler generates much better code for sbi_hartindex_to_hartid()
and sbi_hartindex_to_scratch() when using a constant for the bounds
check. This works out nicely because the underlying arrays are already
a constant size, so the only change needed is to fill the remainder of
each array with the appropriate default/out-of-bounds value. The
ellipsis in the designated initializer is a GCC extension (also
supported by Clang), but avoids runtime initialization of the array.
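
A hedged sketch of the pattern (array and constant names as used in the
description; the ellipsis range initializer is the GCC/Clang extension
mentioned):

  static u32 hartindex_to_hartid_table[SBI_HARTMASK_MAX_BITS] = {
      /* Every slot defaults to an invalid hartid, so lookups can
       * bounds-check against a compile-time constant array size. */
      [0 ... SBI_HARTMASK_MAX_BITS - 1] = -1U,
  };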

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-24 17:56:05 +05:30
Samuel Holland
ef4ed2dda7 lib: sbi_scratch: Apply bounds check to platform hart_count
The internal limit on the number of harts is SBI_HARTMASK_MAX_BITS, as
this value determines the size of various bitmaps and arrays (including
hartindex_to_hartid_table and hartindex_to_scratch_table). Clamp the
value provided by the platform, and drop the extra array element.

Update the documentation to indicate that hart_index2id must be sized
based on hart_count, and that hart indexes must be contiguous. As of
commit 5e90e54a1a ("lib: utils:Check that hartid is valid"), there is
no restriction on the valid hart ID values.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-24 17:56:04 +05:30
Samuel Holland
86c01a73ff lib: sbi: Avoid GOT indirection for global symbol references
OpenSBI is compiled with -fPIE, which generally implies dynamic linking.
This causes the compiler to generate GOT references for global symbols
in order to support runtime symbol interposition. However, OpenSBI does
not actually perform dynamic linking, so the GOT indirection just adds
unnecessary overhead.

The GOT references can be avoided by declaring global symbols with
hidden visibility, thus making them local to this dynamic object and
non-interposable. GCC/Clang's -fvisibility parameter is insufficient for
this purpose when referencing objects from other translation units;
either __attribute__((visibility(...)) or the pragma is required. Use
the pragma since it is easier to apply to every symbol. Additionally
clean up the one GOT reference from inline assembly.

With this change, a firmware linked with LLD does not contain either a
GOT or a PLT, and a firmware linked with BFD ld contains only a GOT with
a single (unreferenced, legacy) _GLOBAL_OFFSET_TABLE_ entry.
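
The pragma form, applied from a header included everywhere (a sketch of
the approach):

  /* All declarations that follow get hidden visibility, so the
   * compiler emits direct (PC-relative) references instead of
   * GOT-indirect loads. */
  #pragma GCC visibility push(hidden)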

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-24 17:00:59 +05:30
Samuel Holland
98c0a3860a Revert "lib: utils/irqchip: Match against more specific compatible strings first"
This reverts commit 6019259dfb.

Now that fdt_driver_init_by_offset() respects the compatible string
fallback priority order, this workaround is no longer necessary.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-24 16:37:52 +05:30
Samuel Holland
0ffe265fd9 lib: utils/fdt: Respect compatible string fallback priority
When matching drivers to DT nodes, always match all drivers against the
first compatible string before considering fallback compatible strings.
This ensures the most specific match is always selected, regardless of
the order of the drivers or match structures, as long as no compatible
string appears in multiple match structures.

Fixes: 1ccc52c427 ("lib: utils/fdt: Add helpers for generic driver initialization")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-24 16:37:06 +05:30
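
In outline, matching becomes compatible-string-major rather than driver-major; a simplified sketch (structure layout and helper names assumed):

```c
static const struct fdt_driver *match_best(const void *fdt, int nodeoff,
					   const struct fdt_driver *const *drivers)
{
	const char *compat;
	int i = 0;

	/* Index 0 is the most specific compatible string, so offer each
	 * string to every driver before trying the next fallback. */
	while ((compat = fdt_stringlist_get(fdt, nodeoff,
					    "compatible", i++, NULL))) {
		for (const struct fdt_driver *const *d = drivers; *d; d++)
			for (const struct fdt_match *m = (*d)->match_table;
			     m->compatible; m++)
				if (!sbi_strcmp(m->compatible, compat))
					return *d;
	}

	return NULL;
}
```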
Himanshu Chauhan
b2e8e6986d lib: sbi: Return SBI_EALREADY error code if SSE event is present
Return the SBI_EALREADY error code instead of SBI_EINVAL when an
event has already been added to the supported list.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-23 21:17:36 +05:30
Anup Patel
8573a9b858 scripts: Fix firmware binaries compilation in create-binary-archive.sh
Currently, the generic libsbi.a is compiled in create-binary-archive.sh
before the platform specific firmwares, so a libsbi.a without any SBI
extensions gets linked into the platform specific firmwares. To address
this, remove the temporary build directory in create-binary-archive.sh
before using it.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-03-23 21:16:25 +05:30
Dongdong Zhang
3e6bd14246 lib: tests: add bitwise operations unit tests
Add unit tests for various bitwise operations using the SBI unit
test framework.

Signed-off-by: Dongdong Zhang <zhangdongdong@eswincomputing.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-23 21:06:25 +05:30
Dongdong Zhang
56341e95ae lib: sbi: Fix potential garbage data in string copy functions
In the original implementation of `sbi_strcpy` and `sbi_strncpy`, if the
destination buffer (`dest`) was longer than the source string (`src`),
the functions did not ensure that the remaining bytes in `dest` were
properly null-terminated. This could result in garbage data being
present in the destination buffer after the copy operation, as the
functions only copied characters from `src` without explicitly
terminating `dest`.

Signed-off-by: Dongdong Zhang <zhangdongdong@eswincomputing.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-23 18:38:57 +05:30
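
The fix amounts to classic strncpy semantics; a sketch of the corrected behavior (not necessarily the exact in-tree code):

```c
char *sbi_strncpy(char *dest, const char *src, size_t count)
{
	char *ret = dest;

	while (count && *src) {
		*dest++ = *src++;
		count--;
	}
	/* Explicitly NUL-pad the remainder so no stale bytes survive. */
	while (count--)
		*dest++ = '\0';

	return ret;
}
```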
Akshay Behl
0b78665a6c lib: add tests for sbi_ecall functionality
This patch adds unit tests for verifying the sbi_ecall version,
impid handling, and extension registration functions. The tests
ensure that the extension registration and unregistration work
as expected.

Signed-off-by: Akshay Behl <akshaybehl231@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-03-23 16:56:54 +05:30
Clément Léger
1ad1991244 lib: sbi: fwft: Return SBI_ERR_DENIED_LOCKED when setting a locked feature
The latest modifications to the spec mandate that setting a locked
feature returns SBI_ERR_DENIED_LOCKED.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-19 22:13:21 +05:30
Clément Léger
b91ab20cd2 include: sbi: Add SBI_ERR_DENIED_LOCKED
Add SBI_ERR_DENIED_LOCKED and set it as SBI_LAST_ERR, which was
wrongly set to SBI_ERR_BAD_RANGE.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-19 22:10:59 +05:30
Alex Studer
6019259dfb lib: utils/irqchip: Match against more specific compatible strings first
The T-HEAD C90x PLIC has some special quirks, such as the S-mode
delegation bit. OpenSBI currently handles this by checking the compatible
string in the device tree.

However, this matching is done in the order of the fdt_match array. So if
a device tree contains both strings, for example:

	compatible = "thead,c900-plic", "riscv,plic0";

Then OpenSBI will match against the generic "riscv,plic0" string, since
that appears first in the fdt_match array. This means it will fail to set
the S-mode delegation bit, and Linux will fail to boot. In some cases, it
is not possible to change the compatible string to just the T-HEAD PLIC,
as older versions of Linux only recognize the RISC-V compatible string.

This patch fixes that by moving the RISC-V string to the end, ensuring
that the more specific options get matched first.

Signed-off-by: Alex Studer <alex@studer.dev>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-19 21:49:08 +05:30
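
The resulting match table ordering, illustratively (the quirk data pointer name is assumed):

```c
static const struct fdt_match irqchip_plic_match[] = {
	/* Most specific first: picks up the T-HEAD delegation quirk. */
	{ .compatible = "thead,c900-plic", .data = &thead_plic_quirks },
	/* Generic fallback last. */
	{ .compatible = "riscv,plic0" },
	{ /* sentinel */ },
};
```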
Samuel Holland
a2c172f526 lib: utils/fdt: Allocate fdt_pmu_evt_select on the heap
This reduces .bss size by 8 KiB, and should reduce overall memory usage
since most platforms will have significantly fewer than 512 entries in
this table. At the same time, it removes the fixed table size limit.
Since the table is only used within fdt_pmu.c, instead of updating the
extern declaration, make the table local to this file.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-19 18:25:17 +05:30
Samuel Holland
f95d1140f6 lib: utils/fdt: Remove redundant PMU property length checks
If a property value is too small, len will be zero after the division
on the next line, so the property will be ignored. This is the same
behavior as when the length check fails. Furthermore, the first two
length checks were already ineffectual, because each item in those
arrays is 12 bytes long, not 8.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-19 18:22:52 +05:30
Samuel Holland
38df94422b lib: utils: Constify FDT driver definitions
The carray referencing these definitions assumes they are const.

Fixes: 6a26726e08 ("lib/utils: reset: Add RPMI System Reset driver")
Fixes: 13f55f33a1 ("lib: utils/suspend: Add RPMI system suspend driver")
Fixes: 33ee9b8240 ("lib: utils/hsm: Add RPMI HSM driver")
Fixes: 591a98bdd5 ("lib: utils/cppc: Add RPMI CPPC driver")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-19 17:47:24 +05:30
Clément Léger
f354400ebf lib: sbi: sse: fix invalid errors returned for sse_hart_mask/unmask()
When called twice, sse_hart_mask()/sse_hart_unmask() should return
SBI_EALREADY_STOPPED/SBI_EALREADY_STARTED. These return values were
inverted.

Fixes: b919daf495 ("lib: sbi: Add support to mask/unmask SSE events")
Reported-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-19 17:18:13 +05:30
Anup Patel
1f64fef919 lib: sbi: Fix non-root domain startup
Currently, sbi_sse_init() in the cold boot path is called after
sbi_domain_finalize(), so the boot HART of a non-root domain can start
before the SSE cold boot init, which can cause the warm boot of such
HARTs to crash in sbi_sse_init().

To address the above issue, factor out the non-root domain startup
from the sbi_domain_finalize() function as a separate
sbi_domain_startup() function which can be called after sbi_sse_init()
in the cold boot path.

Fixes: 93f7d819fd ("lib: sbi: sse: allow adding new events")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-02-19 17:11:39 +05:30
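
A sketch of the resulting cold-boot ordering (signatures and error handling abridged and assumed):

```c
rc = sbi_domain_finalize(scratch, hartid);
if (rc)
	sbi_hart_hang();

rc = sbi_sse_init(scratch, true);
if (rc)
	sbi_hart_hang();

/* Boot HARTs of non-root domains are only started once SSE is ready. */
rc = sbi_domain_startup(scratch, hartid);
if (rc)
	sbi_hart_hang();
```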
Joel Stanley
fe11dee7ea README: Remove comment about Bootlin toolchains being 64-bit only
As of January 2025 they have riscv32-ilp32d and riscv64-lp64d:

 https://toolchains.bootlin.com/releases_riscv32-ilp32d.html
 https://toolchains.bootlin.com/releases_riscv64-lp64d.html

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-18 14:10:54 +05:30
Joel Stanley
f3dfa6488f README: Update toolchain section to mention PIE requirement
Since commit 76d7e9b8ee ("firmware: remove copy-base relocation"), the
Makefile enforces PIE support.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-18 14:10:11 +05:30
Joel Stanley
02c7a9bbef README: Any arch host can be used to cross compile
Verified by cross compiling from an arm64 host.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-18 14:08:29 +05:30
Anup Patel
ec09918426 lib: sbi: Update MPXY framework and SBI extension as per latest spec
The latest SBI 3.0 spec defines a new sbi_mpxy_get_shmem_size()
function and simplifies the sbi_mpxy_set_shmem() function, so update
the MPXY framework and SBI extension accordingly.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-02-13 11:10:03 +05:30
Anup Patel
61abd975f2 lib: utils: Add MPXY RPMI mailbox driver for System MSI service group
Supervisor software can directly receive most of the system MSIs,
except the P2A doorbell and MSIs that are preferred to be handled in
M-mode.

Add MPXY RPMI mailbox client driver for the System MSI service group.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-02-13 11:10:03 +05:30
Anup Patel
b05e2a1956 include: sbi_utils: Update RPMI service group IDs and BASE service group
The service group ID assignment and some of the BASE services have
changed in the latest RPMI specification, so let's update the RPMI
implementation accordingly.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-02-13 11:10:03 +05:30
Anup Patel
e4bc55930b lib: utils: Populate MPXY channel attributes from RPMI channel attributes
Use the RPMI mailbox channel attributes to populate MPXY channel
attributes instead of hard coding them.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-02-13 11:10:03 +05:30
Anup Patel
91012b475d lib: utils: Implement get_attribute() for the RPMI shared memory mailbox
To allow clients to query the service group version of an RPMI mailbox
channel, implement the get_attribute() callback for the RPMI shared
memory mailbox controller.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-02-13 11:10:03 +05:30
Anup Patel
f8272946da include: sbi_utils: Include mailbox.h in rpmi_mailbox.h header
rpmi_mailbox.h uses structures defined in mailbox.h, so let's include
mailbox.h in the rpmi_mailbox.h header.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-02-13 11:10:03 +05:30
Anup Patel
218de6ff7d lib: utils: Improve variable declarations in MPXY RPMI mailbox client
The local variable declarations should be at the start of a function
and preferably organized like an inverted pyramid.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-02-13 11:10:03 +05:30
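
The "inverted pyramid" simply means the longest declarations come first; an illustrative shape (variable names are not from the patch):

```c
struct mbox_xfer xfer;
struct mbox_chan *chan;
unsigned long attr;
int rc;
```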
Anup Patel
879ee6859c lib: utils: Drop notifications from MPXY RPMI mailbox client
Currently, the common MPXY RPMI mailbox client does not support
notifications, so there is no need for dummy notification support.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-02-13 11:10:03 +05:30
Anup Patel
a4876e6c6c lib: sbi: Improve local variable declarations in MPXY framework
The local variable declarations should be at the start of a function
and preferably organized like an inverted pyramid.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-02-13 11:10:03 +05:30
Anup Patel
30437eb204 lib: sbi: Fix capability bit assignment in MPXY framework
The capability bit assignment in the MPXY framework does not match the
SBI MPXY extension in the latest SBI specification, so update it.

Fixes: 7939bf1329 ("lib: sbi: Add SBI Message Proxy (MPXY) framework")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-02-13 11:10:03 +05:30
Anup Patel
75c2057a6f lib: utils: Introduce optional MPXY RPMI service group operations
Some of the RPMI service groups may need additional context and
special handling when transferring messages via the underlying mailbox
channel, so introduce optional MPXY RPMI service group operations
for this purpose.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-02-13 11:10:03 +05:30
Anup Patel
fc1232899d lib: utils: Constantify mpxy_rpmi_mbox_data in mpxy_rpmi_mbox
The mpxy_rpmi_mbox_data is provided by the RPMI service group specific
MPXY driver to the common MPXY RPMI mailbox client implementation,
so let's constantify mpxy_rpmi_mbox_data in mpxy_rpmi_mbox so that
it is not accidentally modified.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-02-13 11:10:03 +05:30
Anup Patel
d14340cb31 lib: utils: Split the FDT MPXY RPMI mailbox client into two parts
Instead of having one common FDT MPXY RPMI mailbox client driver
for various RPMI service groups, split this driver into two parts:
1) Common MPXY RPMI mailbox client library
2) MPXY driver for RPMI clock service group

The above split enables having a separate MPXY driver for each
RPMI service group, and #1 (above) will allow code sharing
between the various MPXY RPMI drivers.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2025-02-13 11:10:03 +05:30
Clément Léger
5ce121b7a1 lib: sbi: increase the size of the string used for extension display
With the "max" QEMU cpu, the displayed extension string is truncated due
to the buffer being too small. Increase it to 256 to display the full
set of extensions correctly.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-13 09:16:05 +05:30
Samuel Holland
434add551c lib: utils: Initialize miscellaneous drivers in one pass
For driver subsystems that are not tightly integrated into the OpenSBI
init sequence, it is not important that the drivers are initialized in
any particular order. By putting all of these drivers in one array, they
can all be initialized with a single pass through the devicetree. This
saves about 10 ms of boot time on HiFive Unmatched.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-12 21:39:25 +05:30
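
In outline, a single devicetree walk replaces the per-subsystem scans; a sketch (the combined list name is assumed; fdt_driver_init_by_offset() is the helper mentioned earlier in this log, with its signature assumed here):

```c
int offset;

/* One pass: try to bind every node against the combined driver list,
 * best effort, ignoring nodes that match nothing. */
for (offset = fdt_next_node(fdt, -1, NULL); offset >= 0;
     offset = fdt_next_node(fdt, offset, NULL))
	fdt_driver_init_by_offset(fdt, offset, misc_drivers);
```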
Samuel Holland
e84ba96634 lib: utils/fdt: Remove fdt_find_match()
Now that all drivers are using the fdt_driver functions for
initialization, this function is unused and can be removed.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-12 21:27:54 +05:30
Samuel Holland
9e1a1518d4 lib: utils/irqchip: Use fdt_driver for initialization
The irqchip driver subsystem does not need any extra data, so it can use
`struct fdt_driver` directly. The generic fdt_irqchip_init() performs a
best-effort initialization of all matching DT nodes.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-12 21:22:37 +05:30
Inochi Amaoto
a7f3c159a0 platform: generic: thead: add Sophgo SG2044
The Sophgo SG2044 uses a new version of the C920; although it supports
sscofpmf, its cores still need this PMU quirk.

Signed-off-by: Inochi Amaoto <inochiama@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-12 18:00:18 +05:30
Xiang W
82da072eb4 firmware: fw_base.S: Fix comments for _wait_for_boot_hart
Due to some historical issues, the value of BOOT_STATUS_BOOT_HART_DONE
has changed and the comment message needs to be corrected.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-12 09:34:06 +05:30
Raj Vishwanathan
5e90e54a1a lib: utils:Check that hartid is valid
It is possible that hartids are not sequential, so a hartid should not
be validated against SBI_HARTMASK_MAX_BITS. Instead, we should check the
index of the hartid (the hart index) against SBI_HARTMASK_MAX_BITS.

Signed-off-by: Raj Vishwanathan <Raj.Vishwanathan@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-12 09:24:09 +05:30
Raj Vishwanathan
4f12f8b02f include: sbi: Align SBI trap registers to a nice boundary
Align SBI_TRAP_CONTEXT_SIZE to a multiple of 16 bytes. If it is not
aligned to 16 bytes on RV64, stack accesses can suffer performance
problems; aligning it correctly avoids them.

Signed-off-by: Raj Vishwanathan <Raj.Vishwanathan@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-12 09:14:43 +05:30
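
A hedged sketch of the rounding involved (the macro and operand names are illustrative, not the in-tree definitions):

```c
/* Round up to the RISC-V psABI stack alignment so sp stays 16-byte
 * aligned when a trap context is carved out of the stack. */
#define ALIGN_16(x)		(((x) + 15) & ~15)

#define SBI_TRAP_CONTEXT_SIZE	ALIGN_16(SBI_TRAP_REGS_SIZE + \
					 SBI_TRAP_INFO_SIZE)
```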
Chao Du
3f25380d85 lib: utils: Make the enforce permission bit configurable from DT
The domain_support.md documentation states that the enforce permission
bit (BIT[6]) could be set in the "regions" property of a domain
instance DT node. However, this bit is masked in the current
implementation. This patch unmasks the bit to make it configurable
from DT.

Signed-off-by: Chao Du <duchao@eswincomputing.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-11 17:56:48 +05:30
Huang Borong
a76aca030d lib: utils/fdt: update fdt_parse_aplic_node()
1. Initialize struct imsic_data imsic to 0 at definition to prevent the
   use of uninitialized memory, ensuring the variable starts with known
   values.

2. Remove the redundant memset call on the "aplic" parameter since the
   memory for aplic is allocated using sbi_zalloc() by the caller
   irqchip_aplic_cold_init(), which guarantees it is already set to 0.

Signed-off-by: Huang Borong <huangborong@bosc.ac.cn>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-02-11 16:58:24 +05:30
Leo Yu-Chi Liang
555055d145 include: utils/fdt_helper: fix typo har't'id
s/hard_id/hartid/

Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-01-30 11:15:09 +05:30
Clément Léger
5c7e2c8334 lib: sbi: pmu: add the PMU SSE event only if overflow IRQ is supported
Add the PMU SSE event only if an overflow irq bit is present.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2025-01-30 10:43:16 +05:30
Clément Léger
ecab71e19a lib: sbi: sse: return SBI_ENOTSUPP for unsupported events
If a standard event was not found in the list of events that are handled
by harts but belongs to the standard event list defined by the
specification, return SBI_ENOTSUPP. Without that, we cannot
distinguish a non-implemented standard event from an invalid one.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2025-01-30 10:42:27 +05:30
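
The distinction being drawn, as a sketch (helper names assumed):

```c
if (!sse_event_is_supported(event_id)) {
	/* Defined by the spec but not implemented on this platform. */
	if (sse_is_standard_event(event_id))
		return SBI_ENOTSUPP;
	/* Not a valid event id at all. */
	return SBI_EINVAL;
}
```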
Clément Léger
93f7d819fd lib: sbi: sse: allow adding new events
In order to allow events to be dynamically added, remove the existing
static array of events and use a singly linked list of supported events.
This allows us to move the cb_ops into this list and associate it with
an event_id. Drivers can now register cb_ops before bringing up the SSE
core to handle additional events (platform ones, for instance).

sbi_sse_init() now allocates as many events as are present in the linked
list. Events can now be added with sbi_sse_add_event(), which allows
adding new supported events with some callback operations, if any. If an
event is not to be supported, then sbi_sse_add_event() should not be
called. This approach currently considers that local events are to be
supported on all harts (i.e., they all support the same ISA or
dependencies). If per-hart event availability needs to be supported,
then an is_supported() callback could be added later and called for
each hart.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
2025-01-30 10:40:49 +05:30
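
Driver-side usage would look roughly like this (the event id and callback contents are illustrative; the sbi_sse_add_event() name is from the commit message, its signature assumed):

```c
static struct sbi_sse_cb_ops pmu_sse_ops;	/* fill in optional callbacks */

static int pmu_sse_register(void)
{
	/* Called before SSE core bring-up; if the event is unsupported,
	 * simply never call sbi_sse_add_event() for it. */
	return sbi_sse_add_event(SBI_SSE_EVENT_LOCAL_PMU, &pmu_sse_ops);
}
```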
Clément Léger
147978f312 include: lib: add a simple singly linked list implementation
Add a simple singly linked list implementation for cases where a doubly
linked list is not needed. This makes it easy to have statically defined
linked lists that can be extended at runtime.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-01-30 10:35:46 +05:30
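
A minimal shape for such a list (names illustrative):

```c
struct sbi_slist {
	struct sbi_slist *next;
};

/* Heads can be defined statically... */
#define SBI_SLIST_HEAD_INIT	{ .next = NULL }

/* ...and extended at runtime with O(1) insertion at the front. */
static inline void sbi_slist_add(struct sbi_slist *node,
				 struct sbi_slist *head)
{
	node->next = head->next;
	head->next = node;
}
```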
Clément Léger
e05782b8ff lib: sbi: sse: return an error value from sse_event_get()
Since event support will be checked in the next commits, return an error
value from sse_event_get() so that it can be propagated. This will be
used to report SBI_ERR_NOT_SUPPORTED when an event isn't supported.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2025-01-30 10:34:14 +05:30
Clément Léger
9d2c9c6ca0 lib: sbi: move sbi_double_trap_handler() to a dedicated header
We will add new functions to sbi_double_trap.c in order to register an
SSE event, so split this out into a header as part of the preparation
work.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2025-01-30 10:32:18 +05:30
Clément Léger
3943ddbaab lib: sbi: pmu: fix usage of sbi_pmu_irq_bit()
While sbi_pmu_irq_bit() was used to delegate the irq to S-mode, LCOFIP
usage was still hardcoded in various places. This led to changing the
return value of sbi_pmu_irq_bit() to be a bit number rather than a bit
mask, since it returns an 'int' and the bit number itself is needed in
the IRQ handlers. Add a similar function that returns the irq mask,
which can be used where the mask is required rather than the bit itself.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2025-01-30 10:30:45 +05:30
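
The bit/mask pair described above, sketched (IRQ_PMU_OVF and the Sscofpmf extension check are the usual identifiers; the exact in-tree bodies may differ):

```c
/* Bit number: suitable for indexing and delegation bookkeeping. */
static inline int sbi_pmu_irq_bit(void)
{
	if (sbi_hart_has_extension(sbi_scratch_thishart_ptr(),
				   SBI_HART_EXT_SSCOFPMF))
		return IRQ_PMU_OVF;
	return -1;
}

/* Mask form: for CSR set/clear operations on mip/mie/mideleg. */
static inline unsigned long sbi_pmu_irq_mask(void)
{
	int bit = sbi_pmu_irq_bit();

	return bit < 0 ? 0 : BIT(bit);
}
```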
232 changed files with 8417 additions and 2911 deletions

View File

@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: BSD-2-Clause
 # See here for more information about the format and editor support:
 # https://editorconfig.org/

View File

@@ -94,20 +94,26 @@ OPENSBI_VERSION_MINOR=`grep "define OPENSBI_VERSION_MINOR" $(include_dir)/sbi/sb
 OPENSBI_VERSION_GIT=
 # Detect 'git' presence before issuing 'git' commands
-GIT_AVAIL=$(shell command -v git 2> /dev/null)
+GIT_AVAIL := $(shell command -v git 2> /dev/null)
 ifneq ($(GIT_AVAIL),)
-GIT_DIR=$(shell git rev-parse --git-dir 2> /dev/null)
+GIT_DIR := $(shell git rev-parse --git-dir 2> /dev/null)
 ifneq ($(GIT_DIR),)
-OPENSBI_VERSION_GIT=$(shell if [ -d $(GIT_DIR) ]; then git describe 2> /dev/null; fi)
+OPENSBI_VERSION_GIT := $(shell if [ -d $(GIT_DIR) ]; then git describe 2> /dev/null; fi)
 endif
 endif
 
 # Setup compilation commands
 ifneq ($(LLVM),)
-CC = clang
-AR = llvm-ar
-LD = ld.lld
-OBJCOPY = llvm-objcopy
+ifneq ($(filter %/,$(LLVM)),)
+LLVM_PREFIX := $(LLVM)
+else ifneq ($(filter -%,$(LLVM)),)
+LLVM_SUFFIX := $(LLVM)
+endif
+CC = $(LLVM_PREFIX)clang$(LLVM_SUFFIX)
+AR = $(LLVM_PREFIX)llvm-ar$(LLVM_SUFFIX)
+LD = $(LLVM_PREFIX)ld.lld$(LLVM_SUFFIX)
+OBJCOPY = $(LLVM_PREFIX)llvm-objcopy$(LLVM_SUFFIX)
 else
 ifdef CROSS_COMPILE
 CC = $(CROSS_COMPILE)gcc
@@ -145,6 +151,12 @@ endif
 # Guess the compiler's XLEN
 OPENSBI_CC_XLEN := $(shell TMP=`$(CC) $(CLANG_TARGET) -dumpmachine | sed 's/riscv\([0-9][0-9]\).*/\1/'`; echo $${TMP})
+# If guessing XLEN fails, default to 64
+ifneq ($(OPENSBI_CC_XLEN),32)
+ifneq ($(OPENSBI_CC_XLEN),64)
+OPENSBI_CC_XLEN = 64
+endif
+endif
 
 # Guess the compiler's ABI and ISA
 ifneq ($(CC_IS_CLANG),y)
@@ -174,6 +186,11 @@ else
 USE_LD_FLAG = -fuse-ld=bfd
 endif
 
+REPRODUCIBLE ?= n
+ifeq ($(REPRODUCIBLE),y)
+REPRODUCIBLE_FLAGS += -ffile-prefix-map=$(src_dir)=
+endif
+
 # Check whether the linker supports creating PIEs
 OPENSBI_LD_PIE := $(shell $(CC) $(CLANG_TARGET) $(RELAX_FLAG) $(USE_LD_FLAG) -fPIE -nostdlib -Wl,-pie -x c /dev/null -o /dev/null >/dev/null 2>&1 && echo y || echo n)
@@ -202,16 +219,18 @@ endif
 BUILD_INFO ?= n
 ifeq ($(BUILD_INFO),y)
 OPENSBI_BUILD_DATE_FMT = +%Y-%m-%d %H:%M:%S %z
+ifndef OPENSBI_BUILD_TIME_STAMP
 ifdef SOURCE_DATE_EPOCH
-OPENSBI_BUILD_TIME_STAMP ?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" \
+OPENSBI_BUILD_TIME_STAMP := $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" \
			    "$(OPENSBI_BUILD_DATE_FMT)" 2>/dev/null || \
			    date -u -r "$(SOURCE_DATE_EPOCH)" \
			    "$(OPENSBI_BUILD_DATE_FMT)" 2>/dev/null || \
			    date -u "$(OPENSBI_BUILD_DATE_FMT)")
 else
-OPENSBI_BUILD_TIME_STAMP ?= $(shell date "$(OPENSBI_BUILD_DATE_FMT)")
+OPENSBI_BUILD_TIME_STAMP := $(shell date "$(OPENSBI_BUILD_DATE_FMT)")
 endif
+endif
-OPENSBI_BUILD_COMPILER_VERSION=$(shell $(CC) -v 2>&1 | grep ' version ' | \
+OPENSBI_BUILD_COMPILER_VERSION := $(shell $(CC) -v 2>&1 | grep ' version ' | \
				sed 's/[[:space:]]*$$//')
 endif
@@ -350,6 +369,7 @@ ifeq ($(BUILD_INFO),y)
 GENFLAGS += -DOPENSBI_BUILD_TIME_STAMP="\"$(OPENSBI_BUILD_TIME_STAMP)\""
 GENFLAGS += -DOPENSBI_BUILD_COMPILER_VERSION="\"$(OPENSBI_BUILD_COMPILER_VERSION)\""
 endif
+GENFLAGS += -include $(include_dir)/sbi/sbi_visibility.h
 ifdef PLATFORM
 GENFLAGS += -include $(KCONFIG_AUTOHEADER)
 endif
@@ -359,6 +379,9 @@ GENFLAGS += $(firmware-genflags-y)
 CFLAGS = -g -Wall -Werror -ffreestanding -nostdlib -fno-stack-protector -fno-strict-aliasing -ffunction-sections -fdata-sections
 CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls
+CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
+CFLAGS += -std=gnu11
+CFLAGS += $(REPRODUCIBLE_FLAGS)
 # Optionally supported flags
 ifeq ($(CC_SUPPORT_VECTOR),y)
 CFLAGS += -DOPENSBI_CC_SUPPORT_VECTOR
@@ -384,6 +407,7 @@ CPPFLAGS += $(firmware-cppflags-y)
 ASFLAGS = -g -Wall -nostdlib
 ASFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls
 ASFLAGS += -fPIE
+ASFLAGS += $(REPRODUCIBLE_FLAGS)
 # Optionally supported flags
 ifeq ($(CC_SUPPORT_SAVE_RESTORE),y)
 ASFLAGS += -mno-save-restore
@@ -427,11 +451,14 @@ DTSCPPFLAGS = $(CPPFLAGS) -nostdinc -nostdlib -fno-builtin -D__DTS__ -x assemble
 ifneq ($(DEBUG),)
 CFLAGS += -O0
-ELFFLAGS += -Wl,--print-gc-sections
 else
 CFLAGS += -O2
 endif
+
+ifeq ($(V), 1)
+ELFFLAGS += -Wl,--print-gc-sections
+endif
 
 # Setup functions for compilation
 define dynamic_flags
 -I$(shell dirname $(2)) -D__OBJNAME__=$(subst -,_,$(shell basename $(1) .o))

View File

@@ -99,7 +99,7 @@ capable enough to bring up all other non-booting harts using HSM extension.
 Required Toolchain and Packages
 -------------------------------
 
-OpenSBI can be compiled natively or cross-compiled on a x86 host. For
+OpenSBI can be compiled natively or cross-compiled on a host machine. For
 cross-compilation, you can build your own toolchain, download a prebuilt one
 from the [Bootlin toolchain repository] or install a distribution-provided
 toolchain; if you opt to use LLVM/Clang, most distribution toolchains will
@@ -108,16 +108,12 @@ LLVM/Clang toolchain due to LLVM's ability to support multiple backends in the
 same binary, so is often an easy way to obtain a working cross-compilation
 toolchain.
 
-Basically, we prefer toolchains with Position Independent Executable (PIE)
-support like *riscv64-linux-gnu-gcc*, *riscv64-unknown-freebsd-gcc*, or
-*Clang/LLVM* as they generate PIE firmware images that can run at arbitrary
-address with appropriate alignment. If a bare-metal GNU toolchain (e.g.
-*riscv64-unknown-elf-gcc*) is used, static linked firmware images are
-generated instead. *Clang/LLVM* can still generate PIE images if a bare-metal
-triple is used (e.g. *-target riscv64-unknown-elf*).
-
-Please note that only a 64-bit version of the toolchain is available in
-the Bootlin toolchain repository for now.
+Toolchains with Position Independent Executable (PIE) support like
+*riscv64-linux-gnu-gcc*, *riscv64-unknown-freebsd-gcc*, or *Clang/LLVM* are
+required in order to generate PIE firmware images that can run at arbitrary
+address with appropriate alignment. Bare-metal GNU toolchains (e.g.
+*riscv64-unknown-elf-gcc*) cannot be used. *Clang/LLVM* can still generate PIE
+images if a bare-metal triple is used (e.g. *-target riscv64-unknown-elf*).
 
 In addition to a toolchain, OpenSBI also requires the following packages on
 the host:
@@ -256,6 +252,18 @@ option with:
 make LLVM=1
 ```
 
+To build with a specific version of LLVM, a path to a directory containing the
+LLVM tools can be provided:
+
+```
+make LLVM=/path/to/llvm/
+```
+
+If you have versioned llvm tools you would like to use, such as `clang-17`,
+the LLVM variable can be set as:
+
+```
+make LLVM=-17
+```
+
 When using Clang, *CROSS_COMPILE* often does not need to be defined unless
 using GNU binutils with prefixed binary names. *PLATFORM_RISCV_XLEN* will be
 used to infer a default triple to pass to Clang, so if *PLATFORM_RISCV_XLEN*

View File

@@ -13,7 +13,7 @@ The FPGA SoC currently contains the following peripherals:
 - Bootrom containing zero stage bootloader and device tree.
 
 To build platform specific library and firmwares, provide the
-*PLATFORM=fpga/ariane* parameter to the top level `make` command.
+*PLATFORM=generic* parameter to the top level `make` command.
 
 Platform Options
 ----------------
@@ -26,7 +26,7 @@ Building Ariane FPGA Platform
 **Linux Kernel Payload**
 
 ```
-make PLATFORM=fpga/ariane FW_PAYLOAD_PATH=<linux_build_directory>/arch/riscv/boot/Image
+make PLATFORM=generic FW_PAYLOAD_PATH=<linux_build_directory>/arch/riscv/boot/Image
 ```
 
 Booting Ariane FPGA Platform

View File

@@ -7,8 +7,8 @@ processor from ETH Zurich. To this end, Ariane has been equipped with a
different L1 cache subsystem that follows a write-through protocol and that has different L1 cache subsystem that follows a write-through protocol and that has
support for cache invalidations and atomics. support for cache invalidations and atomics.
To build platform specific library and firmwares, provide the To build platform specific library and firmwares, provide the *PLATFORM=generic*
*PLATFORM=fpga/openpiton* parameter to the top level `make` command. parameter to the top level `make` command.
Platform Options Platform Options
---------------- ----------------
@@ -21,7 +21,7 @@ Building Ariane FPGA Platform
**Linux Kernel Payload** **Linux Kernel Payload**
``` ```
make PLATFORM=fpga/openpiton FW_PAYLOAD_PATH=<linux_build_directory>/arch/riscv/boot/Image make PLATFORM=generic FW_PAYLOAD_PATH=<linux_build_directory>/arch/riscv/boot/Image
``` ```
Booting Ariane FPGA Platform Booting Ariane FPGA Platform

View File

@@ -47,6 +47,8 @@ RISC-V Platforms Using Generic Platform
 * **SiFive HiFive Unleashed** (*[sifive_fu540.md]*)
 * **Spike** (*[spike.md]*)
 * **T-HEAD C9xx series Processors** (*[thead-c9xx.md]*)
+* **OpenPiton FPGA SoC** (*[fpga-openpiton.md]*)
+* **Ariane FPGA SoC** (*[fpga-ariane.md]*)
 
 [andes-ae350.md]: andes-ae350.md
 [qemu_virt.md]: qemu_virt.md
@@ -55,3 +57,5 @@ RISC-V Platforms Using Generic Platform
 [sifive_fu540.md]: sifive_fu540.md
 [spike.md]: spike.md
 [thead-c9xx.md]: thead-c9xx.md
+[fpga-openpiton.md]: fpga-openpiton.md
+[fpga-ariane.md]: fpga-ariane.md

View File

@@ -21,20 +21,12 @@ OpenSBI currently supports the following virtual and hardware platforms:
 * **Kendryte K210 SoC**: Platform support for the Kendryte K210 SoC used on
   boards such as the Kendryte KD233 or the Sipeed MAIX Dock.
 
-* **Ariane FPGA SoC**: Platform support for the Ariane FPGA SoC used on
-  Genesys 2 board. More details on this platform can be found in the file
-  *[fpga-ariane.md]*.
-
 * **Andes AE350 SoC**: Platform support for the Andes's SoC (AE350). More
   details on this platform can be found in the file *[andes-ae350.md]*.
 
 * **Spike**: Platform support for the Spike emulator. More
   details on this platform can be found in the file *[spike.md]*.
 
-* **OpenPiton FPGA SoC**: Platform support OpenPiton research platform based
-  on ariane core. More details on this platform can be found in the file
-  *[fpga-openpiton.md]*.
-
 * **Shakti C-class SoC Platform**: Platform support for Shakti C-class
   processor based SOCs. More details on this platform can be found in the
   file *[shakti_cclass.md]*.
@@ -52,10 +44,8 @@ comments to facilitate the implementation.
 [generic.md]: generic.md
 [qemu_virt.md]: qemu_virt.md
 [sifive_fu540.md]: sifive_fu540.md
-[fpga-ariane.md]: fpga-ariane.md
 [andes-ae350.md]: andes-ae350.md
 [thead-c910.md]: thead-c910.md
 [spike.md]: spike.md
-[fpga-openpiton.md]: fpga-openpiton.md
 [shakti_cclass.md]: shakti_cclass.md
 [renesas-rzfive.md]: renesas-rzfive.md

View File

@@ -19,6 +19,10 @@ Base Platform Requirements
 The base RISC-V platform requirements for OpenSBI are as follows:
 
 1. At least rv32ima_zicsr or rv64ima_zicsr required on all HARTs
+   * Users may restrict the usage of atomic instructions to lr/sc
+     via rv32im_zalrsc_zicsr or rv64im_zalrsc_zicsr if preferred
 2. At least one HART should have S-mode support because:
    * SBI calls are meant for RISC-V S-mode (Supervisor mode)

View File

@@ -74,10 +74,10 @@ pmu {
 			<0x10000 0x10033 0x000ff000>;
 	/* For event ID 0x0002 */
 	riscv,raw-event-to-mhpmcounters = <0x0000 0x0002 0xffffffff 0xffffffff 0x00000f8>,
-	/* For event ID 0-4 */
+	/* For event ID 0-15 */
 			<0x0 0x0 0xffffffff 0xfffffff0 0x00000ff0>,
 	/* For event ID 0xffffffff0000000f - 0xffffffff000000ff */
-			<0xffffffff 0x0 0xffffffff 0xffffff0f 0x00000ff0>;
+			<0xffffffff 0xf 0xffffffff 0xffffff0f 0x00000ff0>;
 };
 ```

View File

@@ -1 +1,28 @@
 # SPDX-License-Identifier: BSD-2-Clause
+
+menu "Stack Protector Support"
+
+config STACK_PROTECTOR
+	bool "Stack Protector buffer overflow detection"
+	default n
+	help
+	  This option turns on the "stack-protector" compiler feature.
+
+config STACK_PROTECTOR_STRONG
+	bool "Strong Stack Protector"
+	depends on STACK_PROTECTOR
+	default n
+	help
+	  Turn on the "stack-protector" with "-fstack-protector-strong" option.
+	  Like -fstack-protector but includes additional functions to be
+	  protected.
+
+config STACK_PROTECTOR_ALL
+	bool "Almighty Stack Protector"
+	depends on STACK_PROTECTOR
+	default n
+	help
+	  Turn on the "stack-protector" with "-fstack-protector-all" option.
+	  Like -fstack-protector except that all functions are protected.
+
+endmenu

View File

@@ -59,28 +59,38 @@ _try_lottery:
 	/* Jump to relocation wait loop if we don't get relocation lottery */
 	lla	a6, _boot_lottery
 	li	a7, BOOT_LOTTERY_ACQUIRED
+#ifdef __riscv_atomic
 	amoswap.w a6, a7, (a6)
 	bnez	a6, _wait_for_boot_hart
+#elif __riscv_zalrsc
+_sc_fail:
+	lr.w	t0, (a6)
+	sc.w	t1, a7, (a6)
+	bnez	t1, _sc_fail
+	bnez	t0, _wait_for_boot_hart
+#else
+#error "need a or zalrsc"
+#endif
 
 	/* relocate the global table content */
 	li	t0, FW_TEXT_START	/* link start */
 	lla	t1, _fw_start		/* load start */
 	sub	t2, t1, t0		/* load offset */
-	lla	t0, __rel_dyn_start
-	lla	t1, __rel_dyn_end
+	lla	t0, __rela_dyn_start
+	lla	t1, __rela_dyn_end
 	beq	t0, t1, _relocate_done
 2:
-	REG_L	t5, REGBYTES(t0)	/* t5 <-- relocation info:type */
+	REG_L	t5, __SIZEOF_LONG__(t0)	/* t5 <-- relocation info:type */
 	li	t3, R_RISCV_RELATIVE	/* reloc type R_RISCV_RELATIVE */
 	bne	t5, t3, 3f
 	REG_L	t3, 0(t0)
-	REG_L	t5, (REGBYTES * 2)(t0)	/* t5 <-- addend */
+	REG_L	t5, (__SIZEOF_LONG__ * 2)(t0)	/* t5 <-- addend */
 	add	t5, t5, t2
 	add	t3, t3, t2
 	REG_S	t5, 0(t3)	/* store runtime address to the GOT entry */
 3:
-	addi	t0, t0, (REGBYTES * 3)
+	addi	t0, t0, (__SIZEOF_LONG__ * 3)
 	blt	t0, t1, 2b
 _relocate_done:
 	/* At this point we are running from link address */
@@ -292,7 +302,7 @@ _fdt_reloc_done:
 	REG_S	t0, 0(t1)
 	j	_start_warm
 
-	/* waiting for boot hart to be done (_boot_status == 2) */
+	/* waiting for boot hart to be done (_boot_status == BOOT_STATUS_BOOT_HART_DONE) */
 _wait_for_boot_hart:
 	li	t0, BOOT_STATUS_BOOT_HART_DONE
 	lla	t1, _boot_status
@@ -726,6 +736,27 @@ _reset_regs:
 	ret
 
+	.section .rodata
+.Lstack_corrupt_msg:
+	.string "stack smashing detected\n"
+
+	/* This will be called when the stack corruption is detected */
+	.section .text
+	.align 3
+	.globl __stack_chk_fail
+	.type __stack_chk_fail, %function
+__stack_chk_fail:
+	la	a0, .Lstack_corrupt_msg
+	call	sbi_panic
+
+	/* Initial value of the stack guard variable */
+	.section .data
+	.align 3
+	.globl __stack_chk_guard
+	.type __stack_chk_guard, %object
+__stack_chk_guard:
+	RISCV_PTR	0x95B5FF5A
+
 #ifdef FW_FDT_PATH
 	.section .rodata
 	.align 4

View File

@@ -47,9 +47,9 @@
 	. = ALIGN(0x1000); /* Ensure next section is page aligned */
 
 	.rela.dyn : {
-		PROVIDE(__rel_dyn_start = .);
+		PROVIDE(__rela_dyn_start = .);
 		*(.rela*)
-		PROVIDE(__rel_dyn_end = .);
+		PROVIDE(__rela_dyn_end = .);
 	}
 
 	PROVIDE(_rodata_end = .);

View File

@@ -66,3 +66,12 @@ endif
 ifdef FW_OPTIONS
 firmware-genflags-y += -DFW_OPTIONS=$(FW_OPTIONS)
 endif
+
+ifeq ($(CONFIG_STACK_PROTECTOR),y)
+stack-protector-cflags-$(CONFIG_STACK_PROTECTOR) := -fstack-protector
+stack-protector-cflags-$(CONFIG_STACK_PROTECTOR_STRONG) := -fstack-protector-strong
+stack-protector-cflags-$(CONFIG_STACK_PROTECTOR_ALL) := -fstack-protector-all
+else
+stack-protector-cflags-y := -fno-stack-protector
+endif
+firmware-cflags-y += $(stack-protector-cflags-y)

View File

@@ -30,7 +30,18 @@ _start:
 	/* Pick one hart to run the main boot sequence */
 	lla	a3, _hart_lottery
 	li	a2, 1
+#ifdef __riscv_atomic
 	amoadd.w a3, a2, (a3)
+#elif __riscv_zalrsc
+_sc_fail:
+	lr.w	t0, (a3)
+	addw	t1, t0, a2
+	sc.w	t1, t1, (a3)
+	bnez	t1, _sc_fail
+	move	a3, t0
+#else
+#error "need a or zalrsc"
+#endif
 	bnez	a3, _start_hang
 
 	/* Save a0 and a1 */
@@ -86,3 +97,18 @@ _boot_a0:
 	RISCV_PTR	0
 _boot_a1:
 	RISCV_PTR	0
+
+	/* This will be called when the stack corruption is detected */
+	.section .text
+	.align 3
+	.globl __stack_chk_fail
+	.type __stack_chk_fail, %function
+.equ __stack_chk_fail, _start_hang
+
+	/* Initial value of the stack guard variable */
+	.section .data
+	.align 3
+	.globl __stack_chk_guard
+	.type __stack_chk_guard, %object
+__stack_chk_guard:
+	RISCV_PTR	0x95B5FF5A

View File

@@ -46,6 +46,13 @@ static inline void sbi_ecall_console_puts(const char *str)
 		  sbi_strlen(str), (unsigned long)str, 0, 0, 0, 0);
 }
 
+static inline void sbi_ecall_shutdown(void)
+{
+	sbi_ecall(SBI_EXT_SRST, SBI_EXT_SRST_RESET,
+		  SBI_SRST_RESET_TYPE_SHUTDOWN, SBI_SRST_RESET_REASON_NONE,
+		  0, 0, 0, 0);
+}
+
 #define wfi() \
 	do { \
 		__asm__ __volatile__("wfi" ::: "memory"); \
@@ -54,7 +61,6 @@ static inline void sbi_ecall_console_puts(const char *str)
 void test_main(unsigned long a0, unsigned long a1)
 {
 	sbi_ecall_console_puts("\nTest payload running\n");
 
-	while (1)
-		wfi();
+	sbi_ecall_shutdown();
+	sbi_ecall_console_puts("sbi_ecall_shutdown failed to execute.\n");
 }

View File

@@ -79,36 +79,12 @@ struct fw_dynamic_info {
  * Prevent modification of struct fw_dynamic_info from affecting
  * FW_DYNAMIC_INFO_xxx_OFFSET
  */
-_Static_assert(
-	offsetof(struct fw_dynamic_info, magic)
-		== FW_DYNAMIC_INFO_MAGIC_OFFSET,
-	"struct fw_dynamic_info definition has changed, please redefine "
-	"FW_DYNAMIC_INFO_MAGIC_OFFSET");
-_Static_assert(
-	offsetof(struct fw_dynamic_info, version)
-		== FW_DYNAMIC_INFO_VERSION_OFFSET,
-	"struct fw_dynamic_info definition has changed, please redefine "
-	"FW_DYNAMIC_INFO_VERSION_OFFSET");
-_Static_assert(
-	offsetof(struct fw_dynamic_info, next_addr)
-		== FW_DYNAMIC_INFO_NEXT_ADDR_OFFSET,
-	"struct fw_dynamic_info definition has changed, please redefine "
-	"FW_DYNAMIC_INFO_NEXT_ADDR_OFFSET");
-_Static_assert(
-	offsetof(struct fw_dynamic_info, next_mode)
-		== FW_DYNAMIC_INFO_NEXT_MODE_OFFSET,
-	"struct fw_dynamic_info definition has changed, please redefine "
-	"FW_DYNAMIC_INFO_NEXT_MODE_OFFSET");
-_Static_assert(
-	offsetof(struct fw_dynamic_info, options)
-		== FW_DYNAMIC_INFO_OPTIONS_OFFSET,
-	"struct fw_dynamic_info definition has changed, please redefine "
-	"FW_DYNAMIC_INFO_OPTIONS_OFFSET");
-_Static_assert(
-	offsetof(struct fw_dynamic_info, boot_hart)
-		== FW_DYNAMIC_INFO_BOOT_HART_OFFSET,
-	"struct fw_dynamic_info definition has changed, please redefine "
-	"FW_DYNAMIC_INFO_BOOT_HART_OFFSET");
+assert_member_offset(struct fw_dynamic_info, magic, FW_DYNAMIC_INFO_MAGIC_OFFSET);
+assert_member_offset(struct fw_dynamic_info, version, FW_DYNAMIC_INFO_VERSION_OFFSET);
+assert_member_offset(struct fw_dynamic_info, next_addr, FW_DYNAMIC_INFO_NEXT_ADDR_OFFSET);
+assert_member_offset(struct fw_dynamic_info, next_mode, FW_DYNAMIC_INFO_NEXT_MODE_OFFSET);
+assert_member_offset(struct fw_dynamic_info, options, FW_DYNAMIC_INFO_OPTIONS_OFFSET);
+assert_member_offset(struct fw_dynamic_info, boot_hart, FW_DYNAMIC_INFO_BOOT_HART_OFFSET);
 
 #endif

View File

@@ -156,6 +156,26 @@
: "memory"); \ : "memory"); \
}) })
#if __riscv_xlen == 64
#define __csrrw64(op, csr, csrh, val) (true ? op(csr, val) : (uint64_t)csrh)
#define __csrr64( op, csr, csrh) (true ? op(csr) : (uint64_t)csrh)
#define __csrw64( op, csr, csrh, val) (true ? op(csr, val) : (uint64_t)csrh)
#elif __riscv_xlen == 32
#define __csrrw64(op, csr, csrh, val) ( op(csr, val) | (uint64_t)op(csrh, val >> 32) << 32)
#define __csrr64( op, csr, csrh) ( op(csr) | (uint64_t)op(csrh) << 32)
#define __csrw64( op, csr, csrh, val) ({ op(csr, val); op(csrh, val >> 32); })
#endif
#define csr_swap64( csr, val) __csrrw64(csr_swap, csr, csr ## H, val)
#define csr_read64( csr) __csrr64 (csr_read, csr, csr ## H)
#define csr_read_relaxed64(csr) __csrr64 (csr_read_relaxed, csr, csr ## H)
#define csr_write64( csr, val) __csrw64 (csr_write, csr, csr ## H, val)
#define csr_read_set64( csr, val) __csrrw64(csr_read_set, csr, csr ## H, val)
#define csr_set64( csr, val) __csrw64 (csr_set, csr, csr ## H, val)
#define csr_clear64( csr, val) __csrw64 (csr_clear, csr, csr ## H, val)
#define csr_read_clear64( csr, val) __csrrw64(csr_read_clear, csr, csr ## H, val)
#define csr_clear64( csr, val) __csrw64 (csr_clear, csr, csr ## H, val)
unsigned long csr_read_num(int csr_num); unsigned long csr_read_num(int csr_num);
void csr_write_num(int csr_num, unsigned long val); void csr_write_num(int csr_num, unsigned long val);

View File

@@ -122,6 +122,50 @@ enum {
 	RV_DBTR_DECLARE_BIT_MASK(MC, TYPE, 4),
 };
 
+/* ICOUNT - Match Control Type Register */
+enum {
+	RV_DBTR_DECLARE_BIT(ICOUNT, ACTION, 0),
+	RV_DBTR_DECLARE_BIT(ICOUNT, U, 6),
+	RV_DBTR_DECLARE_BIT(ICOUNT, S, 7),
+	RV_DBTR_DECLARE_BIT(ICOUNT, PENDING, 8),
+	RV_DBTR_DECLARE_BIT(ICOUNT, M, 9),
+	RV_DBTR_DECLARE_BIT(ICOUNT, COUNT, 10),
+	RV_DBTR_DECLARE_BIT(ICOUNT, HIT, 24),
+	RV_DBTR_DECLARE_BIT(ICOUNT, VU, 25),
+	RV_DBTR_DECLARE_BIT(ICOUNT, VS, 26),
+#if __riscv_xlen == 64
+	RV_DBTR_DECLARE_BIT(ICOUNT, DMODE, 59),
+	RV_DBTR_DECLARE_BIT(ICOUNT, TYPE, 60),
+#elif __riscv_xlen == 32
+	RV_DBTR_DECLARE_BIT(ICOUNT, DMODE, 27),
+	RV_DBTR_DECLARE_BIT(ICOUNT, TYPE, 28),
+#else
+#error "Unknown __riscv_xlen"
+#endif
+};
+
+enum {
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, ACTION, 6),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, U, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, S, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, PENDING, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, M, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, COUNT, 14),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, HIT, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, VU, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, VS, 1),
+#if __riscv_xlen == 64
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, DMODE, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, TYPE, 4),
+#elif __riscv_xlen == 32
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, DMODE, 1),
+	RV_DBTR_DECLARE_BIT_MASK(ICOUNT, TYPE, 4),
+#else
+#error "Unknown __riscv_xlen"
+#endif
+};
+
 /* MC6 - Match Control 6 Type Register */
 enum {
 	RV_DBTR_DECLARE_BIT(MC6, LOAD, 0),

View File

@@ -32,7 +32,8 @@
 #define MSTATUS_TVM		_UL(0x00100000)
 #define MSTATUS_TW		_UL(0x00200000)
 #define MSTATUS_TSR		_UL(0x00400000)
 #define MSTATUS_SPELP		_UL(0x00800000)
+#define MSTATUS_SDT		_UL(0x01000000)
 #define MSTATUS32_SD		_UL(0x80000000)
 #if __riscv_xlen == 64
 #define MSTATUS_UXL		_ULL(0x0000000300000000)
@@ -85,6 +86,8 @@
 #define HSTATUS_GVA		_UL(0x00000040)
 #define HSTATUS_VSBE		_UL(0x00000020)
 
+#define MTVEC_MODE		_UL(0x00000003)
+
 #define MCAUSE_IRQ_MASK		(_UL(1) << (__riscv_xlen - 1))
 
 #define IRQ_S_SOFT		1
@@ -186,7 +189,7 @@
 #define TOPI_IID_SHIFT		16
 #define TOPI_IID_MASK		0xfff
 #define TOPI_IPRIO_MASK		0xff
 
 #if __riscv_xlen == 64
 #define MHPMEVENT_OF		(_UL(1) << 63)
@@ -375,6 +378,20 @@
 #define CSR_SSTATEEN2		0x10E
 #define CSR_SSTATEEN3		0x10F
 
+/* Supervisor Resource Management Configuration CSRs */
+#define CSR_SRMCFG		0x181
+
+/* Machine-Level Control transfer records CSRs */
+#define CSR_MCTRCTL		0x34e
+
+/* Supervisor-Level Control transfer records CSRs */
+#define CSR_SCTRCTL		0x14e
+#define CSR_SCTRSTATUS		0x14f
+#define CSR_SCTRDEPTH		0x15f
+
+/* VS-Level Control transfer records CSRs */
+#define CSR_VSCTRCTL		0x24e
+
 /* ===== Hypervisor-level CSRs ===== */
 
 /* Hypervisor Trap Setup (H-extension) */
@@ -769,6 +786,40 @@
 #define CSR_VTYPE		0xc21
 #define CSR_VLENB		0xc22
 
+/* Custom CSR ranges */
+#define CSR_CUSTOM0_U_RW_BASE	0x800
+#define CSR_CUSTOM0_U_RW_COUNT	0x100
+#define CSR_CUSTOM1_U_RO_BASE	0xCC0
+#define CSR_CUSTOM1_U_RO_COUNT	0x040
+#define CSR_CUSTOM2_S_RW_BASE	0x5C0
+#define CSR_CUSTOM2_S_RW_COUNT	0x040
+#define CSR_CUSTOM3_S_RW_BASE	0x9C0
+#define CSR_CUSTOM3_S_RW_COUNT	0x040
+#define CSR_CUSTOM4_S_RO_BASE	0xDC0
+#define CSR_CUSTOM4_S_RO_COUNT	0x040
+#define CSR_CUSTOM5_HS_RW_BASE	0x6C0
+#define CSR_CUSTOM5_HS_RW_COUNT	0x040
+#define CSR_CUSTOM6_HS_RW_BASE	0xAC0
+#define CSR_CUSTOM6_HS_RW_COUNT	0x040
+#define CSR_CUSTOM7_HS_RO_BASE	0xEC0
+#define CSR_CUSTOM7_HS_RO_COUNT	0x040
+#define CSR_CUSTOM8_M_RW_BASE	0x7C0
+#define CSR_CUSTOM8_M_RW_COUNT	0x040
+#define CSR_CUSTOM9_M_RW_BASE	0xBC0
+#define CSR_CUSTOM9_M_RW_COUNT	0x040
+#define CSR_CUSTOM10_M_RO_BASE	0xFC0
+#define CSR_CUSTOM10_M_RO_COUNT	0x040
+
 /* ===== Trap/Exception Causes ===== */
 
 #define CAUSE_MISALIGNED_FETCH	0x0
@@ -799,6 +850,10 @@
 #define SMSTATEEN0_CS			(_ULL(1) << SMSTATEEN0_CS_SHIFT)
 #define SMSTATEEN0_FCSR_SHIFT		1
 #define SMSTATEEN0_FCSR			(_ULL(1) << SMSTATEEN0_FCSR_SHIFT)
+#define SMSTATEEN0_CTR_SHIFT		54
+#define SMSTATEEN0_CTR			(_ULL(1) << SMSTATEEN0_CTR_SHIFT)
+#define SMSTATEEN0_SRMCFG_SHIFT		55
+#define SMSTATEEN0_SRMCFG		(_ULL(1) << SMSTATEEN0_SRMCFG_SHIFT)
 #define SMSTATEEN0_CONTEXT_SHIFT	57
 #define SMSTATEEN0_CONTEXT		(_ULL(1) << SMSTATEEN0_CONTEXT_SHIFT)
 #define SMSTATEEN0_IMSIC_SHIFT		58
@@ -885,16 +940,16 @@
 #define INSN_MASK_C_FSWSP		0xe003
 
 #define INSN_MATCH_C_LHU		0x8400
 #define INSN_MASK_C_LHU			0xfc43
 #define INSN_MATCH_C_LH			0x8440
 #define INSN_MASK_C_LH			0xfc43
 #define INSN_MATCH_C_SH			0x8c00
 #define INSN_MASK_C_SH			0xfc43
 
 #define INSN_MASK_WFI			0xffffff00
 #define INSN_MATCH_WFI			0x10500000
 
-#define INSN_MASK_FENCE_TSO		0xffffffff
+#define INSN_MASK_FENCE_TSO		0xfff0707f
 #define INSN_MATCH_FENCE_TSO		0x8330000f
 
 #define INSN_MASK_VECTOR_UNIT_STRIDE	0xfdf0707f
@@ -970,13 +1025,14 @@
 #define INSN_MATCH_VS4RV		0x62800027
 #define INSN_MATCH_VS8RV		0xe2800027
 
-#define INSN_MASK_VECTOR_LOAD_STORE	0x7f
-#define INSN_MATCH_VECTOR_LOAD		0x07
-#define INSN_MATCH_VECTOR_STORE		0x27
+#define INSN_OPCODE_MASK		0x7f
+#define INSN_OPCODE_VECTOR_LOAD		0x07
+#define INSN_OPCODE_VECTOR_STORE	0x27
+#define INSN_OPCODE_AMO			0x2f
 
 #define IS_VECTOR_LOAD_STORE(insn) \
-	((((insn) & INSN_MASK_VECTOR_LOAD_STORE) == INSN_MATCH_VECTOR_LOAD) || \
-	 (((insn) & INSN_MASK_VECTOR_LOAD_STORE) == INSN_MATCH_VECTOR_STORE))
+	((((insn) & INSN_OPCODE_MASK) == INSN_OPCODE_VECTOR_LOAD) || \
+	 (((insn) & INSN_OPCODE_MASK) == INSN_OPCODE_VECTOR_STORE))
 
 #define IS_VECTOR_INSN_MATCH(insn, match, mask) \
 	(((insn) & (mask)) == ((match) & (mask)))
@@ -1256,7 +1312,7 @@
 /* 64-bit read for VS-stage address translation (RV64) */
 #define INSN_PSEUDO_VS_LOAD	0x00003000
 /* 64-bit write for VS-stage address translation (RV64) */
 #define INSN_PSEUDO_VS_STORE	0x00003020
 
 #elif __riscv_xlen == 32
@@ -1264,18 +1320,31 @@
 /* 32-bit read for VS-stage address translation (RV32) */
 #define INSN_PSEUDO_VS_LOAD	0x00002000
 /* 32-bit write for VS-stage address translation (RV32) */
 #define INSN_PSEUDO_VS_STORE	0x00002020
 
 #else
 #error "Unexpected __riscv_xlen"
 #endif
 
+#define MASK_FUNCT3		0x7000
+#define SHIFT_FUNCT3		12
+#define MASK_RS1		0xf8000
+#define MASK_RS2		0x1f00000
+#define MASK_RD			0xf80
+#define MASK_CSR		0xfff00000
+#define SHIFT_CSR		20
+#define MASK_AQRL		0x06000000
+#define SHIFT_AQRL		25
+
 #define VM_MASK			0x1
 #define VIEW_MASK		0x3
 #define VSEW_MASK		0x3
 #define VLMUL_MASK		0x7
 #define VD_MASK			0x1f
 #define VS2_MASK		0x1f
 
 #define INSN_16BIT_MASK		0x3
 #define INSN_32BIT_MASK		0x1c
@@ -1287,15 +1356,8 @@
 #define INSN_LEN(insn)		(INSN_IS_16BIT(insn) ? 2 : 4)
 
-#if __riscv_xlen == 64
-#define LOG_REGBYTES		3
-#else
-#define LOG_REGBYTES		2
-#endif
-#define REGBYTES		(1 << LOG_REGBYTES)
-
-#define SH_VSEW			3
-#define SH_VIEW			12
+#define SH_VSEW			3
+#define SH_VIEW			12
 #define SH_VD			7
 #define SH_VS2			20
 #define SH_VM			25
@@ -1328,49 +1390,32 @@
 #define SHIFT_RIGHT(x, y)	\
 	((y) < 0 ? ((x) << -(y)) : ((x) >> (y)))
 
-#define REG_MASK	\
-	((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES))
-
-#define REG_OFFSET(insn, pos)	\
-	(SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
-
-#define REG_PTR(insn, pos, regs)	\
-	(ulong *)((ulong)(regs) + REG_OFFSET(insn, pos))
-
-#define GET_RM(insn)		((insn & MASK_FUNCT3) >> SHIFT_FUNCT3)
-#define GET_RS1_NUM(insn)	((insn & MASK_RS1) >> 15)
+#define GET_FUNC3(insn)		((insn & MASK_FUNCT3) >> SHIFT_FUNCT3)
+#define GET_RM(insn)		GET_FUNC3(insn)
+#define GET_RS1_NUM(insn)	((insn & MASK_RS1) >> SH_RS1)
+#define GET_RS2_NUM(insn)	((insn & MASK_RS2) >> SH_RS2)
+#define GET_RS1S_NUM(insn)	RVC_RS1S(insn)
+#define GET_RS2S_NUM(insn)	RVC_RS2S(insn)
+#define GET_RS2C_NUM(insn)	RVC_RS2(insn)
+#define GET_RD_NUM(insn)	((insn & MASK_RD) >> SH_RD)
 #define GET_CSR_NUM(insn)	((insn & MASK_CSR) >> SHIFT_CSR)
-
-#define GET_RS1(insn, regs)	(*REG_PTR(insn, SH_RS1, regs))
-#define GET_RS2(insn, regs)	(*REG_PTR(insn, SH_RS2, regs))
-#define GET_RS1S(insn, regs)	(*REG_PTR(RVC_RS1S(insn), 0, regs))
-#define GET_RS2S(insn, regs)	(*REG_PTR(RVC_RS2S(insn), 0, regs))
-#define GET_RS2C(insn, regs)	(*REG_PTR(insn, SH_RS2C, regs))
-#define GET_SP(regs)		(*REG_PTR(2, 0, regs))
-#define SET_RD(insn, regs, val)	(*REG_PTR(insn, SH_RD, regs) = (val))
+#define GET_AQRL(insn)		((insn & MASK_AQRL) >> SHIFT_AQRL)
 #define IMM_I(insn)		((s32)(insn) >> 20)
 #define IMM_S(insn)		(((s32)(insn) >> 25 << 5) | \
 				 (s32)(((insn) >> 7) & 0x1f))
 
 #define IS_MASKED(insn)			(((insn >> SH_VM) & VM_MASK) == 0)
 #define GET_VD(insn)			((insn >> SH_VD) & VD_MASK)
 #define GET_VS2(insn)			((insn >> SH_VS2) & VS2_MASK)
 #define GET_VIEW(insn)			(((insn) >> SH_VIEW) & VIEW_MASK)
 #define GET_MEW(insn)			(((insn) >> SH_MEW) & 1)
 #define GET_VSEW(vtype)			(((vtype) >> SH_VSEW) & VSEW_MASK)
 #define GET_VLMUL(vtype)		((vtype) & VLMUL_MASK)
 #define GET_LEN(view)			(1UL << (view))
 #define GET_NF(insn)			(1 + ((insn >> 29) & 7))
 #define GET_VEMUL(vlmul, view, vsew)	((vlmul + view - vsew) & 7)
 #define GET_EMUL(vemul)			(1UL << ((vemul) >= 4 ? 0 : (vemul)))
 
-#define MASK_FUNCT3		0x7000
-#define MASK_RS1		0xf8000
-#define MASK_CSR		0xfff00000
-#define SHIFT_FUNCT3		12
-#define SHIFT_CSR		20
-
 #define CSRRW	1
 #define CSRRS	2


@@ -130,4 +130,17 @@ static inline void bitmap_xor(unsigned long *dst, const unsigned long *src1,
	__bitmap_xor(dst, src1, src2, nbits);
}
+static inline int bitmap_weight(const unsigned long *src, int nbits)
+{
+	int i, res = 0;
+
+	for (i = 0; i < nbits / BITS_PER_LONG; i++)
+		res += sbi_popcount(src[i]);
+	if (nbits % BITS_PER_LONG)
+		res += sbi_popcount(src[i] & BITMAP_LAST_WORD_MASK(nbits));
+	return res;
+}
#endif
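
bitmap_weight() sums sbi_popcount() over the whole words and then masks the trailing partial word. A minimal host-side rendition of the same loop, for illustration only (plain C; the macro and helper names here are invented, OpenSBI itself uses sbi_popcount() and BITMAP_LAST_WORD_MASK()):

#include <stdio.h>

#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))
/* Mask covering the valid bits of a trailing partial word */
#define LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))

static int weight(const unsigned long *src, int nbits)
{
	int i, res = 0;

	for (i = 0; i < nbits / BITS_PER_LONG; i++)
		res += __builtin_popcountl(src[i]);
	if (nbits % BITS_PER_LONG)
		res += __builtin_popcountl(src[i] & LAST_WORD_MASK(nbits));
	return res;
}

int main(void)
{
	/* On an LP64 host: bits 0-2 set in word 0, bit 8 set in word 1 */
	unsigned long map[2] = { 0x7UL, 0x100UL };

	printf("%d\n", weight(map, 128));	/* prints 4 */
	return 0;
}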


@@ -125,14 +125,22 @@ static inline unsigned long sbi_fls(unsigned long word)
 */
static inline unsigned long sbi_popcount(unsigned long word)
{
-	unsigned long count = 0;
-
-	while (word) {
-		word &= word - 1;
-		count++;
-	}
-
-	return count;
+	unsigned long count;
+
+#if BITS_PER_LONG == 64
+	count = word - ((word >> 1) & 0x5555555555555555ul);
+	count = (count & 0x3333333333333333ul) + ((count >> 2) & 0x3333333333333333ul);
+	count = (count + (count >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
+	count = count + (count >> 8);
+	count = count + (count >> 16);
+	return (count + (count >> 32)) & 0x00000000000000FFul;
+#else
+	count = word - ((word >> 1) & 0x55555555);
+	count = (count & 0x33333333) + ((count >> 2) & 0x33333333);
+	count = (count + (count >> 4)) & 0x0F0F0F0F;
+	count = count + (count >> 8);
+	return (count + (count >> 16)) & 0x000000FF;
+#endif
}

#define for_each_set_bit(bit, addr, size) \
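
The old loop cleared one set bit per iteration (Kernighan's method), so its cost grew with the population count; the replacement is the branch-free SWAR reduction, which sums adjacent 1-, 2- and 4-bit fields in parallel and then folds the byte sums together. A standalone check of the 64-bit branch against the compiler builtin:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static unsigned popcount64(uint64_t w)
{
	uint64_t c = w - ((w >> 1) & 0x5555555555555555ull);	/* 2-bit field sums */
	c = (c & 0x3333333333333333ull) + ((c >> 2) & 0x3333333333333333ull); /* 4-bit */
	c = (c + (c >> 4)) & 0x0F0F0F0F0F0F0F0Full;		/* per-byte sums */
	c += c >> 8;						/* fold bytes together */
	c += c >> 16;
	return (c + (c >> 32)) & 0xFF;
}

int main(void)
{
	assert(popcount64(0) == 0);
	assert(popcount64(0xF0) == 4);
	assert(popcount64(~0ull) == 64);
	assert(popcount64(0x123456789abcdef0ull) == __builtin_popcountll(0x123456789abcdef0ull));
	puts("ok");
	return 0;
}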


@@ -14,13 +14,13 @@
# define _conv_cast(type, val) ((type)(val))
#endif

-#define BSWAP16(x) ((((x) & 0x00ff) << 8) | \
+#define __BSWAP16(x) ((((x) & 0x00ff) << 8) | \
		    (((x) & 0xff00) >> 8))
-#define BSWAP32(x) ((((x) & 0x000000ff) << 24) | \
+#define __BSWAP32(x) ((((x) & 0x000000ff) << 24) | \
		    (((x) & 0x0000ff00) << 8) | \
		    (((x) & 0x00ff0000) >> 8) | \
		    (((x) & 0xff000000) >> 24))
-#define BSWAP64(x) ((((x) & 0x00000000000000ffULL) << 56) | \
+#define __BSWAP64(x) ((((x) & 0x00000000000000ffULL) << 56) | \
		    (((x) & 0x000000000000ff00ULL) << 40) | \
		    (((x) & 0x0000000000ff0000ULL) << 24) | \
		    (((x) & 0x00000000ff000000ULL) << 8) | \
@@ -29,6 +29,10 @@
		    (((x) & 0x00ff000000000000ULL) >> 40) | \
		    (((x) & 0xff00000000000000ULL) >> 56))

+#define BSWAP64(x) ({ uint64_t _sv = (x); __BSWAP64(_sv); })
+#define BSWAP32(x) ({ uint32_t _sv = (x); __BSWAP32(_sv); })
+#define BSWAP16(x) ({ uint16_t _sv = (x); __BSWAP16(_sv); })
+
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ /* CPU(little-endian) */
#define cpu_to_be16(x) _conv_cast(uint16_t, BSWAP16(x))
#define cpu_to_be32(x) _conv_cast(uint32_t, BSWAP32(x))
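
The renamed __BSWAP*() macros still expand their argument several times; the new BSWAP*() wrappers bind it once to a temporary inside a GCC statement expression, so arguments with side effects behave as expected. A contrived standalone illustration (GCC/Clang only, because of the statement expression):

#include <stdint.h>
#include <stdio.h>

#define __BSWAP16(x) ((((x) & 0x00ff) << 8) | (((x) & 0xff00) >> 8))
#define BSWAP16(x) ({ uint16_t _sv = (x); __BSWAP16(_sv); })

int main(void)
{
	const uint16_t buf[2] = { 0x1234, 0x5678 };
	const uint16_t *p = buf;

	/* _sv is assigned exactly once, so p advances by one element per call;
	 * with the bare macro, *p++ would have been evaluated twice. */
	printf("%04x\n", BSWAP16(*p++));	/* 3412 */
	printf("%04x\n", BSWAP16(*p++));	/* 7856 */
	return 0;
}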


@@ -18,7 +18,7 @@
({ \
	register ulong tinfo asm("a3") = (ulong)trap; \
	register ulong ttmp asm("a4"); \
-	register ulong mtvec = sbi_hart_expected_trap_addr(); \
+	register ulong mtvec = (ulong)sbi_hart_expected_trap; \
	register ulong ret = 0; \
	((struct sbi_trap_info *)(trap))->cause = 0; \
	asm volatile( \
@@ -37,7 +37,7 @@
({ \
	register ulong tinfo asm("a3") = (ulong)trap; \
	register ulong ttmp asm("a4"); \
-	register ulong mtvec = sbi_hart_expected_trap_addr(); \
+	register ulong mtvec = (ulong)sbi_hart_expected_trap; \
	((struct sbi_trap_info *)(trap))->cause = 0; \
	asm volatile( \
		"add %[ttmp], %[tinfo], zero\n" \


@@ -90,7 +90,7 @@ struct sbi_dbtr_hart_triggers_state {
}while (0);

/** SBI shared mem messages layout */
-struct sbi_dbtr_shmem_entry {
+union sbi_dbtr_shmem_entry {
	struct sbi_dbtr_data_msg data;
	struct sbi_dbtr_id_msg id;
};
@@ -115,8 +115,7 @@ int sbi_dbtr_uninstall_trig(unsigned long trig_idx_base,
int sbi_dbtr_enable_trig(unsigned long trig_idx_base,
			 unsigned long trig_idx_mask);
int sbi_dbtr_update_trig(unsigned long smode,
-			 unsigned long trig_idx_base,
-			 unsigned long trig_idx_mask);
+			 unsigned long trig_count);
int sbi_dbtr_disable_trig(unsigned long trig_idx_base,
			  unsigned long trig_idx_mask);


@@ -121,6 +121,9 @@ struct sbi_domain_memregion {
	((__flags & SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK) && \
	 !(__flags & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK))

+#define SBI_DOMAIN_MEMREGION_IS_FIRMWARE(__flags) \
+	((__flags & SBI_DOMAIN_MEMREGION_FW) ? true : false)
+
/** Bit to control if permissions are enforced on all modes */
#define SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS (1UL << 6)
@@ -157,6 +160,7 @@ struct sbi_domain_memregion {
				 SBI_DOMAIN_MEMREGION_M_EXECUTABLE)

#define SBI_DOMAIN_MEMREGION_MMIO (1UL << 31)
+#define SBI_DOMAIN_MEMREGION_FW (1UL << 30)

	unsigned long flags;
};
@@ -249,6 +253,13 @@ void sbi_domain_memregion_init(unsigned long addr,
			       unsigned long flags,
			       struct sbi_domain_memregion *reg);

+/**
+ * Return the Smepmp pmpcfg LRWX encoding for the flags in @reg.
+ *
+ * @param reg pointer to memory region; its flags field encodes permissions.
+ */
+unsigned int sbi_domain_get_smepmp_flags(struct sbi_domain_memregion *reg);
+
/**
 * Check whether we can access specified address for given mode and
 * memory region flags under a domain
@@ -307,8 +318,11 @@ int sbi_domain_register(struct sbi_domain *dom,
int sbi_domain_root_add_memrange(unsigned long addr, unsigned long size,
				 unsigned long align, unsigned long region_flags);

-/** Finalize domain tables and startup non-root domains */
-int sbi_domain_finalize(struct sbi_scratch *scratch, u32 cold_hartid);
+/** Startup non-root domains */
+int sbi_domain_startup(struct sbi_scratch *scratch, u32 cold_hartid);
+
+/** Finalize domain tables */
+int sbi_domain_finalize(struct sbi_scratch *scratch);

/** Initialize domains */
int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid);
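
With finalize and startup now split, a cold-boot path can freeze the domain tables first and bring up the non-root domains later. A sketch of the assumed calling sequence (not the actual sbi_init code; error handling trimmed, variable names invented):

/* Hypothetical cold-boot excerpt */
rc = sbi_domain_finalize(scratch);		/* freeze domain/memregion tables */
if (rc)
	return rc;

/* ... remaining cold-boot work on the boot HART ... */

rc = sbi_domain_startup(scratch, cold_hartid);	/* start non-root domains */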


@@ -0,0 +1,20 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 Rivos Inc.
*
* Authors:
* Clément Léger <cleger@rivosinc.com>
*/
#ifndef __SBI_DOUBLE_TRAP_H__
#define __SBI_DOUBLE_TRAP_H__
#include <sbi/sbi_types.h>
#include <sbi/sbi_trap.h>
int sbi_double_trap_handler(struct sbi_trap_context *tcntx);
void sbi_double_trap_init(struct sbi_scratch *scratch);
#endif


@@ -13,7 +13,7 @@
#include <sbi/sbi_types.h>
#include <sbi/sbi_list.h>

-#define SBI_ECALL_VERSION_MAJOR 2
+#define SBI_ECALL_VERSION_MAJOR 3
#define SBI_ECALL_VERSION_MINOR 0
#define SBI_OPENSBI_IMPID 1


@@ -291,6 +291,15 @@ struct sbi_pmu_event_info {
#define SBI_PMU_CFG_FLAG_SET_UINH (1 << 5)
#define SBI_PMU_CFG_FLAG_SET_SINH (1 << 6)
#define SBI_PMU_CFG_FLAG_SET_MINH (1 << 7)
+/* Event configuration mask */
+#define SBI_PMU_CFG_EVENT_MASK \
+	( \
+	 SBI_PMU_CFG_FLAG_SET_VUINH | \
+	 SBI_PMU_CFG_FLAG_SET_VSINH | \
+	 SBI_PMU_CFG_FLAG_SET_UINH | \
+	 SBI_PMU_CFG_FLAG_SET_SINH | \
+	 SBI_PMU_CFG_FLAG_SET_MINH \
+	)

/* Flags defined for counter start function */
#define SBI_PMU_START_FLAG_SET_INIT_VALUE (1 << 0)
@@ -380,10 +389,12 @@ enum sbi_sse_attr_id {
#define SBI_SSE_ATTR_CONFIG_ONESHOT (1 << 0)

-#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPP BIT(0)
-#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPIE BIT(1)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP BIT(0)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE BIT(1)
#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV BIT(2)
#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP BIT(3)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP BIT(4)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT BIT(5)

enum sbi_sse_state {
	SBI_SSE_STATE_UNUSED = 0,
@@ -393,48 +404,77 @@ enum sbi_sse_state {
};

/* SBI SSE Event IDs. */
-#define SBI_SSE_EVENT_LOCAL_RAS 0x00000000
+/* Range 0x00000000 - 0x0000ffff */
+#define SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS 0x00000000
#define SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP 0x00000001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_START 0x00000002
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_END 0x00003fff
#define SBI_SSE_EVENT_LOCAL_PLAT_0_START 0x00004000
#define SBI_SSE_EVENT_LOCAL_PLAT_0_END 0x00007fff
-#define SBI_SSE_EVENT_GLOBAL_RAS 0x00008000
+#define SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS 0x00008000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_START 0x00008001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_END 0x0000bfff
#define SBI_SSE_EVENT_GLOBAL_PLAT_0_START 0x0000c000
#define SBI_SSE_EVENT_GLOBAL_PLAT_0_END 0x0000ffff

-#define SBI_SSE_EVENT_LOCAL_PMU 0x00010000
+/* Range 0x00010000 - 0x0001ffff */
+#define SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW 0x00010000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_START 0x00010001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_END 0x00013fff
#define SBI_SSE_EVENT_LOCAL_PLAT_1_START 0x00014000
#define SBI_SSE_EVENT_LOCAL_PLAT_1_END 0x00017fff
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_START 0x00018000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_END 0x0001bfff
#define SBI_SSE_EVENT_GLOBAL_PLAT_1_START 0x0001c000
#define SBI_SSE_EVENT_GLOBAL_PLAT_1_END 0x0001ffff

-#define SBI_SSE_EVENT_LOCAL_PLAT_2_START 0x00024000
-#define SBI_SSE_EVENT_LOCAL_PLAT_2_END 0x00027fff
-#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START 0x0002c000
-#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END 0x0002ffff
+/* Range 0x00100000 - 0x0010ffff */
+#define SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS 0x00100000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_START 0x00100001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_END 0x00103fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_START 0x00104000
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_END 0x00107fff
+#define SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS 0x00108000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_START 0x00108001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_END 0x0010bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START 0x0010c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END 0x0010ffff

+/* Range 0xffff0000 - 0xffffffff */
#define SBI_SSE_EVENT_LOCAL_SOFTWARE 0xffff0000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_START 0xffff0001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_END 0xffff3fff
#define SBI_SSE_EVENT_LOCAL_PLAT_3_START 0xffff4000
#define SBI_SSE_EVENT_LOCAL_PLAT_3_END 0xffff7fff
#define SBI_SSE_EVENT_GLOBAL_SOFTWARE 0xffff8000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_START 0xffff8001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_END 0xffffbfff
#define SBI_SSE_EVENT_GLOBAL_PLAT_3_START 0xffffc000
#define SBI_SSE_EVENT_GLOBAL_PLAT_3_END 0xffffffff

-#define SBI_SSE_EVENT_GLOBAL_BIT (1 << 15)
-#define SBI_SSE_EVENT_PLATFORM_BIT (1 << 14)
+#define SBI_SSE_EVENT_GLOBAL_BIT BIT(15)
+#define SBI_SSE_EVENT_PLATFORM_BIT BIT(14)
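
In every 64 KiB block of the event space, bit 15 of the ID selects global scope and bit 14 marks platform-defined events, which is why the new 0x0010xxxx ranges still work with the same two bits (0x00108000 has bit 15 set, so it is global). A sketch of the obvious classification helpers, not part of the header itself:

static inline bool sse_event_is_global(uint32_t event_id)
{
	return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
}

static inline bool sse_event_is_platform(uint32_t event_id)
{
	return !!(event_id & SBI_SSE_EVENT_PLATFORM_BIT);
}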
/* SBI function IDs for MPXY extension */
-#define SBI_EXT_MPXY_SET_SHMEM 0x0
-#define SBI_EXT_MPXY_GET_CHANNEL_IDS 0x1
-#define SBI_EXT_MPXY_READ_ATTRS 0x2
-#define SBI_EXT_MPXY_WRITE_ATTRS 0x3
-#define SBI_EXT_MPXY_SEND_MSG_WITH_RESP 0x4
-#define SBI_EXT_MPXY_SEND_MSG_NO_RESP 0x5
-#define SBI_EXT_MPXY_GET_NOTIFICATION_EVENTS 0x6
+#define SBI_EXT_MPXY_GET_SHMEM_SIZE 0x0
+#define SBI_EXT_MPXY_SET_SHMEM 0x1
+#define SBI_EXT_MPXY_GET_CHANNEL_IDS 0x2
+#define SBI_EXT_MPXY_READ_ATTRS 0x3
+#define SBI_EXT_MPXY_WRITE_ATTRS 0x4
+#define SBI_EXT_MPXY_SEND_MSG_WITH_RESP 0x5
+#define SBI_EXT_MPXY_SEND_MSG_WITHOUT_RESP 0x6
+#define SBI_EXT_MPXY_GET_NOTIFICATION_EVENTS 0x7

/* SBI base specification related macros */
#define SBI_SPEC_VERSION_MAJOR_OFFSET 24
#define SBI_SPEC_VERSION_MAJOR_MASK 0x7f
#define SBI_SPEC_VERSION_MINOR_MASK 0xffffff
+#define SBI_EXT_EXPERIMENTAL_START 0x08000000
+#define SBI_EXT_EXPERIMENTAL_END 0x08FFFFFF
#define SBI_EXT_VENDOR_START 0x09000000
#define SBI_EXT_VENDOR_END 0x09FFFFFF
#define SBI_EXT_FIRMWARE_START 0x0A000000
@@ -455,8 +495,9 @@ enum sbi_sse_state {
#define SBI_ERR_BAD_RANGE -11
#define SBI_ERR_TIMEOUT -12
#define SBI_ERR_IO -13
+#define SBI_ERR_DENIED_LOCKED -14

-#define SBI_LAST_ERR SBI_ERR_BAD_RANGE
+#define SBI_LAST_ERR SBI_ERR_DENIED_LOCKED

/* clang-format on */


@@ -29,6 +29,7 @@
#define SBI_ETIMEOUT SBI_ERR_TIMEOUT
#define SBI_ETIMEDOUT SBI_ERR_TIMEOUT
#define SBI_EIO SBI_ERR_IO
+#define SBI_EDENIED_LOCKED SBI_ERR_DENIED_LOCKED

#define SBI_ENODEV -1000
#define SBI_ENOSYS -1001


@@ -31,7 +31,7 @@ enum sbi_hart_extensions {
	SBI_HART_EXT_SMAIA = 0,
	/** HART has Smepmp */
	SBI_HART_EXT_SMEPMP,
-	/** HART has Smstateen CSR **/
+	/** HART has Smstateen extension **/
	SBI_HART_EXT_SMSTATEEN,
	/** Hart has Sscofpmf extension */
	SBI_HART_EXT_SSCOFPMF,
@@ -75,6 +75,18 @@ enum sbi_hart_extensions {
	SBI_HART_EXT_ZICFISS,
	/** Hart has Ssdbltrp extension */
	SBI_HART_EXT_SSDBLTRP,
+	/** HART has CTR M-mode CSRs */
+	SBI_HART_EXT_SMCTR,
+	/** HART has CTR S-mode CSRs */
+	SBI_HART_EXT_SSCTR,
+	/** Hart has Ssqosid extension */
+	SBI_HART_EXT_SSQOSID,
+	/** HART has Ssstateen extension **/
+	SBI_HART_EXT_SSSTATEEN,
+	/** Hart has Xsfcflushdlone extension */
+	SBI_HART_EXT_XSIFIVE_CFLUSH_D_L1,
+	/** Hart has Xsfcease extension */
+	SBI_HART_EXT_XSIFIVE_CEASE,

	/** Maximum index of Hart extension */
	SBI_HART_EXT_MAX,
@@ -87,25 +99,19 @@ struct sbi_hart_ext_data {
extern const struct sbi_hart_ext_data sbi_hart_ext[];

-/*
- * Smepmp enforces access boundaries between M-mode and
- * S/U-mode. When it is enabled, the PMPs are programmed
- * such that M-mode doesn't have access to S/U-mode memory.
- *
- * To give M-mode R/W access to the shared memory between M and
- * S/U-mode, first entry is reserved. It is disabled at boot.
- * When shared memory access is required, the physical address
- * should be programmed into the first PMP entry with R/W
- * permissions to the M-mode. Once the work is done, it should be
- * unmapped. sbi_hart_map_saddr/sbi_hart_unmap_saddr function
- * pair should be used to map/unmap the shared memory.
- */
-#define SBI_SMEPMP_RESV_ENTRY 0
+/** CSRs should be detected by access and trapping */
+enum sbi_hart_csrs {
+	SBI_HART_CSR_CYCLE = 0,
+	SBI_HART_CSR_TIME,
+	SBI_HART_CSR_INSTRET,
+	SBI_HART_CSR_MAX,
+};

struct sbi_hart_features {
	bool detected;
	int priv_version;
	unsigned long extensions[BITS_TO_LONGS(SBI_HART_EXT_MAX)];
+	unsigned long csrs[BITS_TO_LONGS(SBI_HART_CSR_MAX)];
	unsigned int pmp_count;
	unsigned int pmp_addr_bits;
	unsigned int pmp_log2gran;
@@ -113,27 +119,20 @@ struct sbi_hart_features {
	unsigned int mhpm_bits;
};

+extern unsigned long hart_features_offset;
+#define sbi_hart_features_ptr(__s) sbi_scratch_offset_ptr(__s, hart_features_offset)
+
struct sbi_scratch;

int sbi_hart_reinit(struct sbi_scratch *scratch);
int sbi_hart_init(struct sbi_scratch *scratch, bool cold_boot);

extern void (*sbi_hart_expected_trap)(void);
-static inline ulong sbi_hart_expected_trap_addr(void)
-{
-	return (ulong)sbi_hart_expected_trap;
-}

unsigned int sbi_hart_mhpm_mask(struct sbi_scratch *scratch);
void sbi_hart_delegation_dump(struct sbi_scratch *scratch,
			      const char *prefix, const char *suffix);
-unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch);
-unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch);
-unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch);
-int sbi_hart_pmp_configure(struct sbi_scratch *scratch);
-int sbi_hart_map_saddr(unsigned long base, unsigned long size);
-int sbi_hart_unmap_saddr(void);
int sbi_hart_priv_version(struct sbi_scratch *scratch);
void sbi_hart_get_priv_version_str(struct sbi_scratch *scratch,
				   char *version_str, int nvstr);
@@ -144,6 +143,7 @@ bool sbi_hart_has_extension(struct sbi_scratch *scratch,
			    enum sbi_hart_extensions ext);
void sbi_hart_get_extensions_str(struct sbi_scratch *scratch,
				 char *extension_str, int nestr);
+bool sbi_hart_has_csr(struct sbi_scratch *scratch, enum sbi_hart_csrs csr);
void __attribute__((noreturn)) sbi_hart_hang(void);
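
The new enum plus sbi_hart_has_csr() replaces ad-hoc probing of the counter CSRs: detection is done once by accessing the CSR and catching the trap, after which callers just test the cached bitmap. A hypothetical caller, not from the tree (the CSR name and bit are illustrative):

/* Only expose the TIME counter to lower privilege levels when the
 * CSR actually exists on this hart; otherwise rdtime must keep
 * trapping into M-mode for emulation. */
if (sbi_hart_has_csr(scratch, SBI_HART_CSR_TIME))
	csr_set(CSR_MCOUNTEREN, 1UL << 1);	/* TM bit, illustrative */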


@@ -0,0 +1,20 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 Ventana Micro Systems Inc.
*/
#ifndef __SBI_HART_PMP_H__
#define __SBI_HART_PMP_H__
#include <sbi/sbi_types.h>
struct sbi_scratch;
unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch);
unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch);
unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx);
int sbi_hart_pmp_init(struct sbi_scratch *scratch);
#endif


@@ -0,0 +1,100 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 Ventana Micro Systems Inc.
*/
#ifndef __SBI_HART_PROTECTION_H__
#define __SBI_HART_PROTECTION_H__
#include <sbi/sbi_types.h>
#include <sbi/sbi_list.h>
struct sbi_scratch;
/** Representation of hart protection mechanism */
struct sbi_hart_protection {
/** List head */
struct sbi_dlist head;
/** Name of the hart protection mechanism */
char name[32];
/** Ratings of the hart protection mechanism (higher is better) */
unsigned long rating;
/** Configure protection for current HART (Mandatory) */
int (*configure)(struct sbi_scratch *scratch);
/** Unconfigure protection for current HART (Mandatory) */
void (*unconfigure)(struct sbi_scratch *scratch);
/** Create temporary mapping to access address range on current HART (Optional) */
int (*map_range)(struct sbi_scratch *scratch,
unsigned long base, unsigned long size);
/** Destroy temporary mapping on current HART (Optional) */
int (*unmap_range)(struct sbi_scratch *scratch,
unsigned long base, unsigned long size);
};
/**
* Get the best hart protection mechanism
*
* @return pointer to best hart protection mechanism
*/
struct sbi_hart_protection *sbi_hart_protection_best(void);
/**
* Register a hart protection mechanism
*
* @param hprot pointer to hart protection mechanism
*
* @return 0 on success and negative error code on failure
*/
int sbi_hart_protection_register(struct sbi_hart_protection *hprot);
/**
* Unregister a hart protection mechanism
*
* @param hprot pointer to hart protection mechanism
*/
void sbi_hart_protection_unregister(struct sbi_hart_protection *hprot);
/**
* Configure protection for current HART
*
* @param scratch pointer to scratch space of current HART
*
* @return 0 on success and negative error code on failure
*/
int sbi_hart_protection_configure(struct sbi_scratch *scratch);
/**
* Unconfigure protection for current HART
*
* @param scratch pointer to scratch space of current HART
*/
void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch);
/**
* Create temporary mapping to access address range on current HART
*
* @param base base address of the temporary mapping
* @param size size of the temporary mapping
*
* @return 0 on success and negative error code on failure
*/
int sbi_hart_protection_map_range(unsigned long base, unsigned long size);
/**
* Destroy temporary mapping to access address range on current HART
*
* @param base base address of the temporary mapping
* @param size size of the temporary mapping
*
* @return 0 on success and negative error code on failure
*/
int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size);
#endif /* __SBI_HART_PROTECTION_H__ */
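
A minimal sketch of how a new mechanism (say, a future Smmpt driver) would plug into this abstraction; the rating value and callback bodies are invented for illustration:

static int smmpt_configure(struct sbi_scratch *scratch)
{
	/* program the MPT tables for this hart ... */
	return 0;
}

static void smmpt_unconfigure(struct sbi_scratch *scratch)
{
	/* disable MPT protection for this hart ... */
}

static struct sbi_hart_protection smmpt_protection = {
	.name = "smmpt",
	.rating = 200,	/* invented: outranks PMP/ePMP so _best() picks it */
	.configure = smmpt_configure,
	.unconfigure = smmpt_unconfigure,
	/* map_range/unmap_range are optional and omitted here */
};

static int smmpt_init(void)
{
	return sbi_hart_protection_register(&smmpt_protection);
}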


@@ -181,6 +181,17 @@ static inline void sbi_hartmask_xor(struct sbi_hartmask *dstp,
		     sbi_hartmask_bits(src2p), SBI_HARTMASK_MAX_BITS);
}

+/**
+ * Count of bits in *srcp
+ * @param srcp the hartmask to count bits in
+ *
+ * Return: count of bits set in *srcp
+ */
+static inline int sbi_hartmask_weight(const struct sbi_hartmask *srcp)
+{
+	return bitmap_weight(sbi_hartmask_bits(srcp), SBI_HARTMASK_MAX_BITS);
+}
+
/**
 * Iterate over each HART index in hartmask
 * __i hart index


@@ -0,0 +1,17 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 MIPS
*
*/
#ifndef __SBI_ILLEGAL_ATOMIC_H__
#define __SBI_ILLEGAL_ATOMIC_H__
#include <sbi/sbi_types.h>
struct sbi_trap_regs;
int sbi_illegal_atomic(ulong insn, struct sbi_trap_regs *regs);
#endif


@@ -14,6 +14,10 @@
struct sbi_trap_context;

+typedef int (*illegal_insn_func)(ulong insn, struct sbi_trap_regs *regs);
+
+int truly_illegal_insn(ulong insn, struct sbi_trap_regs *regs);
+
int sbi_illegal_insn_handler(struct sbi_trap_context *tcntx);

#endif


@@ -16,6 +16,8 @@ struct sbi_scratch;
void __noreturn sbi_init(struct sbi_scratch *scratch);

+void sbi_revert_entry_count(struct sbi_scratch *scratch);
+
unsigned long sbi_entry_count(u32 hartindex);

unsigned long sbi_init_count(u32 hartindex);


@@ -23,6 +23,9 @@ struct sbi_ipi_device {
	/** Name of the IPI device */
	char name[32];

+	/** Ratings of the IPI device (higher is better) */
+	unsigned long rating;
+
	/** Send IPI to a target HART index */
	void (*ipi_send)(u32 hart_index);
@@ -85,13 +88,13 @@ int sbi_ipi_send_halt(ulong hmask, ulong hbase);
void sbi_ipi_process(void);

-int sbi_ipi_raw_send(u32 hartindex);
+int sbi_ipi_raw_send(u32 hartindex, bool all_devices);

-void sbi_ipi_raw_clear(void);
+void sbi_ipi_raw_clear(bool all_devices);

const struct sbi_ipi_device *sbi_ipi_get_device(void);

-void sbi_ipi_set_device(const struct sbi_ipi_device *dev);
+void sbi_ipi_add_device(const struct sbi_ipi_device *dev);

int sbi_ipi_init(struct sbi_scratch *scratch, bool cold_boot);


@@ -160,4 +160,28 @@ static inline void sbi_list_del_init(struct sbi_dlist *entry)
	     &pos->member != (head); \
	     pos = sbi_list_entry(pos->member.next, typeof(*pos), member))

+/**
+ * Iterate over list of given type safe against removal of list entry
+ * @param pos the type * to use as a loop cursor.
+ * @param n another type * to use as temporary storage.
+ * @param head the head for your list.
+ * @param member the name of the list_struct within the struct.
+ */
+#define sbi_list_for_each_entry_safe(pos, n, head, member) \
+	for (pos = sbi_list_entry((head)->next, typeof(*pos), member), \
+	     n = sbi_list_entry(pos->member.next, typeof(*pos), member); \
+	     &pos->member != (head); \
+	     pos = n, n = sbi_list_entry(pos->member.next, typeof(*pos), member))
+
+/**
+ * Iterate over list of given type in reverse order
+ * @param pos the type * to use as a loop cursor.
+ * @param head the head for your list.
+ * @param member the name of the list_struct within the struct.
+ */
+#define sbi_list_for_each_entry_reverse(pos, head, member) \
+	for (pos = sbi_list_entry((head)->prev, typeof(*pos), member); \
+	     &pos->member != (head); \
+	     pos = sbi_list_entry(pos->member.prev, typeof(*pos), member))
+
#endif
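
The _safe variant caches the next pointer before the body runs, so the current entry may be unlinked inside the loop. A usage sketch (the struct, predicate and free helper are invented):

struct job {
	int id;
	struct sbi_dlist node;
};

static void drop_finished_jobs(struct sbi_dlist *jobs)
{
	struct job *j, *tmp;

	sbi_list_for_each_entry_safe(j, tmp, jobs, node) {
		if (job_is_done(j)) {		/* hypothetical predicate */
			sbi_list_del(&j->node);	/* safe: tmp already points past j */
			job_free(j);		/* hypothetical */
		}
	}
}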


@@ -153,11 +153,13 @@ int sbi_mpxy_init(struct sbi_scratch *scratch);
/** Check if some Message proxy channel is available */
bool sbi_mpxy_channel_available(void);

-/** Set Message proxy shared memory on the calling HART */
-int sbi_mpxy_set_shmem(unsigned long shmem_size,
-		       unsigned long shmem_phys_lo,
-		       unsigned long shmem_phys_hi,
-		       unsigned long flags);
+/** Get message proxy shared memory size */
+unsigned long sbi_mpxy_get_shmem_size(void);
+
+/** Set message proxy shared memory on the calling HART */
+int sbi_mpxy_set_shmem(unsigned long shmem_phys_lo,
+		       unsigned long shmem_phys_hi,
+		       unsigned long flags);

/** Get channel IDs list */
int sbi_mpxy_get_channel_ids(u32 start_index);


@@ -39,6 +39,8 @@
#define SBI_PLATFORM_FIRMWARE_CONTEXT_OFFSET (0x60 + __SIZEOF_POINTER__)
/** Offset of hart_index2id in struct sbi_platform */
#define SBI_PLATFORM_HART_INDEX2ID_OFFSET (0x60 + (__SIZEOF_POINTER__ * 2))
+/** Offset of cbom_block_size in struct sbi_platform */
+#define SBI_PLATFORM_CBOM_BLOCK_SIZE_OFFSET (0x60 + (__SIZEOF_POINTER__ * 3))

#define SBI_PLATFORM_TLB_RANGE_FLUSH_LIMIT_DEFAULT (1UL << 12)
@@ -114,9 +116,6 @@ struct sbi_platform_operations {
	/** Initialize the platform interrupt controller during cold boot */
	int (*irqchip_init)(void);

-	/** Initialize IPI during cold boot */
-	int (*ipi_init)(void);
-
	/** Get tlb flush limit value **/
	u64 (*get_tlbr_flush_limit)(void);
@@ -129,8 +128,6 @@ struct sbi_platform_operations {
	/** Initialize the platform Message Proxy(MPXY) driver */
	int (*mpxy_init)(void);

-	/** Check if SBI vendor extension is implemented or not */
-	bool (*vendor_ext_check)(void);
-
	/** platform specific SBI extension implementation provider */
	int (*vendor_ext_provider)(long funcid,
				   struct sbi_trap_regs *regs,
@@ -142,6 +139,13 @@ struct sbi_platform_operations {
	/** platform specific handler to fixup store fault */
	int (*emulate_store)(int wlen, unsigned long addr,
			     union sbi_ldst_data in_val);
+
+	/** platform specific pmp setup on current HART */
+	void (*pmp_set)(unsigned int n, unsigned long flags,
+			unsigned long prot, unsigned long addr,
+			unsigned long log2len);
+
+	/** platform specific pmp disable on current HART */
+	void (*pmp_disable)(unsigned int n);
};

/** Platform default per-HART stack size for exception/interrupt handling */
@@ -169,7 +173,7 @@ struct sbi_platform {
	char name[64];
	/** Supported features */
	u64 features;
-	/** Total number of HARTs */
+	/** Total number of HARTs (at most SBI_HARTMASK_MAX_BITS) */
	u32 hart_count;
	/** Per-HART stack size for exception/interrupt handling */
	u32 hart_stack_size;
@@ -184,70 +188,34 @@ struct sbi_platform {
	/**
	 * HART index to HART id table
	 *
-	 * For used HART index <abc>:
+	 * If hart_index2id != NULL then the table must contain a mapping
+	 * for each HART index 0 <= <abc> < hart_count:
	 *     hart_index2id[<abc>] = some HART id
-	 * For unused HART index <abc>:
-	 *     hart_index2id[<abc>] = -1U
	 *
	 * If hart_index2id == NULL then we assume identity mapping
	 *     hart_index2id[<abc>] = <abc>
-	 *
-	 * We have only two restrictions:
-	 * 1. HART index < sbi_platform hart_count
-	 * 2. HART id < SBI_HARTMASK_MAX_BITS
	 */
	const u32 *hart_index2id;
+	/** Allocation alignment for Scratch */
+	unsigned long cbom_block_size;
};

/**
 * Prevent modification of struct sbi_platform from affecting
 * SBI_PLATFORM_xxx_OFFSET
 */
-_Static_assert(
-	offsetof(struct sbi_platform, opensbi_version)
-	== SBI_PLATFORM_OPENSBI_VERSION_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_OPENSBI_VERSION_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, platform_version)
-	== SBI_PLATFORM_VERSION_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_VERSION_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, name)
-	== SBI_PLATFORM_NAME_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_NAME_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, features)
-	== SBI_PLATFORM_FEATURES_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_FEATURES_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, hart_count)
-	== SBI_PLATFORM_HART_COUNT_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_HART_COUNT_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, hart_stack_size)
-	== SBI_PLATFORM_HART_STACK_SIZE_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_HART_STACK_SIZE_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, platform_ops_addr)
-	== SBI_PLATFORM_OPS_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_OPS_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, firmware_context)
-	== SBI_PLATFORM_FIRMWARE_CONTEXT_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_FIRMWARE_CONTEXT_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_platform, hart_index2id)
-	== SBI_PLATFORM_HART_INDEX2ID_OFFSET,
-	"struct sbi_platform definition has changed, please redefine "
-	"SBI_PLATFORM_HART_INDEX2ID_OFFSET");
+assert_member_offset(struct sbi_platform, opensbi_version, SBI_PLATFORM_OPENSBI_VERSION_OFFSET);
+assert_member_offset(struct sbi_platform, platform_version, SBI_PLATFORM_VERSION_OFFSET);
+assert_member_offset(struct sbi_platform, name, SBI_PLATFORM_NAME_OFFSET);
+assert_member_offset(struct sbi_platform, features, SBI_PLATFORM_FEATURES_OFFSET);
+assert_member_offset(struct sbi_platform, hart_count, SBI_PLATFORM_HART_COUNT_OFFSET);
+assert_member_offset(struct sbi_platform, hart_stack_size, SBI_PLATFORM_HART_STACK_SIZE_OFFSET);
+assert_member_offset(struct sbi_platform, heap_size, SBI_PLATFORM_HEAP_SIZE_OFFSET);
+assert_member_offset(struct sbi_platform, reserved, SBI_PLATFORM_RESERVED_OFFSET);
+assert_member_offset(struct sbi_platform, platform_ops_addr, SBI_PLATFORM_OPS_OFFSET);
+assert_member_offset(struct sbi_platform, firmware_context, SBI_PLATFORM_FIRMWARE_CONTEXT_OFFSET);
+assert_member_offset(struct sbi_platform, hart_index2id, SBI_PLATFORM_HART_INDEX2ID_OFFSET);
+assert_member_offset(struct sbi_platform, cbom_block_size, SBI_PLATFORM_CBOM_BLOCK_SIZE_OFFSET);

/** Get pointer to sbi_platform for sbi_scratch pointer */
#define sbi_platform_ptr(__s) \
@@ -331,7 +299,7 @@ static inline u32 sbi_platform_tlb_fifo_num_entries(const struct sbi_platform *plat)
{
	if (plat && sbi_platform_ops(plat)->get_tlb_num_entries)
		return sbi_platform_ops(plat)->get_tlb_num_entries();
-	return sbi_scratch_last_hartindex() + 1;
+	return sbi_hart_count();
}

/**
@@ -557,20 +525,6 @@ static inline int sbi_platform_irqchip_init(const struct sbi_platform *plat)
	return 0;
}

-/**
- * Initialize the platform IPI support during cold boot
- *
- * @param plat pointer to struct sbi_platform
- *
- * @return 0 on success and negative error code on failure
- */
-static inline int sbi_platform_ipi_init(const struct sbi_platform *plat)
-{
-	if (plat && sbi_platform_ops(plat)->ipi_init)
-		return sbi_platform_ops(plat)->ipi_init();
-	return 0;
-}
-
/**
 * Initialize the platform timer during cold boot
 *
@@ -609,10 +563,7 @@ static inline int sbi_platform_mpxy_init(const struct sbi_platform *plat)
static inline bool sbi_platform_vendor_ext_check(
					const struct sbi_platform *plat)
{
-	if (plat && sbi_platform_ops(plat)->vendor_ext_check)
-		return sbi_platform_ops(plat)->vendor_ext_check();
-	return false;
+	return plat && sbi_platform_ops(plat)->vendor_ext_provider;
}

/**
@@ -683,6 +634,38 @@ static inline int sbi_platform_emulate_store(const struct sbi_platform *plat,
	return SBI_ENOTSUPP;
}

+/**
+ * Platform specific PMP setup on current HART
+ *
+ * @param plat pointer to struct sbi_platform
+ * @param n index of the pmp entry
+ * @param flags domain memregion flags
+ * @param prot attribute of the pmp entry
+ * @param addr address of the pmp entry
+ * @param log2len size of the pmp entry as power-of-2
+ */
+static inline void sbi_platform_pmp_set(const struct sbi_platform *plat,
+					unsigned int n, unsigned long flags,
+					unsigned long prot, unsigned long addr,
+					unsigned long log2len)
+{
+	if (plat && sbi_platform_ops(plat)->pmp_set)
+		sbi_platform_ops(plat)->pmp_set(n, flags, prot, addr, log2len);
+}
+
+/**
+ * Platform specific PMP disable on current HART
+ *
+ * @param plat pointer to struct sbi_platform
+ * @param n index of the pmp entry
+ */
+static inline void sbi_platform_pmp_disable(const struct sbi_platform *plat,
+					    unsigned int n)
+{
+	if (plat && sbi_platform_ops(plat)->pmp_disable)
+		sbi_platform_ops(plat)->pmp_disable(n);
+}
+
#endif

#endif
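
A platform that needs special PMP sequencing (for example extra fencing around pmpcfg writes) can now override the default programming through the two new callbacks; an invented example:

static void myplat_pmp_set(unsigned int n, unsigned long flags,
			   unsigned long prot, unsigned long addr,
			   unsigned long log2len)
{
	/* SoC-specific pre-work, then the usual pmpcfg/pmpaddr writes */
}

static void myplat_pmp_disable(unsigned int n)
{
	/* clear pmpcfg entry n with whatever fencing the SoC requires */
}

static const struct sbi_platform_operations myplat_ops = {
	/* ... other callbacks ... */
	.pmp_set = myplat_pmp_set,
	.pmp_disable = myplat_pmp_disable,
};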


@@ -114,6 +114,9 @@ void sbi_pmu_exit(struct sbi_scratch *scratch);
/** Return the pmu irq bit depending on extension existence */
int sbi_pmu_irq_bit(void);

+/** Return the pmu irq mask or 0 if the pmu overflow irq is not supported */
+unsigned long sbi_pmu_irq_mask(void);
+
/**
 * Add the hardware event to counter mapping information. This should be called
 * from the platform code to update the mapping table.


@@ -93,61 +93,21 @@ struct sbi_scratch {
 * Prevent modification of struct sbi_scratch from affecting
 * SBI_SCRATCH_xxx_OFFSET
 */
-_Static_assert(
-	offsetof(struct sbi_scratch, fw_start)
-	== SBI_SCRATCH_FW_START_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_FW_START_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, fw_size)
-	== SBI_SCRATCH_FW_SIZE_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_FW_SIZE_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, next_arg1)
-	== SBI_SCRATCH_NEXT_ARG1_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_NEXT_ARG1_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, next_addr)
-	== SBI_SCRATCH_NEXT_ADDR_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_NEXT_ADDR_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, next_mode)
-	== SBI_SCRATCH_NEXT_MODE_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_NEXT_MODE_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, warmboot_addr)
-	== SBI_SCRATCH_WARMBOOT_ADDR_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_WARMBOOT_ADDR_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, platform_addr)
-	== SBI_SCRATCH_PLATFORM_ADDR_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_PLATFORM_ADDR_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, hartid_to_scratch)
-	== SBI_SCRATCH_HARTID_TO_SCRATCH_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_HARTID_TO_SCRATCH_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, trap_context)
-	== SBI_SCRATCH_TRAP_CONTEXT_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_TRAP_CONTEXT_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, tmp0)
-	== SBI_SCRATCH_TMP0_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_TMP0_OFFSET");
-_Static_assert(
-	offsetof(struct sbi_scratch, options)
-	== SBI_SCRATCH_OPTIONS_OFFSET,
-	"struct sbi_scratch definition has changed, please redefine "
-	"SBI_SCRATCH_OPTIONS_OFFSET");
+assert_member_offset(struct sbi_scratch, fw_start, SBI_SCRATCH_FW_START_OFFSET);
+assert_member_offset(struct sbi_scratch, fw_size, SBI_SCRATCH_FW_SIZE_OFFSET);
+assert_member_offset(struct sbi_scratch, fw_rw_offset, SBI_SCRATCH_FW_RW_OFFSET);
+assert_member_offset(struct sbi_scratch, fw_heap_offset, SBI_SCRATCH_FW_HEAP_OFFSET);
+assert_member_offset(struct sbi_scratch, fw_heap_size, SBI_SCRATCH_FW_HEAP_SIZE_OFFSET);
+assert_member_offset(struct sbi_scratch, next_arg1, SBI_SCRATCH_NEXT_ARG1_OFFSET);
+assert_member_offset(struct sbi_scratch, next_addr, SBI_SCRATCH_NEXT_ADDR_OFFSET);
+assert_member_offset(struct sbi_scratch, next_mode, SBI_SCRATCH_NEXT_MODE_OFFSET);
+assert_member_offset(struct sbi_scratch, warmboot_addr, SBI_SCRATCH_WARMBOOT_ADDR_OFFSET);
+assert_member_offset(struct sbi_scratch, platform_addr, SBI_SCRATCH_PLATFORM_ADDR_OFFSET);
+assert_member_offset(struct sbi_scratch, hartid_to_scratch, SBI_SCRATCH_HARTID_TO_SCRATCH_OFFSET);
+assert_member_offset(struct sbi_scratch, trap_context, SBI_SCRATCH_TRAP_CONTEXT_OFFSET);
+assert_member_offset(struct sbi_scratch, tmp0, SBI_SCRATCH_TMP0_OFFSET);
+assert_member_offset(struct sbi_scratch, options, SBI_SCRATCH_OPTIONS_OFFSET);
+assert_member_offset(struct sbi_scratch, hartindex, SBI_SCRATCH_HARTINDEX_OFFSET);

/** Possible options for OpenSBI library */
enum sbi_scratch_options {
@@ -210,15 +170,18 @@ do { \
#define current_hartindex() \
	(sbi_scratch_thishart_ptr()->hartindex)

-/** Last HART index having a sbi_scratch pointer */
-extern u32 last_hartindex_having_scratch;
+/** Number of harts managed by this OpenSBI instance */
+extern u32 sbi_scratch_hart_count;

-/** Get last HART index having a sbi_scratch pointer */
-#define sbi_scratch_last_hartindex() last_hartindex_having_scratch
+/** Get the number of harts managed by this OpenSBI instance */
+#define sbi_hart_count() sbi_scratch_hart_count
+
+/** Iterate over the harts managed by this OpenSBI instance */
+#define sbi_for_each_hartindex(__var) \
+	for (u32 __var = 0; __var < sbi_hart_count(); ++__var)
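
Call sites using the old last-index idiom, for (i = 0; i <= sbi_scratch_last_hartindex(); i++), become count-based loops, and the loop variable is now declared by the macro itself. A sketch of the new style:

sbi_for_each_hartindex(i) {
	struct sbi_scratch *rscratch = sbi_hartindex_to_scratch(i);

	if (!rscratch)
		continue;
	/* ... per-hart work ... */
}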
/** Check whether a particular HART index is valid or not */
-#define sbi_hartindex_valid(__hartindex) \
-	(((__hartindex) <= sbi_scratch_last_hartindex()) ? true : false)
+#define sbi_hartindex_valid(__hartindex) ((__hartindex) < sbi_hart_count())

/** HART index to HART id table */
extern u32 hartindex_to_hartid_table[];
@@ -226,7 +189,7 @@ extern u32 hartindex_to_hartid_table[];
/** Get HART id from HART index */
#define sbi_hartindex_to_hartid(__hartindex) \
	({ \
-		((__hartindex) <= sbi_scratch_last_hartindex()) ?\
+		((__hartindex) < SBI_HARTMASK_MAX_BITS) ? \
			hartindex_to_hartid_table[__hartindex] : -1U; \
	})
@@ -236,8 +199,8 @@ extern struct sbi_scratch *hartindex_to_scratch_table[];
/** Get sbi_scratch from HART index */
#define sbi_hartindex_to_scratch(__hartindex) \
	({ \
-		((__hartindex) <= sbi_scratch_last_hartindex()) ?\
+		((__hartindex) < SBI_HARTMASK_MAX_BITS) ? \
			hartindex_to_scratch_table[__hartindex] : NULL; \
	})

/**

include/sbi/sbi_slist.h

@@ -0,0 +1,33 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
 * Simple singly-linked list library.
*
* Copyright (c) 2025 Rivos Inc.
*
* Authors:
* Clément Léger <cleger@rivosinc.com>
*/
#ifndef __SBI_SLIST_H__
#define __SBI_SLIST_H__
#include <sbi/sbi_types.h>
#define SBI_SLIST_HEAD_INIT(_ptr) (_ptr)
#define SBI_SLIST_HEAD(_lname, _stype) struct _stype *_lname
#define SBI_SLIST_NODE(_stype) SBI_SLIST_HEAD(next, _stype)
#define SBI_SLIST_NODE_INIT(_ptr) .next = _ptr
#define SBI_INIT_SLIST_HEAD(_head) (_head) = NULL
#define SBI_SLIST_ADD(_ptr, _head) \
do { \
(_ptr)->next = _head; \
(_head) = _ptr; \
} while (0)
#define SBI_SLIST_FOR_EACH_ENTRY(_ptr, _head) \
for (_ptr = _head; _ptr; _ptr = _ptr->next)
#endif
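
A tiny usage sketch of these macros (the node type and helpers are invented). SBI_SLIST_ADD points the new node at the old head and then replaces the head, so insertion is O(1):

struct node {
	int val;
	SBI_SLIST_NODE(node);		/* expands to: struct node *next */
};

static SBI_SLIST_HEAD(head, node);	/* expands to: struct node *head */

static void push(struct node *n)
{
	SBI_SLIST_ADD(n, head);
}

static int sum_all(void)
{
	struct node *n;
	int sum = 0;

	SBI_SLIST_FOR_EACH_ENTRY(n, head)
		sum += n->val;
	return sum;
}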


@@ -54,12 +54,12 @@ struct sbi_sse_cb_ops {
	void (*disable_cb)(uint32_t event_id);
};

-/* Set the callback operations for an event
- * @param event_id Event identifier (SBI_SSE_EVENT_*)
- * @param cb_ops Callback operations
+/* Add a supported event with associated callback operations
+ * @param event_id Event identifier (SBI_SSE_EVENT_* or a custom platform one)
+ * @param cb_ops Callback operations (Can be NULL if any)
 * @return 0 on success, error otherwise
 */
-int sbi_sse_set_cb_ops(uint32_t event_id, const struct sbi_sse_cb_ops *cb_ops);
+int sbi_sse_add_event(uint32_t event_id, const struct sbi_sse_cb_ops *cb_ops);

/* Inject an event to the current hart
 * @param event_id Event identifier (SBI_SSE_EVENT_*)


@@ -69,11 +69,18 @@ struct sbi_system_suspend_device {
	 * return from system_suspend() may ignore this parameter.
	 */
	int (*system_suspend)(u32 sleep_type, unsigned long mmode_resume_addr);
+
+	/**
+	 * Resume the system from system suspend
+	 */
+	void (*system_resume)(void);
};

const struct sbi_system_suspend_device *sbi_system_suspend_get_device(void);
void sbi_system_suspend_set_device(struct sbi_system_suspend_device *dev);
void sbi_system_suspend_test_enable(void);
+void sbi_system_resume(void);
+bool sbi_system_is_suspended(void);
bool sbi_system_suspend_supported(u32 sleep_type);
int sbi_system_suspend(u32 sleep_type, ulong resume_addr, ulong opaque);


@@ -54,6 +54,8 @@ do { \
#define SBI_TLB_INFO_SIZE sizeof(struct sbi_tlb_info)

+void __sbi_sfence_vma_all();
+
int sbi_tlb_request(ulong hmask, ulong hbase, struct sbi_tlb_info *tinfo);

int sbi_tlb_init(struct sbi_scratch *scratch, bool cold_boot);


@@ -112,10 +112,13 @@
/** Size (in bytes) of sbi_trap_info */
#define SBI_TRAP_INFO_SIZE SBI_TRAP_INFO_OFFSET(last)

+#define STACK_BOUNDARY 16
+#define ALIGN_TO_BOUNDARY(x, a) (((x) + (a) - 1) & ~((a) - 1))
+
/** Size (in bytes) of sbi_trap_context */
-#define SBI_TRAP_CONTEXT_SIZE (SBI_TRAP_REGS_SIZE + \
-			       SBI_TRAP_INFO_SIZE + \
-			       __SIZEOF_POINTER__)
+#define SBI_TRAP_CONTEXT_SIZE ALIGN_TO_BOUNDARY((SBI_TRAP_REGS_SIZE + \
+						 SBI_TRAP_INFO_SIZE + \
+						 __SIZEOF_POINTER__), STACK_BOUNDARY)

#ifndef __ASSEMBLER__
@@ -124,70 +127,75 @@

/** Representation of register state at time of trap/interrupt */
struct sbi_trap_regs {
-	/** zero register state */
-	unsigned long zero;
-	/** ra register state */
-	unsigned long ra;
-	/** sp register state */
-	unsigned long sp;
-	/** gp register state */
-	unsigned long gp;
-	/** tp register state */
-	unsigned long tp;
-	/** t0 register state */
-	unsigned long t0;
-	/** t1 register state */
-	unsigned long t1;
-	/** t2 register state */
-	unsigned long t2;
-	/** s0 register state */
-	unsigned long s0;
-	/** s1 register state */
-	unsigned long s1;
-	/** a0 register state */
-	unsigned long a0;
-	/** a1 register state */
-	unsigned long a1;
-	/** a2 register state */
-	unsigned long a2;
-	/** a3 register state */
-	unsigned long a3;
-	/** a4 register state */
-	unsigned long a4;
-	/** a5 register state */
-	unsigned long a5;
-	/** a6 register state */
-	unsigned long a6;
-	/** a7 register state */
-	unsigned long a7;
-	/** s2 register state */
-	unsigned long s2;
-	/** s3 register state */
-	unsigned long s3;
-	/** s4 register state */
-	unsigned long s4;
-	/** s5 register state */
-	unsigned long s5;
-	/** s6 register state */
-	unsigned long s6;
-	/** s7 register state */
-	unsigned long s7;
-	/** s8 register state */
-	unsigned long s8;
-	/** s9 register state */
-	unsigned long s9;
-	/** s10 register state */
-	unsigned long s10;
-	/** s11 register state */
-	unsigned long s11;
-	/** t3 register state */
-	unsigned long t3;
-	/** t4 register state */
-	unsigned long t4;
-	/** t5 register state */
-	unsigned long t5;
-	/** t6 register state */
-	unsigned long t6;
+	union {
+		unsigned long gprs[32];
+		struct {
+			/** zero register state */
+			unsigned long zero;
+			/** ra register state */
+			unsigned long ra;
+			/** sp register state */
+			unsigned long sp;
+			/** gp register state */
+			unsigned long gp;
+			/** tp register state */
+			unsigned long tp;
+			/** t0 register state */
+			unsigned long t0;
+			/** t1 register state */
+			unsigned long t1;
+			/** t2 register state */
+			unsigned long t2;
+			/** s0 register state */
+			unsigned long s0;
+			/** s1 register state */
+			unsigned long s1;
+			/** a0 register state */
+			unsigned long a0;
+			/** a1 register state */
+			unsigned long a1;
+			/** a2 register state */
+			unsigned long a2;
+			/** a3 register state */
+			unsigned long a3;
+			/** a4 register state */
+			unsigned long a4;
+			/** a5 register state */
+			unsigned long a5;
+			/** a6 register state */
+			unsigned long a6;
+			/** a7 register state */
+			unsigned long a7;
+			/** s2 register state */
+			unsigned long s2;
+			/** s3 register state */
+			unsigned long s3;
+			/** s4 register state */
+			unsigned long s4;
+			/** s5 register state */
+			unsigned long s5;
+			/** s6 register state */
+			unsigned long s6;
+			/** s7 register state */
+			unsigned long s7;
+			/** s8 register state */
+			unsigned long s8;
+			/** s9 register state */
+			unsigned long s9;
+			/** s10 register state */
+			unsigned long s10;
+			/** s11 register state */
+			unsigned long s11;
+			/** t3 register state */
+			unsigned long t3;
+			/** t4 register state */
+			unsigned long t4;
+			/** t5 register state */
+			unsigned long t5;
+			/** t6 register state */
+			unsigned long t6;
+		};
+	};
	/** mepc register state */
	unsigned long mepc;
	/** mstatus register state */
@@ -196,6 +204,21 @@ struct sbi_trap_regs {
	unsigned long mstatusH;
};

+_Static_assert(
+	sizeof(((struct sbi_trap_regs *)0)->gprs) ==
+	offsetof(struct sbi_trap_regs, t6) +
+	sizeof(((struct sbi_trap_regs *)0)->t6),
+	"struct sbi_trap_regs's layout differs between gprs and named members");
+
+#define REG_VAL(idx, regs) ((regs)->gprs[(idx)])
+
+#define GET_RS1(insn, regs) REG_VAL(GET_RS1_NUM(insn), regs)
+#define GET_RS2(insn, regs) REG_VAL(GET_RS2_NUM(insn), regs)
+#define GET_RS1S(insn, regs) REG_VAL(GET_RS1S_NUM(insn), regs)
+#define GET_RS2S(insn, regs) REG_VAL(GET_RS2S_NUM(insn), regs)
+#define GET_RS2C(insn, regs) REG_VAL(GET_RS2C_NUM(insn), regs)
+#define SET_RD(insn, regs, val) (REG_VAL(GET_RD_NUM(insn), regs) = (val))
+
/** Representation of trap details */
struct sbi_trap_info {
	/** cause Trap exception cause */
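
Because the GPRs are now also addressable as an array, emulation paths can index registers directly instead of computing byte offsets into the struct, which is what the old REG_PTR()/REG_OFFSET() machinery did. An invented fragment showing the new macros together:

/* Emulate "add rd, rs1, rs2" for an already-decoded 32-bit instruction */
static void emulate_add(ulong insn, struct sbi_trap_regs *regs)
{
	ulong a = GET_RS1(insn, regs);	/* reads regs->gprs[rs1 field] */
	ulong b = GET_RS2(insn, regs);

	SET_RD(insn, regs, a + b);	/* writes regs->gprs[rd field] */
	regs->mepc += 4;		/* skip the emulated instruction */
}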


@@ -28,8 +28,6 @@ int sbi_load_access_handler(struct sbi_trap_context *tcntx);
int sbi_store_access_handler(struct sbi_trap_context *tcntx);

-int sbi_double_trap_handler(struct sbi_trap_context *tcntx);
-
ulong sbi_misaligned_tinst_fixup(ulong orig_tinst, ulong new_tinst,
				 ulong addr_offset);


@@ -14,7 +14,7 @@
/* clang-format off */

-typedef char s8;
+typedef signed char s8;
typedef unsigned char u8;
typedef unsigned char uint8_t;
@@ -96,6 +96,13 @@ typedef uint64_t be64_t;
	const typeof(((type *)0)->member) * __mptr = (ptr); \
	(type *)((char *)__mptr - offsetof(type, member)); })

+#define assert_member_offset(type, member, offset) \
+	_Static_assert( \
+		(offsetof(type, member)) == (offset), \
+		"The offset " #offset " of " #member " in " #type \
+		" is not correct, please redefine it.")
+
#define array_size(x) (sizeof(x) / sizeof((x)[0]))

#define MAX(a, b) ((a) > (b) ? (a) : (b))
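
assert_member_offset() turns each of the verbose _Static_assert blocks seen in sbi_platform.h and sbi_scratch.h above into a one-liner. A standalone illustration (the struct and the simplified message are made up for the example; no padding between two longs on common ABIs):

#include <stddef.h>

#define assert_member_offset(type, member, offset) \
	_Static_assert((offsetof(type, member)) == (offset), \
		       "offset of " #member " in " #type " is not correct")

struct demo {
	long a;	/* offset 0 */
	long b;	/* offset sizeof(long) */
};

assert_member_offset(struct demo, a, 0);
assert_member_offset(struct demo, b, sizeof(long));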


@@ -11,7 +11,7 @@
#define __SBI_VERSION_H__

#define OPENSBI_VERSION_MAJOR 1
-#define OPENSBI_VERSION_MINOR 6
+#define OPENSBI_VERSION_MINOR 7

/**
 * OpenSBI 32-bit version with:


@@ -0,0 +1,18 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 SiFive
*/
#ifndef __SBI_VISIBILITY_H__
#define __SBI_VISIBILITY_H__
#ifndef __DTS__
/*
* Declare all global objects with hidden visibility so access is PC-relative
* instead of going through the GOT.
*/
#pragma GCC visibility push(hidden)
#endif
#endif

include/sbi_utils/cache/cache.h

@@ -0,0 +1,69 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 SiFive Inc.
*/
#ifndef __CACHE_H__
#define __CACHE_H__
#include <sbi/sbi_list.h>
#include <sbi/sbi_types.h>
#define CACHE_NAME_LEN 32
struct cache_device;
struct cache_ops {
/** Warm init **/
int (*warm_init)(struct cache_device *dev);
/** Flush entire cache **/
int (*cache_flush_all)(struct cache_device *dev);
};
struct cache_device {
/** Name of the device **/
char name[CACHE_NAME_LEN];
/** List node for search **/
struct sbi_dlist node;
/** Point to the next level cache **/
struct cache_device *next;
/** Cache Management Operations **/
struct cache_ops *ops;
/** CPU private cache **/
bool cpu_private;
/** The unique id of this cache device **/
u32 id;
};
/**
* Find a registered cache device
*
* @param id unique ID of the cache device
*
* @return the cache device or NULL
*/
struct cache_device *cache_find(u32 id);
/**
* Register a cache device
*
* cache_device->id must be initialized already and must not change during the life
* of the cache_device object.
*
* @param dev the cache device to register
*
* @return 0 on success, or a negative error code on failure
*/
int cache_add(struct cache_device *dev);
/**
* Flush the entire cache
*
* @param dev the cache to flush
*
* @return 0 on success, or a negative error code on failure
*/
int cache_flush_all(struct cache_device *dev);
#endif
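A minimal sketch of how a driver might sit behind this interface (the device name, ID, and flush callback are hypothetical):

#include <sbi_utils/cache/cache.h>

static int my_l2_flush_all(struct cache_device *dev)
{
	/* Issue the platform-specific flush sequence here. */
	return 0;
}

static struct cache_ops my_l2_ops = {
	.cache_flush_all = my_l2_flush_all,
};

static struct cache_device my_l2 = {
	.name = "my-l2",
	.ops = &my_l2_ops,
	.cpu_private = false,
	.id = 0x100,	/* must be unique and stable, per cache_add() */
};

static int my_l2_register(void)
{
	/* cache_find() returns NULL when the ID is not registered yet. */
	if (cache_find(my_l2.id))
		return 0;
	return cache_add(&my_l2);
}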

include/sbi_utils/cache/fdt_cache.h

@@ -0,0 +1,34 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 SiFive Inc.
*/
#ifndef __FDT_CACHE_H__
#define __FDT_CACHE_H__
#include <sbi_utils/cache/cache.h>
/**
* Register a cache device using information from the DT
*
* @param fdt devicetree blob
* @param noff offset of a node in the devicetree blob
* @param dev cache device to register for this devicetree node
*
* @return 0 on success, or a negative error code on failure
*/
int fdt_cache_add(const void *fdt, int noff, struct cache_device *dev);
/**
 * Get the cache device referenced by the "next-level-cache" property of a DT node
*
* @param fdt devicetree blob
* @param noff offset of a node in the devicetree blob
* @param out_dev location to return the cache device
*
* @return 0 on success, or a negative error code on failure
*/
int fdt_next_cache_get(const void *fdt, int noff, struct cache_device **out_dev);
#endif
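Continuing the hypothetical my_l2 device from the sketch above, a DT-backed driver would typically register it in its cold-init path and resolve its next level in the cache hierarchy:

#include <sbi_utils/cache/fdt_cache.h>

static int my_l2_cold_init(const void *fdt, int noff)
{
	int rc;

	/* Register my_l2 for this devicetree node ... */
	rc = fdt_cache_add(fdt, noff, &my_l2);
	if (rc)
		return rc;

	/* ... and follow its "next-level-cache" phandle, if present. */
	return fdt_next_cache_get(fdt, noff, &my_l2.next);
}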


@@ -0,0 +1,40 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 SiFive Inc.
*/
#ifndef __FDT_CMO_HELPER_H__
#define __FDT_CMO_HELPER_H__
#ifdef CONFIG_FDT_CACHE
/**
* Flush the private first level cache of the current hart
*
* @return 0 on success, or a negative error code on failure
*/
int fdt_cmo_private_flc_flush_all(void);
/**
* Flush the last level cache of the current hart
*
* @return 0 on success, or a negative error code on failure
*/
int fdt_cmo_llc_flush_all(void);
/**
* Initialize the cache devices for each hart
*
 * @param cold_boot cold init or warm init
*
* @return 0 on success, or a negative error code on failure
*/
int fdt_cmo_init(bool cold_boot);
#else
static inline int fdt_cmo_init(bool cold_boot) { return 0; }
#endif /* CONFIG_FDT_CACHE */
#endif /* __FDT_CMO_HELPER_H__ */


@@ -1,26 +0,0 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2024 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#ifndef __FDT_CPPC_H__
#define __FDT_CPPC_H__
#include <sbi/sbi_types.h>
#include <sbi_utils/fdt/fdt_driver.h>
#ifdef CONFIG_FDT_CPPC
void fdt_cppc_init(const void *fdt);
#else
static inline void fdt_cppc_init(const void *fdt) { }
#endif
#endif


@@ -18,6 +18,9 @@ struct fdt_driver {
bool experimental;
};
/* List of early FDT drivers generated at compile time */
extern const struct fdt_driver *const fdt_early_drivers[];
/**
 * Initialize a driver instance for a specific DT node
 *


@@ -34,13 +34,6 @@ struct platform_uart_data {
unsigned long reg_offset;
};
const struct fdt_match *fdt_match_node(const void *fdt, int nodeoff,
const struct fdt_match *match_table);
int fdt_find_match(const void *fdt, int startoff,
const struct fdt_match *match_table,
const struct fdt_match **out_match);
int fdt_parse_phandle_with_args(const void *fdt, int nodeoff,
				const char *prop, const char *cells_prop,
				int index, struct fdt_phandle_args *out_args);
@@ -57,9 +50,11 @@ int fdt_parse_hart_id(const void *fdt, int cpu_offset, u32 *hartid);
int fdt_parse_max_enabled_hart_id(const void *fdt, u32 *max_hartid);
int fdt_parse_cbom_block_size(const void *fdt, int cpu_offset, unsigned long *cbom_block_size);
int fdt_parse_timebase_frequency(const void *fdt, unsigned long *freq);
-int fdt_parse_isa_extensions(const void *fdt, unsigned int hard_id,
+int fdt_parse_isa_extensions(const void *fdt, unsigned int hartid,
			     unsigned long *extensions);
int fdt_parse_gaisler_uart_node(const void *fdt, int nodeoffset,


@@ -62,11 +62,6 @@ int fdt_pmu_setup(const void *fdt);
 */
uint64_t fdt_pmu_get_select_value(uint32_t event_idx);
/** The event index to selector value table instance */
extern struct fdt_pmu_hw_event_select_map fdt_pmu_evt_select[];
/** The number of valid entries in fdt_pmu_evt_select[] */
extern uint32_t hw_event_count;
#else
static inline void fdt_pmu_fixup(void *fdt) { }


@@ -1,26 +0,0 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2024 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#ifndef __FDT_HSM_H__
#define __FDT_HSM_H__
#include <sbi/sbi_types.h>
#include <sbi_utils/fdt/fdt_driver.h>
#ifdef CONFIG_FDT_HSM
void fdt_hsm_init(const void *fdt);
#else
static inline void fdt_hsm_init(const void *fdt) { }
#endif
#endif


@@ -0,0 +1,20 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 SiFive Inc.
*/
#ifndef __FDT_HSM_SIFIVE_INST_H__
#define __FDT_HSM_SIFIVE_INST_H__
static inline void sifive_cease(void)
{
__asm__ __volatile__(".insn 0x30500073" ::: "memory");
}
static inline void sifive_cflush(void)
{
__asm__ __volatile__(".insn 0xfc000073" ::: "memory");
}
#endif
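Both helpers emit raw 32-bit opcodes through the assembler's .insn directive, so the SiFive-specific CEASE (0x30500073) and CFLUSH.D.L1 (0xfc000073) instructions can be used without toolchain support for their mnemonics. A hypothetical hart power-down sequence built on them:

static void sifive_power_down(void)
{
	sifive_cflush();	/* write back and invalidate the L1 dcache */
	sifive_cease();		/* hart retires no further instructions */

	/* CEASE does not return; wakeup requires reset. */
	while (1)
		;
}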


@@ -0,0 +1,14 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 SiFive Inc.
*/
#ifndef __FDT_HSM_SIFIVE_TMC0_H__
#define __FDT_HSM_SIFIVE_TMC0_H__
int sifive_tmc0_set_wakemask_enareq(u32 hartid);
void sifive_tmc0_set_wakemask_disreq(u32 hartid);
bool sifive_tmc0_is_pg(u32 hartid);
#endif


@@ -1,26 +0,0 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2020 Western Digital Corporation or its affiliates.
*
* Authors:
* Anup Patel <anup.patel@wdc.com>
*/
#ifndef __FDT_IPI_H__
#define __FDT_IPI_H__
#include <sbi/sbi_types.h>
#include <sbi_utils/fdt/fdt_driver.h>
#ifdef CONFIG_FDT_IPI
int fdt_ipi_init(void);
#else
static inline int fdt_ipi_init(void) { return 0; }
#endif
#endif


@@ -33,6 +33,7 @@ struct aplic_delegate_data {
struct aplic_data {
/* Private members */
struct sbi_irqchip_device irqchip;
struct sbi_dlist node;
/* Public members */
unsigned long addr;
unsigned long size;
@@ -48,4 +49,6 @@ struct aplic_data {
int aplic_cold_irqchip_init(struct aplic_data *aplic);
void aplic_reinit_all(void);
#endif


@@ -11,14 +11,10 @@
#define __FDT_IRQCHIP_H__
#include <sbi/sbi_types.h>
#include <sbi_utils/fdt/fdt_driver.h>
#ifdef CONFIG_FDT_IRQCHIP
struct fdt_irqchip {
const struct fdt_match *match_table;
int (*cold_init)(const void *fdt, int nodeoff, const struct fdt_match *match);
};
int fdt_irqchip_init(void);
#else


@@ -11,6 +11,7 @@
#define __RPMI_MAILBOX_H__
#include <sbi/sbi_error.h>
#include <sbi_utils/mailbox/mailbox.h>
#include <sbi_utils/mailbox/rpmi_msgprot.h>
#define rpmi_u32_count(__var) (sizeof(__var) / sizeof(u32))


@@ -175,7 +175,7 @@ enum rpmi_error {
RPMI_ERR_VENDOR_START = -128,
};
-/** RPMI Message Arguments */
+/** RPMI Mailbox Message Arguments */
struct rpmi_message_args {
u32 flags;
#define RPMI_MSG_FLAGS_NO_TX (1U << 0)
@@ -189,6 +189,20 @@ struct rpmi_message_args {
u32 rx_data_len;
};
/** RPMI Mailbox Channel Attribute IDs */
enum rpmi_channel_attribute_id {
RPMI_CHANNEL_ATTR_PROTOCOL_VERSION = 0,
RPMI_CHANNEL_ATTR_MAX_DATA_LEN,
RPMI_CHANNEL_ATTR_P2A_DOORBELL_SYSMSI_INDEX,
RPMI_CHANNEL_ATTR_TX_TIMEOUT,
RPMI_CHANNEL_ATTR_RX_TIMEOUT,
RPMI_CHANNEL_ATTR_SERVICEGROUP_ID,
RPMI_CHANNEL_ATTR_SERVICEGROUP_VERSION,
RPMI_CHANNEL_ATTR_IMPL_ID,
RPMI_CHANNEL_ATTR_IMPL_VERSION,
RPMI_CHANNEL_ATTR_MAX,
};
/*
 * RPMI SERVICEGROUPS AND SERVICES
 */
@@ -197,11 +211,15 @@ struct rpmi_message_args {
enum rpmi_servicegroup_id {
RPMI_SRVGRP_ID_MIN = 0,
RPMI_SRVGRP_BASE = 0x0001,
-RPMI_SRVGRP_SYSTEM_RESET = 0x0002,
-RPMI_SRVGRP_SYSTEM_SUSPEND = 0x0003,
-RPMI_SRVGRP_HSM = 0x0004,
-RPMI_SRVGRP_CPPC = 0x0005,
-RPMI_SRVGRP_CLOCK = 0x0007,
+RPMI_SRVGRP_SYSTEM_MSI = 0x0002,
+RPMI_SRVGRP_SYSTEM_RESET = 0x0003,
+RPMI_SRVGRP_SYSTEM_SUSPEND = 0x0004,
+RPMI_SRVGRP_HSM = 0x0005,
+RPMI_SRVGRP_CPPC = 0x0006,
+RPMI_SRVGRP_VOLTAGE = 0x00007,
+RPMI_SRVGRP_CLOCK = 0x0008,
+RPMI_SRVGRP_DEVICE_POWER = 0x0009,
+RPMI_SRVGRP_PERFORMANCE = 0x0000A,
RPMI_SRVGRP_ID_MAX_COUNT,
/* Reserved range for service groups */
@@ -232,12 +250,10 @@ enum rpmi_base_service_id {
RPMI_BASE_SRV_GET_PLATFORM_INFO = 0x05,
RPMI_BASE_SRV_PROBE_SERVICE_GROUP = 0x06,
RPMI_BASE_SRV_GET_ATTRIBUTES = 0x07,
-RPMI_BASE_SRV_SET_MSI = 0x08,
};
-#define RPMI_BASE_FLAGS_F0_PRIVILEGE (1U << 2)
-#define RPMI_BASE_FLAGS_F0_EV_NOTIFY (1U << 1)
-#define RPMI_BASE_FLAGS_F0_MSI_EN (1U)
+#define RPMI_BASE_FLAGS_F0_PRIVILEGE (1U << 1)
+#define RPMI_BASE_FLAGS_F0_EV_NOTIFY (1U << 0)
enum rpmi_base_context_priv_level {
RPMI_BASE_CONTEXT_PRIV_S_MODE,
@@ -258,6 +274,92 @@ struct rpmi_base_get_platform_info_resp {
char plat_info[];
};
/** RPMI System MSI ServiceGroup Service IDs */
enum rpmi_sysmsi_service_id {
RPMI_SYSMSI_SRV_ENABLE_NOTIFICATION = 0x01,
RPMI_SYSMSI_SRV_GET_ATTRIBUTES = 0x2,
RPMI_SYSMSI_SRV_GET_MSI_ATTRIBUTES = 0x3,
RPMI_SYSMSI_SRV_SET_MSI_STATE = 0x4,
RPMI_SYSMSI_SRV_GET_MSI_STATE = 0x5,
RPMI_SYSMSI_SRV_SET_MSI_TARGET = 0x6,
RPMI_SYSMSI_SRV_GET_MSI_TARGET = 0x7,
RPMI_SYSMSI_SRV_ID_MAX_COUNT,
};
/** Response for system MSI service group attributes */
struct rpmi_sysmsi_get_attributes_resp {
s32 status;
u32 sys_num_msi;
u32 flag0;
u32 flag1;
};
/** Request for system MSI attributes */
struct rpmi_sysmsi_get_msi_attributes_req {
u32 sys_msi_index;
};
/** Response for system MSI attributes */
struct rpmi_sysmsi_get_msi_attributes_resp {
s32 status;
u32 flag0;
u32 flag1;
u8 name[16];
};
#define RPMI_SYSMSI_MSI_ATTRIBUTES_FLAG0_PREF_PRIV (1U << 0)
/** Request for system MSI set state */
struct rpmi_sysmsi_set_msi_state_req {
u32 sys_msi_index;
u32 sys_msi_state;
};
#define RPMI_SYSMSI_MSI_STATE_ENABLE (1U << 0)
#define RPMI_SYSMSI_MSI_STATE_PENDING (1U << 1)
/** Response for system MSI set state */
struct rpmi_sysmsi_set_msi_state_resp {
s32 status;
};
/** Request for system MSI get state */
struct rpmi_sysmsi_get_msi_state_req {
u32 sys_msi_index;
};
/** Response for system MSI get state */
struct rpmi_sysmsi_get_msi_state_resp {
s32 status;
u32 sys_msi_state;
};
/** Request for system MSI set target */
struct rpmi_sysmsi_set_msi_target_req {
u32 sys_msi_index;
u32 sys_msi_address_low;
u32 sys_msi_address_high;
u32 sys_msi_data;
};
/** Response for system MSI set target */
struct rpmi_sysmsi_set_msi_target_resp {
s32 status;
};
/** Request for system MSI get target */
struct rpmi_sysmsi_get_msi_target_req {
u32 sys_msi_index;
};
/** Response for system MSI get target */
struct rpmi_sysmsi_get_msi_target_resp {
s32 status;
u32 sys_msi_address_low;
u32 sys_msi_address_high;
u32 sys_msi_data;
};
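To make the message shapes concrete: steering system MSI index 3 at a hypothetical IMSIC page and then enabling it would use two requests along these lines (all values illustrative; each response carries a status drawn from enum rpmi_error):

/* RPMI_SYSMSI_SRV_SET_MSI_TARGET payload */
struct rpmi_sysmsi_set_msi_target_req tgt = {
	.sys_msi_index = 3,
	.sys_msi_address_low = 0x24000000,	/* hypothetical IMSIC address */
	.sys_msi_address_high = 0x0,
	.sys_msi_data = 5,			/* MSI data word */
};

/* RPMI_SYSMSI_SRV_SET_MSI_STATE payload */
struct rpmi_sysmsi_set_msi_state_req st = {
	.sys_msi_index = 3,
	.sys_msi_state = RPMI_SYSMSI_MSI_STATE_ENABLE,
};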
/** RPMI System Reset ServiceGroup Service IDs */
enum rpmi_system_reset_service_id {
RPMI_SYSRST_SRV_ENABLE_NOTIFICATION = 0x01,
@@ -512,6 +614,86 @@ struct rpmi_cppc_hart_list_resp {
u32 hartid[(RPMI_MSG_DATA_SIZE(RPMI_SLOT_SIZE_MIN) - (sizeof(u32) * 3)) / sizeof(u32)];
};
/** RPMI Voltage ServiceGroup Service IDs */
enum rpmi_voltage_service_id {
RPMI_VOLTAGE_SRV_ENABLE_NOTIFICATION = 0x01,
RPMI_VOLTAGE_SRV_GET_NUM_DOMAINS = 0x02,
RPMI_VOLTAGE_SRV_GET_ATTRIBUTES = 0x03,
RPMI_VOLTAGE_SRV_GET_SUPPORTED_LEVELS = 0x04,
RPMI_VOLTAGE_SRV_SET_CONFIG = 0x05,
RPMI_VOLTAGE_SRV_GET_CONFIG = 0x06,
RPMI_VOLTAGE_SRV_SET_LEVEL = 0x07,
RPMI_VOLTAGE_SRV_GET_LEVEL = 0x08,
RPMI_VOLTAGE_SRV_MAX_COUNT,
};
struct rpmi_voltage_get_num_domains_resp {
s32 status;
u32 num_domains;
};
struct rpmi_voltage_get_attributes_req {
u32 domain_id;
};
struct rpmi_voltage_get_attributes_resp {
s32 status;
u32 flags;
u32 num_levels;
u32 transition_latency;
u8 name[16];
};
struct rpmi_voltage_get_supported_rate_req {
u32 domain_id;
u32 index;
};
struct rpmi_voltage_get_supported_rate_resp {
s32 status;
u32 flags;
u32 remaining;
u32 returned;
u32 level[0];
};
struct rpmi_voltage_set_config_req {
u32 domain_id;
#define RPMI_CLOCK_CONFIG_ENABLE (1U << 0)
u32 config;
};
struct rpmi_voltage_set_config_resp {
s32 status;
};
struct rpmi_voltage_get_config_req {
u32 domain_id;
};
struct rpmi_voltage_get_config_resp {
s32 status;
u32 config;
};
struct rpmi_voltage_set_level_req {
u32 domain_id;
s32 level;
};
struct rpmi_voltage_set_level_resp {
s32 status;
};
struct rpmi_voltage_get_level_req {
u32 domain_id;
};
struct rpmi_voltage_get_level_resp {
s32 status;
s32 level;
};
/** RPMI Clock ServiceGroup Service IDs */
enum rpmi_clock_service_id {
RPMI_CLOCK_SRV_ENABLE_NOTIFICATION = 0x01,
@@ -604,4 +786,165 @@ struct rpmi_clock_get_rate_resp {
u32 clock_rate_high;
};
/** RPMI Device Power ServiceGroup Service IDs */
enum rpmi_dpwr_service_id {
RPMI_DPWR_SRV_ENABLE_NOTIFICATION = 0x01,
RPMI_DPWR_SRV_GET_NUM_DOMAINS = 0x02,
RPMI_DPWR_SRV_GET_ATTRIBUTES = 0x03,
RPMI_DPWR_SRV_SET_STATE = 0x04,
RPMI_DPWR_SRV_GET_STATE = 0x05,
RPMI_DPWR_SRV_MAX_COUNT,
};
struct rpmi_dpwr_get_num_domain_resp {
s32 status;
u32 num_domain;
};
struct rpmi_dpwr_get_attrs_req {
u32 domain_id;
};
struct rpmi_dpwr_get_attrs_resp {
s32 status;
u32 flags;
u32 transition_latency;
u8 name[16];
};
struct rpmi_dpwr_set_state_req {
u32 domain_id;
u32 state;
};
struct rpmi_dpwr_set_state_resp {
s32 status;
};
struct rpmi_dpwr_get_state_req {
u32 domain_id;
};
struct rpmi_dpwr_get_state_resp {
s32 status;
u32 state;
};
/** RPMI Performance ServiceGroup Service IDs */
enum rpmi_performance_service_id {
RPMI_PERF_SRV_ENABLE_NOTIFICATION = 0x01,
RPMI_PERF_SRV_GET_NUM_DOMAINS = 0x02,
RPMI_PERF_SRV_GET_ATTRIBUTES = 0x03,
RPMI_PERF_SRV_GET_SUPPORTED_LEVELS = 0x04,
RPMI_PERF_SRV_GET_LEVEL = 0x05,
RPMI_PERF_SRV_SET_LEVEL = 0x06,
RPMI_PERF_SRV_GET_LIMIT = 0x07,
RPMI_PERF_SRV_SET_LIMIT = 0x08,
RPMI_PERF_SRV_GET_FAST_CHANNEL_REGION = 0x09,
RPMI_PERF_SRV_GET_FAST_CHANNEL_ATTRIBUTES = 0x0A,
RPMI_PERF_SRV_MAX_COUNT,
};
struct rpmi_perf_get_num_domain_resp {
s32 status;
u32 num_domains;
};
struct rpmi_perf_get_attrs_req {
u32 domain_id;
};
struct rpmi_perf_get_attrs_resp {
s32 status;
u32 flags;
u32 num_level;
u32 latency;
u8 name[16];
};
struct rpmi_perf_get_supported_level_req {
u32 domain_id;
u32 perf_level_index;
};
struct rpmi_perf_domain_level {
u32 level_index;
u32 opp_level;
u32 power_cost_uw;
u32 transition_latency_us;
};
struct rpmi_perf_get_supported_level_resp {
s32 status;
u32 reserve;
u32 remaining;
u32 returned;
struct rpmi_perf_domain_level level[0];
};
struct rpmi_perf_get_level_req {
u32 domain_id;
};
struct rpmi_perf_get_level_resp {
s32 status;
u32 level_index;
};
struct rpmi_perf_set_level_req {
u32 domain_id;
u32 level_index;
};
struct rpmi_perf_set_level_resp {
s32 status;
};
struct rpmi_perf_get_limit_req {
u32 domain_id;
};
struct rpmi_perf_get_limit_resp {
s32 status;
u32 level_index_max;
u32 level_index_min;
};
struct rpmi_perf_set_limit_req {
u32 domain_id;
u32 level_index_max;
u32 level_index_min;
};
struct rpmi_perf_set_limit_resp {
s32 status;
};
struct rpmi_perf_get_fast_chn_region_resp {
s32 status;
u32 region_phy_addr_low;
u32 region_phy_addr_high;
u32 region_size_low;
u32 region_size_high;
};
struct rpmi_perf_get_fast_chn_attr_req {
u32 domain_id;
u32 service_id;
};
struct rpmi_perf_get_fast_chn_attr_resp {
s32 status;
u32 flags;
u32 region_offset_low;
u32 region_offset_high;
u32 region_size;
u32 db_addr_low;
u32 db_addr_high;
u32 db_id_low;
u32 db_id_high;
u32 db_perserved_low;
u32 db_perserved_high;
};
#endif /* !__RPMI_MSGPROT_H__ */


@@ -15,11 +15,11 @@
#ifdef CONFIG_FDT_MPXY
-void fdt_mpxy_init(const void *fdt);
+int fdt_mpxy_init(const void *fdt);
#else
-static inline void fdt_mpxy_init(const void *fdt) { }
+static inline int fdt_mpxy_init(const void *fdt) { return 0; }
#endif


@@ -0,0 +1,85 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2024 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#ifndef __FDT_MPXY_RPMI_MBOX_H__
#define __FDT_MPXY_RPMI_MBOX_H__
#include <sbi/sbi_types.h>
#include <sbi/sbi_mpxy.h>
#include <sbi_utils/mailbox/fdt_mailbox.h>
#include <sbi_utils/mailbox/rpmi_msgprot.h>
#include <sbi_utils/mpxy/fdt_mpxy.h>
/** Convert the mpxy attribute ID to attribute array index */
#define attr_id2index(attr_id) (attr_id - SBI_MPXY_ATTR_MSGPROTO_ATTR_START)
enum mpxy_msgprot_rpmi_attr_id {
MPXY_MSGPROT_RPMI_ATTR_SERVICEGROUP_ID = SBI_MPXY_ATTR_MSGPROTO_ATTR_START,
MPXY_MSGPROT_RPMI_ATTR_SERVICEGROUP_VERSION,
MPXY_MSGPROT_RPMI_ATTR_IMPL_ID,
MPXY_MSGPROT_RPMI_ATTR_IMPL_VERSION,
MPXY_MSGPROT_RPMI_ATTR_MAX_ID
};
/**
* MPXY message protocol attributes for RPMI
* Order of attribute fields must follow the
* attribute IDs in `enum mpxy_msgprot_rpmi_attr_id`
*/
struct mpxy_rpmi_channel_attrs {
u32 servicegrp_id;
u32 servicegrp_ver;
u32 impl_id;
u32 impl_ver;
};
/** Make sure all attributes are packed for direct memcpy */
#define assert_field_offset(field, attr_offset) \
_Static_assert( \
((offsetof(struct mpxy_rpmi_channel_attrs, field)) / \
sizeof(u32)) == (attr_offset - SBI_MPXY_ATTR_MSGPROTO_ATTR_START),\
"field " #field \
" from struct mpxy_rpmi_channel_attrs invalid offset, expected " #attr_offset)
assert_field_offset(servicegrp_id, MPXY_MSGPROT_RPMI_ATTR_SERVICEGROUP_ID);
assert_field_offset(servicegrp_ver, MPXY_MSGPROT_RPMI_ATTR_SERVICEGROUP_VERSION);
assert_field_offset(impl_id, MPXY_MSGPROT_RPMI_ATTR_IMPL_ID);
assert_field_offset(impl_ver, MPXY_MSGPROT_RPMI_ATTR_IMPL_VERSION);
/** MPXY RPMI service data for each service group */
struct mpxy_rpmi_service_data {
u8 id;
u32 min_tx_len;
u32 max_tx_len;
u32 min_rx_len;
u32 max_rx_len;
};
/** MPXY RPMI mbox data for each service group */
struct mpxy_rpmi_mbox_data {
u32 servicegrp_id;
u32 num_services;
struct mpxy_rpmi_service_data *service_data;
/** Transfer RPMI service group message */
int (*xfer_group)(void *context, struct mbox_chan *chan,
struct mbox_xfer *xfer);
/** Setup RPMI service group context for MPXY */
int (*setup_group)(void **context, struct mbox_chan *chan,
const struct mpxy_rpmi_mbox_data *data);
/** Cleanup RPMI service group context for MPXY */
void (*cleanup_group)(void *context);
};
/** Common probe function for MPXY RPMI drivers */
int mpxy_rpmi_mbox_init(const void *fdt, int nodeoff, const struct fdt_match *match);
#endif


@@ -1,31 +0,0 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2020 Western Digital Corporation or its affiliates.
*
* Authors:
* Anup Patel <anup.patel@wdc.com>
*/
#ifndef __FDT_RESET_H__
#define __FDT_RESET_H__
#include <sbi/sbi_types.h>
#include <sbi_utils/fdt/fdt_driver.h>
#ifdef CONFIG_FDT_RESET
/**
* fdt_reset_init() - initialize reset drivers based on the device-tree
*
* This function shall be invoked in final init.
*/
void fdt_reset_init(const void *fdt);
#else
static inline void fdt_reset_init(const void *fdt) { }
#endif
#endif


@@ -12,7 +12,9 @@
#include <sbi/sbi_types.h>
#define UART_CAP_UUE BIT(0) /* Check UUE capability for XScale PXA UARTs */
int uart8250_init(unsigned long base, u32 in_freq, u32 baudrate, u32 reg_shift,
-		  u32 reg_width, u32 reg_offset);
+		  u32 reg_width, u32 reg_offset, u32 caps);
#endif


@@ -1,26 +0,0 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2024 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#ifndef __FDT_SUSPEND_H__
#define __FDT_SUSPEND_H__
#include <sbi/sbi_types.h>
#include <sbi_utils/fdt/fdt_driver.h>
#ifdef CONFIG_FDT_SUSPEND
void fdt_suspend_init(const void *fdt);
#else
static inline void fdt_suspend_init(const void *fdt) { }
#endif
#endif


@@ -42,6 +42,11 @@ struct aclint_mtimer_data {
void (*time_wr)(bool timecmp, u64 value, volatile u64 *addr);
};
struct aclint_mtimer_data *aclint_get_mtimer_data(void);
void aclint_mtimer_update(struct aclint_mtimer_data *mt,
struct aclint_mtimer_data *ref);
void aclint_mtimer_sync(struct aclint_mtimer_data *mt);
void aclint_mtimer_set_reference(struct aclint_mtimer_data *mt,


@@ -75,10 +75,13 @@ libsbi-objs-y += sbi_emulate_csr.o
libsbi-objs-y += sbi_fifo.o
libsbi-objs-y += sbi_fwft.o
libsbi-objs-y += sbi_hart.o
libsbi-objs-y += sbi_hart_pmp.o
libsbi-objs-y += sbi_hart_protection.o
libsbi-objs-y += sbi_heap.o
libsbi-objs-y += sbi_math.o
libsbi-objs-y += sbi_hfence.o
libsbi-objs-y += sbi_hsm.o
libsbi-objs-y += sbi_illegal_atomic.o
libsbi-objs-y += sbi_illegal_insn.o
libsbi-objs-y += sbi_init.o
libsbi-objs-y += sbi_ipi.o


@@ -93,77 +93,91 @@ void misa_string(int xlen, char *out, unsigned int out_sz)
unsigned long csr_read_num(int csr_num)
{
-#define switchcase_csr_read(__csr_num, __val)	\
-	case __csr_num:				\
-		__val = csr_read(__csr_num);	\
-		break;
-#define switchcase_csr_read_2(__csr_num, __val)	\
-	switchcase_csr_read(__csr_num + 0, __val)	\
-	switchcase_csr_read(__csr_num + 1, __val)
-#define switchcase_csr_read_4(__csr_num, __val)	\
-	switchcase_csr_read_2(__csr_num + 0, __val)	\
-	switchcase_csr_read_2(__csr_num + 2, __val)
-#define switchcase_csr_read_8(__csr_num, __val)	\
-	switchcase_csr_read_4(__csr_num + 0, __val)	\
-	switchcase_csr_read_4(__csr_num + 4, __val)
-#define switchcase_csr_read_16(__csr_num, __val)	\
-	switchcase_csr_read_8(__csr_num + 0, __val)	\
-	switchcase_csr_read_8(__csr_num + 8, __val)
-#define switchcase_csr_read_32(__csr_num, __val)	\
-	switchcase_csr_read_16(__csr_num + 0, __val)	\
-	switchcase_csr_read_16(__csr_num + 16, __val)
-#define switchcase_csr_read_64(__csr_num, __val)	\
-	switchcase_csr_read_32(__csr_num + 0, __val)	\
-	switchcase_csr_read_32(__csr_num + 32, __val)
-
-	unsigned long ret = 0;
+#define switchcase_csr_read(__csr_num)		\
+	case __csr_num:				\
+		return csr_read(__csr_num);
+#define switchcase_csr_read_2(__csr_num)	\
+	switchcase_csr_read(__csr_num + 0)	\
+	switchcase_csr_read(__csr_num + 1)
+#define switchcase_csr_read_4(__csr_num)	\
+	switchcase_csr_read_2(__csr_num + 0)	\
+	switchcase_csr_read_2(__csr_num + 2)
+#define switchcase_csr_read_8(__csr_num)	\
+	switchcase_csr_read_4(__csr_num + 0)	\
+	switchcase_csr_read_4(__csr_num + 4)
+#define switchcase_csr_read_16(__csr_num)	\
+	switchcase_csr_read_8(__csr_num + 0)	\
+	switchcase_csr_read_8(__csr_num + 8)
+#define switchcase_csr_read_32(__csr_num)	\
+	switchcase_csr_read_16(__csr_num + 0)	\
+	switchcase_csr_read_16(__csr_num + 16)
+#define switchcase_csr_read_64(__csr_num)	\
+	switchcase_csr_read_32(__csr_num + 0)	\
+	switchcase_csr_read_32(__csr_num + 32)
+#define switchcase_csr_read_128(__csr_num)	\
+	switchcase_csr_read_64(__csr_num + 0)	\
+	switchcase_csr_read_64(__csr_num + 64)
+#define switchcase_csr_read_256(__csr_num)	\
+	switchcase_csr_read_128(__csr_num + 0)	\
+	switchcase_csr_read_128(__csr_num + 128)

	switch (csr_num) {
-	switchcase_csr_read_16(CSR_PMPCFG0, ret)
-	switchcase_csr_read_64(CSR_PMPADDR0, ret)
-	switchcase_csr_read(CSR_MCYCLE, ret)
-	switchcase_csr_read(CSR_MINSTRET, ret)
-	switchcase_csr_read(CSR_MHPMCOUNTER3, ret)
-	switchcase_csr_read_4(CSR_MHPMCOUNTER4, ret)
-	switchcase_csr_read_8(CSR_MHPMCOUNTER8, ret)
-	switchcase_csr_read_16(CSR_MHPMCOUNTER16, ret)
-	switchcase_csr_read(CSR_MCOUNTINHIBIT, ret)
-	switchcase_csr_read(CSR_MCYCLECFG, ret)
-	switchcase_csr_read(CSR_MINSTRETCFG, ret)
-	switchcase_csr_read(CSR_MHPMEVENT3, ret)
-	switchcase_csr_read_4(CSR_MHPMEVENT4, ret)
-	switchcase_csr_read_8(CSR_MHPMEVENT8, ret)
-	switchcase_csr_read_16(CSR_MHPMEVENT16, ret)
+	switchcase_csr_read_16(CSR_PMPCFG0)
+	switchcase_csr_read_64(CSR_PMPADDR0)
+	switchcase_csr_read(CSR_MCYCLE)
+	switchcase_csr_read(CSR_MINSTRET)
+	switchcase_csr_read(CSR_MHPMCOUNTER3)
+	switchcase_csr_read_4(CSR_MHPMCOUNTER4)
+	switchcase_csr_read_8(CSR_MHPMCOUNTER8)
+	switchcase_csr_read_16(CSR_MHPMCOUNTER16)
+	switchcase_csr_read(CSR_MCOUNTINHIBIT)
+	switchcase_csr_read(CSR_MCYCLECFG)
+	switchcase_csr_read(CSR_MINSTRETCFG)
+	switchcase_csr_read(CSR_MHPMEVENT3)
+	switchcase_csr_read_4(CSR_MHPMEVENT4)
+	switchcase_csr_read_8(CSR_MHPMEVENT8)
+	switchcase_csr_read_16(CSR_MHPMEVENT16)
#if __riscv_xlen == 32
-	switchcase_csr_read(CSR_MCYCLEH, ret)
-	switchcase_csr_read(CSR_MINSTRETH, ret)
-	switchcase_csr_read(CSR_MHPMCOUNTER3H, ret)
-	switchcase_csr_read_4(CSR_MHPMCOUNTER4H, ret)
-	switchcase_csr_read_8(CSR_MHPMCOUNTER8H, ret)
-	switchcase_csr_read_16(CSR_MHPMCOUNTER16H, ret)
+	switchcase_csr_read(CSR_MCYCLEH)
+	switchcase_csr_read(CSR_MINSTRETH)
+	switchcase_csr_read(CSR_MHPMCOUNTER3H)
+	switchcase_csr_read_4(CSR_MHPMCOUNTER4H)
+	switchcase_csr_read_8(CSR_MHPMCOUNTER8H)
+	switchcase_csr_read_16(CSR_MHPMCOUNTER16H)
	/**
	 * The CSR range M[CYCLE, INSTRET]CFGH are available only if smcntrpmf
	 * extension is present. The caller must ensure that.
	 */
-	switchcase_csr_read(CSR_MCYCLECFGH, ret)
-	switchcase_csr_read(CSR_MINSTRETCFGH, ret)
+	switchcase_csr_read(CSR_MCYCLECFGH)
+	switchcase_csr_read(CSR_MINSTRETCFGH)
	/**
	 * The CSR range MHPMEVENT[3-16]H are available only if sscofpmf
	 * extension is present. The caller must ensure that.
	 */
-	switchcase_csr_read(CSR_MHPMEVENT3H, ret)
-	switchcase_csr_read_4(CSR_MHPMEVENT4H, ret)
-	switchcase_csr_read_8(CSR_MHPMEVENT8H, ret)
-	switchcase_csr_read_16(CSR_MHPMEVENT16H, ret)
+	switchcase_csr_read(CSR_MHPMEVENT3H)
+	switchcase_csr_read_4(CSR_MHPMEVENT4H)
+	switchcase_csr_read_8(CSR_MHPMEVENT8H)
+	switchcase_csr_read_16(CSR_MHPMEVENT16H)
#endif
+	switchcase_csr_read_256(CSR_CUSTOM0_U_RW_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM1_U_RO_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM2_S_RW_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM3_S_RW_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM4_S_RO_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM5_HS_RW_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM6_HS_RW_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM7_HS_RO_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM8_M_RW_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM9_M_RW_BASE)
+	switchcase_csr_read_64(CSR_CUSTOM10_M_RO_BASE)
	default:
		sbi_panic("%s: Unknown CSR %#x", __func__, csr_num);
-		break;
+		return 0;
	}

-	return ret;
+#undef switchcase_csr_read_256
+#undef switchcase_csr_read_128
#undef switchcase_csr_read_64
#undef switchcase_csr_read_32
#undef switchcase_csr_read_16
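The doubling macros keep the source compact while still expanding to one case per CSR; with the new direct-return form, switchcase_csr_read_4(CSR_MHPMCOUNTER4) expands to the equivalent of:

case CSR_MHPMCOUNTER4 + 0: return csr_read(CSR_MHPMCOUNTER4 + 0);
case CSR_MHPMCOUNTER4 + 1: return csr_read(CSR_MHPMCOUNTER4 + 1);
case CSR_MHPMCOUNTER4 + 2: return csr_read(CSR_MHPMCOUNTER4 + 2);
case CSR_MHPMCOUNTER4 + 3: return csr_read(CSR_MHPMCOUNTER4 + 3);

Each case collapses to a single CSR read plus return, and the old per-call ret accumulator (and its final return ret) disappears, which is why the default branch now returns 0 after the panic.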
@@ -197,6 +211,12 @@ void csr_write_num(int csr_num, unsigned long val)
#define switchcase_csr_write_64(__csr_num, __val)	\
	switchcase_csr_write_32(__csr_num + 0, __val)	\
	switchcase_csr_write_32(__csr_num + 32, __val)
#define switchcase_csr_write_128(__csr_num, __val) \
switchcase_csr_write_64(__csr_num + 0, __val) \
switchcase_csr_write_64(__csr_num + 64, __val)
#define switchcase_csr_write_256(__csr_num, __val) \
switchcase_csr_write_128(__csr_num + 0, __val) \
switchcase_csr_write_128(__csr_num + 128, __val)
	switch (csr_num) {
	switchcase_csr_write_16(CSR_PMPCFG0, val)
@@ -228,12 +248,21 @@ void csr_write_num(int csr_num, unsigned long val)
	switchcase_csr_write_4(CSR_MHPMEVENT4, val)
	switchcase_csr_write_8(CSR_MHPMEVENT8, val)
	switchcase_csr_write_16(CSR_MHPMEVENT16, val)
switchcase_csr_write_256(CSR_CUSTOM0_U_RW_BASE, val)
switchcase_csr_write_64(CSR_CUSTOM2_S_RW_BASE, val)
switchcase_csr_write_64(CSR_CUSTOM3_S_RW_BASE, val)
switchcase_csr_write_64(CSR_CUSTOM5_HS_RW_BASE, val)
switchcase_csr_write_64(CSR_CUSTOM6_HS_RW_BASE, val)
switchcase_csr_write_64(CSR_CUSTOM8_M_RW_BASE, val)
switchcase_csr_write_64(CSR_CUSTOM9_M_RW_BASE, val)
	default:
		sbi_panic("%s: Unknown CSR %#x", __func__, csr_num);
		break;
	}
#undef switchcase_csr_write_256
#undef switchcase_csr_write_128
#undef switchcase_csr_write_64
#undef switchcase_csr_write_32
#undef switchcase_csr_write_16


@@ -12,7 +12,7 @@
#include <sbi/riscv_atomic.h>
#include <sbi/riscv_barrier.h>
-#ifndef __riscv_atomic
+#if !defined(__riscv_atomic) && !defined(__riscv_zalrsc)
#error "opensbi strongly relies on the A extension of RISC-V"
#endif
@@ -31,6 +31,7 @@ void atomic_write(atomic_t *atom, long value)
long atomic_add_return(atomic_t *atom, long value)
{
#ifdef __riscv_atomic
	long ret;
#if __SIZEOF_LONG__ == 4
	__asm__ __volatile__(" amoadd.w.aqrl %1, %2, %0"
@@ -43,6 +44,29 @@ long atomic_add_return(atomic_t *atom, long value)
			     : "r"(value)
			     : "memory");
#endif
#elif __riscv_zalrsc
long ret, temp;
#if __SIZEOF_LONG__ == 4
__asm__ __volatile__("1:lr.w.aqrl %1,%0\n"
" addw %2,%1,%3\n"
" sc.w.aqrl %2,%2,%0\n"
" bnez %2,1b"
: "+A"(atom->counter), "=&r"(ret), "=&r"(temp)
: "r"(value)
: "memory");
#elif __SIZEOF_LONG__ == 8
__asm__ __volatile__("1:lr.d.aqrl %1,%0\n"
" add %2,%1,%3\n"
" sc.d.aqrl %2,%2,%0\n"
" bnez %2,1b"
: "+A"(atom->counter), "=&r"(ret), "=&r"(temp)
: "r"(value)
: "memory");
#endif
#else
#error "need a or zalrsc"
#endif
	return ret + value;
}
@@ -51,6 +75,7 @@ long atomic_sub_return(atomic_t *atom, long value)
	return atomic_add_return(atom, -value);
}
#ifdef __riscv_atomic
#define __axchg(ptr, new, size) \
({ \
	__typeof__(ptr) __ptr = (ptr); \
@@ -76,6 +101,39 @@ long atomic_sub_return(atomic_t *atom, long value)
	} \
	__ret; \
})
#elif __riscv_zalrsc
#define __axchg(ptr, new, size) \
({ \
__typeof__(ptr) __ptr = (ptr); \
__typeof__(new) __new = (new); \
__typeof__(*(ptr)) __ret, __temp; \
switch (size) { \
case 4: \
__asm__ __volatile__ ( \
"1: lr.w.aqrl %0, %1\n" \
" sc.w.aqrl %2, %3, %1\n" \
" bnez %2, 1b\n" \
: "=&r" (__ret), "+A" (*__ptr), "=&r" (__temp) \
: "r" (__new) \
: "memory"); \
break; \
case 8: \
__asm__ __volatile__ ( \
"1: lr.d.aqrl %0, %1\n" \
" sc.d.aqrl %2, %3, %1\n" \
" bnez %2, 1b\n" \
: "=&r" (__ret), "+A" (*__ptr), "=&r" (__temp) \
: "r" (__new) \
: "memory"); \
break; \
default: \
break; \
} \
__ret; \
})
#else
#error "need a or zalrsc"
#endif
#define axchg(ptr, x) \
({ \


@@ -53,7 +53,16 @@ void spin_lock(spinlock_t *lock)
	__asm__ __volatile__(
		/* Atomically increment the next ticket. */
#ifdef __riscv_atomic
		" amoadd.w.aqrl %0, %4, %3\n"
#elif __riscv_zalrsc
"3: lr.w.aqrl %0, %3\n"
" addw %1, %0, %4\n"
" sc.w.aqrl %1, %1, %3\n"
" bnez %1, 3b\n"
#else
#error "need a or zalrsc"
#endif
		/* Did we get the lock? */
		" srli %1, %0, %6\n"


@@ -16,6 +16,7 @@
#include <sbi/sbi_trap.h>
#include <sbi/sbi_dbtr.h>
#include <sbi/sbi_heap.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/riscv_encoding.h>
#include <sbi/riscv_asm.h>
@@ -336,6 +337,19 @@ static void dbtr_trigger_setup(struct sbi_dbtr_trigger *trig,
		if (__test_bit(RV_DBTR_BIT(MC6, VS), &tdata1))
			__set_bit(RV_DBTR_BIT(TS, VS), &trig->state);
		break;
case RISCV_DBTR_TRIG_ICOUNT:
if (__test_bit(RV_DBTR_BIT(ICOUNT, U), &tdata1))
__set_bit(RV_DBTR_BIT(TS, U), &trig->state);
if (__test_bit(RV_DBTR_BIT(ICOUNT, S), &tdata1))
__set_bit(RV_DBTR_BIT(TS, S), &trig->state);
if (__test_bit(RV_DBTR_BIT(ICOUNT, VU), &tdata1))
__set_bit(RV_DBTR_BIT(TS, VU), &trig->state);
if (__test_bit(RV_DBTR_BIT(ICOUNT, VS), &tdata1))
__set_bit(RV_DBTR_BIT(TS, VS), &trig->state);
break;
	default:
		sbi_dprintf("%s: Unknown type (tdata1: 0x%lx Type: %ld)\n",
			    __func__, tdata1, TDATA1_GET_TYPE(tdata1));
@@ -379,6 +393,16 @@ static void dbtr_trigger_enable(struct sbi_dbtr_trigger *trig)
		update_bit(state & RV_DBTR_BIT_MASK(TS, S),
			   RV_DBTR_BIT(MC6, S), &trig->tdata1);
		break;
case RISCV_DBTR_TRIG_ICOUNT:
update_bit(state & RV_DBTR_BIT_MASK(TS, VU),
RV_DBTR_BIT(ICOUNT, VU), &trig->tdata1);
update_bit(state & RV_DBTR_BIT_MASK(TS, VS),
RV_DBTR_BIT(ICOUNT, VS), &trig->tdata1);
update_bit(state & RV_DBTR_BIT_MASK(TS, U),
RV_DBTR_BIT(ICOUNT, U), &trig->tdata1);
update_bit(state & RV_DBTR_BIT_MASK(TS, S),
RV_DBTR_BIT(ICOUNT, S), &trig->tdata1);
break;
	default:
		break;
	}
@@ -418,6 +442,12 @@ static void dbtr_trigger_disable(struct sbi_dbtr_trigger *trig)
		__clear_bit(RV_DBTR_BIT(MC6, U), &trig->tdata1);
		__clear_bit(RV_DBTR_BIT(MC6, S), &trig->tdata1);
		break;
case RISCV_DBTR_TRIG_ICOUNT:
__clear_bit(RV_DBTR_BIT(ICOUNT, VU), &trig->tdata1);
__clear_bit(RV_DBTR_BIT(ICOUNT, VS), &trig->tdata1);
__clear_bit(RV_DBTR_BIT(ICOUNT, U), &trig->tdata1);
__clear_bit(RV_DBTR_BIT(ICOUNT, S), &trig->tdata1);
break;
	default:
		break;
	}
@@ -441,6 +471,7 @@ static int dbtr_trigger_supported(unsigned long type)
	switch (type) {
	case RISCV_DBTR_TRIG_MCONTROL:
	case RISCV_DBTR_TRIG_MCONTROL6:
case RISCV_DBTR_TRIG_ICOUNT:
		return 1;
	default:
		break;
@@ -462,6 +493,11 @@ static int dbtr_trigger_valid(unsigned long type, unsigned long tdata)
		    !(tdata & RV_DBTR_BIT_MASK(MC6, M)))
			return 1;
		break;
case RISCV_DBTR_TRIG_ICOUNT:
if (!(tdata & RV_DBTR_BIT_MASK(ICOUNT, DMODE)) &&
!(tdata & RV_DBTR_BIT_MASK(ICOUNT, M)))
return 1;
break;
	default:
		break;
	}
@@ -506,7 +542,7 @@ int sbi_dbtr_read_trig(unsigned long smode,
{
	struct sbi_dbtr_data_msg *xmit;
	struct sbi_dbtr_trigger *trig;
-	struct sbi_dbtr_shmem_entry *entry;
+	union sbi_dbtr_shmem_entry *entry;
	void *shmem_base = NULL;
	struct sbi_dbtr_hart_triggers_state *hs = NULL;
@@ -523,16 +559,22 @@ int sbi_dbtr_read_trig(unsigned long smode,
	shmem_base = hart_shmem_base(hs);

+	sbi_hart_protection_map_range((unsigned long)shmem_base,
+				      trig_count * sizeof(*entry));
	for_each_trig_entry(shmem_base, trig_count, typeof(*entry), entry) {
-		sbi_hart_map_saddr((unsigned long)entry, sizeof(*entry));
		xmit = &entry->data;
		trig = INDEX_TO_TRIGGER((_idx + trig_idx_base));
+		csr_write(CSR_TSELECT, trig->index);
+		trig->tdata1 = csr_read(CSR_TDATA1);
+		trig->tdata2 = csr_read(CSR_TDATA2);
+		trig->tdata3 = csr_read(CSR_TDATA3);
		xmit->tstate = cpu_to_lle(trig->state);
		xmit->tdata1 = cpu_to_lle(trig->tdata1);
		xmit->tdata2 = cpu_to_lle(trig->tdata2);
		xmit->tdata3 = cpu_to_lle(trig->tdata3);
-		sbi_hart_unmap_saddr();
	}
+	sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+					trig_count * sizeof(*entry));

	return SBI_SUCCESS;
}
@@ -541,7 +583,7 @@ int sbi_dbtr_install_trig(unsigned long smode,
			  unsigned long trig_count, unsigned long *out)
{
	void *shmem_base = NULL;
-	struct sbi_dbtr_shmem_entry *entry;
+	union sbi_dbtr_shmem_entry *entry;
	struct sbi_dbtr_data_msg *recv;
	struct sbi_dbtr_id_msg *xmit;
	unsigned long ctrl;
@@ -556,29 +598,33 @@ int sbi_dbtr_install_trig(unsigned long smode,
		return SBI_ERR_NO_SHMEM;

	shmem_base = hart_shmem_base(hs);
+	sbi_hart_protection_map_range((unsigned long)shmem_base,
+				      trig_count * sizeof(*entry));

	/* Check requested triggers configuration */
	for_each_trig_entry(shmem_base, trig_count, typeof(*entry), entry) {
-		sbi_hart_map_saddr((unsigned long)entry, sizeof(*entry));
		recv = (struct sbi_dbtr_data_msg *)(&entry->data);
		ctrl = recv->tdata1;
		if (!dbtr_trigger_supported(TDATA1_GET_TYPE(ctrl))) {
			*out = _idx;
-			sbi_hart_unmap_saddr();
+			sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+							trig_count * sizeof(*entry));
			return SBI_ERR_FAILED;
		}
		if (!dbtr_trigger_valid(TDATA1_GET_TYPE(ctrl), ctrl)) {
			*out = _idx;
-			sbi_hart_unmap_saddr();
+			sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+							trig_count * sizeof(*entry));
			return SBI_ERR_FAILED;
		}
-		sbi_hart_unmap_saddr();
	}

	if (hs->available_trigs < trig_count) {
		*out = hs->available_trigs;
+		sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+						trig_count * sizeof(*entry));
		return SBI_ERR_FAILED;
	}
@@ -590,17 +636,18 @@ int sbi_dbtr_install_trig(unsigned long smode,
	 */
	trig = sbi_alloc_trigger();

-	sbi_hart_map_saddr((unsigned long)entry, sizeof(*entry));
	recv = (struct sbi_dbtr_data_msg *)(&entry->data);
	xmit = (struct sbi_dbtr_id_msg *)(&entry->id);
	dbtr_trigger_setup(trig, recv);
	dbtr_trigger_enable(trig);
	xmit->idx = cpu_to_lle(trig->index);
-	sbi_hart_unmap_saddr();
	}
+	sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+					trig_count * sizeof(*entry));

	return SBI_SUCCESS;
}
@@ -651,15 +698,11 @@ int sbi_dbtr_enable_trig(unsigned long trig_idx_base,
}

int sbi_dbtr_update_trig(unsigned long smode,
-			 unsigned long trig_idx_base,
-			 unsigned long trig_idx_mask)
+			 unsigned long trig_count)
{
-	unsigned long trig_mask = trig_idx_mask << trig_idx_base;
-	unsigned long idx = trig_idx_base;
-	struct sbi_dbtr_data_msg *recv;
-	unsigned long uidx = 0;
+	unsigned long trig_idx;
	struct sbi_dbtr_trigger *trig;
-	struct sbi_dbtr_shmem_entry *entry;
+	union sbi_dbtr_shmem_entry *entry;
	void *shmem_base = NULL;
	struct sbi_dbtr_hart_triggers_state *hs = NULL;
@@ -672,18 +715,28 @@ int sbi_dbtr_update_trig(unsigned long smode,
	shmem_base = hart_shmem_base(hs);

-	for_each_set_bit_from(idx, &trig_mask, hs->total_trigs) {
-		trig = INDEX_TO_TRIGGER(idx);
-		if (!(trig->state & RV_DBTR_BIT_MASK(TS, MAPPED)))
-			return SBI_ERR_INVALID_PARAM;
-		entry = (shmem_base + uidx * sizeof(*entry));
-		recv = &entry->data;
-		trig->tdata2 = lle_to_cpu(recv->tdata2);
-		dbtr_trigger_enable(trig);
-		uidx++;
-	}
+	if (trig_count >= hs->total_trigs)
+		return SBI_ERR_BAD_RANGE;
+
+	for_each_trig_entry(shmem_base, trig_count, typeof(*entry), entry) {
+		sbi_hart_protection_map_range((unsigned long)entry, sizeof(*entry));
+		trig_idx = entry->id.idx;
+		if (trig_idx >= hs->total_trigs) {
+			sbi_hart_protection_unmap_range((unsigned long)entry, sizeof(*entry));
+			return SBI_ERR_INVALID_PARAM;
+		}
+
+		trig = INDEX_TO_TRIGGER(trig_idx);
+
+		if (!(trig->state & RV_DBTR_BIT_MASK(TS, MAPPED))) {
+			sbi_hart_protection_unmap_range((unsigned long)entry, sizeof(*entry));
+			return SBI_ERR_FAILED;
+		}
+
+		dbtr_trigger_setup(trig, &entry->data);
+		sbi_hart_protection_unmap_range((unsigned long)entry, sizeof(*entry));
+		dbtr_trigger_enable(trig);
+	}

	return SBI_SUCCESS;


@@ -25,7 +25,6 @@ static u32 domain_count = 0;
static bool domain_finalized = false;
#define ROOT_REGION_MAX 32
-static u32 root_memregs_count = 0;
struct sbi_domain root = {
	.name = "root",
@@ -122,6 +121,80 @@ void sbi_domain_memregion_init(unsigned long addr,
	}
}
unsigned int sbi_domain_get_smepmp_flags(struct sbi_domain_memregion *reg)
{
unsigned int pmp_flags = 0;
unsigned long rstart, rend;
if ((reg->flags & SBI_DOMAIN_MEMREGION_ACCESS_MASK) == 0) {
/*
* Region is inaccessible in all privilege modes.
*
* SmePMP allows two encodings for an inaccessible region:
* - pmpcfg.LRWX = 0000 (Inaccessible region)
* - pmpcfg.LRWX = 1000 (Locked inaccessible region)
* We use the first encoding here.
*/
return 0;
} else if (SBI_DOMAIN_MEMREGION_IS_SHARED(reg->flags)) {
/* Read only for both M and SU modes */
if (SBI_DOMAIN_MEMREGION_IS_SUR_MR(reg->flags))
pmp_flags = (PMP_L | PMP_R | PMP_W | PMP_X);
/* Execute for SU but Read/Execute for M mode */
else if (SBI_DOMAIN_MEMREGION_IS_SUX_MRX(reg->flags))
/* locked region */
pmp_flags = (PMP_L | PMP_W | PMP_X);
/* Execute only for both M and SU modes */
else if (SBI_DOMAIN_MEMREGION_IS_SUX_MX(reg->flags))
pmp_flags = (PMP_L | PMP_W);
/* Read/Write for both M and SU modes */
else if (SBI_DOMAIN_MEMREGION_IS_SURW_MRW(reg->flags))
pmp_flags = (PMP_W | PMP_X);
/* Read only for SU mode but Read/Write for M mode */
else if (SBI_DOMAIN_MEMREGION_IS_SUR_MRW(reg->flags))
pmp_flags = (PMP_W);
} else if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
/*
* When smepmp is supported and used, M region cannot have RWX
* permissions on any region.
*/
if ((reg->flags & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK)
== SBI_DOMAIN_MEMREGION_M_RWX) {
sbi_printf("%s: M-mode only regions cannot have"
"RWX permissions\n", __func__);
return 0;
}
/* M-mode only access regions are always locked */
pmp_flags |= PMP_L;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_EXECUTABLE)
pmp_flags |= PMP_X;
} else if (SBI_DOMAIN_MEMREGION_SU_ONLY_ACCESS(reg->flags)) {
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
pmp_flags |= PMP_X;
} else {
rstart = reg->base;
rend = (reg->order < __riscv_xlen) ? rstart + ((1UL << reg->order) - 1) : -1UL;
sbi_printf("%s: Unsupported Smepmp permissions on region 0x%"PRILX"-0x%"PRILX"\n",
__func__, rstart, rend);
}
return pmp_flags;
}
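As a concrete trace through this function (region values hypothetical, and assuming the SBI_DOMAIN_MEMREGION_* helpers match their names): a region readable by both M-mode and S/U-mode takes the shared-region branch and comes back as the locked LRWX encoding, which Smepmp redefines to mean read-only for both privilege groups rather than the ordinary PMP meaning of those bits:

struct sbi_domain_memregion reg = {
	.base = 0x80000000,
	.order = 21,	/* a naturally aligned 2 MiB region */
	.flags = SBI_DOMAIN_MEMREGION_M_READABLE |
		 SBI_DOMAIN_MEMREGION_SU_READABLE,
};

unsigned int pmp_flags = sbi_domain_get_smepmp_flags(&reg);
/* pmp_flags == (PMP_L | PMP_R | PMP_W | PMP_X): read-only in M and S/U */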
bool sbi_domain_check_addr(const struct sbi_domain *dom,
			   unsigned long addr, unsigned long mode,
			   unsigned long access_flags)
@@ -162,7 +235,11 @@ bool sbi_domain_check_addr(const struct sbi_domain *dom,
			rstart + ((1UL << reg->order) - 1) : -1UL;
		if (rstart <= addr && addr <= rend) {
			rmmio = (rflags & SBI_DOMAIN_MEMREGION_MMIO) ? true : false;
-			if (mmio != rmmio)
+			/*
+			 * MMIO devices may appear in regions without the flag set (such as the
+			 * default region), but MMIO device regions should not be used as memory.
+			 */
+			if (!mmio && rmmio)
				return false;
			return ((rrwx & rwx) == rwx) ? true : false;
		}
@@ -218,6 +295,19 @@ static bool is_region_compatible(const struct sbi_domain_memregion *regA,
static bool is_region_before(const struct sbi_domain_memregion *regA,
			     const struct sbi_domain_memregion *regB)
{
/*
* Enforce firmware region ordering for memory access
* under SmePMP.
* Place firmware regions first to ensure consistent
* PMP entries during domain context switches.
*/
if (SBI_DOMAIN_MEMREGION_IS_FIRMWARE(regA->flags) &&
!SBI_DOMAIN_MEMREGION_IS_FIRMWARE(regB->flags))
return true;
if (!SBI_DOMAIN_MEMREGION_IS_FIRMWARE(regA->flags) &&
SBI_DOMAIN_MEMREGION_IS_FIRMWARE(regB->flags))
return false;
	if (regA->order < regB->order)
		return true;
@@ -281,6 +371,17 @@ static void clear_region(struct sbi_domain_memregion* reg)
	sbi_memset(reg, 0x0, sizeof(*reg));
}
static int sbi_domain_used_memregions(const struct sbi_domain *dom)
{
int count = 0;
struct sbi_domain_memregion *reg;
sbi_domain_for_each_memregion(dom, reg)
count++;
return count;
}
static int sanitize_domain(struct sbi_domain *dom)
{
	u32 i, j, count;
@@ -319,9 +420,7 @@ static int sanitize_domain(struct sbi_domain *dom)
	}

	/* Count memory regions */
-	count = 0;
-	sbi_domain_for_each_memregion(dom, reg)
-		count++;
+	count = sbi_domain_used_memregions(dom);

	/* Check presence of firmware regions */
	if (!dom->fw_region_inited) {
@@ -344,7 +443,7 @@ static int sanitize_domain(struct sbi_domain *dom)
	}

	/* Remove covered regions */
-	while(i < (count - 1)) {
+	for (i = 0; i < (count - 1);) {
		is_covered = false;
		reg = &dom->regions[i];
@@ -464,6 +563,8 @@ void sbi_domain_dump(const struct sbi_domain *dom, const char *suffix)
sbi_printf("M: "); sbi_printf("M: ");
if (reg->flags & SBI_DOMAIN_MEMREGION_MMIO) if (reg->flags & SBI_DOMAIN_MEMREGION_MMIO)
sbi_printf("%cI", (k++) ? ',' : '('); sbi_printf("%cI", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_FW)
sbi_printf("%cF", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_M_READABLE) if (reg->flags & SBI_DOMAIN_MEMREGION_M_READABLE)
sbi_printf("%cR", (k++) ? ',' : '('); sbi_printf("%cR", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_M_WRITABLE) if (reg->flags & SBI_DOMAIN_MEMREGION_M_WRITABLE)
@@ -603,6 +704,7 @@ static int root_add_memregion(const struct sbi_domain_memregion *reg)
	int rc;
	bool reg_merged;
	struct sbi_domain_memregion *nreg, *nreg1, *nreg2;
+	int root_memregs_count = sbi_domain_used_memregions(&root);

	/* Sanity checks */
	if (!reg || domain_finalized || !root.regions ||
@@ -685,20 +787,15 @@ int sbi_domain_root_add_memrange(unsigned long addr, unsigned long size,
	return 0;
}

-int sbi_domain_finalize(struct sbi_scratch *scratch, u32 cold_hartid)
+int sbi_domain_startup(struct sbi_scratch *scratch, u32 cold_hartid)
{
	int rc;
	u32 dhart;
	struct sbi_domain *dom;
-	const struct sbi_platform *plat = sbi_platform_ptr(scratch);

-	/* Initialize and populate domains for the platform */
-	rc = sbi_platform_domains_init(plat);
-	if (rc) {
-		sbi_printf("%s: platform domains_init() failed (error %d)\n",
-			   __func__, rc);
-		return rc;
-	}
+	/* Sanity checks */
+	if (!domain_finalized)
+		return SBI_EINVAL;

	/* Startup boot HART of domains */
	sbi_domain_for_each(dom) {
@@ -744,6 +841,26 @@ int sbi_domain_finalize(struct sbi_scratch *scratch, u32 cold_hartid)
		}
	}
return 0;
}
int sbi_domain_finalize(struct sbi_scratch *scratch)
{
int rc;
const struct sbi_platform *plat = sbi_platform_ptr(scratch);
/* Sanity checks */
if (domain_finalized)
return SBI_EINVAL;
/* Initialize and populate domains for the platform */
rc = sbi_platform_domains_init(plat);
if (rc) {
sbi_printf("%s: platform domains_init() failed (error %d)\n",
__func__, rc);
return rc;
}
	/*
	 * Set the finalized flag so that the root domain
	 * regions can't be changed.
@@ -755,11 +872,10 @@ int sbi_domain_finalize(struct sbi_scratch *scratch, u32 cold_hartid)
int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid)
{
-	u32 i;
	int rc;
	struct sbi_hartmask *root_hmask;
	struct sbi_domain_memregion *root_memregs;
-	const struct sbi_platform *plat = sbi_platform_ptr(scratch);
+	int root_memregs_count = 0;

	SBI_INIT_LIST_HEAD(&domain_list);
@@ -804,13 +920,15 @@ int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid)
	/* Root domain firmware memory region */
	sbi_domain_memregion_init(scratch->fw_start, scratch->fw_rw_offset,
				  (SBI_DOMAIN_MEMREGION_M_READABLE |
-				   SBI_DOMAIN_MEMREGION_M_EXECUTABLE),
+				   SBI_DOMAIN_MEMREGION_M_EXECUTABLE |
+				   SBI_DOMAIN_MEMREGION_FW),
				  &root_memregs[root_memregs_count++]);

	sbi_domain_memregion_init((scratch->fw_start + scratch->fw_rw_offset),
				  (scratch->fw_size - scratch->fw_rw_offset),
				  (SBI_DOMAIN_MEMREGION_M_READABLE |
-				   SBI_DOMAIN_MEMREGION_M_WRITABLE),
+				   SBI_DOMAIN_MEMREGION_M_WRITABLE |
+				   SBI_DOMAIN_MEMREGION_FW),
				  &root_memregs[root_memregs_count++]);

	root.fw_region_inited = true;
@@ -840,7 +958,7 @@ int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid)
	root.next_mode = scratch->next_mode;

	/* Root domain possible and assigned HARTs */
-	for (i = 0; i < plat->hart_count; i++)
+	sbi_for_each_hartindex(i)
		sbi_hartmask_set_hartindex(i, root_hmask);

	/* Finally register the root domain */


@@ -10,11 +10,13 @@
#include <sbi/sbi_console.h>
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_domain_context.h>
+#include <sbi/sbi_platform.h>
#include <sbi/sbi_trap.h>

/** Context representation for a hart within a domain */
@@ -44,6 +46,8 @@ struct hart_context {
	unsigned long scounteren;
	/** Supervisor environment configuration register */
	unsigned long senvcfg;
+	/** Supervisor resource management configuration register */
+	unsigned long srmcfg;

	/** Reference to the owning domain */
	struct sbi_domain *dom;
@@ -53,31 +57,30 @@ struct hart_context {
	bool initialized;
};

-struct domain_context_priv {
-	/** Contexts for possible HARTs indexed by hartindex */
-	struct hart_context *hartindex_to_context_table[SBI_HARTMASK_MAX_BITS];
-};
-
-static struct sbi_domain_data dcpriv = {
-	.data_size = sizeof(struct domain_context_priv),
-};
+static struct sbi_domain_data dcpriv;

static inline struct hart_context *hart_context_get(struct sbi_domain *dom,
						    u32 hartindex)
{
-	struct domain_context_priv *dcp = sbi_domain_data_ptr(dom, &dcpriv);
-
-	return (dcp && hartindex < SBI_HARTMASK_MAX_BITS) ?
-		dcp->hartindex_to_context_table[hartindex] : NULL;
+	struct hart_context **dom_hartindex_to_context_table;
+
+	dom_hartindex_to_context_table = sbi_domain_data_ptr(dom, &dcpriv);
+	if (!dom_hartindex_to_context_table || !sbi_hartindex_valid(hartindex))
+		return NULL;
+
+	return dom_hartindex_to_context_table[hartindex];
}

static void hart_context_set(struct sbi_domain *dom, u32 hartindex,
			     struct hart_context *hc)
{
-	struct domain_context_priv *dcp = sbi_domain_data_ptr(dom, &dcpriv);
-
-	if (dcp && hartindex < SBI_HARTMASK_MAX_BITS)
-		dcp->hartindex_to_context_table[hartindex] = hc;
+	struct hart_context **dom_hartindex_to_context_table;
+
+	dom_hartindex_to_context_table = sbi_domain_data_ptr(dom, &dcpriv);
+	if (!dom_hartindex_to_context_table || !sbi_hartindex_valid(hartindex))
+		return;
+
+	dom_hartindex_to_context_table[hartindex] = hc;
}
/** Macro to obtain the current hart's context pointer */
@@ -92,17 +95,22 @@ static void hart_context_set(struct sbi_domain *dom, u32 hartindex,
 *
 * @param ctx pointer to the current HART context
 * @param dom_ctx pointer to the target domain context
+ *
+ * @return 0 on success and negative error code on failure
 */
-static void switch_to_next_domain_context(struct hart_context *ctx,
+static int switch_to_next_domain_context(struct hart_context *ctx,
					  struct hart_context *dom_ctx)
{
	u32 hartindex = current_hartindex();
	struct sbi_trap_context *trap_ctx;
-	struct sbi_domain *current_dom = ctx->dom;
-	struct sbi_domain *target_dom = dom_ctx->dom;
+	struct sbi_domain *current_dom, *target_dom;
	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
-	unsigned int pmp_count = sbi_hart_pmp_count(scratch);
+
+	if (!ctx || !dom_ctx || ctx == dom_ctx)
+		return SBI_EINVAL;
+
+	current_dom = ctx->dom;
+	target_dom = dom_ctx->dom;

	/* Assign current hart to target domain */
	spin_lock(&current_dom->assigned_harts_lock);
	sbi_hartmask_clear_hartindex(hartindex, &current_dom->assigned_harts);
@@ -115,10 +123,8 @@ static void switch_to_next_domain_context(struct hart_context *ctx,
 	spin_unlock(&target_dom->assigned_harts_lock);

 	/* Reconfigure PMP settings for the new domain */
-	for (int i = 0; i < pmp_count; i++) {
-		pmp_disable(i);
-	}
-	sbi_hart_pmp_configure(scratch);
+	sbi_hart_protection_unconfigure(scratch);
+	sbi_hart_protection_configure(scratch);

 	/* Save current CSR context and restore target domain's CSR context */
 	ctx->sstatus = csr_swap(CSR_SSTATUS, dom_ctx->sstatus);
@@ -134,6 +140,8 @@ static void switch_to_next_domain_context(struct hart_context *ctx,
 	ctx->scounteren = csr_swap(CSR_SCOUNTEREN, dom_ctx->scounteren);
 	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12)
 		ctx->senvcfg = csr_swap(CSR_SENVCFG, dom_ctx->senvcfg);
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSQOSID))
+		ctx->srmcfg = csr_swap(CSR_SRMCFG, dom_ctx->srmcfg);

 	/* Save current trap state and restore target domain's trap state */
 	trap_ctx = sbi_trap_get_context(scratch);
@@ -155,13 +163,57 @@ static void switch_to_next_domain_context(struct hart_context *ctx,
 		else
 			sbi_hsm_hart_stop(scratch, true);
 	}
+
+	return 0;
+}
+
+static int hart_context_init(u32 hartindex)
+{
+	struct hart_context *ctx;
+	struct sbi_domain *dom;
+
+	sbi_domain_for_each(dom) {
+		if (!sbi_hartmask_test_hartindex(hartindex,
+						 dom->possible_harts))
+			continue;
+
+		ctx = sbi_zalloc(sizeof(struct hart_context));
+		if (!ctx)
+			return SBI_ENOMEM;
+
+		/* Bind context and domain */
+		ctx->dom = dom;
+		hart_context_set(dom, hartindex, ctx);
+	}
+
+	return 0;
 }

 int sbi_domain_context_enter(struct sbi_domain *dom)
 {
+	int rc;
+	struct hart_context *dom_ctx;
 	struct hart_context *ctx = hart_context_thishart_get();
-	struct hart_context *dom_ctx = hart_context_get(dom, current_hartindex());
+
+	/* Target domain must not be same as the current domain */
+	if (!dom || dom == sbi_domain_thishart_ptr())
+		return SBI_EINVAL;
+
+	/*
+	 * If this is the first call to `enter` on the current hart, no
+	 * context has been allocated yet. Allocate a context for each
+	 * valid domain on the current hart.
+	 */
+	if (!ctx) {
+		rc = hart_context_init(current_hartindex());
+		if (rc)
+			return rc;
+
+		ctx = hart_context_thishart_get();
+		if (!ctx)
+			return SBI_EINVAL;
+	}
+
+	dom_ctx = hart_context_get(dom, current_hartindex());

 	/* Validate the domain context existence */
 	if (!dom_ctx)
 		return SBI_EINVAL;
@@ -169,13 +221,12 @@ int sbi_domain_context_enter(struct sbi_domain *dom)
 	/* Update target context's previous context to indicate the caller */
 	dom_ctx->prev_ctx = ctx;

-	switch_to_next_domain_context(ctx, dom_ctx);
-
-	return 0;
+	return switch_to_next_domain_context(ctx, dom_ctx);
 }

 int sbi_domain_context_exit(void)
 {
+	int rc;
 	u32 hartindex = current_hartindex();
 	struct sbi_domain *dom;
 	struct hart_context *ctx = hart_context_thishart_get();
@@ -187,21 +238,13 @@ int sbi_domain_context_exit(void)
 	 * its context on the current hart if valid.
 	 */
 	if (!ctx) {
-		sbi_domain_for_each(dom) {
-			if (!sbi_hartmask_test_hartindex(hartindex,
-							 dom->possible_harts))
-				continue;
-
-			dom_ctx = sbi_zalloc(sizeof(struct hart_context));
-			if (!dom_ctx)
-				return SBI_ENOMEM;
-
-			/* Bind context and domain */
-			dom_ctx->dom = dom;
-			hart_context_set(dom, hartindex, dom_ctx);
-		}
+		rc = hart_context_init(current_hartindex());
+		if (rc)
+			return rc;

 		ctx = hart_context_thishart_get();
+		if (!ctx)
+			return SBI_EINVAL;
 	}

 	dom_ctx = ctx->prev_ctx;
@@ -225,13 +268,19 @@ int sbi_domain_context_exit(void)
 	if (!dom_ctx)
 		dom_ctx = hart_context_get(&root, hartindex);

-	switch_to_next_domain_context(ctx, dom_ctx);
-
-	return 0;
+	return switch_to_next_domain_context(ctx, dom_ctx);
 }

 int sbi_domain_context_init(void)
 {
+	/**
+	 * Allocate per-domain and per-hart context data.
+	 * The data type is "struct hart_context **" whose memory space will be
+	 * dynamically allocated by domain_setup_data_one(). Calculate the
+	 * needed size of the memory space here.
+	 */
+	dcpriv.data_size = sizeof(struct hart_context *) * sbi_hart_count();
+
 	return sbi_domain_register_data(&dcpriv);
 }
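In intended use (e.g., a TEE shuttling a hart between a secure and a non-secure domain), one domain's software traps into the firmware asking to suspend itself and resume the other domain's saved context. A minimal sketch of such a caller; the vendor handler and the secure-domain lookup are hypothetical, not part of this diff:

/* Hypothetical vendor SBI handler: shuttle the hart between two
 * domains. Entering resumes the target domain's saved context on
 * this hart; exiting resumes whichever context entered us. */
static int tee_ecall_handler(unsigned long funcid)
{
	struct sbi_domain *secure_dom = /* looked up at boot */ NULL;

	switch (funcid) {
	case 0: /* non-secure world asks to run the secure world */
		return sbi_domain_context_enter(secure_dom);
	case 1: /* secure world yields back to its caller */
		return sbi_domain_context_exit();
	default:
		return SBI_ENOTSUPP;
	}
}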


@@ -10,6 +10,7 @@
 #include <sbi/sbi_console.h>
 #include <sbi/sbi_ecall_interface.h>
 #include <sbi/sbi_error.h>
+#include <sbi/sbi_hart.h>
 #include <sbi/sbi_sse.h>
 #include <sbi/sbi_trap.h>
@@ -28,3 +29,9 @@ int sbi_double_trap_handler(struct sbi_trap_context *tcntx)
 	return sbi_sse_inject_event(SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP);
 }
+
+void sbi_double_trap_init(struct sbi_scratch *scratch)
+{
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSDBLTRP))
+		sbi_sse_add_event(SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP, NULL);
+}


@@ -93,7 +93,6 @@ int sbi_ecall_register_extension(struct sbi_ecall_extension *ext)
 		return SBI_EINVAL;
 	}

-	SBI_INIT_LIST_HEAD(&ext->head);
 	sbi_list_add_tail(&ext->head, &ecall_exts_list);

 	return 0;
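The dropped SBI_INIT_LIST_HEAD() call was redundant because a tail insertion rewrites both link pointers of the new node anyway. Roughly, the usual doubly-linked-list insertion (a sketch, not the exact OpenSBI source):

/* Sketch: inserting 'node' before 'head', i.e. at the tail of the
 * list. Both node->next and node->prev are overwritten, so any prior
 * initialization of 'node' is dead code. */
static void list_add_tail_sketch(struct sbi_dlist *node, struct sbi_dlist *head)
{
	node->prev = head->prev;
	node->next = head;
	head->prev->next = node;
	head->prev = node;
}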


@@ -14,7 +14,7 @@
 #include <sbi/sbi_ecall_interface.h>
 #include <sbi/sbi_trap.h>
 #include <sbi/riscv_asm.h>
-#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
 static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid,
 				  struct sbi_trap_regs *regs,
@@ -46,12 +46,12 @@ static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid,
 				       regs->a1, regs->a0, smode,
 				       SBI_DOMAIN_READ|SBI_DOMAIN_WRITE))
 			return SBI_ERR_INVALID_PARAM;
-		sbi_hart_map_saddr(regs->a1, regs->a0);
+		sbi_hart_protection_map_range(regs->a1, regs->a0);
 		if (funcid == SBI_EXT_DBCN_CONSOLE_WRITE)
 			out->value = sbi_nputs((const char *)regs->a1, regs->a0);
 		else
 			out->value = sbi_ngets((char *)regs->a1, regs->a0);
-		sbi_hart_unmap_saddr();
+		sbi_hart_protection_unmap_range(regs->a1, regs->a0);
 		return 0;
 	case SBI_EXT_DBCN_CONSOLE_WRITE_BYTE:
 		sbi_putc(regs->a0);


@@ -43,7 +43,7 @@ static int sbi_ecall_dbtr_handler(unsigned long extid, unsigned long funcid,
 		ret = sbi_dbtr_enable_trig(regs->a0, regs->a1);
 		break;
 	case SBI_EXT_DBTR_TRIGGER_UPDATE:
-		ret = sbi_dbtr_update_trig(smode, regs->a0, regs->a1);
+		ret = sbi_dbtr_update_trig(smode, regs->a0);
 		break;
 	case SBI_EXT_DBTR_TRIGGER_DISABLE:
 		ret = sbi_dbtr_disable_trig(regs->a0, regs->a1);
@@ -69,7 +69,6 @@ struct sbi_ecall_extension ecall_dbtr = {
.name = "dbtr", .name = "dbtr",
.extid_start = SBI_EXT_DBTR, .extid_start = SBI_EXT_DBTR,
.extid_end = SBI_EXT_DBTR, .extid_end = SBI_EXT_DBTR,
.experimental = true,
.handle = sbi_ecall_dbtr_handler, .handle = sbi_ecall_dbtr_handler,
.register_extensions = sbi_ecall_dbtr_register_extensions, .register_extensions = sbi_ecall_dbtr_register_extensions,
}; };


@@ -45,7 +45,6 @@ struct sbi_ecall_extension ecall_fwft = {
.name = "fwft", .name = "fwft",
.extid_start = SBI_EXT_FWFT, .extid_start = SBI_EXT_FWFT,
.extid_end = SBI_EXT_FWFT, .extid_end = SBI_EXT_FWFT,
.experimental = true,
.register_extensions = sbi_ecall_fwft_register_extensions, .register_extensions = sbi_ecall_fwft_register_extensions,
.handle = sbi_ecall_fwft_handler, .handle = sbi_ecall_fwft_handler,
}; };


@@ -20,8 +20,11 @@ static int sbi_ecall_mpxy_handler(unsigned long extid, unsigned long funcid,
 	int ret = 0;

 	switch (funcid) {
+	case SBI_EXT_MPXY_GET_SHMEM_SIZE:
+		out->value = sbi_mpxy_get_shmem_size();
+		break;
 	case SBI_EXT_MPXY_SET_SHMEM:
-		ret = sbi_mpxy_set_shmem(regs->a0, regs->a1, regs->a2, regs->a3);
+		ret = sbi_mpxy_set_shmem(regs->a0, regs->a1, regs->a2);
 		break;
 	case SBI_EXT_MPXY_GET_CHANNEL_IDS:
 		ret = sbi_mpxy_get_channel_ids(regs->a0);
@@ -36,7 +39,7 @@ static int sbi_ecall_mpxy_handler(unsigned long extid, unsigned long funcid,
 		ret = sbi_mpxy_send_message(regs->a0, regs->a1,
 					    regs->a2, &out->value);
 		break;
-	case SBI_EXT_MPXY_SEND_MSG_NO_RESP:
+	case SBI_EXT_MPXY_SEND_MSG_WITHOUT_RESP:
 		ret = sbi_mpxy_send_message(regs->a0, regs->a1, regs->a2,
 					    NULL);
 		break;
@@ -64,7 +67,6 @@ struct sbi_ecall_extension ecall_mpxy = {
.name = "mpxy", .name = "mpxy",
.extid_start = SBI_EXT_MPXY, .extid_start = SBI_EXT_MPXY,
.extid_end = SBI_EXT_MPXY, .extid_end = SBI_EXT_MPXY,
.experimental = true,
.register_extensions = sbi_ecall_mpxy_register_extensions, .register_extensions = sbi_ecall_mpxy_register_extensions,
.handle = sbi_ecall_mpxy_handler, .handle = sbi_ecall_mpxy_handler,
}; };


@@ -59,7 +59,6 @@ struct sbi_ecall_extension ecall_sse = {
.name = "sse", .name = "sse",
.extid_start = SBI_EXT_SSE, .extid_start = SBI_EXT_SSE,
.extid_end = SBI_EXT_SSE, .extid_end = SBI_EXT_SSE,
.experimental = true,
.register_extensions = sbi_ecall_sse_register_extensions, .register_extensions = sbi_ecall_sse_register_extensions,
.handle = sbi_ecall_sse_handler, .handle = sbi_ecall_sse_handler,
}; };


@@ -13,8 +13,10 @@
 #include <sbi/sbi_error.h>
 #include <sbi/sbi_hart.h>
 #include <sbi/sbi_heap.h>
+#include <sbi/sbi_hfence.h>
 #include <sbi/sbi_scratch.h>
 #include <sbi/sbi_string.h>
+#include <sbi/sbi_tlb.h>
 #include <sbi/sbi_types.h>
 #include <sbi/riscv_asm.h>
@@ -167,7 +169,16 @@ static int fwft_adue_supported(struct fwft_config *conf)
 static int fwft_set_adue(struct fwft_config *conf, unsigned long value)
 {
-	return fwft_menvcfg_set_bit(value, ENVCFG_ADUE_SHIFT);
+	int res = fwft_menvcfg_set_bit(value, ENVCFG_ADUE_SHIFT);
+
+	if (res == SBI_OK) {
+		__sbi_sfence_vma_all();
+		if (misa_extension('H'))
+			__sbi_hfence_gvma_all();
+	}
+
+	return res;
 }

 static int fwft_get_adue(struct fwft_config *conf, unsigned long *value)
@@ -223,32 +234,32 @@ static int fwft_pmlen_supported(struct fwft_config *conf)
 	return SBI_OK;
 }

-static bool fwft_try_to_set_pmm(unsigned long pmm)
-{
-	csr_set(CSR_MENVCFG, pmm);
-	return (csr_read(CSR_MENVCFG) & ENVCFG_PMM) == pmm;
-}
-
 static int fwft_set_pmlen(struct fwft_config *conf, unsigned long value)
 {
-	unsigned long prev;
+	unsigned long pmm, prev;

-	if (value > 16)
+	switch (value) {
+	case 0:
+		pmm = ENVCFG_PMM_PMLEN_0;
+		break;
+	case 7:
+		pmm = ENVCFG_PMM_PMLEN_7;
+		break;
+	case 16:
+		pmm = ENVCFG_PMM_PMLEN_16;
+		break;
+	default:
 		return SBI_EINVAL;
+	}

 	prev = csr_read_clear(CSR_MENVCFG, ENVCFG_PMM);
-	if (value == 0)
-		return SBI_OK;
-	if (value <= 7) {
-		if (fwft_try_to_set_pmm(ENVCFG_PMM_PMLEN_7))
-			return SBI_OK;
-		csr_clear(CSR_MENVCFG, ENVCFG_PMM);
+	csr_set(CSR_MENVCFG, pmm);
+	if ((csr_read(CSR_MENVCFG) & ENVCFG_PMM) != pmm) {
+		csr_write(CSR_MENVCFG, prev);
+		return SBI_EINVAL;
 	}
-	if (fwft_try_to_set_pmm(ENVCFG_PMM_PMLEN_16))
-		return SBI_OK;
-	csr_write(CSR_MENVCFG, prev);

-	return SBI_EINVAL;
+	return SBI_OK;
 }

 static int fwft_get_pmlen(struct fwft_config *conf, unsigned long *value)
@@ -337,7 +348,7 @@ int sbi_fwft_set(enum sbi_fwft_feature_t feature, unsigned long value,
 		return SBI_EINVAL;

 	if (conf->flags & SBI_FWFT_SET_FLAG_LOCK)
-		return SBI_EDENIED;
+		return SBI_EDENIED_LOCKED;

 	ret = conf->feature->set(conf, value);
 	if (ret)
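For context, a supervisor exercises these handlers through the SBI FWFT extension. A minimal sketch of an S-mode caller follows; the EID/FID values below follow the SBI spec's "FWFT" encoding but should be taken from sbi_ecall_interface.h rather than trusted from this sketch:

/* Hypothetical S-mode helper: set a FWFT feature via ecall. */
#define SBI_EXT_FWFT	0x46574654	/* ASCII "FWFT" per the SBI spec */
#define SBI_FWFT_SET	0

static long sbi_fwft_set_call(unsigned long feature, unsigned long value,
			      unsigned long flags)
{
	register unsigned long a0 asm("a0") = feature;
	register unsigned long a1 asm("a1") = value;
	register unsigned long a2 asm("a2") = flags;
	register unsigned long a6 asm("a6") = SBI_FWFT_SET;
	register unsigned long a7 asm("a7") = SBI_EXT_FWFT;

	asm volatile("ecall"
		     : "+r"(a0), "+r"(a1)
		     : "r"(a2), "r"(a6), "r"(a7)
		     : "memory");
	return a0;	/* SBI error code */
}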


@@ -13,23 +13,21 @@
 #include <sbi/riscv_fp.h>
 #include <sbi/sbi_bitops.h>
 #include <sbi/sbi_console.h>
-#include <sbi/sbi_domain.h>
 #include <sbi/sbi_csr_detect.h>
 #include <sbi/sbi_error.h>
 #include <sbi/sbi_hart.h>
-#include <sbi/sbi_math.h>
+#include <sbi/sbi_hart_pmp.h>
 #include <sbi/sbi_platform.h>
 #include <sbi/sbi_pmu.h>
 #include <sbi/sbi_string.h>
 #include <sbi/sbi_trap.h>
-#include <sbi/sbi_hfence.h>

 extern void __sbi_expected_trap(void);
 extern void __sbi_expected_trap_hext(void);

 void (*sbi_hart_expected_trap)(void) = &__sbi_expected_trap;

-static unsigned long hart_features_offset;
+unsigned long hart_features_offset;

 static void mstatus_init(struct sbi_scratch *scratch)
 {
@@ -49,10 +47,10 @@ static void mstatus_init(struct sbi_scratch *scratch)
 	csr_write(CSR_MSTATUS, mstatus_val);

-	/* Disable user mode usage of all perf counters except default ones (CY, TM, IR) */
+	/* Disable user mode usage of all perf counters except TM */
 	if (misa_extension('S') &&
 	    sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_10)
-		csr_write(CSR_SCOUNTEREN, 7);
+		csr_write(CSR_SCOUNTEREN, 0x02);

 	/**
 	 * OpenSBI doesn't use any PMU counters in M-mode.
@@ -85,11 +83,11 @@ static void mstatus_init(struct sbi_scratch *scratch)
 #endif
 	}

+	if (misa_extension('H'))
+		csr_write(CSR_HSTATUS, 0);
+
 	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMSTATEEN)) {
-		mstateen_val = csr_read(CSR_MSTATEEN0);
-#if __riscv_xlen == 32
-		mstateen_val |= ((uint64_t)csr_read(CSR_MSTATEEN0H)) << 32;
-#endif
+		mstateen_val = 0;
 		mstateen_val |= SMSTATEEN_STATEN;
 		mstateen_val |= SMSTATEEN0_CONTEXT;
 		mstateen_val |= SMSTATEEN0_HSENVCFG;
@@ -105,17 +103,39 @@ static void mstatus_init(struct sbi_scratch *scratch)
 	else
 		mstateen_val &= ~(SMSTATEEN0_SVSLCT);

-	csr_write(CSR_MSTATEEN0, mstateen_val);
-#if __riscv_xlen == 32
-	csr_write(CSR_MSTATEEN0H, mstateen_val >> 32);
-#endif
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCTR))
+		mstateen_val |= SMSTATEEN0_CTR;
+	else
+		mstateen_val &= ~SMSTATEEN0_CTR;
+
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSQOSID))
+		mstateen_val |= SMSTATEEN0_SRMCFG;
+	else
+		mstateen_val &= ~SMSTATEEN0_SRMCFG;
+
+	csr_write64(CSR_MSTATEEN0, mstateen_val);
+	csr_write64(CSR_MSTATEEN1, SMSTATEEN_STATEN);
+	csr_write64(CSR_MSTATEEN2, SMSTATEEN_STATEN);
+	csr_write64(CSR_MSTATEEN3, SMSTATEEN_STATEN);
+	}
+
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSSTATEEN)) {
+		if (misa_extension('S')) {
+			csr_write(CSR_SSTATEEN0, 0);
+			csr_write(CSR_SSTATEEN1, 0);
+			csr_write(CSR_SSTATEEN2, 0);
+			csr_write(CSR_SSTATEEN3, 0);
+		}
+
+		if (misa_extension('H')) {
+			csr_write64(CSR_HSTATEEN0, (uint64_t)0);
+			csr_write64(CSR_HSTATEEN1, (uint64_t)0);
+			csr_write64(CSR_HSTATEEN2, (uint64_t)0);
+			csr_write64(CSR_HSTATEEN3, (uint64_t)0);
+		}
 	}

 	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12) {
-		menvcfg_val = csr_read(CSR_MENVCFG);
-#if __riscv_xlen == 32
-		menvcfg_val |= ((uint64_t)csr_read(CSR_MENVCFGH)) << 32;
-#endif
+		menvcfg_val = csr_read64(CSR_MENVCFG);

 		/* Disable double trap by default */
 		menvcfg_val &= ~ENVCFG_DTE;
@@ -151,10 +171,7 @@ static void mstatus_init(struct sbi_scratch *scratch)
 		if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SVADE))
 			menvcfg_val &= ~ENVCFG_ADUE;

-		csr_write(CSR_MENVCFG, menvcfg_val);
-#if __riscv_xlen == 32
-		csr_write(CSR_MENVCFGH, menvcfg_val >> 32);
-#endif
+		csr_write64(CSR_MENVCFG, menvcfg_val);

 		/* Enable S-mode access to seed CSR */
 		if (sbi_hart_has_extension(scratch, SBI_HART_EXT_ZKR)) {
@@ -203,7 +220,7 @@ static int delegate_traps(struct sbi_scratch *scratch)
 	/* Send M-mode interrupts and most exceptions to S-mode */
 	interrupts = MIP_SSIP | MIP_STIP | MIP_SEIP;
-	interrupts |= sbi_pmu_irq_bit();
+	interrupts |= sbi_pmu_irq_mask();

 	exceptions = (1U << CAUSE_MISALIGNED_FETCH) | (1U << CAUSE_BREAKPOINT) |
 		     (1U << CAUSE_USER_ECALL);
@@ -255,30 +272,6 @@ unsigned int sbi_hart_mhpm_mask(struct sbi_scratch *scratch)
 	return hfeatures->mhpm_mask;
 }

-unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch)
-{
-	struct sbi_hart_features *hfeatures =
-			sbi_scratch_offset_ptr(scratch, hart_features_offset);
-
-	return hfeatures->pmp_count;
-}
-
-unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch)
-{
-	struct sbi_hart_features *hfeatures =
-			sbi_scratch_offset_ptr(scratch, hart_features_offset);
-
-	return hfeatures->pmp_log2gran;
-}
-
-unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch)
-{
-	struct sbi_hart_features *hfeatures =
-			sbi_scratch_offset_ptr(scratch, hart_features_offset);
-
-	return hfeatures->pmp_addr_bits;
-}
-
 unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch)
 {
 	struct sbi_hart_features *hfeatures =
@@ -287,297 +280,6 @@ unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch)
 	return hfeatures->mhpm_bits;
 }
/*
* Returns Smepmp flags for a given domain and region based on permissions.
*/
static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
struct sbi_domain *dom,
struct sbi_domain_memregion *reg)
{
unsigned int pmp_flags = 0;
if (SBI_DOMAIN_MEMREGION_IS_SHARED(reg->flags)) {
/* Read only for both M and SU modes */
if (SBI_DOMAIN_MEMREGION_IS_SUR_MR(reg->flags))
pmp_flags = (PMP_L | PMP_R | PMP_W | PMP_X);
/* Execute for SU but Read/Execute for M mode */
else if (SBI_DOMAIN_MEMREGION_IS_SUX_MRX(reg->flags))
/* locked region */
pmp_flags = (PMP_L | PMP_W | PMP_X);
/* Execute only for both M and SU modes */
else if (SBI_DOMAIN_MEMREGION_IS_SUX_MX(reg->flags))
pmp_flags = (PMP_L | PMP_W);
/* Read/Write for both M and SU modes */
else if (SBI_DOMAIN_MEMREGION_IS_SURW_MRW(reg->flags))
pmp_flags = (PMP_W | PMP_X);
/* Read only for SU mode but Read/Write for M mode */
else if (SBI_DOMAIN_MEMREGION_IS_SUR_MRW(reg->flags))
pmp_flags = (PMP_W);
} else if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
/*
* When smepmp is supported and used, M region cannot have RWX
* permissions on any region.
*/
if ((reg->flags & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK)
== SBI_DOMAIN_MEMREGION_M_RWX) {
sbi_printf("%s: M-mode only regions cannot have"
"RWX permissions\n", __func__);
return 0;
}
/* M-mode only access regions are always locked */
pmp_flags |= PMP_L;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_EXECUTABLE)
pmp_flags |= PMP_X;
} else if (SBI_DOMAIN_MEMREGION_SU_ONLY_ACCESS(reg->flags)) {
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
pmp_flags |= PMP_X;
}
return pmp_flags;
}
static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
struct sbi_domain *dom,
struct sbi_domain_memregion *reg,
unsigned int pmp_idx,
unsigned int pmp_flags,
unsigned int pmp_log2gran,
unsigned long pmp_addr_max)
{
unsigned long pmp_addr = reg->base >> PMP_SHIFT;
if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
pmp_set(pmp_idx, pmp_flags, reg->base, reg->order);
} else {
sbi_printf("Can not configure pmp for domain %s because"
" memory region address 0x%lx or size 0x%lx "
"is not in range.\n", dom->name, reg->base,
reg->order);
}
}
static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
unsigned int pmp_count,
unsigned int pmp_log2gran,
unsigned long pmp_addr_max)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned int pmp_idx, pmp_flags;
/*
* Set the RLB so that, we can write to PMP entries without
* enforcement even if some entries are locked.
*/
csr_set(CSR_MSECCFG, MSECCFG_RLB);
/* Disable the reserved entry */
pmp_disable(SBI_SMEPMP_RESV_ENTRY);
/* Program M-only regions when MML is not set. */
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
if (pmp_count <= pmp_idx)
break;
/* Skip shared and SU-only regions */
if (!SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
pmp_idx++;
continue;
}
pmp_flags = sbi_hart_get_smepmp_flags(scratch, dom, reg);
if (!pmp_flags)
return 0;
sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
pmp_log2gran, pmp_addr_max);
}
/* Set the MML to enforce new encoding */
csr_set(CSR_MSECCFG, MSECCFG_MML);
/* Program shared and SU-only regions */
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
if (pmp_count <= pmp_idx)
break;
/* Skip M-only regions */
if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
pmp_idx++;
continue;
}
pmp_flags = sbi_hart_get_smepmp_flags(scratch, dom, reg);
if (!pmp_flags)
return 0;
sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
pmp_log2gran, pmp_addr_max);
}
/*
* All entries are programmed.
* Keep the RLB bit so that dynamic mappings can be done.
*/
return 0;
}
static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
unsigned int pmp_count,
unsigned int pmp_log2gran,
unsigned long pmp_addr_max)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned int pmp_idx = 0;
unsigned int pmp_flags;
unsigned long pmp_addr;
sbi_domain_for_each_memregion(dom, reg) {
if (pmp_count <= pmp_idx)
break;
pmp_flags = 0;
/*
* If permissions are to be enforced for all modes on
* this region, the lock bit should be set.
*/
if (reg->flags & SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS)
pmp_flags |= PMP_L;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
pmp_flags |= PMP_X;
pmp_addr = reg->base >> PMP_SHIFT;
if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
pmp_set(pmp_idx++, pmp_flags, reg->base, reg->order);
} else {
sbi_printf("Can not configure pmp for domain %s because"
" memory region address 0x%lx or size 0x%lx "
"is not in range.\n", dom->name, reg->base,
reg->order);
}
}
return 0;
}
int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
{
/* shared R/W access for M and S/U mode */
unsigned int pmp_flags = (PMP_W | PMP_X);
unsigned long order, base = 0;
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
/* If Smepmp is not supported no special mapping is required */
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
return SBI_ENOSPC;
for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
order <= __riscv_xlen; order++) {
if (order < __riscv_xlen) {
base = addr & ~((1UL << order) - 1UL);
if ((base <= addr) &&
(addr < (base + (1UL << order))) &&
(base <= (addr + size - 1UL)) &&
((addr + size - 1UL) < (base + (1UL << order))))
break;
} else {
return SBI_EFAIL;
}
}
pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
return SBI_OK;
}
int sbi_hart_unmap_saddr(void)
{
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
}
int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
{
int rc;
unsigned int pmp_bits, pmp_log2gran;
unsigned int pmp_count = sbi_hart_pmp_count(scratch);
unsigned long pmp_addr_max;
if (!pmp_count)
return 0;
pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
rc = sbi_hart_smepmp_configure(scratch, pmp_count,
pmp_log2gran, pmp_addr_max);
else
rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
pmp_log2gran, pmp_addr_max);
/*
* As per section 3.7.2 of privileged specification v1.12,
* virtual address translations can be speculatively performed
 * (even before actual access). These, along with PMP translations,
* can be cached. This can pose a problem with CPU hotplug
* and non-retentive suspend scenario because PMP states are
* not preserved.
* It is advisable to flush the caching structures under such
* conditions.
*/
if (misa_extension('S')) {
__asm__ __volatile__("sfence.vma");
/*
* If hypervisor mode is supported, flush caching
* structures in guest mode too.
*/
if (misa_extension('H'))
__sbi_hfence_gvma_all();
}
return rc;
}
 int sbi_hart_priv_version(struct sbi_scratch *scratch)
 {
 	struct sbi_hart_features *hfeatures =
@@ -688,6 +390,12 @@ const struct sbi_hart_ext_data sbi_hart_ext[] = {
 	__SBI_HART_EXT_DATA(zicfilp, SBI_HART_EXT_ZICFILP),
 	__SBI_HART_EXT_DATA(zicfiss, SBI_HART_EXT_ZICFISS),
 	__SBI_HART_EXT_DATA(ssdbltrp, SBI_HART_EXT_SSDBLTRP),
+	__SBI_HART_EXT_DATA(smctr, SBI_HART_EXT_SMCTR),
+	__SBI_HART_EXT_DATA(ssctr, SBI_HART_EXT_SSCTR),
+	__SBI_HART_EXT_DATA(ssqosid, SBI_HART_EXT_SSQOSID),
+	__SBI_HART_EXT_DATA(ssstateen, SBI_HART_EXT_SSSTATEEN),
+	__SBI_HART_EXT_DATA(xsfcflushdlone, SBI_HART_EXT_XSIFIVE_CFLUSH_D_L1),
+	__SBI_HART_EXT_DATA(xsfcease, SBI_HART_EXT_XSIFIVE_CEASE),
 };

 _Static_assert(SBI_HART_EXT_MAX == array_size(sbi_hart_ext),
@@ -714,6 +422,10 @@ void sbi_hart_get_extensions_str(struct sbi_scratch *scratch,
 	sbi_memset(extensions_str, 0, nestr);

 	for_each_set_bit(ext, hfeatures->extensions, SBI_HART_EXT_MAX) {
+		if (offset + sbi_strlen(sbi_hart_ext[ext].name) + 1 > nestr) {
+			sbi_printf("%s: extension name is longer than buffer (error)\n", __func__);
+			break;
+		}
 		sbi_snprintf(extensions_str + offset,
 			     nestr - offset,
 			     "%s,", sbi_hart_ext[ext].name);
@@ -726,6 +438,20 @@ void sbi_hart_get_extensions_str(struct sbi_scratch *scratch,
 		sbi_strncpy(extensions_str, "none", nestr);
 }

+/**
+ * Check whether a particular CSR is present on the HART
+ *
+ * @param scratch pointer to the HART scratch space
+ * @param csr the CSR number to check
+ */
+bool sbi_hart_has_csr(struct sbi_scratch *scratch, enum sbi_hart_csrs csr)
+{
+	struct sbi_hart_features *hfeatures =
+			sbi_scratch_offset_ptr(scratch, hart_features_offset);
+
+	return __test_bit(csr, hfeatures->csrs);
+}
+
 static unsigned long hart_pmp_get_allowed_addr(void)
 {
 	unsigned long val = 0;
@@ -782,7 +508,6 @@ static int hart_detect_features(struct sbi_scratch *scratch)
 	struct sbi_hart_features *hfeatures =
 			sbi_scratch_offset_ptr(scratch, hart_features_offset);
 	unsigned long val, oldval;
-	bool has_zicntr = false;
 	int rc;

 	/* If hart features already detected then do nothing */
@@ -791,6 +516,7 @@ static int hart_detect_features(struct sbi_scratch *scratch)
 	/* Clear hart features */
 	sbi_memset(hfeatures->extensions, 0, sizeof(hfeatures->extensions));
+	sbi_memset(hfeatures->csrs, 0, sizeof(hfeatures->csrs));
 	hfeatures->pmp_count = 0;
 	hfeatures->mhpm_mask = 0;
 	hfeatures->priv_version = SBI_HART_PRIV_VER_UNKNOWN;
@@ -917,9 +643,6 @@ __pmp_skip:
 	/* Detect if hart supports sscofpmf */
 	__check_ext_csr(SBI_HART_PRIV_VER_1_11,
 			CSR_SCOUNTOVF, SBI_HART_EXT_SSCOFPMF);
-	/* Detect if hart supports time CSR */
-	__check_ext_csr(SBI_HART_PRIV_VER_UNKNOWN,
-			CSR_TIME, SBI_HART_EXT_ZICNTR);
 	/* Detect if hart has AIA local interrupt CSRs */
 	__check_ext_csr(SBI_HART_PRIV_VER_UNKNOWN,
 			CSR_MTOPI, SBI_HART_EXT_SMAIA);
@@ -929,6 +652,9 @@ __pmp_skip:
 	/* Detect if hart supports mstateen CSRs */
 	__check_ext_csr(SBI_HART_PRIV_VER_1_12,
 			CSR_MSTATEEN0, SBI_HART_EXT_SMSTATEEN);
+	/* Detect if hart supports sstateen CSRs */
+	__check_ext_csr(SBI_HART_PRIV_VER_1_12,
+			CSR_SSTATEEN0, SBI_HART_EXT_SSSTATEEN);
 	/* Detect if hart supports smcntrpmf */
 	__check_ext_csr(SBI_HART_PRIV_VER_1_12,
 			CSR_MCYCLECFG, SBI_HART_EXT_SMCNTRPMF);
@@ -938,8 +664,16 @@ __pmp_skip:
 #undef __check_ext_csr

-	/* Save trap based detection of Zicntr */
-	has_zicntr = sbi_hart_has_extension(scratch, SBI_HART_EXT_ZICNTR);
+#define __check_csr_existence(__csr, __csr_id) \
+	csr_read_allowed(__csr, &trap); \
+	if (!trap.cause) \
+		__set_bit(__csr_id, hfeatures->csrs);
+
+	__check_csr_existence(CSR_CYCLE, SBI_HART_CSR_CYCLE);
+	__check_csr_existence(CSR_TIME, SBI_HART_CSR_TIME);
+	__check_csr_existence(CSR_INSTRET, SBI_HART_CSR_INSTRET);
+
+#undef __check_csr_existence

 	/* Let platform populate extensions */
 	rc = sbi_platform_extensions_init(sbi_platform_thishart_ptr(),
@@ -949,7 +683,9 @@ __pmp_skip:
 	/* Zicntr should only be detected using traps */
 	__sbi_hart_update_extension(hfeatures, SBI_HART_EXT_ZICNTR,
-				    has_zicntr);
+			sbi_hart_has_csr(scratch, SBI_HART_CSR_CYCLE) &&
+			sbi_hart_has_csr(scratch, SBI_HART_CSR_TIME) &&
+			sbi_hart_has_csr(scratch, SBI_HART_CSR_INSTRET));

 	/* Extensions implied by other extensions and features */
 	if (hfeatures->mhpm_mask)
@@ -982,10 +718,6 @@ int sbi_hart_reinit(struct sbi_scratch *scratch)
 	if (rc)
 		return rc;

-	rc = delegate_traps(scratch);
-	if (rc)
-		return rc;
-
 	return 0;
 }
@@ -1013,6 +745,16 @@ int sbi_hart_init(struct sbi_scratch *scratch, bool cold_boot)
 	if (rc)
 		return rc;

+	if (cold_boot) {
+		rc = sbi_hart_pmp_init(scratch);
+		if (rc)
+			return rc;
+	}
+
+	rc = delegate_traps(scratch);
+	if (rc)
+		return rc;
+
 	return sbi_hart_reinit(scratch);
 }
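The new CSR-existence probes reuse OpenSBI's expected-trap machinery: a read is attempted with mtvec temporarily pointing at a recovery stub, and the recorded trap cause tells whether the CSR is implemented. Roughly how one probe expands inside hart_detect_features() (a simplified sketch, not the exact macro expansion):

/* Sketch: csr_read_allowed() attempts the read with mtvec pointed at
 * sbi_hart_expected_trap; on an illegal-instruction trap it fills
 * 'trap' instead of crashing the hart. */
struct sbi_trap_info trap = { 0 };

csr_read_allowed(CSR_TIME, &trap);
if (!trap.cause)
	__set_bit(SBI_HART_CSR_TIME, hfeatures->csrs);

/* Consumers later query sbi_hart_has_csr(scratch, SBI_HART_CSR_TIME). */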

lib/sbi/sbi_hart_pmp.c (new file, 356 lines)

@@ -0,0 +1,356 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 Ventana Micro Systems Inc.
*/
#include <sbi/sbi_bitmap.h>
#include <sbi/sbi_console.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_hfence.h>
#include <sbi/sbi_math.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_tlb.h>
#include <sbi/riscv_asm.h>
/*
* Smepmp enforces access boundaries between M-mode and
* S/U-mode. When it is enabled, the PMPs are programmed
* such that M-mode doesn't have access to S/U-mode memory.
*
* To give M-mode R/W access to the shared memory between M and
 * S/U-mode, the first entry is reserved. It is disabled at boot.
 * When shared memory access is required, the physical address
 * should be programmed into the first PMP entry with R/W
 * permissions for M-mode. Once the work is done, it should be
 * unmapped. The sbi_hart_protection_map_range() and
 * sbi_hart_protection_unmap_range() functions should be used to
 * map/unmap the shared memory.
*/
#define SBI_SMEPMP_RESV_ENTRY 0
static DECLARE_BITMAP(fw_smepmp_ids, PMP_COUNT);
static bool fw_smepmp_ids_inited;
unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch)
{
struct sbi_hart_features *hfeatures = sbi_hart_features_ptr(scratch);
return hfeatures->pmp_count;
}
unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch)
{
struct sbi_hart_features *hfeatures = sbi_hart_features_ptr(scratch);
return hfeatures->pmp_log2gran;
}
unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch)
{
struct sbi_hart_features *hfeatures = sbi_hart_features_ptr(scratch);
return hfeatures->pmp_addr_bits;
}
bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx)
{
if (!fw_smepmp_ids_inited)
return false;
return bitmap_test(fw_smepmp_ids, pmp_idx) ? true : false;
}
static void sbi_hart_pmp_fence(void)
{
/*
* As per section 3.7.2 of privileged specification v1.12,
* virtual address translations can be speculatively performed
 * (even before actual access). These, along with PMP translations,
* can be cached. This can pose a problem with CPU hotplug
* and non-retentive suspend scenario because PMP states are
* not preserved.
* It is advisable to flush the caching structures under such
* conditions.
*/
if (misa_extension('S')) {
__sbi_sfence_vma_all();
/*
* If hypervisor mode is supported, flush caching
* structures in guest mode too.
*/
if (misa_extension('H'))
__sbi_hfence_gvma_all();
}
}
static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
struct sbi_domain *dom,
struct sbi_domain_memregion *reg,
unsigned int pmp_idx,
unsigned int pmp_flags,
unsigned int pmp_log2gran,
unsigned long pmp_addr_max)
{
unsigned long pmp_addr = reg->base >> PMP_SHIFT;
if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
sbi_platform_pmp_set(sbi_platform_ptr(scratch),
pmp_idx, reg->flags, pmp_flags,
reg->base, reg->order);
pmp_set(pmp_idx, pmp_flags, reg->base, reg->order);
} else {
sbi_printf("Can not configure pmp for domain %s because"
" memory region address 0x%lx or size 0x%lx "
"is not in range.\n", dom->name, reg->base,
reg->order);
}
}
static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
{
if (pmp_count > pmp_idx)
return true;
sbi_printf("error: insufficient PMP entries\n");
return false;
}
static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned int pmp_log2gran, pmp_bits;
unsigned int pmp_idx, pmp_count;
unsigned long pmp_addr_max;
unsigned int pmp_flags;
pmp_count = sbi_hart_pmp_count(scratch);
pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
/*
* Set the RLB so that, we can write to PMP entries without
* enforcement even if some entries are locked.
*/
csr_set(CSR_MSECCFG, MSECCFG_RLB);
/* Disable the reserved entry */
pmp_disable(SBI_SMEPMP_RESV_ENTRY);
/* Program M-only regions when MML is not set. */
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
/* Skip shared and SU-only regions */
if (!SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
pmp_idx++;
continue;
}
/*
* Track firmware PMP entries to preserve them during
 * domain switches. Under Smepmp, M-mode requires
* explicit PMP entries to access firmware code/data.
* These entries must remain enabled across domain
* context switches to prevent M-mode access faults.
*/
if (SBI_DOMAIN_MEMREGION_IS_FIRMWARE(reg->flags)) {
if (fw_smepmp_ids_inited) {
/* Check inconsistent firmware region */
if (!sbi_hart_smepmp_is_fw_region(pmp_idx))
return SBI_EINVAL;
} else {
bitmap_set(fw_smepmp_ids, pmp_idx, 1);
}
}
pmp_flags = sbi_domain_get_smepmp_flags(reg);
sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
pmp_log2gran, pmp_addr_max);
}
fw_smepmp_ids_inited = true;
/* Set the MML to enforce new encoding */
csr_set(CSR_MSECCFG, MSECCFG_MML);
/* Program shared and SU-only regions */
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
/* Skip M-only regions */
if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
pmp_idx++;
continue;
}
pmp_flags = sbi_domain_get_smepmp_flags(reg);
sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
pmp_log2gran, pmp_addr_max);
}
/*
* All entries are programmed.
* Keep the RLB bit so that dynamic mappings can be done.
*/
sbi_hart_pmp_fence();
return 0;
}
static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
unsigned long addr, unsigned long size)
{
/* shared R/W access for M and S/U mode */
unsigned int pmp_flags = (PMP_W | PMP_X);
unsigned long order, base = 0;
if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
return SBI_ENOSPC;
for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
order <= __riscv_xlen; order++) {
if (order < __riscv_xlen) {
base = addr & ~((1UL << order) - 1UL);
if ((base <= addr) &&
(addr < (base + (1UL << order))) &&
(base <= (addr + size - 1UL)) &&
((addr + size - 1UL) < (base + (1UL << order))))
break;
} else {
return SBI_EFAIL;
}
}
sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
pmp_flags, base, order);
pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
return SBI_OK;
}
static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
unsigned long addr, unsigned long size)
{
sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
}
static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned long pmp_addr, pmp_addr_max;
unsigned int pmp_log2gran, pmp_bits;
unsigned int pmp_idx, pmp_count;
unsigned int pmp_flags;
pmp_count = sbi_hart_pmp_count(scratch);
pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
pmp_flags = 0;
/*
* If permissions are to be enforced for all modes on
* this region, the lock bit should be set.
*/
if (reg->flags & SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS)
pmp_flags |= PMP_L;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
pmp_flags |= PMP_X;
pmp_addr = reg->base >> PMP_SHIFT;
if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
sbi_platform_pmp_set(sbi_platform_ptr(scratch),
pmp_idx, reg->flags, pmp_flags,
reg->base, reg->order);
pmp_set(pmp_idx++, pmp_flags, reg->base, reg->order);
} else {
sbi_printf("Can not configure pmp for domain %s because"
" memory region address 0x%lx or size 0x%lx "
"is not in range.\n", dom->name, reg->base,
reg->order);
}
}
sbi_hart_pmp_fence();
return 0;
}
static void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
{
int i, pmp_count = sbi_hart_pmp_count(scratch);
for (i = 0; i < pmp_count; i++) {
/* Don't revoke firmware access permissions */
if (sbi_hart_smepmp_is_fw_region(i))
continue;
sbi_platform_pmp_disable(sbi_platform_ptr(scratch), i);
pmp_disable(i);
}
}
static struct sbi_hart_protection pmp_protection = {
.name = "pmp",
.rating = 100,
.configure = sbi_hart_oldpmp_configure,
.unconfigure = sbi_hart_pmp_unconfigure,
};
static struct sbi_hart_protection epmp_protection = {
.name = "epmp",
.rating = 200,
.configure = sbi_hart_smepmp_configure,
.unconfigure = sbi_hart_pmp_unconfigure,
.map_range = sbi_hart_smepmp_map_range,
.unmap_range = sbi_hart_smepmp_unmap_range,
};
int sbi_hart_pmp_init(struct sbi_scratch *scratch)
{
int rc;
if (sbi_hart_pmp_count(scratch)) {
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP)) {
rc = sbi_hart_protection_register(&epmp_protection);
if (rc)
return rc;
} else {
rc = sbi_hart_protection_register(&pmp_protection);
if (rc)
return rc;
}
}
return 0;
}
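Any M-mode code that needs to touch S/U-mode memory under Smepmp brackets the access with the map/unmap pair, as the debug console handler above does. A minimal sketch (the helper name and the buffer parameters are illustrative):

/* Sketch: M-mode reading an S-mode buffer under Smepmp. The reserved
 * PMP entry is pointed at the buffer for the duration of the access,
 * then disabled again. */
static int read_smode_buffer(unsigned long saddr, unsigned long size)
{
	int rc;

	rc = sbi_hart_protection_map_range(saddr, size);
	if (rc)
		return rc;

	/* ... access [saddr, saddr + size) via sbi_memcpy() etc. ... */

	return sbi_hart_protection_unmap_range(saddr, size);
}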


@@ -0,0 +1,96 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 Ventana Micro Systems Inc.
*/
#include <sbi/sbi_error.h>
#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_scratch.h>
static SBI_LIST_HEAD(hart_protection_list);
struct sbi_hart_protection *sbi_hart_protection_best(void)
{
if (sbi_list_empty(&hart_protection_list))
return NULL;
return sbi_list_first_entry(&hart_protection_list, struct sbi_hart_protection, head);
}
int sbi_hart_protection_register(struct sbi_hart_protection *hprot)
{
struct sbi_hart_protection *pos = NULL;
bool found_pos = false;
if (!hprot)
return SBI_EINVAL;
sbi_list_for_each_entry(pos, &hart_protection_list, head) {
if (hprot->rating > pos->rating) {
found_pos = true;
break;
}
}
if (found_pos)
sbi_list_add_tail(&hprot->head, &pos->head);
else
sbi_list_add_tail(&hprot->head, &hart_protection_list);
return 0;
}
void sbi_hart_protection_unregister(struct sbi_hart_protection *hprot)
{
if (!hprot)
return;
sbi_list_del(&hprot->head);
}
int sbi_hart_protection_configure(struct sbi_scratch *scratch)
{
struct sbi_hart_protection *hprot = sbi_hart_protection_best();
if (!hprot)
return SBI_EINVAL;
if (!hprot->configure)
return SBI_ENOSYS;
return hprot->configure(scratch);
}
void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch)
{
struct sbi_hart_protection *hprot = sbi_hart_protection_best();
if (!hprot || !hprot->unconfigure)
return;
hprot->unconfigure(scratch);
}
int sbi_hart_protection_map_range(unsigned long base, unsigned long size)
{
struct sbi_hart_protection *hprot = sbi_hart_protection_best();
if (!hprot)
return SBI_EINVAL;
if (!hprot->map_range)
return 0;
return hprot->map_range(sbi_scratch_thishart_ptr(), base, size);
}
int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size)
{
struct sbi_hart_protection *hprot = sbi_hart_protection_best();
if (!hprot)
return SBI_EINVAL;
if (!hprot->unmap_range)
return 0;
return hprot->unmap_range(sbi_scratch_thishart_ptr(), base, size);
}
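New mechanisms (the commit log mentions Smmpt as an upcoming one) plug in by registering a descriptor with a higher rating, which makes sbi_hart_protection_best() prefer them. A hypothetical sketch, not actual Smmpt support:

/* Hypothetical provider: a rating above ePMP's 200 wins selection. */
static int smmpt_configure(struct sbi_scratch *scratch)
{
	/* program memory protection tables here */
	return 0;
}

static struct sbi_hart_protection smmpt_protection = {
	.name = "smmpt",
	.rating = 300,
	.configure = smmpt_configure,
};

/* During cold boot:
 *	sbi_hart_protection_register(&smmpt_protection);
 */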


@@ -16,7 +16,9 @@
 /* Minimum size and alignment of heap allocations */
 #define HEAP_ALLOC_ALIGN		64
-#define HEAP_HOUSEKEEPING_FACTOR	16
+
+/* Number of heap nodes to allocate at once */
+#define HEAP_NODE_BATCH_SIZE		8

 struct heap_node {
 	struct sbi_dlist head;
@@ -28,20 +30,50 @@ struct sbi_heap_control {
 	spinlock_t lock;
 	unsigned long base;
 	unsigned long size;
-	unsigned long hkbase;
-	unsigned long hksize;
+	unsigned long resv;
 	struct sbi_dlist free_node_list;
 	struct sbi_dlist free_space_list;
 	struct sbi_dlist used_space_list;
+	struct heap_node init_free_space_node;
 };

 struct sbi_heap_control global_hpctrl;
static bool alloc_nodes(struct sbi_heap_control *hpctrl)
{
size_t size = HEAP_NODE_BATCH_SIZE * sizeof(struct heap_node);
struct heap_node *n, *new = NULL;
/* alloc_with_align() requires at most two free nodes */
if (hpctrl->free_node_list.next != hpctrl->free_node_list.prev)
return true;
sbi_list_for_each_entry_reverse(n, &hpctrl->free_space_list, head) {
if (n->size >= size) {
n->size -= size;
if (!n->size) {
sbi_list_del(&n->head);
sbi_list_add_tail(&n->head, &hpctrl->free_node_list);
}
new = (void *)(n->addr + n->size);
break;
}
}
if (!new)
return false;
for (size_t i = 0; i < HEAP_NODE_BATCH_SIZE; i++)
sbi_list_add_tail(&new[i].head, &hpctrl->free_node_list);
hpctrl->resv += size;
return true;
}
 static void *alloc_with_align(struct sbi_heap_control *hpctrl,
 			      size_t align, size_t size)
 {
 	void *ret = NULL;
-	struct heap_node *n, *np, *rem;
+	struct heap_node *n, *np;
 	unsigned long lowest_aligned;
 	size_t pad;
@@ -53,6 +85,10 @@ static void *alloc_with_align(struct sbi_heap_control *hpctrl,
 	spin_lock(&hpctrl->lock);

+	/* Ensure at least two free nodes are available for use below */
+	if (!alloc_nodes(hpctrl))
+		goto out;
+
 	np = NULL;
 	sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
 		lowest_aligned = ROUNDUP(n->addr, align);
@@ -67,55 +103,34 @@ static void *alloc_with_align(struct sbi_heap_control *hpctrl,
 		goto out;

 	if (pad) {
-		if (sbi_list_empty(&hpctrl->free_node_list)) {
-			goto out;
-		}
-
 		n = sbi_list_first_entry(&hpctrl->free_node_list,
 					 struct heap_node, head);
 		sbi_list_del(&n->head);
-
-		if ((size + pad < np->size) &&
-		    !sbi_list_empty(&hpctrl->free_node_list)) {
-			rem = sbi_list_first_entry(&hpctrl->free_node_list,
-						   struct heap_node, head);
-			sbi_list_del(&rem->head);
-			rem->addr = np->addr + (size + pad);
-			rem->size = np->size - (size + pad);
-			sbi_list_add_tail(&rem->head,
-					  &hpctrl->free_space_list);
-		} else if (size + pad != np->size) {
-			/* Can't allocate, return n */
-			sbi_list_add(&n->head, &hpctrl->free_node_list);
-			ret = NULL;
-			goto out;
-		}
-
-		n->addr = lowest_aligned;
-		n->size = size;
-		sbi_list_add_tail(&n->head, &hpctrl->used_space_list);
-		np->size = pad;
-		ret = (void *)n->addr;
-	} else {
-		if ((size < np->size) &&
-		    !sbi_list_empty(&hpctrl->free_node_list)) {
-			n = sbi_list_first_entry(&hpctrl->free_node_list,
-						 struct heap_node, head);
-			sbi_list_del(&n->head);
-			n->addr = np->addr;
-			n->size = size;
-			np->addr += size;
-			np->size -= size;
-			sbi_list_add_tail(&n->head, &hpctrl->used_space_list);
-			ret = (void *)n->addr;
-		} else if (size == np->size) {
-			sbi_list_del(&np->head);
-			sbi_list_add_tail(&np->head, &hpctrl->used_space_list);
-			ret = (void *)np->addr;
-		}
+		n->addr = np->addr;
+		n->size = pad;
+		sbi_list_add_tail(&n->head, &np->head);
+		np->addr += pad;
+		np->size -= pad;
 	}

+	if (size < np->size) {
+		n = sbi_list_first_entry(&hpctrl->free_node_list,
+					 struct heap_node, head);
+		sbi_list_del(&n->head);
+		n->addr = np->addr + size;
+		n->size = np->size - size;
+		sbi_list_add(&n->head, &np->head);
+		np->size = size;
+	}
+
+	sbi_list_del(&np->head);
+	sbi_list_add_tail(&np->head, &hpctrl->used_space_list);
+	ret = (void *)np->addr;
+
 out:
 	spin_unlock(&hpctrl->lock);
@@ -216,45 +231,32 @@ unsigned long sbi_heap_free_space_from(struct sbi_heap_control *hpctrl)
 unsigned long sbi_heap_used_space_from(struct sbi_heap_control *hpctrl)
 {
-	return hpctrl->size - hpctrl->hksize - sbi_heap_free_space();
+	return hpctrl->size - hpctrl->resv - sbi_heap_free_space();
 }

 unsigned long sbi_heap_reserved_space_from(struct sbi_heap_control *hpctrl)
 {
-	return hpctrl->hksize;
+	return hpctrl->resv;
 }

 int sbi_heap_init_new(struct sbi_heap_control *hpctrl, unsigned long base,
 		      unsigned long size)
 {
-	unsigned long i;
 	struct heap_node *n;

 	/* Initialize heap control */
 	SPIN_LOCK_INIT(hpctrl->lock);
 	hpctrl->base = base;
 	hpctrl->size = size;
-	hpctrl->hkbase = hpctrl->base;
-	hpctrl->hksize = hpctrl->size / HEAP_HOUSEKEEPING_FACTOR;
-	hpctrl->hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
+	hpctrl->resv = 0;
 	SBI_INIT_LIST_HEAD(&hpctrl->free_node_list);
 	SBI_INIT_LIST_HEAD(&hpctrl->free_space_list);
 	SBI_INIT_LIST_HEAD(&hpctrl->used_space_list);

-	/* Prepare free node list */
-	for (i = 0; i < (hpctrl->hksize / sizeof(*n)); i++) {
-		n = (struct heap_node *)(hpctrl->hkbase + (sizeof(*n) * i));
-		SBI_INIT_LIST_HEAD(&n->head);
-		n->addr = n->size = 0;
-		sbi_list_add_tail(&n->head, &hpctrl->free_node_list);
-	}
-
 	/* Prepare free space list */
-	n = sbi_list_first_entry(&hpctrl->free_node_list,
-				 struct heap_node, head);
-	sbi_list_del(&n->head);
-	n->addr = hpctrl->hkbase + hpctrl->hksize;
-	n->size = hpctrl->size - hpctrl->hksize;
+	n = &hpctrl->init_free_space_node;
+	n->addr = base;
+	n->size = size;
 	sbi_list_add_tail(&n->head, &hpctrl->free_space_list);

 	return 0;
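Taken together, an aligned allocation now always splits the chosen free-space node in place instead of special-casing pad and exact-fit paths. A worked example (all addresses and sizes are illustrative):

/* Free-space node: addr = 0x80043210, size = 0x1000.
 * Request: align = 0x100, size = 0x200.
 *
 * 1. lowest_aligned = ROUNDUP(0x80043210, 0x100) = 0x80043300,
 *    so pad = 0x80043300 - 0x80043210 = 0xf0.
 * 2. pad != 0: a free node [0x80043210, +0xf0) is split off and kept
 *    in the free-space list; np becomes [0x80043300, +0xf10).
 * 3. size < np->size: the tail [0x80043500, +0xd10) is split off into
 *    another free node; np becomes exactly [0x80043300, +0x200).
 * 4. np moves to the used-space list and 0x80043300 is returned.
 */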


@@ -37,6 +37,8 @@
 static const struct sbi_hsm_device *hsm_dev = NULL;
 static unsigned long hart_data_offset;
+static bool hsm_device_has_hart_hotplug(void);
+static int hsm_device_hart_stop(void);

 /** Per hart specific data to manage state transition **/
 struct sbi_hsm_data {
@@ -45,10 +47,8 @@ struct sbi_hsm_data {
 	unsigned long saved_mie;
 	unsigned long saved_mip;
 	unsigned long saved_medeleg;
-	unsigned long saved_menvcfg;
-#if __riscv_xlen == 32
-	unsigned long saved_menvcfgh;
-#endif
+	unsigned long saved_mideleg;
+	u64 saved_menvcfg;
 	atomic_t start_ticket;
 };
@@ -170,6 +170,15 @@ static void sbi_hsm_hart_wait(struct sbi_scratch *scratch)
 	/* Wait for state transition requested by sbi_hsm_hart_start() */
 	while (atomic_read(&hdata->state) != SBI_HSM_STATE_START_PENDING) {
+		/*
+		 * If the hsm_dev is ready and it supports hotplug, we can
+		 * use the HSM stop for more power saving.
+		 */
+		if (hsm_device_has_hart_hotplug()) {
+			sbi_revert_entry_count(scratch);
+			hsm_device_hart_stop();
+		}
+
 		wfi();
 	}
@@ -238,7 +247,6 @@ static void hsm_device_hart_resume(void)
 int sbi_hsm_init(struct sbi_scratch *scratch, bool cold_boot)
 {
-	u32 i;
 	struct sbi_scratch *rscratch;
 	struct sbi_hsm_data *hdata;
@@ -248,7 +256,7 @@ int sbi_hsm_init(struct sbi_scratch *scratch, bool cold_boot)
 		return SBI_ENOMEM;

 	/* Initialize hart state data for every hart */
-	for (i = 0; i <= sbi_scratch_last_hartindex(); i++) {
+	sbi_for_each_hartindex(i) {
 		rscratch = sbi_hartindex_to_scratch(i);
 		if (!rscratch)
 			continue;
@@ -356,7 +364,7 @@ int sbi_hsm_hart_start(struct sbi_scratch *scratch,
 	    (hsm_device_has_hart_secondary_boot() && !init_count)) {
 		rc = hsm_device_hart_start(hartid, scratch->warmboot_addr);
 	} else {
-		rc = sbi_ipi_raw_send(hartindex);
+		rc = sbi_ipi_raw_send(hartindex, true);
 	}

 	if (!rc)
@@ -419,12 +427,9 @@ void __sbi_hsm_suspend_non_ret_save(struct sbi_scratch *scratch)
 	hdata->saved_mie = csr_read(CSR_MIE);
 	hdata->saved_mip = csr_read(CSR_MIP) & (MIP_SSIP | MIP_STIP);
 	hdata->saved_medeleg = csr_read(CSR_MEDELEG);
-	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12) {
-#if __riscv_xlen == 32
-		hdata->saved_menvcfgh = csr_read(CSR_MENVCFGH);
-#endif
-		hdata->saved_menvcfg = csr_read(CSR_MENVCFG);
-	}
+	hdata->saved_mideleg = csr_read(CSR_MIDELEG);
+	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12)
+		hdata->saved_menvcfg = csr_read64(CSR_MENVCFG);
 }

 static void __sbi_hsm_suspend_non_ret_restore(struct sbi_scratch *scratch)
@@ -432,12 +437,9 @@ static void __sbi_hsm_suspend_non_ret_restore(struct sbi_scratch *scratch)
 	struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
 							    hart_data_offset);

-	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12) {
-		csr_write(CSR_MENVCFG, hdata->saved_menvcfg);
-#if __riscv_xlen == 32
-		csr_write(CSR_MENVCFGH, hdata->saved_menvcfgh);
-#endif
-	}
+	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12)
+		csr_write64(CSR_MENVCFG, hdata->saved_menvcfg);
+	csr_write(CSR_MIDELEG, hdata->saved_mideleg);
 	csr_write(CSR_MEDELEG, hdata->saved_medeleg);
 	csr_write(CSR_MIE, hdata->saved_mie);
 	csr_set(CSR_MIP, (hdata->saved_mip & (MIP_SSIP | MIP_STIP)));
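The csr_read64()/csr_write64() helpers used in this hunk fold the RV32 split of a 64-bit CSR into one call. A minimal sketch of the idea for the MENVCFG/MENVCFGH pair (illustrative only; the real helpers live in OpenSBI's riscv_asm.h):

/* Sketch: compose a 64-bit MENVCFG read on RV32, where the upper
 * 32 bits live in the separate MENVCFGH CSR. */
static inline u64 sketch_menvcfg_read64(void)
{
#if __riscv_xlen == 32
        u32 lo = csr_read(CSR_MENVCFG);
        u32 hi = csr_read(CSR_MENVCFGH);
        return ((u64)hi << 32) | lo;
#else
        return csr_read(CSR_MENVCFG);
#endif
}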
@@ -453,7 +455,10 @@ void sbi_hsm_hart_resume_start(struct sbi_scratch *scratch)
 				  SBI_HSM_STATE_RESUME_PENDING))
 		sbi_hart_hang();

-	hsm_device_hart_resume();
+	if (sbi_system_is_suspended())
+		sbi_system_resume();
+	else
+		hsm_device_hart_resume();
 }

 void __noreturn sbi_hsm_hart_resume_finish(struct sbi_scratch *scratch,


@@ -0,0 +1,664 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2025 MIPS
*
*/
#include <sbi/riscv_asm.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_trap.h>
#include <sbi/sbi_illegal_atomic.h>
#include <sbi/sbi_illegal_insn.h>
#if !defined(__riscv_atomic) && !defined(__riscv_zalrsc)
#error "opensbi strongly relies on the A extension of RISC-V"
#endif
#ifdef __riscv_atomic
int sbi_illegal_atomic(ulong insn, struct sbi_trap_regs *regs)
{
return truly_illegal_insn(insn, regs);
}
#elif __riscv_zalrsc
#define DEFINE_UNPRIVILEGED_LR_FUNCTION(type, aqrl, insn) \
static type lr_##type##aqrl(const type *addr, \
struct sbi_trap_info *trap) \
{ \
register ulong tinfo asm("a3"); \
register ulong mstatus = 0; \
register ulong mtvec = (ulong)sbi_hart_expected_trap; \
type ret = 0; \
trap->cause = 0; \
asm volatile( \
"add %[tinfo], %[taddr], zero\n" \
"csrrw %[mtvec], " STR(CSR_MTVEC) ", %[mtvec]\n" \
"csrrs %[mstatus], " STR(CSR_MSTATUS) ", %[mprv]\n" \
".option push\n" \
".option norvc\n" \
#insn " %[ret], %[addr]\n" \
".option pop\n" \
"csrw " STR(CSR_MSTATUS) ", %[mstatus]\n" \
"csrw " STR(CSR_MTVEC) ", %[mtvec]" \
: [mstatus] "+&r"(mstatus), [mtvec] "+&r"(mtvec), \
[tinfo] "+&r"(tinfo), [ret] "=&r"(ret) \
: [addr] "m"(*addr), [mprv] "r"(MSTATUS_MPRV), \
[taddr] "r"((ulong)trap) \
: "a4", "memory"); \
return ret; \
}
#define DEFINE_UNPRIVILEGED_SC_FUNCTION(type, aqrl, insn) \
static type sc_##type##aqrl(type *addr, type val, \
struct sbi_trap_info *trap) \
{ \
register ulong tinfo asm("a3"); \
register ulong mstatus = 0; \
register ulong mtvec = (ulong)sbi_hart_expected_trap; \
type ret = 0; \
trap->cause = 0; \
asm volatile( \
"add %[tinfo], %[taddr], zero\n" \
"csrrw %[mtvec], " STR(CSR_MTVEC) ", %[mtvec]\n" \
"csrrs %[mstatus], " STR(CSR_MSTATUS) ", %[mprv]\n" \
".option push\n" \
".option norvc\n" \
#insn " %[ret], %[val], %[addr]\n" \
".option pop\n" \
"csrw " STR(CSR_MSTATUS) ", %[mstatus]\n" \
"csrw " STR(CSR_MTVEC) ", %[mtvec]" \
: [mstatus] "+&r"(mstatus), [mtvec] "+&r"(mtvec), \
[tinfo] "+&r"(tinfo), [ret] "=&r"(ret) \
: [addr] "m"(*addr), [mprv] "r"(MSTATUS_MPRV), \
[val] "r"(val), [taddr] "r"((ulong)trap) \
: "a4", "memory"); \
return ret; \
}
DEFINE_UNPRIVILEGED_LR_FUNCTION(s32, , lr.w);
DEFINE_UNPRIVILEGED_LR_FUNCTION(s32, _aq, lr.w.aq);
DEFINE_UNPRIVILEGED_LR_FUNCTION(s32, _rl, lr.w.rl);
DEFINE_UNPRIVILEGED_LR_FUNCTION(s32, _aqrl, lr.w.aqrl);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s32, , sc.w);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s32, _aq, sc.w.aq);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s32, _rl, sc.w.rl);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s32, _aqrl, sc.w.aqrl);
#if __riscv_xlen == 64
DEFINE_UNPRIVILEGED_LR_FUNCTION(s64, , lr.d);
DEFINE_UNPRIVILEGED_LR_FUNCTION(s64, _aq, lr.d.aq);
DEFINE_UNPRIVILEGED_LR_FUNCTION(s64, _rl, lr.d.rl);
DEFINE_UNPRIVILEGED_LR_FUNCTION(s64, _aqrl, lr.d.aqrl);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s64, , sc.d);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s64, _aq, sc.d.aq);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s64, _rl, sc.d.rl);
DEFINE_UNPRIVILEGED_SC_FUNCTION(s64, _aqrl, sc.d.aqrl);
#endif
#define DEFINE_ATOMIC_FUNCTION(name, type, func) \
static int atomic_##name(ulong insn, struct sbi_trap_regs *regs) \
{ \
struct sbi_trap_info uptrap; \
ulong addr = GET_RS1(insn, regs); \
ulong val = GET_RS2(insn, regs); \
ulong rd_val = 0; \
ulong fail = 1; \
while (fail) { \
rd_val = lr_##type((void *)addr, &uptrap); \
if (uptrap.cause) { \
return sbi_trap_redirect(regs, &uptrap); \
} \
fail = sc_##type((void *)addr, func, &uptrap); \
if (uptrap.cause) { \
return sbi_trap_redirect(regs, &uptrap); \
} \
} \
SET_RD(insn, regs, rd_val); \
regs->mepc += 4; \
return 0; \
}
DEFINE_ATOMIC_FUNCTION(add_w, s32, rd_val + val);
DEFINE_ATOMIC_FUNCTION(add_w_aq, s32_aq, rd_val + val);
DEFINE_ATOMIC_FUNCTION(add_w_rl, s32_rl, rd_val + val);
DEFINE_ATOMIC_FUNCTION(add_w_aqrl, s32_aqrl, rd_val + val);
DEFINE_ATOMIC_FUNCTION(and_w, s32, rd_val & val);
DEFINE_ATOMIC_FUNCTION(and_w_aq, s32_aq, rd_val & val);
DEFINE_ATOMIC_FUNCTION(and_w_rl, s32_rl, rd_val & val);
DEFINE_ATOMIC_FUNCTION(and_w_aqrl, s32_aqrl, rd_val & val);
DEFINE_ATOMIC_FUNCTION(or_w, s32, rd_val | val);
DEFINE_ATOMIC_FUNCTION(or_w_aq, s32_aq, rd_val | val);
DEFINE_ATOMIC_FUNCTION(or_w_rl, s32_rl, rd_val | val);
DEFINE_ATOMIC_FUNCTION(or_w_aqrl, s32_aqrl, rd_val | val);
DEFINE_ATOMIC_FUNCTION(xor_w, s32, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(xor_w_aq, s32_aq, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(xor_w_rl, s32_rl, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(xor_w_aqrl, s32_aqrl, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(swap_w, s32, val);
DEFINE_ATOMIC_FUNCTION(swap_w_aq, s32_aq, val);
DEFINE_ATOMIC_FUNCTION(swap_w_rl, s32_rl, val);
DEFINE_ATOMIC_FUNCTION(swap_w_aqrl, s32_aqrl, val);
DEFINE_ATOMIC_FUNCTION(max_w, s32, (s32)rd_val > (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(max_w_aq, s32_aq, (s32)rd_val > (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(max_w_rl, s32_rl, (s32)rd_val > (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(max_w_aqrl, s32_aqrl, (s32)rd_val > (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_w, s32, (u32)rd_val > (u32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_w_aq, s32_aq, (u32)rd_val > (u32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_w_rl, s32_rl, (u32)rd_val > (u32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_w_aqrl, s32_aqrl, (u32)rd_val > (u32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_w, s32, (s32)rd_val < (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_w_aq, s32_aq, (s32)rd_val < (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_w_rl, s32_rl, (s32)rd_val < (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_w_aqrl, s32_aqrl, (s32)rd_val < (s32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_w, s32, (u32)rd_val < (u32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_w_aq, s32_aq, (u32)rd_val < (u32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_w_rl, s32_rl, (u32)rd_val < (u32)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_w_aqrl, s32_aqrl, (u32)rd_val < (u32)val ? rd_val : val);
#if __riscv_xlen == 64
DEFINE_ATOMIC_FUNCTION(add_d, s64, rd_val + val);
DEFINE_ATOMIC_FUNCTION(add_d_aq, s64_aq, rd_val + val);
DEFINE_ATOMIC_FUNCTION(add_d_rl, s64_rl, rd_val + val);
DEFINE_ATOMIC_FUNCTION(add_d_aqrl, s64_aqrl, rd_val + val);
DEFINE_ATOMIC_FUNCTION(and_d, s64, rd_val & val);
DEFINE_ATOMIC_FUNCTION(and_d_aq, s64_aq, rd_val & val);
DEFINE_ATOMIC_FUNCTION(and_d_rl, s64_rl, rd_val & val);
DEFINE_ATOMIC_FUNCTION(and_d_aqrl, s64_aqrl, rd_val & val);
DEFINE_ATOMIC_FUNCTION(or_d, s64, rd_val | val);
DEFINE_ATOMIC_FUNCTION(or_d_aq, s64_aq, rd_val | val);
DEFINE_ATOMIC_FUNCTION(or_d_rl, s64_rl, rd_val | val);
DEFINE_ATOMIC_FUNCTION(or_d_aqrl, s64_aqrl, rd_val | val);
DEFINE_ATOMIC_FUNCTION(xor_d, s64, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(xor_d_aq, s64_aq, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(xor_d_rl, s64_rl, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(xor_d_aqrl, s64_aqrl, rd_val ^ val);
DEFINE_ATOMIC_FUNCTION(swap_d, s64, val);
DEFINE_ATOMIC_FUNCTION(swap_d_aq, s64_aq, val);
DEFINE_ATOMIC_FUNCTION(swap_d_rl, s64_rl, val);
DEFINE_ATOMIC_FUNCTION(swap_d_aqrl, s64_aqrl, val);
DEFINE_ATOMIC_FUNCTION(max_d, s64, (s64)rd_val > (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(max_d_aq, s64_aq, (s64)rd_val > (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(max_d_rl, s64_rl, (s64)rd_val > (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(max_d_aqrl, s64_aqrl, (s64)rd_val > (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_d, s64, (u64)rd_val > (u64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_d_aq, s64_aq, (u64)rd_val > (u64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_d_rl, s64_rl, (u64)rd_val > (u64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(maxu_d_aqrl, s64_aqrl, (u64)rd_val > (u64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_d, s64, (s64)rd_val < (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_d_aq, s64_aq, (s64)rd_val < (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_d_rl, s64_rl, (s64)rd_val < (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(min_d_aqrl, s64_aqrl, (s64)rd_val < (s64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_d, s64, (u64)rd_val < (u64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_d_aq, s64_aq, (u64)rd_val < (u64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_d_rl, s64_rl, (u64)rd_val < (u64)val ? rd_val : val);
DEFINE_ATOMIC_FUNCTION(minu_d_aqrl, s64_aqrl, (u64)rd_val < (u64)val ? rd_val : val);
#endif
static const illegal_insn_func amoadd_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_add_w, /* 8 */
atomic_add_w_rl, /* 9 */
atomic_add_w_aq, /* 10 */
atomic_add_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_add_d, /* 12 */
atomic_add_d_rl, /* 13 */
atomic_add_d_aq, /* 14 */
atomic_add_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amoswap_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_swap_w, /* 8 */
atomic_swap_w_rl, /* 9 */
atomic_swap_w_aq, /* 10 */
atomic_swap_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_swap_d, /* 12 */
atomic_swap_d_rl, /* 13 */
atomic_swap_d_aq, /* 14 */
atomic_swap_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amoxor_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_xor_w, /* 8 */
atomic_xor_w_rl, /* 9 */
atomic_xor_w_aq, /* 10 */
atomic_xor_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_xor_d, /* 12 */
atomic_xor_d_rl, /* 13 */
atomic_xor_d_aq, /* 14 */
atomic_xor_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amoor_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_or_w, /* 8 */
atomic_or_w_rl, /* 9 */
atomic_or_w_aq, /* 10 */
atomic_or_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_or_d, /* 12 */
atomic_or_d_rl, /* 13 */
atomic_or_d_aq, /* 14 */
atomic_or_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amoand_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_and_w, /* 8 */
atomic_and_w_rl, /* 9 */
atomic_and_w_aq, /* 10 */
atomic_and_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_and_d, /* 12 */
atomic_and_d_rl, /* 13 */
atomic_and_d_aq, /* 14 */
atomic_and_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amomin_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_min_w, /* 8 */
atomic_min_w_rl, /* 9 */
atomic_min_w_aq, /* 10 */
atomic_min_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_min_d, /* 12 */
atomic_min_d_rl, /* 13 */
atomic_min_d_aq, /* 14 */
atomic_min_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amomax_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_max_w, /* 8 */
atomic_max_w_rl, /* 9 */
atomic_max_w_aq, /* 10 */
atomic_max_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_max_d, /* 12 */
atomic_max_d_rl, /* 13 */
atomic_max_d_aq, /* 14 */
atomic_max_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amominu_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_minu_w, /* 8 */
atomic_minu_w_rl, /* 9 */
atomic_minu_w_aq, /* 10 */
atomic_minu_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_minu_d, /* 12 */
atomic_minu_d_rl, /* 13 */
atomic_minu_d_aq, /* 14 */
atomic_minu_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static const illegal_insn_func amomaxu_table[32] = {
truly_illegal_insn, /* 0 */
truly_illegal_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
truly_illegal_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
atomic_maxu_w, /* 8 */
atomic_maxu_w_rl, /* 9 */
atomic_maxu_w_aq, /* 10 */
atomic_maxu_w_aqrl, /* 11 */
#if __riscv_xlen == 64
atomic_maxu_d, /* 12 */
atomic_maxu_d_rl, /* 13 */
atomic_maxu_d_aq, /* 14 */
atomic_maxu_d_aqrl, /* 15 */
#else
truly_illegal_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
#endif
truly_illegal_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
truly_illegal_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
truly_illegal_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
truly_illegal_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn, /* 31 */
};
static int amoadd_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amoadd_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amoswap_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amoswap_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amoxor_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amoxor_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amoor_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amoor_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amoand_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amoand_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amomin_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amomin_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amomax_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amomax_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amominu_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amominu_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static int amomaxu_insn(ulong insn, struct sbi_trap_regs *regs)
{
return amomaxu_table[(GET_FUNC3(insn) << 2) + GET_AQRL(insn)](insn, regs);
}
static const illegal_insn_func amo_insn_table[32] = {
amoadd_insn, /* 0 */
amoswap_insn, /* 1 */
truly_illegal_insn, /* 2 */
truly_illegal_insn, /* 3 */
amoxor_insn, /* 4 */
truly_illegal_insn, /* 5 */
truly_illegal_insn, /* 6 */
truly_illegal_insn, /* 7 */
amoor_insn, /* 8 */
truly_illegal_insn, /* 9 */
truly_illegal_insn, /* 10 */
truly_illegal_insn, /* 11 */
amoand_insn, /* 12 */
truly_illegal_insn, /* 13 */
truly_illegal_insn, /* 14 */
truly_illegal_insn, /* 15 */
amomin_insn, /* 16 */
truly_illegal_insn, /* 17 */
truly_illegal_insn, /* 18 */
truly_illegal_insn, /* 19 */
amomax_insn, /* 20 */
truly_illegal_insn, /* 21 */
truly_illegal_insn, /* 22 */
truly_illegal_insn, /* 23 */
amominu_insn, /* 24 */
truly_illegal_insn, /* 25 */
truly_illegal_insn, /* 26 */
truly_illegal_insn, /* 27 */
amomaxu_insn, /* 28 */
truly_illegal_insn, /* 29 */
truly_illegal_insn, /* 30 */
truly_illegal_insn /* 31 */
};
int sbi_illegal_atomic(ulong insn, struct sbi_trap_regs *regs)
{
return amo_insn_table[(insn >> 27) & 0x1f](insn, regs);
}
#else
#error "need a or zalrsc"
#endif
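The two-level dispatch above keys off two instruction fields: bits 31:27 (funct5) select the per-operation table, and (funct3 << 2) + the aq/rl bits select the width/ordering variant inside it. A worked decode; the macro names are illustrative stand-ins for OpenSBI's GET_FUNC3()/GET_AQRL(), and the encoding below is hand-assembled for this example rather than taken from the patch:

#include <stdint.h>
#include <stdio.h>

#define FUNC5(insn) (((insn) >> 27) & 0x1f)    /* picks the per-op table */
#define FUNC3(insn) (((insn) >> 12) & 0x7)     /* 2 = .w, 3 = .d */
#define AQRL(insn)  (((insn) >> 25) & 0x3)     /* bit 1 = aq, bit 0 = rl */

int main(void)
{
        /* amoadd.w.aqrl a0, a1, (a2) -- hand-assembled encoding */
        uint32_t insn = 0x06b6252f;

        /* FUNC5 == 0 selects amoadd_table; (2 << 2) + 3 == 11 lands on
         * atomic_add_w_aqrl, matching the table comments above. */
        printf("table %u, index %u\n", FUNC5(insn),
               (FUNC3(insn) << 2) + AQRL(insn));
        return 0;
}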


@@ -13,15 +13,14 @@
 #include <sbi/sbi_bitops.h>
 #include <sbi/sbi_emulate_csr.h>
 #include <sbi/sbi_error.h>
+#include <sbi/sbi_illegal_atomic.h>
 #include <sbi/sbi_illegal_insn.h>
 #include <sbi/sbi_pmu.h>
 #include <sbi/sbi_trap.h>
 #include <sbi/sbi_unpriv.h>
 #include <sbi/sbi_console.h>

-typedef int (*illegal_insn_func)(ulong insn, struct sbi_trap_regs *regs);
-
-static int truly_illegal_insn(ulong insn, struct sbi_trap_regs *regs)
+int truly_illegal_insn(ulong insn, struct sbi_trap_regs *regs)
 {
 	struct sbi_trap_info trap;
@@ -123,7 +122,7 @@ static const illegal_insn_func illegal_insn_table[32] = {
 	truly_illegal_insn, /* 8 */
 	truly_illegal_insn, /* 9 */
 	truly_illegal_insn, /* 10 */
-	truly_illegal_insn, /* 11 */
+	sbi_illegal_atomic, /* 11 */
 	truly_illegal_insn, /* 12 */
 	truly_illegal_insn, /* 13 */
 	truly_illegal_insn, /* 14 */


@@ -13,10 +13,13 @@
 #include <sbi/sbi_console.h>
 #include <sbi/sbi_cppc.h>
 #include <sbi/sbi_domain.h>
+#include <sbi/sbi_double_trap.h>
 #include <sbi/sbi_ecall.h>
 #include <sbi/sbi_fwft.h>
 #include <sbi/sbi_hart.h>
 #include <sbi/sbi_hartmask.h>
+#include <sbi/sbi_hart_pmp.h>
+#include <sbi/sbi_hart_protection.h>
 #include <sbi/sbi_heap.h>
 #include <sbi/sbi_hsm.h>
 #include <sbi/sbi_ipi.h>
@@ -73,6 +76,7 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
 	const struct sbi_hsm_device *hdev;
 	const struct sbi_ipi_device *idev;
 	const struct sbi_timer_device *tdev;
+	const struct sbi_hart_protection *hprot;
 	const struct sbi_console_device *cdev;
 	const struct sbi_system_reset_device *srdev;
 	const struct sbi_system_suspend_device *susp_dev;
@@ -89,6 +93,9 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
 	sbi_printf("Platform Features : %s\n", str);
 	sbi_printf("Platform HART Count : %u\n",
 		   sbi_platform_hart_count(plat));
+	hprot = sbi_hart_protection_best();
+	sbi_printf("Platform HART Protection : %s\n",
+		   (hprot) ? hprot->name : "---");
 	idev = sbi_ipi_get_device();
 	sbi_printf("Platform IPI Device : %s\n",
 		   (idev) ? idev->name : "---");
@@ -160,7 +167,7 @@ static void sbi_boot_print_domains(struct sbi_scratch *scratch)
 static void sbi_boot_print_hart(struct sbi_scratch *scratch, u32 hartid)
 {
 	int xlen;
-	char str[128];
+	char str[256];
 	const struct sbi_domain *dom = sbi_domain_thishart_ptr();

 	if (scratch->options & SBI_SCRATCH_NO_BOOT_PRINTS)
@@ -266,12 +273,6 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 	if (rc)
 		sbi_hart_hang();

-	rc = sbi_sse_init(scratch, true);
-	if (rc) {
-		sbi_printf("%s: sse init failed (error %d)\n", __func__, rc);
-		sbi_hart_hang();
-	}
-
 	rc = sbi_pmu_init(scratch, true);
 	if (rc) {
 		sbi_printf("%s: pmu init failed (error %d)\n",
@@ -285,6 +286,8 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 	sbi_boot_print_banner(scratch);

+	sbi_double_trap_init(scratch);
+
 	rc = sbi_irqchip_init(scratch, true);
 	if (rc) {
 		sbi_printf("%s: irqchip init failed (error %d)\n",
@@ -321,13 +324,13 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 		sbi_printf("%s: mpxy init failed (error %d)\n", __func__, rc);
 		sbi_hart_hang();
 	}

 	/*
-	 * Note: Finalize domains after HSM initialization so that we
-	 * can startup non-root domains.
+	 * Note: Finalize domains after HSM initialization
 	 * Note: Finalize domains before HART PMP configuration so
 	 * that we use correct domain for configuring PMP.
 	 */
-	rc = sbi_domain_finalize(scratch, hartid);
+	rc = sbi_domain_finalize(scratch);
 	if (rc) {
 		sbi_printf("%s: domain finalize failed (error %d)\n",
 			   __func__, rc);
@@ -346,6 +349,16 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 		sbi_hart_hang();
 	}

+	/*
+	 * Note: SSE event callbacks can be registered by other drivers,
+	 * so sbi_sse_init() needs to be called after all drivers have
+	 * been probed.
+	 */
+	rc = sbi_sse_init(scratch, true);
+	if (rc) {
+		sbi_printf("%s: sse init failed (error %d)\n", __func__, rc);
+		sbi_hart_hang();
+	}
+
 	/*
 	 * Note: Ecall initialization should be after platform final
 	 * initialization so that all available platform devices are
@@ -366,12 +379,23 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 	run_all_tests();

 	/*
-	 * Configure PMP at last because if SMEPMP is detected,
-	 * M-mode access to the S/U space will be rescinded.
+	 * Note: Startup domains after all initialization is done,
+	 * otherwise the boot HART of a non-root domain can crash.
 	 */
-	rc = sbi_hart_pmp_configure(scratch);
+	rc = sbi_domain_startup(scratch, hartid);
 	if (rc) {
-		sbi_printf("%s: PMP configure failed (error %d)\n",
+		sbi_printf("%s: domain startup failed (error %d)\n",
+			   __func__, rc);
+		sbi_hart_hang();
+	}
+
+	/*
+	 * Configure hart isolation at last because if SMEPMP is
+	 * detected, M-mode access to the S/U space will be rescinded.
+	 */
+	rc = sbi_hart_protection_configure(scratch);
+	if (rc) {
+		sbi_printf("%s: hart isolation configure failed (error %d)\n",
 			   __func__, rc);
 		sbi_hart_hang();
 	}
@@ -408,10 +432,6 @@ static void __noreturn init_warm_startup(struct sbi_scratch *scratch,
 	if (rc)
 		sbi_hart_hang();

-	rc = sbi_sse_init(scratch, false);
-	if (rc)
-		sbi_hart_hang();
-
 	rc = sbi_pmu_init(scratch, false);
 	if (rc)
 		sbi_hart_hang();
@@ -444,11 +464,15 @@ static void __noreturn init_warm_startup(struct sbi_scratch *scratch,
 	if (rc)
 		sbi_hart_hang();

+	rc = sbi_sse_init(scratch, false);
+	if (rc)
+		sbi_hart_hang();
+
 	/*
-	 * Configure PMP at last because if SMEPMP is detected,
-	 * M-mode access to the S/U space will be rescinded.
+	 * Configure hart isolation at last because if SMEPMP is
+	 * detected, M-mode access to the S/U space will be rescinded.
 	 */
-	rc = sbi_hart_pmp_configure(scratch);
+	rc = sbi_hart_protection_configure(scratch);
 	if (rc)
 		sbi_hart_hang();
@@ -469,7 +493,7 @@ static void __noreturn init_warm_resume(struct sbi_scratch *scratch,
 	if (rc)
 		sbi_hart_hang();

-	rc = sbi_hart_pmp_configure(scratch);
+	rc = sbi_hart_protection_configure(scratch);
 	if (rc)
 		sbi_hart_hang();
@@ -489,7 +513,7 @@ static void __noreturn init_warmboot(struct sbi_scratch *scratch, u32 hartid)
 	if (hstate == SBI_HSM_STATE_SUSPENDED) {
 		init_warm_resume(scratch, hartid);
 	} else {
-		sbi_ipi_raw_clear();
+		sbi_ipi_raw_clear(true);
 		init_warm_startup(scratch, hartid);
 	}
 }
@@ -561,6 +585,19 @@ void __noreturn sbi_init(struct sbi_scratch *scratch)
 		init_warmboot(scratch, hartid);
 }

+void sbi_revert_entry_count(struct sbi_scratch *scratch)
+{
+	unsigned long *entry_count, *init_count;
+
+	if (!entry_count_offset || !init_count_offset)
+		sbi_hart_hang();
+
+	entry_count = sbi_scratch_offset_ptr(scratch, entry_count_offset);
+	init_count = sbi_scratch_offset_ptr(scratch, init_count_offset);
+	*entry_count = *init_count;
+}
+
 unsigned long sbi_entry_count(u32 hartindex)
 {
 	struct sbi_scratch *scratch;


@@ -15,9 +15,11 @@
 #include <sbi/sbi_domain.h>
 #include <sbi/sbi_error.h>
 #include <sbi/sbi_hart.h>
+#include <sbi/sbi_heap.h>
 #include <sbi/sbi_hsm.h>
 #include <sbi/sbi_init.h>
 #include <sbi/sbi_ipi.h>
+#include <sbi/sbi_list.h>
 #include <sbi/sbi_platform.h>
 #include <sbi/sbi_pmu.h>
 #include <sbi/sbi_string.h>
@@ -32,8 +34,14 @@ _Static_assert(
 	"type of sbi_ipi_data.ipi_type has changed, please redefine SBI_IPI_EVENT_MAX"
 );

+struct sbi_ipi_device_node {
+	struct sbi_dlist head;
+	const struct sbi_ipi_device *dev;
+};
+
 static unsigned long ipi_data_off;
 static const struct sbi_ipi_device *ipi_dev = NULL;
+static SBI_LIST_HEAD(ipi_dev_node_list);
 static const struct sbi_ipi_event_ops *ipi_ops_array[SBI_IPI_EVENT_MAX];

 static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartindex,
@@ -80,7 +88,7 @@ static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartindex,
 	 */
 	if (!__atomic_fetch_or(&ipi_data->ipi_type,
 			       BIT(event), __ATOMIC_RELAXED))
-		ret = sbi_ipi_raw_send(remote_hartindex);
+		ret = sbi_ipi_raw_send(remote_hartindex, false);

 	sbi_pmu_ctr_incr_fw(SBI_PMU_FW_IPI_SENT);
@@ -116,6 +124,11 @@ int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data)
 	struct sbi_domain *dom = sbi_domain_thishart_ptr();
 	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();

+	if (hmask == 0 && hbase != -1UL) {
+		/* Nothing to do, but it's not an error either. */
+		return 0;
+	}
+
 	/* Find the target harts */
 	rc = sbi_hsm_hart_interruptible_mask(dom, &target_mask);
 	if (rc)
@@ -123,6 +136,7 @@ int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data)
 	if (hbase != -1UL) {
 		struct sbi_hartmask tmp_mask = { 0 };
+		int count = sbi_popcount(hmask);

 		for (i = hbase; hmask; i++, hmask >>= 1) {
 			if (hmask & 1UL)
@@ -130,6 +144,9 @@ int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data)
 		}

 		sbi_hartmask_and(&target_mask, &target_mask, &tmp_mask);
+		if (sbi_hartmask_weight(&target_mask) != count)
+			return SBI_EINVAL;
 	}

 	/* Send IPIs */
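The popcount check turns a silent partial send into a hard failure. A hypothetical walk-through of the arithmetic:

/* Illustrative values, not from the patch: hbase = 4, hmask = 0xb
 * requests harts 4, 5 and 7, so sbi_popcount(0xb) == 3. If hart 5 is
 * stopped it is absent from target_mask after the AND, the weight is
 * 2 != 3, and sbi_ipi_send_many() now returns SBI_EINVAL instead of
 * quietly skipping that hart. */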
@@ -239,7 +256,7 @@ void sbi_ipi_process(void)
 		sbi_scratch_offset_ptr(scratch, ipi_data_off);

 	sbi_pmu_ctr_incr_fw(SBI_PMU_FW_IPI_RECVD);
-	sbi_ipi_raw_clear();
+	sbi_ipi_raw_clear(false);

 	ipi_type = atomic_raw_xchg_ulong(&ipi_data->ipi_type, 0);
 	ipi_event = 0;
@@ -254,8 +271,10 @@ void sbi_ipi_process(void)
 	}
 }

-int sbi_ipi_raw_send(u32 hartindex)
+int sbi_ipi_raw_send(u32 hartindex, bool all_devices)
 {
+	struct sbi_ipi_device_node *entry;
+
 	if (!ipi_dev || !ipi_dev->ipi_send)
 		return SBI_EINVAL;
@@ -270,14 +289,31 @@ int sbi_ipi_raw_send(u32 hartindex, bool all_devices)
 	 */
 	wmb();

-	ipi_dev->ipi_send(hartindex);
+	if (all_devices) {
+		sbi_list_for_each_entry(entry, &ipi_dev_node_list, head) {
+			if (entry->dev->ipi_send)
+				entry->dev->ipi_send(hartindex);
+		}
+	} else {
+		ipi_dev->ipi_send(hartindex);
+	}

 	return 0;
 }

-void sbi_ipi_raw_clear(void)
+void sbi_ipi_raw_clear(bool all_devices)
 {
-	if (ipi_dev && ipi_dev->ipi_clear)
-		ipi_dev->ipi_clear();
+	struct sbi_ipi_device_node *entry;
+
+	if (all_devices) {
+		sbi_list_for_each_entry(entry, &ipi_dev_node_list, head) {
+			if (entry->dev->ipi_clear)
+				entry->dev->ipi_clear();
+		}
+	} else {
+		if (ipi_dev && ipi_dev->ipi_clear)
+			ipi_dev->ipi_clear();
+	}

 	/*
 	 * Ensure that memory or MMIO writes after this
@@ -296,12 +332,22 @@ const struct sbi_ipi_device *sbi_ipi_get_device(void)
 	return ipi_dev;
 }

-void sbi_ipi_set_device(const struct sbi_ipi_device *dev)
+void sbi_ipi_add_device(const struct sbi_ipi_device *dev)
 {
-	if (!dev || ipi_dev)
+	struct sbi_ipi_device_node *entry;
+
+	if (!dev)
 		return;

-	ipi_dev = dev;
+	entry = sbi_zalloc(sizeof(*entry));
+	if (!entry)
+		return;
+
+	SBI_INIT_LIST_HEAD(&entry->head);
+	entry->dev = dev;
+	sbi_list_add_tail(&entry->head, &ipi_dev_node_list);
+
+	if (!ipi_dev || ipi_dev->rating < dev->rating)
+		ipi_dev = dev;
 }

 int sbi_ipi_init(struct sbi_scratch *scratch, bool cold_boot)
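A hypothetical usage sketch of the new registration model (all names below are invented for illustration): each driver registers its device, the highest rating becomes the default ipi_dev, and sbi_ipi_raw_send(hartindex, true) still kicks every device on the list.

static void demo_aclint_ipi_send(u32 hartindex)
{
        /* e.g. write the hart's MSWI register */
}

static void demo_imsic_ipi_send(u32 hartindex)
{
        /* e.g. inject an MSI into the hart's interrupt file */
}

static const struct sbi_ipi_device demo_aclint_ipi = {
        .name = "demo_aclint",
        .rating = 50,
        .ipi_send = demo_aclint_ipi_send,
};

static const struct sbi_ipi_device demo_imsic_ipi = {
        .name = "demo_imsic",
        .rating = 100,
        .ipi_send = demo_imsic_ipi_send,
};

/* Driver probe paths would call:
 *      sbi_ipi_add_device(&demo_aclint_ipi);
 *      sbi_ipi_add_device(&demo_imsic_ipi);
 * after which ordinary IPIs use only demo_imsic (rating 100 > 50).
 */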
@@ -321,11 +367,6 @@ int sbi_ipi_init(struct sbi_scratch *scratch, bool cold_boot)
 		if (ret < 0)
 			return ret;
 		ipi_halt_event = ret;
-
-		/* Initialize platform IPI support */
-		ret = sbi_platform_ipi_init(sbi_platform_ptr(scratch));
-		if (ret)
-			return ret;
 	} else {
 		if (!ipi_data_off)
 			return SBI_ENOMEM;
@@ -338,7 +379,7 @@ int sbi_ipi_init(struct sbi_scratch *scratch, bool cold_boot)
 	ipi_data->ipi_type = 0x00;

 	/* Clear any pending IPIs for the current hart */
-	sbi_ipi_raw_clear();
+	sbi_ipi_raw_clear(true);

 	/* Enable software interrupts */
 	csr_set(CSR_MIE, MIP_MSIP);


@@ -11,6 +11,8 @@
 #include <sbi/sbi_domain.h>
 #include <sbi/sbi_error.h>
 #include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
+#include <sbi/sbi_heap.h>
 #include <sbi/sbi_platform.h>
 #include <sbi/sbi_mpxy.h>
 #include <sbi/sbi_scratch.h>
@@ -19,8 +21,8 @@
 #include <sbi/sbi_console.h>
 #include <sbi/sbi_byteorder.h>

-/** Offset of pointer to MPXY state in scratch space */
-static unsigned long mpxy_state_offset;
+/** Shared memory size across all harts */
+static unsigned long mpxy_shmem_size = PAGE_SIZE;

 /** List of MPXY proxy channels */
 static SBI_LIST_HEAD(mpxy_channel_list);
@@ -43,17 +45,17 @@ static SBI_LIST_HEAD(mpxy_channel_list);
 #define CAP_EVENTSSTATE_POS	2
 #define CAP_EVENTSSTATE_MASK	(1U << CAP_EVENTSSTATE_POS)

-/** Channel Capability - Get Notification function support */
-#define CAP_GET_NOTIFICATIONS_POS	3
-#define CAP_GET_NOTIFICATIONS_MASK	(1U << CAP_GET_NOTIFICATIONS_POS)
+/** Channel Capability - Send Message With Response function support */
+#define CAP_SEND_MSG_WITH_RESP_POS	3
+#define CAP_SEND_MSG_WITH_RESP_MASK	(1U << CAP_SEND_MSG_WITH_RESP_POS)

 /** Channel Capability - Send Message Without Response function support */
 #define CAP_SEND_MSG_WITHOUT_RESP_POS	4
 #define CAP_SEND_MSG_WITHOUT_RESP_MASK	(1U << CAP_SEND_MSG_WITHOUT_RESP_POS)

-/** Channel Capability - Send Message With Response function support */
-#define CAP_SEND_MSG_WITH_RESP_POS	5
-#define CAP_SEND_MSG_WITH_RESP_MASK	(1U << CAP_SEND_MSG_WITH_RESP_POS)
+/** Channel Capability - Get Notification function support */
+#define CAP_GET_NOTIFICATIONS_POS	5
+#define CAP_GET_NOTIFICATIONS_MASK	(1U << CAP_GET_NOTIFICATIONS_POS)

 /** Helpers to enable/disable channel capability bits
  * _c: capability variable
@@ -63,17 +65,10 @@ static SBI_LIST_HEAD(mpxy_channel_list);
 #define CAP_DISABLE(_c, _m)	INSERT_FIELD(_c, _m, 0)
 #define CAP_GET(_c, _m)		EXTRACT_FIELD(_c, _m)

-#if __riscv_xlen == 64
 #define SHMEM_PHYS_ADDR(_hi, _lo) (_lo)
-#elif __riscv_xlen == 32
-#define SHMEM_PHYS_ADDR(_hi, _lo) (((u64)(_hi) << 32) | (_lo))
-#else
-#error "Undefined XLEN"
-#endif

 /** Per hart shared memory */
 struct mpxy_shmem {
-	unsigned long shmem_size;
 	unsigned long shmem_addr_lo;
 	unsigned long shmem_addr_hi;
 };
@@ -87,10 +82,17 @@ struct mpxy_state {
 	struct mpxy_shmem shmem;
 };

+static struct mpxy_state *sbi_domain_get_mpxy_state(struct sbi_domain *dom,
+						    u32 hartindex);
+
+/** Macro to obtain the current hart's MPXY state pointer in current domain */
+#define sbi_domain_mpxy_state_thishart_ptr()			\
+	sbi_domain_get_mpxy_state(sbi_domain_thishart_ptr(),	\
+				  current_hartindex())
+
 /** Disable hart shared memory */
 static inline void sbi_mpxy_shmem_disable(struct mpxy_state *ms)
 {
-	ms->shmem.shmem_size = 0;
 	ms->shmem.shmem_addr_lo = INVALID_ADDR;
 	ms->shmem.shmem_addr_hi = INVALID_ADDR;
 }
@@ -170,9 +172,8 @@ bool sbi_mpxy_channel_available(void)
 static void mpxy_std_attrs_init(struct sbi_mpxy_channel *channel)
 {
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
 	u32 capability = 0;
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);

 	/* Reset values */
 	channel->attrs.msi_control = 0;
@@ -228,37 +229,113 @@ int sbi_mpxy_register_channel(struct sbi_mpxy_channel *channel)
 	/* Initialize channel specific attributes */
 	mpxy_std_attrs_init(channel);

-	SBI_INIT_LIST_HEAD(&channel->head);
+	/* Update shared memory size if required */
+	if (mpxy_shmem_size < channel->attrs.msg_data_maxlen) {
+		mpxy_shmem_size = channel->attrs.msg_data_maxlen;
+		/* Round up to a whole number of pages */
+		mpxy_shmem_size = (mpxy_shmem_size + (PAGE_SIZE - 1)) & ~(PAGE_SIZE - 1);
+	}
 	sbi_list_add_tail(&channel->head, &mpxy_channel_list);

 	return SBI_OK;
 }
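A quick arithmetic check of the round-up above, with an invented msg_data_maxlen and the usual 4 KiB PAGE_SIZE:

/* Invented example: a channel advertising msg_data_maxlen = 5000 */
unsigned long len = 5000;
unsigned long sz = (len + (PAGE_SIZE - 1)) & ~(PAGE_SIZE - 1); /* 8192, two pages */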
/** Setup per domain MPXY state data */
static int domain_mpxy_state_data_setup(struct sbi_domain *dom,
struct sbi_domain_data *data,
void *data_ptr)
{
struct mpxy_state **dom_hartindex_to_mpxy_state_table = data_ptr;
struct mpxy_state *ms;
u32 i;
sbi_hartmask_for_each_hartindex(i, dom->possible_harts) {
ms = sbi_zalloc(sizeof(*ms));
if (!ms)
return SBI_ENOMEM;
/*
* TODO: Proper support for checking msi support from
* platform. Currently disable msi and sse and use
* polling
*/
ms->msi_avail = false;
ms->sse_avail = false;
sbi_mpxy_shmem_disable(ms);
dom_hartindex_to_mpxy_state_table[i] = ms;
}
return 0;
}
/** Cleanup per domain MPXY state data */
static void domain_mpxy_state_data_cleanup(struct sbi_domain *dom,
struct sbi_domain_data *data,
void *data_ptr)
{
struct mpxy_state **dom_hartindex_to_mpxy_state_table = data_ptr;
u32 i;
sbi_hartmask_for_each_hartindex(i, dom->possible_harts)
sbi_free(dom_hartindex_to_mpxy_state_table[i]);
}
static struct sbi_domain_data dmspriv = {
.data_setup = domain_mpxy_state_data_setup,
.data_cleanup = domain_mpxy_state_data_cleanup,
};
/**
* Get per-domain MPXY state pointer for a given domain and HART index
* @param dom pointer to domain
* @param hartindex the HART index
*
* @return per-domain MPXY state pointer for given HART index
*/
static struct mpxy_state *sbi_domain_get_mpxy_state(struct sbi_domain *dom,
u32 hartindex)
{
struct mpxy_state **dom_hartindex_to_mpxy_state_table;
dom_hartindex_to_mpxy_state_table = sbi_domain_data_ptr(dom, &dmspriv);
if (!dom_hartindex_to_mpxy_state_table ||
!sbi_hartindex_valid(hartindex))
return NULL;
return dom_hartindex_to_mpxy_state_table[hartindex];
}
 int sbi_mpxy_init(struct sbi_scratch *scratch)
 {
-	mpxy_state_offset = sbi_scratch_alloc_type_offset(struct mpxy_state);
-	if (!mpxy_state_offset)
-		return SBI_ENOMEM;
+	int ret;

 	/**
-	 * TODO: Proper support for checking msi support from platform.
-	 * Currently disable msi and sse and use polling
+	 * Allocate per-domain and per-hart MPXY state data.
+	 * The data type is "struct mpxy_state **" whose memory space will
+	 * be dynamically allocated by domain_setup_data_one() and
+	 * domain_mpxy_state_data_setup(). Calculate the needed size of the
+	 * memory space here.
 	 */
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);
-	ms->msi_avail = false;
-	ms->sse_avail = false;
-	sbi_mpxy_shmem_disable(ms);
+	dmspriv.data_size = sizeof(struct mpxy_state *) * sbi_hart_count();
+	ret = sbi_domain_register_data(&dmspriv);
+	if (ret)
+		return ret;

 	return sbi_platform_mpxy_init(sbi_platform_ptr(scratch));
 }
-int sbi_mpxy_set_shmem(unsigned long shmem_size, unsigned long shmem_phys_lo,
-		       unsigned long shmem_phys_hi, unsigned long flags)
+unsigned long sbi_mpxy_get_shmem_size(void)
 {
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);
+	return mpxy_shmem_size;
+}
+
+int sbi_mpxy_set_shmem(unsigned long shmem_phys_lo,
+		       unsigned long shmem_phys_hi,
+		       unsigned long flags)
+{
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
 	unsigned long *ret_buf;

 	/** Disable shared memory if both hi and lo have all bit 1s */
@@ -272,13 +349,26 @@ int sbi_mpxy_set_shmem(unsigned long shmem_size, unsigned long shmem_phys_lo,
 		return SBI_ERR_INVALID_PARAM;

 	/** Check shared memory size and address aligned to 4K Page */
-	if (!shmem_size || (shmem_size & ~PAGE_MASK) ||
-	    (shmem_phys_lo & ~PAGE_MASK))
+	if (shmem_phys_lo & ~PAGE_MASK)
 		return SBI_ERR_INVALID_PARAM;

+	/*
+	 * On RV32, M-mode can only access the first 4GB of the physical
+	 * address space because M-mode has no MMU to reach the full
+	 * 34-bit physical address space, so fail if the upper 32 bits
+	 * of the physical address are non-zero on RV32.
+	 *
+	 * On RV64, the kernel sets the upper address word (shmem_phys_hi)
+	 * to zero, so fail if it is non-zero.
+	 */
+	if (shmem_phys_hi)
+		return SBI_ERR_INVALID_ADDRESS;
+
 	if (!sbi_domain_check_addr_range(sbi_domain_thishart_ptr(),
 			SHMEM_PHYS_ADDR(shmem_phys_hi, shmem_phys_lo),
-			shmem_size, PRV_S,
+			mpxy_shmem_size, PRV_S,
 			SBI_DOMAIN_READ | SBI_DOMAIN_WRITE))
 		return SBI_ERR_INVALID_ADDRESS;
@@ -286,15 +376,13 @@ int sbi_mpxy_set_shmem(unsigned long shmem_size, unsigned long shmem_phys_lo,
 	if (flags == SBI_EXT_MPXY_SHMEM_FLAG_OVERWRITE_RETURN) {
 		ret_buf = (unsigned long *)(ulong)SHMEM_PHYS_ADDR(shmem_phys_hi,
 								  shmem_phys_lo);
-		sbi_hart_map_saddr((unsigned long)ret_buf, shmem_size);
-		ret_buf[0] = cpu_to_lle(ms->shmem.shmem_size);
-		ret_buf[1] = cpu_to_lle(ms->shmem.shmem_addr_lo);
-		ret_buf[2] = cpu_to_lle(ms->shmem.shmem_addr_hi);
-		sbi_hart_unmap_saddr();
+		sbi_hart_protection_map_range((unsigned long)ret_buf, mpxy_shmem_size);
+		ret_buf[0] = cpu_to_lle(ms->shmem.shmem_addr_lo);
+		ret_buf[1] = cpu_to_lle(ms->shmem.shmem_addr_hi);
+		sbi_hart_protection_unmap_range((unsigned long)ret_buf, mpxy_shmem_size);
 	}

 	/** Setup the new shared memory */
-	ms->shmem.shmem_size = shmem_size;
 	ms->shmem.shmem_addr_lo = shmem_phys_lo;
 	ms->shmem.shmem_addr_hi = shmem_phys_hi;
@@ -303,15 +391,12 @@
 int sbi_mpxy_get_channel_ids(u32 start_index)
 {
-	u32 node_index = 0, node_ret = 0;
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
 	u32 remaining, returned, max_channelids;
+	u32 node_index = 0, node_ret = 0;
+	struct sbi_mpxy_channel *channel;
 	u32 channels_count = 0;
 	u32 *shmem_base;
-	struct sbi_mpxy_channel *channel;
-
-	/* Check if the shared memory is being setup or not. */
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);

 	if (!mpxy_shmem_enabled(ms))
 		return SBI_ERR_NO_SHMEM;
@@ -323,12 +408,11 @@ int sbi_mpxy_get_channel_ids(u32 start_index)
 		return SBI_ERR_INVALID_PARAM;

 	shmem_base = hart_shmem_base(ms);
-	sbi_hart_map_saddr((unsigned long)hart_shmem_base(ms),
-			   ms->shmem.shmem_size);
+	sbi_hart_protection_map_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);

 	/** number of channel ids which can be stored in shmem adjusting
 	 * for remaining and returned fields */
-	max_channelids = (ms->shmem.shmem_size / sizeof(u32)) - 2;
+	max_channelids = (mpxy_shmem_size / sizeof(u32)) - 2;

 	/* total remaining from the start index */
 	remaining = channels_count - start_index;
 	/* how many can be returned */
@@ -351,20 +435,18 @@ int sbi_mpxy_get_channel_ids(u32 start_index)
 	shmem_base[0] = cpu_to_le32(remaining);
 	shmem_base[1] = cpu_to_le32(returned);

-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);

 	return SBI_SUCCESS;
 }

 int sbi_mpxy_read_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
 {
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
 	int ret = SBI_SUCCESS;
 	u32 *attr_ptr, end_id;
 	void *shmem_base;
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);

 	if (!mpxy_shmem_enabled(ms))
 		return SBI_ERR_NO_SHMEM;
@@ -378,14 +460,13 @@ int sbi_mpxy_read_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
 		return SBI_ERR_INVALID_PARAM;

 	/* Sanity check for base_attr_id and attr_count */
-	if (!attr_count || (attr_count > (ms->shmem.shmem_size / ATTR_SIZE)))
+	if (!attr_count || (attr_count > (mpxy_shmem_size / ATTR_SIZE)))
 		return SBI_ERR_INVALID_PARAM;

 	shmem_base = hart_shmem_base(ms);
 	end_id = base_attr_id + attr_count - 1;
-	sbi_hart_map_saddr((unsigned long)hart_shmem_base(ms),
-			   ms->shmem.shmem_size);
+	sbi_hart_protection_map_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);

 	/* Standard attributes range check */
 	if (mpxy_is_std_attr(base_attr_id)) {
@@ -424,7 +505,7 @@ int sbi_mpxy_read_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
 					    base_attr_id, attr_count);
 	}
 out:
-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);
 	return ret;
 }
@@ -442,8 +523,8 @@
 static int mpxy_check_write_std_attr(struct sbi_mpxy_channel *channel,
 				     u32 attr_id, u32 attr_val)
 {
-	int ret = SBI_SUCCESS;
 	struct sbi_mpxy_channel_attrs *attrs = &channel->attrs;
+	int ret = SBI_SUCCESS;

 	switch(attr_id) {
 	case SBI_MPXY_ATTR_MSI_CONTROL:
@@ -475,11 +556,9 @@
  * Write the attribute value
  */
 static void mpxy_write_std_attr(struct sbi_mpxy_channel *channel, u32 attr_id,
				u32 attr_val)
 {
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
 	struct sbi_mpxy_channel_attrs *attrs = &channel->attrs;

 	switch(attr_id) {
@@ -513,17 +592,16 @@
 int sbi_mpxy_write_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
 {
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
+	u32 *mem_ptr, attr_id, end_id, attr_val;
+	struct sbi_mpxy_channel *channel;
 	int ret, mem_idx;
 	void *shmem_base;
-	u32 *mem_ptr, attr_id, end_id, attr_val;
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);

 	if (!mpxy_shmem_enabled(ms))
 		return SBI_ERR_NO_SHMEM;

-	struct sbi_mpxy_channel *channel = mpxy_find_channel(channel_id);
+	channel = mpxy_find_channel(channel_id);
 	if (!channel)
 		return SBI_ERR_NOT_SUPPORTED;
@@ -533,13 +611,13 @@ int sbi_mpxy_write_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
 		return SBI_ERR_INVALID_PARAM;

 	/* Sanity check for base_attr_id and attr_count */
-	if (!attr_count || (attr_count > (ms->shmem.shmem_size / ATTR_SIZE)))
+	if (!attr_count || (attr_count > (mpxy_shmem_size / ATTR_SIZE)))
 		return SBI_ERR_INVALID_PARAM;

 	shmem_base = hart_shmem_base(ms);
 	end_id = base_attr_id + attr_count - 1;
-	sbi_hart_map_saddr((unsigned long)shmem_base, ms->shmem.shmem_size);
+	sbi_hart_protection_map_range((unsigned long)shmem_base, mpxy_shmem_size);

 	mem_ptr = (u32 *)shmem_base;
@@ -596,7 +674,7 @@ int sbi_mpxy_write_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
 					     base_attr_id, attr_count);
 	}
 out:
-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range((unsigned long)shmem_base, mpxy_shmem_size);
 	return ret;
 }
@@ -604,17 +682,16 @@ int sbi_mpxy_send_message(u32 channel_id, u8 msg_id,
 			  unsigned long msg_data_len,
 			  unsigned long *resp_data_len)
 {
-	int ret;
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
+	struct sbi_mpxy_channel *channel;
 	void *shmem_base, *resp_buf;
 	u32 resp_bufsize;
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);
+	int ret;

 	if (!mpxy_shmem_enabled(ms))
 		return SBI_ERR_NO_SHMEM;

-	struct sbi_mpxy_channel *channel = mpxy_find_channel(channel_id);
+	channel = mpxy_find_channel(channel_id);
 	if (!channel)
 		return SBI_ERR_NOT_SUPPORTED;
@@ -624,30 +701,29 @@ int sbi_mpxy_send_message(u32 channel_id, u8 msg_id,
 	if (!resp_data_len && !channel->send_message_without_response)
 		return SBI_ERR_NOT_SUPPORTED;

-	if (msg_data_len > ms->shmem.shmem_size ||
+	if (msg_data_len > mpxy_shmem_size ||
 	    msg_data_len > channel->attrs.msg_data_maxlen)
 		return SBI_ERR_INVALID_PARAM;

 	shmem_base = hart_shmem_base(ms);
-	sbi_hart_map_saddr((unsigned long)shmem_base, ms->shmem.shmem_size);
+	sbi_hart_protection_map_range((unsigned long)shmem_base, mpxy_shmem_size);

 	if (resp_data_len) {
 		resp_buf = shmem_base;
-		resp_bufsize = ms->shmem.shmem_size;
+		resp_bufsize = mpxy_shmem_size;
 		ret = channel->send_message_with_response(channel, msg_id,
 							  shmem_base,
 							  msg_data_len,
 							  resp_buf,
 							  resp_bufsize,
 							  resp_data_len);
-	}
-	else {
+	} else {
 		ret = channel->send_message_without_response(channel, msg_id,
 							     shmem_base,
 							     msg_data_len);
 	}

-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range((unsigned long)shmem_base, mpxy_shmem_size);

 	if (ret == SBI_ERR_TIMEOUT || ret == SBI_ERR_IO)
 		return ret;
@@ -655,7 +731,7 @@ int sbi_mpxy_send_message(u32 channel_id, u8 msg_id,
 		return SBI_ERR_FAILED;

 	if (resp_data_len &&
-	    (*resp_data_len > ms->shmem.shmem_size ||
+	    (*resp_data_len > mpxy_shmem_size ||
 	     *resp_data_len > channel->attrs.msg_data_maxlen))
 		return SBI_ERR_FAILED;
@@ -664,34 +740,30 @@ int sbi_mpxy_send_message(u32 channel_id, u8 msg_id,
 int sbi_mpxy_get_notification_events(u32 channel_id, unsigned long *events_len)
 {
-	int ret;
+	struct mpxy_state *ms = sbi_domain_mpxy_state_thishart_ptr();
+	struct sbi_mpxy_channel *channel;
 	void *eventsbuf, *shmem_base;
-	struct mpxy_state *ms =
-		sbi_scratch_thishart_offset_ptr(mpxy_state_offset);
+	int ret;

 	if (!mpxy_shmem_enabled(ms))
 		return SBI_ERR_NO_SHMEM;

-	struct sbi_mpxy_channel *channel = mpxy_find_channel(channel_id);
-	if (!channel)
-		return SBI_ERR_NOT_SUPPORTED;
-
-	if (!channel->get_notification_events)
+	channel = mpxy_find_channel(channel_id);
+	if (!channel || !channel->get_notification_events)
 		return SBI_ERR_NOT_SUPPORTED;

 	shmem_base = hart_shmem_base(ms);
-	sbi_hart_map_saddr((unsigned long)shmem_base, ms->shmem.shmem_size);
+	sbi_hart_protection_map_range((unsigned long)shmem_base, mpxy_shmem_size);
 	eventsbuf = shmem_base;
 	ret = channel->get_notification_events(channel, eventsbuf,
-					       ms->shmem.shmem_size,
+					       mpxy_shmem_size,
 					       events_len);
-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range((unsigned long)shmem_base, mpxy_shmem_size);

 	if (ret)
 		return ret;

-	if (*events_len > ms->shmem.shmem_size)
+	if (*events_len > (mpxy_shmem_size - 16))
 		return SBI_ERR_FAILED;

 	return SBI_SUCCESS;
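Taken together, the MPXY hunks above converge on one invariant: every access to the per-hart shared memory is bracketed by sbi_hart_protection_map_range()/sbi_hart_protection_unmap_range() over the same fixed mpxy_shmem_size range, on every exit path. A minimal sketch of that bracket; mpxy_copy_to_shmem() is a hypothetical helper, not part of the diff:

/* Sketch only: assumes the helpers and mpxy_shmem_size shown in the
 * hunks above; mpxy_copy_to_shmem() itself is hypothetical. */
static int mpxy_copy_to_shmem(struct mpxy_state *ms, const void *src,
			      unsigned long len)
{
	void *shmem_base = hart_shmem_base(ms);

	if (!mpxy_shmem_enabled(ms))
		return SBI_ERR_NO_SHMEM;
	if (len > mpxy_shmem_size)
		return SBI_ERR_INVALID_PARAM;

	sbi_hart_protection_map_range((unsigned long)shmem_base, mpxy_shmem_size);
	sbi_memcpy(shmem_base, src, len); /* touch shmem only while mapped */
	sbi_hart_protection_unmap_range((unsigned long)shmem_base, mpxy_shmem_size);

	return SBI_SUCCESS;
}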


@@ -13,6 +13,7 @@
 #include <sbi/sbi_domain.h>
 #include <sbi/sbi_ecall_interface.h>
 #include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
 #include <sbi/sbi_heap.h>
 #include <sbi/sbi_platform.h>
 #include <sbi/sbi_pmu.h>
@@ -56,6 +57,14 @@ union sbi_pmu_ctr_info {
 #error "Can't handle firmware counters beyond BITS_PER_LONG"
 #endif

+/** HW event configuration parameters */
+struct sbi_pmu_hw_event_config {
+	/* event_data value from sbi_pmu_ctr_cfg_match() */
+	uint64_t event_data;
+	/* HW events flags from sbi_pmu_ctr_cfg_match() */
+	uint64_t flags;
+};
+
 /** Per-HART state of the PMU counters */
 struct sbi_pmu_hart_state {
 	/* HART to which this state belongs */
@@ -72,6 +81,12 @@ struct sbi_pmu_hart_state {
 	 * and hence can optimally share the same memory.
 	 */
 	uint64_t fw_counters_data[SBI_PMU_FW_CTR_MAX];
+	/* HW events configuration parameters from
+	 * sbi_pmu_ctr_cfg_match() command which are
+	 * used for restoring RAW hardware events after
+	 * cpu suspending.
+	 */
+	struct sbi_pmu_hw_event_config hw_counters_cfg[SBI_PMU_HW_CTR_MAX];
 };

 /** Offset of pointer to PMU HART state in scratch space */
@@ -206,6 +221,12 @@ static int pmu_ctr_validate(struct sbi_pmu_hart_state *phs,
 	return event_idx_type;
 }

+static bool pmu_ctr_idx_validate(unsigned long cbase, unsigned long cmask)
+{
+	/* Do a basic sanity check of counter base & mask */
+	return cmask && cbase + sbi_fls(cmask) < total_ctrs;
+}
+
 int sbi_pmu_ctr_fw_read(uint32_t cidx, uint64_t *cval)
 {
 	int event_idx_type;
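The new pmu_ctr_idx_validate() centralizes the base/mask sanity check that three call sites previously open-coded. The cmask test matters: sbi_fls() is not defined for zero, and the highest selected counter index is cbase plus the index of cmask's top bit. A standalone model (not the OpenSBI sources) that can be compiled to check the boundary cases:

#include <stdbool.h>
#include <stdio.h>

/* Standalone model: fls_ul() stands in for sbi_fls() (index of the
 * highest set bit); total_ctrs is hardware plus firmware counters. */
static unsigned long total_ctrs = 35; /* illustrative: 19 hw + 16 fw */

static int fls_ul(unsigned long v)
{
	int r = -1;
	while (v) { v >>= 1; r++; }
	return r;
}

static bool ctr_idx_validate(unsigned long cbase, unsigned long cmask)
{
	/* Reject empty masks and any selected index >= total_ctrs */
	return cmask && cbase + fls_ul(cmask) < total_ctrs;
}

int main(void)
{
	printf("%d\n", ctr_idx_validate(0, 0));    /* 0: empty mask */
	printf("%d\n", ctr_idx_validate(32, 0x7)); /* 1: counters 32..34 */
	printf("%d\n", ctr_idx_validate(33, 0x7)); /* 0: counter 35 out of range */
	return 0;
}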
@@ -309,11 +330,11 @@ int sbi_pmu_add_raw_event_counter_map(uint64_t select, uint64_t select_mask, u32
 void sbi_pmu_ovf_irq()
 {
 	/*
-	 * We need to disable LCOFIP before returning to S-mode or we will loop
-	 * on LCOFIP being triggered
+	 * We need to disable the overflow irq before returning to S-mode or we will loop
+	 * on an irq being triggered
 	 */
-	csr_clear(CSR_MIE, MIP_LCOFIP);
-	sbi_sse_inject_event(SBI_SSE_EVENT_LOCAL_PMU);
+	csr_clear(CSR_MIE, sbi_pmu_irq_mask());
+	sbi_sse_inject_event(SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW);
 }

 static int pmu_ctr_enable_irq_hw(int ctr_idx)
@@ -344,7 +365,7 @@ static int pmu_ctr_enable_irq_hw(int ctr_idx)
 	 * Otherwise, there will be race conditions where we may clear the bit
 	 * the software is yet to handle the interrupt.
 	 */
-	if (!(mip_val & MIP_LCOFIP)) {
+	if (!(mip_val & sbi_pmu_irq_mask())) {
 		mhpmevent_curr &= of_mask;
 		csr_write_num(mhpmevent_csr, mhpmevent_curr);
 	}
@@ -405,11 +426,21 @@ int sbi_pmu_irq_bit(void)
 	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();

 	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
-		return MIP_LCOFIP;
+		return IRQ_PMU_OVF;

 	if (pmu_dev && pmu_dev->hw_counter_irq_bit)
 		return pmu_dev->hw_counter_irq_bit();

-	return 0;
+	return -1;
+}
+
+unsigned long sbi_pmu_irq_mask(void)
+{
+	int irq_bit = sbi_pmu_irq_bit();
+
+	if (irq_bit < 0)
+		return 0;
+
+	return BIT(irq_bit);
 }

 static int pmu_ctr_start_fw(struct sbi_pmu_hart_state *phs,
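sbi_pmu_irq_bit() now returns an interrupt number (IRQ_PMU_OVF, the local-counter-overflow interrupt, bit 13 per the Sscofpmf extension) or a negative value when no overflow interrupt exists, and sbi_pmu_irq_mask() converts that into a CSR bitmask. Returning an empty mask keeps csr_set()/csr_clear() callers branch-free. A compilable model of the two helpers:

#include <stdio.h>

#define BIT(n) (1UL << (n))
#define IRQ_PMU_OVF 13 /* LCOF interrupt number per Sscofpmf */

/* Model: a negative return means "no PMU overflow irq on this hart". */
static int pmu_irq_bit(int have_sscofpmf)
{
	return have_sscofpmf ? IRQ_PMU_OVF : -1;
}

static unsigned long pmu_irq_mask(int have_sscofpmf)
{
	int bit = pmu_irq_bit(have_sscofpmf);

	/* An empty mask makes csr_set()/csr_clear() harmless no-ops. */
	return bit < 0 ? 0 : BIT(bit);
}

int main(void)
{
	printf("mask with Sscofpmf:    %#lx\n", pmu_irq_mask(1)); /* 0x2000 */
	printf("mask without Sscofpmf: %#lx\n", pmu_irq_mask(0)); /* 0 */
	return 0;
}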
@@ -427,7 +458,7 @@ static int pmu_ctr_start_fw(struct sbi_pmu_hart_state *phs,
 		    !pmu_dev->fw_counter_write_value ||
 		    !pmu_dev->fw_counter_start) {
 			return SBI_EINVAL;
 		}

 	if (ival_update)
 		pmu_dev->fw_counter_write_value(phs->hartid,
@@ -447,6 +478,61 @@ static int pmu_ctr_start_fw(struct sbi_pmu_hart_state *phs,
 	return 0;
 }

+static void pmu_update_inhibit_flags(unsigned long flags, uint64_t *mhpmevent_val)
+{
+	if (flags & SBI_PMU_CFG_FLAG_SET_VUINH)
+		*mhpmevent_val |= MHPMEVENT_VUINH;
+	if (flags & SBI_PMU_CFG_FLAG_SET_VSINH)
+		*mhpmevent_val |= MHPMEVENT_VSINH;
+	if (flags & SBI_PMU_CFG_FLAG_SET_UINH)
+		*mhpmevent_val |= MHPMEVENT_UINH;
+	if (flags & SBI_PMU_CFG_FLAG_SET_SINH)
+		*mhpmevent_val |= MHPMEVENT_SINH;
+}
+
+static int pmu_update_hw_mhpmevent(struct sbi_pmu_hw_event *hw_evt, int ctr_idx,
+				   unsigned long flags, unsigned long eindex,
+				   uint64_t data)
+{
+	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
+	const struct sbi_platform *plat = sbi_platform_ptr(scratch);
+	uint64_t mhpmevent_val;
+
+	/* Get the final mhpmevent value to be written from platform */
+	mhpmevent_val = sbi_platform_pmu_xlate_to_mhpmevent(plat, eindex, data);
+
+	if (!mhpmevent_val || ctr_idx < 3 || ctr_idx >= SBI_PMU_HW_CTR_MAX)
+		return SBI_EFAIL;
+
+	/**
+	 * Always set the OVF bit(disable interrupts) and inhibit counting of
+	 * events in M-mode. The OVF bit should be enabled during the start call.
+	 */
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
+		mhpmevent_val = (mhpmevent_val & ~MHPMEVENT_SSCOF_MASK) |
+				MHPMEVENT_MINH | MHPMEVENT_OF;
+
+	if (pmu_dev && pmu_dev->hw_counter_disable_irq)
+		pmu_dev->hw_counter_disable_irq(ctr_idx);
+
+	/* Update the inhibit flags based on inhibit flags received from supervisor */
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
+		pmu_update_inhibit_flags(flags, &mhpmevent_val);
+
+	if (pmu_dev && pmu_dev->hw_counter_filter_mode)
+		pmu_dev->hw_counter_filter_mode(flags, ctr_idx);
+
+#if __riscv_xlen == 32
+	csr_write_num(CSR_MHPMEVENT3 + ctr_idx - 3, mhpmevent_val & 0xFFFFFFFF);
+	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
+		csr_write_num(CSR_MHPMEVENT3H + ctr_idx - 3,
+			      mhpmevent_val >> BITS_PER_LONG);
+#else
+	csr_write_num(CSR_MHPMEVENT3 + ctr_idx - 3, mhpmevent_val);
+#endif
+
+	return 0;
+}
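pmu_update_hw_mhpmevent() reaches the right event CSR by arithmetic: counters 0-2 are cycle/time/instret, so programmable counter N is programmed through mhpmeventN at CSR number CSR_MHPMEVENT3 + N - 3, and on RV32 the 64-bit value is split across mhpmeventN and mhpmeventNh. A small standalone model of that indexing (CSR numbers per the privileged spec; the sample value is illustrative):

#include <stdint.h>
#include <stdio.h>

/* mhpmevent3 is CSR 0x323; higher programmable counters follow it. */
#define CSR_MHPMEVENT3 0x323

static unsigned int mhpmevent_csr_for(int ctr_idx)
{
	/* valid for 3 <= ctr_idx <= 31 */
	return CSR_MHPMEVENT3 + ctr_idx - 3;
}

int main(void)
{
	uint64_t val = 0xC000000000000021ULL; /* OF | MINH | event code (illustrative) */

	printf("ctr 3  -> csr %#x\n", mhpmevent_csr_for(3));  /* 0x323 */
	printf("ctr 10 -> csr %#x\n", mhpmevent_csr_for(10)); /* 0x32a */
	/* RV32 split: low half to mhpmeventN, high half to mhpmeventNh */
	printf("lo %#lx hi %#lx\n", (unsigned long)(uint32_t)val,
	       (unsigned long)(val >> 32));
	return 0;
}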
 int sbi_pmu_ctr_start(unsigned long cbase, unsigned long cmask,
 		      unsigned long flags, uint64_t ival)
 {
@@ -462,7 +548,7 @@ int sbi_pmu_ctr_start(unsigned long cbase, unsigned long cmask,
 	int i, cidx;
 	uint64_t edata;

-	if ((cbase + sbi_fls(cmask)) >= total_ctrs)
+	if (!pmu_ctr_idx_validate(cbase, cmask))
 		return ret;

 	if (flags & SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT)
@@ -483,9 +569,20 @@ int sbi_pmu_ctr_start(unsigned long cbase, unsigned long cmask,
 				: 0x0;
 			ret = pmu_ctr_start_fw(phs, cidx, event_code, edata,
 					       ival, bUpdate);
-		}
-		else
+		} else {
+			if (cidx >= 3) {
+				struct sbi_pmu_hw_event_config *ev_cfg =
+					&phs->hw_counters_cfg[cidx];
+
+				ret = pmu_update_hw_mhpmevent(&hw_event_map[cidx], cidx,
+							      ev_cfg->flags,
+							      phs->active_events[cidx],
+							      ev_cfg->event_data);
+				if (ret)
+					return ret;
+			}
 			ret = pmu_ctr_start_hw(cidx, ival, bUpdate);
+		}
 	}

 	return ret;
@@ -567,8 +664,8 @@ int sbi_pmu_ctr_stop(unsigned long cbase, unsigned long cmask,
 	uint32_t event_code;
 	int i, cidx;

-	if ((cbase + sbi_fls(cmask)) >= total_ctrs)
-		return SBI_EINVAL;
+	if (!pmu_ctr_idx_validate(cbase, cmask))
+		return ret;

 	if (flag & SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT)
 		return SBI_ENO_SHMEM;
@@ -591,68 +688,13 @@ int sbi_pmu_ctr_stop(unsigned long cbase, unsigned long cmask,
 		}
 	}

-	/* Clear MIP_LCOFIP to avoid spurious interrupts */
+	/* Clear PMU overflow interrupt to avoid spurious ones */
 	if (phs->sse_enabled)
-		csr_clear(CSR_MIP, MIP_LCOFIP);
+		csr_clear(CSR_MIP, sbi_pmu_irq_mask());

 	return ret;
 }

-static void pmu_update_inhibit_flags(unsigned long flags, uint64_t *mhpmevent_val)
-{
-	if (flags & SBI_PMU_CFG_FLAG_SET_VUINH)
-		*mhpmevent_val |= MHPMEVENT_VUINH;
-	if (flags & SBI_PMU_CFG_FLAG_SET_VSINH)
-		*mhpmevent_val |= MHPMEVENT_VSINH;
-	if (flags & SBI_PMU_CFG_FLAG_SET_UINH)
-		*mhpmevent_val |= MHPMEVENT_UINH;
-	if (flags & SBI_PMU_CFG_FLAG_SET_SINH)
-		*mhpmevent_val |= MHPMEVENT_SINH;
-}
-
-static int pmu_update_hw_mhpmevent(struct sbi_pmu_hw_event *hw_evt, int ctr_idx,
-				   unsigned long flags, unsigned long eindex,
-				   uint64_t data)
-{
-	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
-	const struct sbi_platform *plat = sbi_platform_ptr(scratch);
-	uint64_t mhpmevent_val;
-
-	/* Get the final mhpmevent value to be written from platform */
-	mhpmevent_val = sbi_platform_pmu_xlate_to_mhpmevent(plat, eindex, data);
-
-	if (!mhpmevent_val || ctr_idx < 3 || ctr_idx >= SBI_PMU_HW_CTR_MAX)
-		return SBI_EFAIL;
-
-	/**
-	 * Always set the OVF bit(disable interrupts) and inhibit counting of
-	 * events in M-mode. The OVF bit should be enabled during the start call.
-	 */
-	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
-		mhpmevent_val = (mhpmevent_val & ~MHPMEVENT_SSCOF_MASK) |
-				MHPMEVENT_MINH | MHPMEVENT_OF;
-
-	if (pmu_dev && pmu_dev->hw_counter_disable_irq)
-		pmu_dev->hw_counter_disable_irq(ctr_idx);
-
-	/* Update the inhibit flags based on inhibit flags received from supervisor */
-	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
-		pmu_update_inhibit_flags(flags, &mhpmevent_val);
-
-	if (pmu_dev && pmu_dev->hw_counter_filter_mode)
-		pmu_dev->hw_counter_filter_mode(flags, ctr_idx);
-
-#if __riscv_xlen == 32
-	csr_write_num(CSR_MHPMEVENT3 + ctr_idx - 3, mhpmevent_val & 0xFFFFFFFF);
-	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
-		csr_write_num(CSR_MHPMEVENT3H + ctr_idx - 3,
-			      mhpmevent_val >> BITS_PER_LONG);
-#else
-	csr_write_num(CSR_MHPMEVENT3 + ctr_idx - 3, mhpmevent_val);
-#endif
-
-	return 0;
-}
-
 static int pmu_fixed_ctr_update_inhibit_bits(int fixed_ctr, unsigned long flags)
 {
 	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
@@ -722,12 +764,13 @@ static int pmu_ctr_find_hw(struct sbi_pmu_hart_state *phs,
 		return SBI_EINVAL;

 	/**
-	 * If Sscof is present try to find the programmable counter for
-	 * cycle/instret as well.
+	 * If Sscofpmf or Andes PMU is present, try to find
+	 * the programmable counter for cycle/instret as well.
 	 */
 	fixed_ctr = pmu_ctr_find_fixed_hw(event_idx);
 	if (fixed_ctr >= 0 &&
-	    !sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
+	    !sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF) &&
+	    !sbi_hart_has_extension(scratch, SBI_HART_EXT_XANDESPMU))
 		return pmu_fixed_ctr_update_inhibit_bits(fixed_ctr, flags);

 	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_11)
@@ -763,6 +806,7 @@ static int pmu_ctr_find_hw(struct sbi_pmu_hart_state *phs,
 			continue;
 		/* We found a valid counter that is not started yet */
 		ctr_idx = cbase;
+		break;
 		}
 	}
@@ -800,7 +844,7 @@ static int pmu_ctr_find_fw(struct sbi_pmu_hart_state *phs,
 		cidx = i + cbase;
 		if (cidx < num_hw_ctrs || total_ctrs <= cidx)
 			continue;
-		if (phs->active_events[i] != SBI_PMU_EVENT_IDX_INVALID)
+		if (phs->active_events[cidx] != SBI_PMU_EVENT_IDX_INVALID)
 			continue;
 		if (SBI_PMU_FW_PLATFORM == event_code &&
 		    pmu_dev && pmu_dev->fw_counter_match_encoding) {
@@ -810,7 +854,7 @@ static int pmu_ctr_find_fw(struct sbi_pmu_hart_state *phs,
 			continue;
 		}

-		return i;
+		return cidx;
 	}

 	return SBI_ENOTSUPP;
@@ -828,8 +872,7 @@ int sbi_pmu_ctr_cfg_match(unsigned long cidx_base, unsigned long cidx_mask,
 	int ret, event_type, ctr_idx = SBI_ENOTSUPP;
 	u32 event_code;

-	/* Do a basic sanity check of counter base & mask */
-	if ((cidx_base + sbi_fls(cidx_mask)) >= total_ctrs)
+	if (!pmu_ctr_idx_validate(cidx_base, cidx_mask))
 		return SBI_EINVAL;

 	event_type = pmu_event_validate(phs, event_idx, event_data);
@@ -857,12 +900,20 @@ int sbi_pmu_ctr_cfg_match(unsigned long cidx_base, unsigned long cidx_mask,
 		/* Any firmware counter can be used track any firmware event */
 		ctr_idx = pmu_ctr_find_fw(phs, cidx_base, cidx_mask,
 					  event_code, event_data);
-		if (event_code == SBI_PMU_FW_PLATFORM)
+		if ((event_code == SBI_PMU_FW_PLATFORM) && (ctr_idx >= num_hw_ctrs))
 			phs->fw_counters_data[ctr_idx - num_hw_ctrs] =
 				event_data;
 	} else {
 		ctr_idx = pmu_ctr_find_hw(phs, cidx_base, cidx_mask, flags,
 					  event_idx, event_data);
+		if (ctr_idx >= 0) {
+			struct sbi_pmu_hw_event_config *ev_cfg =
+				&phs->hw_counters_cfg[ctr_idx];
+
+			ev_cfg->event_data = event_data;
+			/* Remove flags that are used in match call only */
+			ev_cfg->flags = flags & SBI_PMU_CFG_EVENT_MASK;
+		}
 	}

 	if (ctr_idx < 0)
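The new hw_counters_cfg[] bookkeeping closes a gap: mhpmevent programming is lost when a hart goes through non-retentive suspend, so sbi_pmu_ctr_cfg_match() now records event_data plus the long-lived flags (SBI_PMU_CFG_EVENT_MASK keeps only the flags that still matter after the match call), and sbi_pmu_ctr_start() replays them via pmu_update_hw_mhpmevent(). A standalone model of the save-then-replay flow (names mirror the diff; the mask value is illustrative):

#include <stdint.h>
#include <stdio.h>

#define CFG_EVENT_MASK 0xffULL /* illustrative: keep only long-lived flags */

struct hw_event_config {
	uint64_t event_data;
	uint64_t flags;
};

static struct hw_event_config hw_cfg[32];

static void ctr_cfg_match(int ctr_idx, uint64_t event_data, uint64_t flags)
{
	/* Save at match time... */
	hw_cfg[ctr_idx].event_data = event_data;
	hw_cfg[ctr_idx].flags = flags & CFG_EVENT_MASK;
}

static void ctr_start(int ctr_idx)
{
	/* ...replay at start time, e.g. after a non-retentive suspend. */
	printf("reprogram ctr %d: data=%#lx flags=%#lx\n", ctr_idx,
	       (unsigned long)hw_cfg[ctr_idx].event_data,
	       (unsigned long)hw_cfg[ctr_idx].flags);
}

int main(void)
{
	ctr_cfg_match(4, 0x10019, 0x06); /* hypothetical raw event + flags */
	ctr_start(4);
	return 0;
}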
@@ -1003,7 +1054,7 @@ int sbi_pmu_event_get_info(unsigned long shmem_phys_lo, unsigned long shmem_phys
 				     SBI_DOMAIN_READ | SBI_DOMAIN_WRITE))
 		return SBI_ERR_INVALID_ADDRESS;

-	sbi_hart_map_saddr(shmem_phys_lo, shmem_size);
+	sbi_hart_protection_map_range(shmem_phys_lo, shmem_size);
 	einfo = (struct sbi_pmu_event_info *)(shmem_phys_lo);

 	for (i = 0; i < num_events; i++) {
@@ -1037,7 +1088,7 @@ int sbi_pmu_event_get_info(unsigned long shmem_phys_lo, unsigned long shmem_phys
 		}
 	}

-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range(shmem_phys_lo, shmem_size);

 	return 0;
 }
@@ -1086,30 +1137,43 @@ void sbi_pmu_exit(struct sbi_scratch *scratch)
 static void pmu_sse_enable(uint32_t event_id)
 {
-	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
+	unsigned long irq_mask = sbi_pmu_irq_mask();

-	phs->sse_enabled = true;
-	csr_clear(CSR_MIDELEG, sbi_pmu_irq_bit());
-	csr_clear(CSR_MIP, MIP_LCOFIP);
-	csr_set(CSR_MIE, MIP_LCOFIP);
+	csr_set(CSR_MIE, irq_mask);
 }

 static void pmu_sse_disable(uint32_t event_id)
 {
-	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
+	unsigned long irq_mask = sbi_pmu_irq_mask();

-	csr_clear(CSR_MIE, MIP_LCOFIP);
-	csr_clear(CSR_MIP, MIP_LCOFIP);
-	csr_set(CSR_MIDELEG, sbi_pmu_irq_bit());
-	phs->sse_enabled = false;
+	csr_clear(CSR_MIE, irq_mask);
+	csr_clear(CSR_MIP, irq_mask);
 }

 static void pmu_sse_complete(uint32_t event_id)
 {
-	csr_set(CSR_MIE, MIP_LCOFIP);
+	csr_set(CSR_MIE, sbi_pmu_irq_mask());
+}
+
+static void pmu_sse_register(uint32_t event_id)
+{
+	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
+
+	phs->sse_enabled = true;
+	csr_clear(CSR_MIDELEG, sbi_pmu_irq_mask());
+}
+
+static void pmu_sse_unregister(uint32_t event_id)
+{
+	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
+
+	phs->sse_enabled = false;
+	csr_set(CSR_MIDELEG, sbi_pmu_irq_mask());
 }

 static const struct sbi_sse_cb_ops pmu_sse_cb_ops = {
+	.register_cb = pmu_sse_register,
+	.unregister_cb = pmu_sse_unregister,
 	.enable_cb = pmu_sse_enable,
 	.disable_cb = pmu_sse_disable,
 	.complete_cb = pmu_sse_complete,
@@ -1152,9 +1216,10 @@ int sbi_pmu_init(struct sbi_scratch *scratch, bool cold_boot)
 			return SBI_EINVAL;

 		total_ctrs = num_hw_ctrs + SBI_PMU_FW_CTR_MAX;
-	}

-	sbi_sse_set_cb_ops(SBI_SSE_EVENT_LOCAL_PMU, &pmu_sse_cb_ops);
+		if (sbi_pmu_irq_bit() >= 0)
+			sbi_sse_add_event(SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, &pmu_sse_cb_ops);
+	}

 	phs = pmu_get_hart_state_ptr(scratch);
 	if (!phs) {


@@ -14,18 +14,31 @@
 #include <sbi/sbi_scratch.h>
 #include <sbi/sbi_string.h>

-u32 last_hartindex_having_scratch = 0;
-u32 hartindex_to_hartid_table[SBI_HARTMASK_MAX_BITS + 1] = { -1U };
-struct sbi_scratch *hartindex_to_scratch_table[SBI_HARTMASK_MAX_BITS + 1] = { 0 };
+#define DEFAULT_SCRATCH_ALLOC_ALIGN __SIZEOF_POINTER__
+
+u32 sbi_scratch_hart_count;
+u32 hartindex_to_hartid_table[SBI_HARTMASK_MAX_BITS] = { [0 ... SBI_HARTMASK_MAX_BITS-1] = -1U };
+struct sbi_scratch *hartindex_to_scratch_table[SBI_HARTMASK_MAX_BITS];

 static spinlock_t extra_lock = SPIN_LOCK_INITIALIZER;
 static unsigned long extra_offset = SBI_SCRATCH_EXTRA_SPACE_OFFSET;

+/*
+ * Get the alignment size.
+ * Return DEFAULT_SCRATCH_ALLOC_ALIGN or riscv,cbom_block_size
+ */
+static unsigned long sbi_get_scratch_alloc_align(void)
+{
+	const struct sbi_platform *plat = sbi_platform_thishart_ptr();
+
+	if (!plat || !plat->cbom_block_size)
+		return DEFAULT_SCRATCH_ALLOC_ALIGN;
+	return plat->cbom_block_size;
+}
+
 u32 sbi_hartid_to_hartindex(u32 hartid)
 {
-	u32 i;
-
-	for (i = 0; i <= last_hartindex_having_scratch; i++)
+	sbi_for_each_hartindex(i)
 		if (hartindex_to_hartid_table[i] == hartid)
 			return i;
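All table walks now go through sbi_for_each_hartindex(), bounded by the new sbi_scratch_hart_count instead of the old last_hartindex_having_scratch. The macro body itself is not shown in this diff; a plausible shape, stated as an assumption, is:

/* Assumed shape of the iterator (the real definition lives in
 * include/sbi/sbi_scratch.h, which is not shown in this diff): */
#define sbi_for_each_hartindex(__i) \
	for (u32 __i = 0; __i < sbi_scratch_hart_count; __i++)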
@@ -36,27 +49,30 @@ typedef struct sbi_scratch *(*hartid2scratch)(ulong hartid, ulong hartindex);

 int sbi_scratch_init(struct sbi_scratch *scratch)
 {
-	u32 i, h;
+	u32 h, hart_count;
 	const struct sbi_platform *plat = sbi_platform_ptr(scratch);

-	for (i = 0; i < plat->hart_count; i++) {
+	hart_count = plat->hart_count;
+	if (hart_count > SBI_HARTMASK_MAX_BITS)
+		hart_count = SBI_HARTMASK_MAX_BITS;
+	sbi_scratch_hart_count = hart_count;
+
+	sbi_for_each_hartindex(i) {
 		h = (plat->hart_index2id) ? plat->hart_index2id[i] : i;
 		hartindex_to_hartid_table[i] = h;
 		hartindex_to_scratch_table[i] =
 			((hartid2scratch)scratch->hartid_to_scratch)(h, i);
 	}

-	last_hartindex_having_scratch = plat->hart_count - 1;
-
 	return 0;
 }

 unsigned long sbi_scratch_alloc_offset(unsigned long size)
 {
-	u32 i;
 	void *ptr;
 	unsigned long ret = 0;
 	struct sbi_scratch *rscratch;
+	unsigned long scratch_alloc_align = 0;

 	/*
 	 * We have a simple brain-dead allocator which never expects
@@ -70,8 +86,14 @@ unsigned long sbi_scratch_alloc_offset(unsigned long size)
 	if (!size)
 		return 0;

-	size += __SIZEOF_POINTER__ - 1;
-	size &= ~((unsigned long)__SIZEOF_POINTER__ - 1);
+	scratch_alloc_align = sbi_get_scratch_alloc_align();
+
+	/*
+	 * We let the allocation align to cacheline bytes to avoid livelock on
+	 * certain platforms due to atomic variables from the same cache line.
+	 */
+	size += scratch_alloc_align - 1;
+	size &= ~(scratch_alloc_align - 1);

 	spin_lock(&extra_lock);
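The add-and-mask pair rounds size up to scratch_alloc_align, which is correct only for power-of-two alignments (a pointer size or a cache-block size both qualify, since align - 1 is then a mask of the low bits). A compilable model:

#include <stdio.h>

/* Round size up to a power-of-two alignment. */
static unsigned long align_up(unsigned long size, unsigned long align)
{
	size += align - 1;      /* push past the next boundary... */
	size &= ~(align - 1);   /* ...then clear the low bits */
	return size;
}

int main(void)
{
	printf("%lu\n", align_up(20, 8));  /* 24 */
	printf("%lu\n", align_up(64, 64)); /* 64 */
	printf("%lu\n", align_up(65, 64)); /* 128 */
	return 0;
}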
@@ -85,7 +107,7 @@ done:
 	spin_unlock(&extra_lock);

 	if (ret) {
-		for (i = 0; i <= sbi_scratch_last_hartindex(); i++) {
+		sbi_for_each_hartindex(i) {
 			rscratch = sbi_hartindex_to_scratch(i);
 			if (!rscratch)
 				continue;


@@ -15,6 +15,7 @@
 #include <sbi/sbi_error.h>
 #include <sbi/sbi_fifo.h>
 #include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
 #include <sbi/sbi_heap.h>
 #include <sbi/sbi_hsm.h>
 #include <sbi/sbi_ipi.h>
@@ -23,6 +24,7 @@
 #include <sbi/sbi_pmu.h>
 #include <sbi/sbi_sse.h>
 #include <sbi/sbi_scratch.h>
+#include <sbi/sbi_slist.h>
 #include <sbi/sbi_string.h>
 #include <sbi/sbi_trap.h>
@@ -39,21 +41,11 @@
 #define EVENT_IS_GLOBAL(__event_id) ((__event_id) & SBI_SSE_EVENT_GLOBAL_BIT)

-static const uint32_t supported_events[] = {
-	SBI_SSE_EVENT_LOCAL_RAS,
-	SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
-	SBI_SSE_EVENT_GLOBAL_RAS,
-	SBI_SSE_EVENT_LOCAL_PMU,
-	SBI_SSE_EVENT_LOCAL_SOFTWARE,
-	SBI_SSE_EVENT_GLOBAL_SOFTWARE,
-};
-
-#define EVENT_COUNT array_size(supported_events)
-
 #define sse_event_invoke_cb(_event, _cb, ...) \
 	{ \
-		if (_event->cb_ops && _event->cb_ops->_cb) \
-			_event->cb_ops->_cb(_event->event_id, ##__VA_ARGS__); \
+		const struct sbi_sse_cb_ops *__ops = _event->info->cb_ops; \
+		if (__ops && __ops->_cb) \
+			__ops->_cb(_event->event_id, ##__VA_ARGS__); \
 	}

 struct sse_entry_state {
@@ -110,7 +102,7 @@ struct sbi_sse_event {
 	struct sbi_sse_event_attrs attrs;
 	uint32_t event_id;
 	u32 hartindex;
-	const struct sbi_sse_cb_ops *cb_ops;
+	struct sse_event_info *info;
 	struct sbi_dlist node;
 };
@@ -167,6 +159,12 @@ struct sse_global_event {
 	spinlock_t lock;
 };

+struct sse_event_info {
+	uint32_t event_id;
+	const struct sbi_sse_cb_ops *cb_ops;
+	SBI_SLIST_NODE(sse_event_info);
+};
+
 static unsigned int local_event_count;
 static unsigned int global_event_count;
 static struct sse_global_event *global_events;
@@ -180,6 +178,58 @@ static u32 sse_ipi_inject_event = SBI_IPI_EVENT_MAX;

 static int sse_ipi_inject_send(unsigned long hartid, uint32_t event_id);

+struct sse_event_info global_software_event = {
+	.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
+	SBI_SLIST_NODE_INIT(NULL),
+};
+
+struct sse_event_info local_software_event = {
+	.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
+	SBI_SLIST_NODE_INIT(&global_software_event),
+};
+
+static SBI_SLIST_HEAD(supported_events, sse_event_info) =
+	SBI_SLIST_HEAD_INIT(&local_software_event);
+
+/*
+ * This array is used to distinguish between standard event and platform
+ * events in order to return SBI_ERR_NOT_SUPPORTED for them.
+ */
+static const uint32_t standard_events[] = {
+	SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
+	SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
+	SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
+	SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
+	SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
+	SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
+	SBI_SSE_EVENT_LOCAL_SOFTWARE,
+	SBI_SSE_EVENT_GLOBAL_SOFTWARE,
+};
+
+static bool sse_is_standard_event(uint32_t event_id)
+{
+	int i;
+
+	for (i = 0; i < array_size(standard_events); i++) {
+		if (event_id == standard_events[i])
+			return true;
+	}
+
+	return false;
+}
+
+static struct sse_event_info *sse_event_info_get(uint32_t event_id)
+{
+	struct sse_event_info *info;
+
+	SBI_SLIST_FOR_EACH_ENTRY(info, supported_events) {
+		if (info->event_id == event_id)
+			return info;
+	}
+
+	return NULL;
+}
+
 static unsigned long sse_event_state(struct sbi_sse_event *e)
 {
 	return e->attrs.status & SBI_SSE_ATTR_STATUS_STATE_MASK;
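The fixed supported-event table becomes an intrusive singly linked list: two statically initialized software events seed it, and sbi_sse_add_event() (later in this diff) pushes heap-allocated nodes onto it. A standalone model of the same pattern, with illustrative event IDs; the SBI_SLIST_* macros are assumed to expand to an equivalent head/next arrangement:

#include <stdio.h>

struct event_info {
	unsigned int event_id;
	struct event_info *next; /* intrusive link */
};

/* Two statically chained seed nodes, newest at the head. */
static struct event_info global_sw = { 0x80000001u, NULL };       /* illustrative id */
static struct event_info local_sw  = { 0x00000001u, &global_sw }; /* illustrative id */
static struct event_info *supported = &local_sw;

static void add_event(struct event_info *info)
{
	info->next = supported; /* push onto the list head */
	supported = info;
}

int main(void)
{
	struct event_info pmu_ovf = { 0x00000002u, NULL };

	add_event(&pmu_ovf);
	for (struct event_info *i = supported; i; i = i->next)
		printf("event %#x\n", i->event_id);
	return 0;
}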
@@ -244,30 +294,41 @@ static void sse_event_set_state(struct sbi_sse_event *e,
 	e->attrs.status |= new_state;
 }

-static struct sbi_sse_event *sse_event_get(uint32_t event_id)
+static int sse_event_get(uint32_t event_id, struct sbi_sse_event **eret)
 {
 	unsigned int i;
 	struct sbi_sse_event *e;
 	struct sse_hart_state *shs;

+	if (!eret)
+		return SBI_EINVAL;
+
 	if (EVENT_IS_GLOBAL(event_id)) {
 		for (i = 0; i < global_event_count; i++) {
 			e = &global_events[i].event;
 			if (e->event_id == event_id) {
 				spin_lock(&global_events[i].lock);
-				return e;
+				*eret = e;
+				return SBI_SUCCESS;
 			}
 		}
 	} else {
 		shs = sse_thishart_state_ptr();
 		for (i = 0; i < local_event_count; i++) {
 			e = &shs->local_events[i];
-			if (e->event_id == event_id)
-				return e;
+			if (e->event_id == event_id) {
+				*eret = e;
+				return SBI_SUCCESS;
+			}
 		}
 	}

-	return NULL;
+	/* Check if the event is a standard one but not supported */
+	if (sse_is_standard_event(event_id))
+		return SBI_ENOTSUPP;
+
+	/* If not supported nor a standard event, it is invalid */
+	return SBI_EINVAL;
 }

 static void sse_event_put(struct sbi_sse_event *e)
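With the reworked sse_event_get(), callers receive a status code and the event through an out-pointer, so "standard but not implemented" (SBI_ENOTSUPP) becomes distinguishable from "not an event at all" (SBI_EINVAL). The caller shape, as the later hunks repeat it; do_something() stands in for the per-call work:

/* Sketch of the caller pattern (matches the sbi_sse_enable()/
 * sbi_sse_disable() hunks below; do_something() is hypothetical). */
int sbi_sse_do_something(uint32_t event_id)
{
	struct sbi_sse_event *e;
	int ret;

	ret = sse_event_get(event_id, &e); /* locks global events on success */
	if (ret)
		return ret;                /* SBI_ENOTSUPP or SBI_EINVAL */

	ret = do_something(e);
	sse_event_put(e);                  /* always release the event */

	return ret;
}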
@@ -328,7 +389,7 @@ static int sse_event_set_hart_id_check(struct sbi_sse_event *e,
 	struct sbi_domain *hd = sbi_domain_thishart_ptr();

 	if (!sse_event_is_global(e))
-		return SBI_EBAD_RANGE;
+		return SBI_EDENIED;

 	if (!sbi_domain_is_assigned_hart(hd, sbi_hartid_to_hartindex(hartid)))
 		return SBI_EINVAL;
@@ -367,10 +428,12 @@ static int sse_event_set_attr_check(struct sbi_sse_event *e, uint32_t attr_id,
 		return sse_event_set_hart_id_check(e, val);
 	case SBI_SSE_ATTR_INTERRUPTED_FLAGS:
-		if (val & ~(SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPP |
-			    SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPIE |
+		if (val & ~(SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP |
+			    SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE |
 			    SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV |
-			    SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP))
+			    SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP |
+			    SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP |
+			    SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT))
 			return SBI_EINVAL;
 		__attribute__((__fallthrough__));
 	case SBI_SSE_ATTR_INTERRUPTED_SEPC:
@@ -384,7 +447,13 @@ static int sse_event_set_attr_check(struct sbi_sse_event *e, uint32_t attr_id,
 		return SBI_OK;
 	default:
-		return SBI_EBAD_RANGE;
+		/*
+		 * Attribute range validity was already checked by
+		 * sbi_sse_attr_check(). If we end up here, attribute was not
+		 * handled by the above 'case' statements and thus it is
+		 * read-only.
+		 */
+		return SBI_EDENIED;
 	}
 }
@@ -452,10 +521,14 @@ static unsigned long sse_interrupted_flags(unsigned long mstatus)
 {
 	unsigned long hstatus, flags = 0;

-	if (mstatus & (MSTATUS_SPIE))
-		flags |= SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPIE;
-	if (mstatus & (MSTATUS_SPP))
-		flags |= SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPP;
+	if (mstatus & MSTATUS_SPIE)
+		flags |= SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE;
+	if (mstatus & MSTATUS_SPP)
+		flags |= SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP;
+	if (mstatus & MSTATUS_SPELP)
+		flags |= SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP;
+	if (mstatus & MSTATUS_SDT)
+		flags |= SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT;

 	if (misa_extension('H')) {
 		hstatus = csr_read(CSR_HSTATUS);
@@ -513,9 +586,13 @@ static void sse_event_inject(struct sbi_sse_event *e,
 	regs->a7 = e->attrs.entry.arg;
 	regs->mepc = e->attrs.entry.pc;

-	/* Return to S-mode with virtualization disabled */
-	regs->mstatus &= ~(MSTATUS_MPP | MSTATUS_SIE);
+	/*
+	 * Return to S-mode with virtualization disabled, not expected landing
+	 * pad, supervisor trap disabled.
+	 */
+	regs->mstatus &= ~(MSTATUS_MPP | MSTATUS_SIE | MSTATUS_SPELP);
 	regs->mstatus |= (PRV_S << MSTATUS_MPP_SHIFT);
+	regs->mstatus |= MSTATUS_SDT;

 #if __riscv_xlen == 64
 	regs->mstatus &= ~MSTATUS_MPV;
@@ -566,13 +643,21 @@ static void sse_event_resume(struct sbi_sse_event *e,
 	regs->mstatus |= MSTATUS_SIE;

 	regs->mstatus &= ~MSTATUS_SPIE;
-	if (i_ctx->flags & SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPIE)
+	if (i_ctx->flags & SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE)
 		regs->mstatus |= MSTATUS_SPIE;

 	regs->mstatus &= ~MSTATUS_SPP;
-	if (i_ctx->flags & SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPP)
+	if (i_ctx->flags & SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP)
 		regs->mstatus |= MSTATUS_SPP;

+	regs->mstatus &= ~MSTATUS_SPELP;
+	if (i_ctx->flags & SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP)
+		regs->mstatus |= MSTATUS_SPELP;
+
+	regs->mstatus &= ~MSTATUS_SDT;
+	if (i_ctx->flags & SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT)
+		regs->mstatus |= MSTATUS_SDT;
+
 	regs->a7 = i_ctx->a7;
 	regs->a6 = i_ctx->a6;
 	csr_write(CSR_SEPC, i_ctx->sepc);
@@ -653,8 +738,7 @@ static void sse_ipi_inject_process(struct sbi_scratch *scratch)
 	/* Mark all queued events as pending */
 	while (!sbi_fifo_dequeue(sse_inject_fifo_r, &evt)) {
-		e = sse_event_get(evt.event_id);
-		if (!e)
+		if (sse_event_get(evt.event_id, &e))
 			continue;

 		sse_event_set_pending(e);
@@ -696,10 +780,9 @@ static int sse_inject_event(uint32_t event_id, unsigned long hartid)
 	int ret;
 	struct sbi_sse_event *e;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	ret = sse_event_get(event_id, &e);
+	if (ret)
+		return ret;

 	/* In case of global event, provided hart_id is ignored */
 	if (sse_event_is_global(e))
@@ -788,9 +871,9 @@ int sbi_sse_enable(uint32_t event_id)
 	int ret;
 	struct sbi_sse_event *e;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	ret = sse_event_get(event_id, &e);
+	if (ret)
+		return ret;

 	sse_enabled_event_lock(e);
 	ret = sse_event_enable(e);
@@ -805,9 +888,9 @@ int sbi_sse_disable(uint32_t event_id)
 	int ret;
 	struct sbi_sse_event *e;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	ret = sse_event_get(event_id, &e);
+	if (ret)
+		return ret;

 	sse_enabled_event_lock(e);
 	ret = sse_event_disable(e);
@@ -826,7 +909,7 @@ int sbi_sse_hart_mask(void)
 		return SBI_EFAIL;

 	if (state->masked)
-		return SBI_EALREADY_STARTED;
+		return SBI_EALREADY_STOPPED;

 	state->masked = true;
@@ -841,7 +924,7 @@ int sbi_sse_hart_unmask(void)
 		return SBI_EFAIL;

 	if (!state->masked)
-		return SBI_EALREADY_STOPPED;
+		return SBI_EALREADY_STARTED;

 	state->masked = false;
@@ -863,19 +946,26 @@ int sbi_sse_inject_event(uint32_t event_id)
 	return sse_inject_event(event_id, current_hartid());
 }

-int sbi_sse_set_cb_ops(uint32_t event_id, const struct sbi_sse_cb_ops *cb_ops)
+int sbi_sse_add_event(uint32_t event_id, const struct sbi_sse_cb_ops *cb_ops)
 {
-	struct sbi_sse_event *e;
+	struct sse_event_info *info;

-	if (cb_ops->set_hartid_cb && !EVENT_IS_GLOBAL(event_id))
+	/* Do not allow adding an event twice */
+	info = sse_event_info_get(event_id);
+	if (info)
+		return SBI_EALREADY;
+
+	if (cb_ops && cb_ops->set_hartid_cb && !EVENT_IS_GLOBAL(event_id))
 		return SBI_EINVAL;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	info = sbi_zalloc(sizeof(*info));
+	if (!info)
+		return SBI_ENOMEM;

-	e->cb_ops = cb_ops;
-	sse_event_put(e);
+	info->cb_ops = cb_ops;
+	info->event_id = event_id;
+	SBI_SLIST_ADD(info, supported_events);

 	return SBI_OK;
 }
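sbi_sse_add_event() replaces sbi_sse_set_cb_ops(): an event is now published (optionally with callbacks) before sse_event_count_init() walks the list, instead of patching callbacks into a fixed table afterwards. A usage sketch; my_enable/my_disable and my_driver_init are hypothetical:

/* Usage sketch only; cb_ops may be NULL for an event with no callbacks. */
static void my_enable(uint32_t event_id)  { /* arm the event source */ }
static void my_disable(uint32_t event_id) { /* quiesce the event source */ }

static const struct sbi_sse_cb_ops my_cb_ops = {
	.enable_cb  = my_enable,
	.disable_cb = my_disable,
};

static int my_driver_init(void)
{
	int ret = sbi_sse_add_event(SBI_SSE_EVENT_LOCAL_SOFTWARE, &my_cb_ops);

	/* SBI_EALREADY means this event id was registered first elsewhere. */
	return (ret == SBI_EALREADY) ? 0 : ret;
}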
@@ -943,11 +1033,11 @@ int sbi_sse_read_attrs(uint32_t event_id, uint32_t base_attr_id,
 	if (ret)
 		return ret;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	ret = sse_event_get(event_id, &e);
+	if (ret)
+		return ret;

-	sbi_hart_map_saddr(output_phys_lo, sizeof(unsigned long) * attr_count);
+	sbi_hart_protection_map_range(output_phys_lo, sizeof(unsigned long) * attr_count);

 	/*
 	 * Copy all attributes at once since struct sse_event_attrs is matching
@@ -960,7 +1050,7 @@ int sbi_sse_read_attrs(uint32_t event_id, uint32_t base_attr_id,
 	attrs = (unsigned long *)output_phys_lo;
 	copy_attrs(attrs, &e_attrs[base_attr_id], attr_count);

-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range(output_phys_lo, sizeof(unsigned long) * attr_count);

 	sse_event_put(e);
@@ -975,7 +1065,7 @@ static int sse_write_attrs(struct sbi_sse_event *e, uint32_t base_attr_id,
 	uint32_t id, end_id = base_attr_id + attr_count;
 	unsigned long *attrs = (unsigned long *)input_phys;

-	sbi_hart_map_saddr(input_phys, sizeof(unsigned long) * attr_count);
+	sbi_hart_protection_map_range(input_phys, sizeof(unsigned long) * attr_count);

 	for (id = base_attr_id; id < end_id; id++) {
 		val = attrs[attr++];
@@ -991,7 +1081,7 @@ static int sse_write_attrs(struct sbi_sse_event *e, uint32_t base_attr_id,
 	}

 out:
-	sbi_hart_unmap_saddr();
+	sbi_hart_protection_unmap_range(input_phys, sizeof(unsigned long) * attr_count);

 	return ret;
 }
@@ -1008,9 +1098,9 @@ int sbi_sse_write_attrs(uint32_t event_id, uint32_t base_attr_id,
 	if (ret)
 		return ret;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	ret = sse_event_get(event_id, &e);
+	if (ret)
+		return ret;

 	ret = sse_write_attrs(e, base_attr_id, attr_count, input_phys_lo);
 	sse_event_put(e);
@@ -1033,9 +1123,9 @@ int sbi_sse_register(uint32_t event_id, unsigned long handler_entry_pc,
 				 SBI_DOMAIN_EXECUTE))
 		return SBI_EINVALID_ADDR;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	ret = sse_event_get(event_id, &e);
+	if (ret)
+		return ret;

 	ret = sse_event_register(e, handler_entry_pc, handler_entry_arg);
 	sse_event_put(e);
@@ -1048,9 +1138,9 @@ int sbi_sse_unregister(uint32_t event_id)
 	int ret;
 	struct sbi_sse_event *e;

-	e = sse_event_get(event_id);
-	if (!e)
-		return SBI_EINVAL;
+	ret = sse_event_get(event_id, &e);
+	if (ret)
+		return ret;

 	ret = sse_event_unregister(e);
 	sse_event_put(e);
@@ -1058,9 +1148,10 @@ int sbi_sse_unregister(uint32_t event_id)
 	return ret;
 }

-static void sse_event_init(struct sbi_sse_event *e, uint32_t event_id)
+static void sse_event_init(struct sbi_sse_event *e, struct sse_event_info *info)
 {
-	e->event_id = event_id;
+	e->event_id = info->event_id;
+	e->info = info;
 	e->hartindex = current_hartindex();
 	e->attrs.hartid = current_hartid();
 	/* Declare all events as injectable */
@@ -1069,10 +1160,10 @@ static void sse_event_init(struct sbi_sse_event *e, struct sse_event_info *info)
 static void sse_event_count_init()
 {
-	unsigned int i;
+	struct sse_event_info *info;

-	for (i = 0; i < EVENT_COUNT; i++) {
-		if (EVENT_IS_GLOBAL(supported_events[i]))
+	SBI_SLIST_FOR_EACH_ENTRY(info, supported_events) {
+		if (EVENT_IS_GLOBAL(info->event_id))
 			global_event_count++;
 		else
 			local_event_count++;
@@ -1082,18 +1173,19 @@ static void sse_event_count_init()
 static int sse_global_init()
 {
 	struct sbi_sse_event *e;
-	unsigned int i, ev = 0;
+	unsigned int ev = 0;
+	struct sse_event_info *info;

 	global_events = sbi_zalloc(sizeof(*global_events) * global_event_count);
 	if (!global_events)
 		return SBI_ENOMEM;

-	for (i = 0; i < EVENT_COUNT; i++) {
-		if (!EVENT_IS_GLOBAL(supported_events[i]))
+	SBI_SLIST_FOR_EACH_ENTRY(info, supported_events) {
+		if (!EVENT_IS_GLOBAL(info->event_id))
 			continue;

 		e = &global_events[ev].event;
-		sse_event_init(e, supported_events[i]);
+		sse_event_init(e, info);
 		SPIN_LOCK_INIT(global_events[ev].lock);

 		ev++;
@@ -1104,16 +1196,16 @@ static int sse_global_init()
 static void sse_local_init(struct sse_hart_state *shs)
 {
-	unsigned int i, ev = 0;
+	unsigned int ev = 0;
+	struct sse_event_info *info;

 	SBI_INIT_LIST_HEAD(&shs->enabled_event_list);
 	SPIN_LOCK_INIT(shs->enabled_event_lock);

-	for (i = 0; i < EVENT_COUNT; i++) {
-		if (EVENT_IS_GLOBAL(supported_events[i]))
+	SBI_SLIST_FOR_EACH_ENTRY(info, supported_events) {
+		if (EVENT_IS_GLOBAL(info->event_id))
 			continue;

-		sse_event_init(&shs->local_events[ev++], supported_events[i]);
+		sse_event_init(&shs->local_events[ev++], info);
 	}
 }
@@ -1143,7 +1235,8 @@ int sbi_sse_init(struct sbi_scratch *scratch, bool cold_boot)
 		}

 		sse_inject_fifo_mem_off = sbi_scratch_alloc_offset(
-			EVENT_COUNT * sizeof(struct sse_ipi_inject_data));
+			(global_event_count + local_event_count) *
+			sizeof(struct sse_ipi_inject_data));
 		if (!sse_inject_fifo_mem_off) {
 			sbi_scratch_free_offset(sse_inject_fifo_off);
 			sbi_scratch_free_offset(shs_ptr_off);
@@ -1180,7 +1273,8 @@ int sbi_sse_init(struct sbi_scratch *scratch, bool cold_boot)
 	sse_inject_mem =
 		sbi_scratch_offset_ptr(scratch, sse_inject_fifo_mem_off);

-	sbi_fifo_init(sse_inject_q, sse_inject_mem, EVENT_COUNT,
+	sbi_fifo_init(sse_inject_q, sse_inject_mem,
+		      (global_event_count + local_event_count),
 		      sizeof(struct sse_ipi_inject_data));

 	return 0;
@@ -1188,21 +1282,18 @@ int sbi_sse_init(struct sbi_scratch *scratch, bool cold_boot)
 void sbi_sse_exit(struct sbi_scratch *scratch)
 {
-	int i;
 	struct sbi_sse_event *e;
+	struct sse_event_info *info;

-	for (i = 0; i < EVENT_COUNT; i++) {
-		e = sse_event_get(supported_events[i]);
-		if (!e)
+	SBI_SLIST_FOR_EACH_ENTRY(info, supported_events) {
+		if (sse_event_get(info->event_id, &e))
 			continue;

 		if (e->attrs.hartid != current_hartid())
 			goto skip;

-		if (sse_event_state(e) > SBI_SSE_STATE_REGISTERED) {
-			sbi_printf("Event %d in invalid state at exit", i);
+		if (sse_event_state(e) > SBI_SSE_STATE_REGISTERED)
 			sse_event_set_state(e, SBI_SSE_STATE_UNUSED);
-		}

 skip:
 		sse_event_put(e);
