163 Commits

Author SHA1 Message Date
Anup Patel
057eb10b6d lib: utils/gpio: Fix RV32 compile error for designware GPIO driver
Currently, we see the following compile error in the DesignWare GPIO driver
for RV32 systems:

lib/utils/gpio/fdt_gpio_designware.c:115:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
  115 |         chip->dr = (void *)addr + (bank * 0xc);
      |                    ^
lib/utils/gpio/fdt_gpio_designware.c:116:21: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
  116 |         chip->ext = (void *)addr + (bank * 4) + 0x50;

We fix the above error using an explicit type-cast to 'unsigned long'.

Fixes: 7828eebaaa ("gpio/desginware: add Synopsys DesignWare APB GPIO support")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-07-19 11:51:59 +05:30
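The fix pattern can be sketched as a small stand-alone example (the struct layout and offsets below are illustrative stand-ins for the driver's MMIO pointers, not the actual driver definitions):

```c
#include <stdint.h>

/* Hypothetical stand-in for the driver's chip state. */
struct dw_gpio_chip {
	void *dr;
	void *ext;
};

/* On RV32, 'addr' may arrive as a 64-bit value parsed from the device
 * tree. Casting a u64 directly to a pointer triggers
 * -Werror=int-to-pointer-cast, so narrow it through 'unsigned long'
 * (the native pointer width) before doing the pointer arithmetic. */
static void dw_gpio_set_regs(struct dw_gpio_chip *chip, uint64_t addr,
			     unsigned int bank)
{
	chip->dr  = (char *)(unsigned long)addr + (bank * 0xc);
	chip->ext = (char *)(unsigned long)addr + (bank * 4) + 0x50;
}
```

On RV64 the extra cast is a no-op; on RV32 it makes the truncation explicit and silences the diagnostic.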
Anup Patel
c6a35733b7 lib: utils: Fix sbi_hartid_to_scratch() usage in ACLINT drivers
The cold_init() functions of ACLINT drivers should skip the HART
if sbi_hartid_to_scratch() returns NULL because we might be dealing
with a HART that is disabled in the device tree.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-07-09 11:04:57 +05:30
Ben Dooks
7828eebaaa gpio/desginware: add Synopsys DesignWare APB GPIO support
Add a driver for the Synopsys DesignWare APB GPIO IP block found in many
SoCs.

Signed-off-by: Ben Dooks <ben.dooks@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-07-07 10:04:59 +05:30
Heinrich Schuchardt
eb736a5118 lib: sbi_pmu: Avoid out of bounds access
On a misconfigured system we could access phs->active_events[] out of
bounds. Check that num_hw_ctrs is less than or equal to SBI_PMU_HW_CTR_MAX.

Addresses-Coverity-ID: 1566113 ("Out-of-bounds read")
Addresses-Coverity-ID: 1566114 ("Out-of-bounds write")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-07-05 09:29:24 +05:30
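The guard can be sketched as follows (a minimal example; the constant value and the function name `pmu_validate_num_hw_ctrs` are illustrative, not the actual OpenSBI code):

```c
/* Illustrative value; the real constant lives in sbi_pmu.h. */
#define SBI_PMU_HW_CTR_MAX 16

/* Reject a platform-reported hardware counter count that would let
 * later code index past active_events[SBI_PMU_HW_CTR_MAX]. */
static int pmu_validate_num_hw_ctrs(int num_hw_ctrs)
{
	if (num_hw_ctrs > SBI_PMU_HW_CTR_MAX)
		return -1; /* SBI_EINVAL in the real code */
	return 0;
}
```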
Gianluca Guida
0907de38db lib: sbi: fix comment indent
Use tabs rather than spaces.

Signed-off-by: Gianluca Guida <gianluca@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-07-05 09:25:32 +05:30
Anup Patel
2552799a1d include: Bump-up version to 1.3
This patch updates OpenSBI version to 1.3 as part of
release preparation.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-06-23 11:01:49 +05:30
Gianluca Guida
8bd666a25b lib: sbi: check A2 register in ecall_dbcn_handler.
Do not ignore register A2 (high bits of physical address) in the dbcn
handler (RV64).

Signed-off-by: Gianluca Guida <gianluca@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-06-23 08:46:07 +05:30
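The shape of the check can be sketched like this (a hedged example: the function name and error value are hypothetical, and the actual handler operates on the trap register file):

```c
#include <stdint.h>

/* The DBCN shared-memory address is passed as a (lo, hi) register
 * pair in a1/a2. On RV64 the address already fits in a1, so a
 * non-zero a2 cannot be represented and must be rejected rather
 * than silently ignored. */
static int dbcn_check_addr_rv64(uint64_t a1_lo, uint64_t a2_hi,
				uint64_t *paddr)
{
	if (a2_hi != 0)
		return -1; /* an SBI error code in the real handler */
	*paddr = a1_lo;
	return 0;
}
```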
Guo Ren
27c957a43b lib: reset: Move fdt_reset_init into generic_early_init
The fdt_reset_thead driver needs to modify the __reset_thead_csr_stub
text region for the secondary harts booting. After that, the
sbi_hart_pmp_configure may lock down the text region with M_READABLE &
M_EXECUTABLE attributes in the future. Currently, M_READABLE &
M_EXECUTABLE have no effect in M-mode; the L-bit in the pmpcfg CSR is
unused in the current OpenSBI scenario. See:

Priv-isa-spec 3.7.1.2. Locking and Privilege Mode
When the L bit is clear, any M-mode access matching the PMP entry will
succeed; the R/W/X permissions apply only to S and U modes.

That's why the current fdt_reset_thead driver still works after commit
230278dcf1 ("lib: sbi: Add separate entries for firmware RX and RW
regions"). So this patch pre-emptively fixes a latent issue with future
M-mode permission enforcement.

Fixes: 230278dcf1 ("lib: sbi: Add separate entries for firmware RX and RW regions")
Link: http://lists.infradead.org/pipermail/opensbi/2023-June/005176.html
Reported-by: Jessica Clarke <jrtc27@jrtc27.com>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-06-21 11:12:42 +05:30
Xiang W
d64942f0e4 firmware: Fix find hart index
After the loop to find the hartid completes without a match, assigning
-1 to the index causes the subsequent compare instruction ('bge') to
misbehave. Fix this.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-06-21 10:20:51 +05:30
Alexandre Ghiti
8153b2622b platform/lib: Set no-map attribute on all PMP regions
This reverts commit 6966ad0abe ("platform/lib: Allow the OS to map the
regions that are protected by PMP").

It was thought at the time of this commit that allowing the kernel to map
PMP protected regions was safe but it is actually not: for example, the
hibernation process will try to access any linear mapping page and then
will fault on such mapped PMP regions [1]. Another issue is that the
device tree specification [2] states that a !no-map region must be
declared as EfiBootServicesData/Code in the EFI memory map which would make
the PMP protected regions reclaimable by the kernel. And to circumvent
this, RISC-V edk2 diverges from the DT specification to declare those
regions as EfiReserved.

The no-map attribute was removed to allow the kernel to use hugepages
larger than 2MB to map the linear mapping to improve the performance but
actually a recent talk from Mike Rapoport [3] stated that the
performance benefit was marginal.

For all those reasons, let's mark all the PMP protected regions as "no-map".

[1] https://lore.kernel.org/linux-riscv/CAAYs2=gQvkhTeioMmqRDVGjdtNF_vhB+vm_1dHJxPNi75YDQ_Q@mail.gmail.com/
[2] "3.5.4 /reserved-memory and UEFI" https://github.com/devicetree-org/devicetree-specification/releases/download/v0.4-rc1/devicetree-specification-v0.4-rc1.pdf
[3] https://lwn.net/Articles/931406/

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-06-15 18:27:17 +05:30
Anup Patel
932be2cde1 README.md: Improve project copyright information
Over time, a lot of organizations and individuals have contributed to
the OpenSBI project, so let us add a RISC-V International copyright to
respect the contributions from all RISC-V members.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-14 11:26:06 +05:30
Anup Patel
524feec7b7 docs: Add OpenSBI logo and use it in the top-level README.md
We have an official OpenSBI logo which was designed a few months ago
and was also approved by RISC-V International. Let's add this logo
under docs and also use it in the top-level README.md.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-14 11:26:04 +05:30
Anup Patel
355796c5bc lib: utils/irqchip: Use scratch space to save per-HART IMSIC pointer
Instead of using a global array indexed by hartid, we should use
scratch space to save per-HART IMSIC pointer and IMSIC file number.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-06 16:01:14 +05:30
Anup Patel
1df52fa7e8 lib: utils/irqchip: Don't check hartid in imsic_update_hartid_table()
imsic_map_hartid_to_data() already checks the hartid before using it,
so we don't need to check it again in imsic_update_hartid_table().

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 16:51:07 +05:30
Anup Patel
b3594ac1d1 lib: utils/irqchip: Use scratch space to save per-HART PLIC pointer
Instead of using a global array indexed by hartid, we should use
scratch space to save per-HART PLIC pointer and PLIC context numbers.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 16:50:56 +05:30
Anup Patel
f0516beae0 lib: utils/timer: Use scratch space to save per-HART MTIMER pointer
Instead of using a global array indexed by hartid, we should use
scratch space to save per-HART MTIMER pointer.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 16:46:50 +05:30
Anup Patel
acbd8fce9e lib: utils/ipi: Use scratch space to save per-HART MSWI pointer
Instead of using a global array indexed by hartid, we should use
scratch space to save per-HART MSWI pointer.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 16:02:59 +05:30
Anup Patel
3c1c972cb6 lib: utils/fdt: Use heap in FDT domain parsing
Let's use heap allocation in FDT domain parsing instead of using
a fixed size global array.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:59:35 +05:30
Anup Patel
7e5636ac37 lib: utils/timer: Use heap in ACLINT MTIMER driver
Let's use heap allocation in ACLINT MTIMER driver instead of using
a fixed size global array.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:58:42 +05:30
Anup Patel
30137166c6 lib: utils/irqchip: Use heap in PLIC, APLIC and IMSIC drivers
Let's use heap allocation in PLIC, APLIC, and IMSIC irqchip drivers
instead of using a fixed size global array.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:57:58 +05:30
Anup Patel
5a8cfcdf19 lib: utils/ipi: Use heap in ACLINT MSWI driver
Let's use heap allocation in ACLINT MSWI driver instead of using
a fixed size global array.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:55:56 +05:30
Anup Patel
903e88caaf lib: utils/i2c: Use heap in DesignWare and SiFive I2C drivers
Let's use heap allocation in DesignWare and SiFive I2C drivers
instead of using a fixed size global array.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:55:45 +05:30
Anup Patel
fa5ad2e6f9 lib: utils/gpio: Use heap in SiFive and StarFive GPIO drivers
Let's use heap allocation in the SiFive and StarFive GPIO drivers
instead of using a fixed size global array.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:55:29 +05:30
Anup Patel
66daafe3ba lib: sbi: Use scratch space to save per-HART domain pointer
Instead of using a global array indexed by hartid, we should use
scratch space to save per-HART domain pointer.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:55:21 +05:30
Anup Patel
ef4542dc13 lib: sbi: Use heap for root domain creation
Let's use heap allocation in root domain creation instead of using
a fixed size global array.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:50:33 +05:30
Anup Patel
bbff53fe3b lib: sbi_pmu: Use heap for per-HART PMU state
Instead of using a global array for per-HART PMU state, we should
use the heap to allocate per-HART PMU state on demand when the HART
is initialized via the cold-boot or warm-boot path.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:48:43 +05:30
Anup Patel
2a04f70373 lib: sbi: Print scratch size and usage at boot time
Scratch space is a scarce resource, so let us print its size and
usage at boot time.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:46:22 +05:30
Anup Patel
40d36a6673 lib: sbi: Introduce simple heap allocator
Provide a simple heap allocator to manage the heap space provided
by the OpenSBI firmware and platform.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:46:09 +05:30
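The idea can be illustrated with a minimal bump-style sketch (the real OpenSBI allocator keeps a free list and supports freeing; the names, sizes, and alignment policy here are illustrative assumptions):

```c
#include <stddef.h>

#define HEAP_SIZE 4096

static unsigned char heap_mem[HEAP_SIZE];
static size_t heap_off;

/* Hand out pointer-aligned chunks from a fixed firmware heap region. */
static void *heap_alloc(size_t size)
{
	/* Round up so every allocation stays pointer-aligned. */
	size = (size + sizeof(long) - 1) & ~(sizeof(long) - 1);
	if (size == 0 || size > HEAP_SIZE - heap_off)
		return NULL;
	void *p = &heap_mem[heap_off];
	heap_off += size;
	return p;
}
```

In OpenSBI the heap region itself comes from the firmware/platform (see the related heap-size patch), not from a static array as in this sketch.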
Anup Patel
5cf9a54016 platform: Allow platforms to specify heap size
We extend struct sbi_platform and struct sbi_scratch to allow platforms
to specify the heap size to the OpenSBI firmwares. The OpenSBI firmwares
will use this information to determine the location of heap and provide
heap base address in per-HART scratch space.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-06-05 15:45:33 +05:30
Anup Patel
aad7a37705 include: sbi_scratch: Add helper macros to access data type
Reading and writing a data type in scratch space is a very common
use-case so let us add related helper macros in sbi_scratch.h.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-06-05 15:42:50 +05:30
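The pattern can be sketched with simplified offset-based accessors (the real helpers are the sbi_scratch read/write type macros operating on a struct sbi_scratch pointer; these stand-ins are illustrative only):

```c
/* Read/write a typed value at a byte offset inside a scratch area. */
#define scratch_read_type(base, type, off) \
	(*(type *)((char *)(base) + (off)))

#define scratch_write_type(base, type, off, val) \
	(*(type *)((char *)(base) + (off)) = (val))
```

Offsets are obtained at runtime from the scratch allocator, so typed accessors avoid repeating the cast-and-offset boilerplate at every call site.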
Andrew Jones
bdde2ecd27 lib: sbi: Align system suspend errors with spec
The spec says sbi_system_suspend() will return SBI_ERR_INVALID_PARAM
when "sleep_type is reserved or is platform-specific and unimplemented"
and SBI_ERR_NOT_SUPPORTED when sleep_type "is not reserved and is
implemented, but the platform does not support it due to one or more
missing dependencies." Ensure SBI_ERR_INVALID_PARAM is returned for
reserved sleep types and that the system suspend driver can choose
which of the two error types to return itself by returning an error
from its check function rather than a boolean.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-06-04 15:18:40 +05:30
Heinrich Schuchardt
df75e09956 lib: utils/ipi: buffer overrun aclint_mswi_cold_init
The parameter checks in aclint_mswi_cold_init() don't guard against a
buffer overrun.

mswi_hartid2data is defined as an array of SBI_HARTMASK_MAX_BITS entries.
The current check allows

    mswi->hart_count = ACLINT_MSWI_MAX_HARTS
    mswi->first_hartid = SBI_HARTMASK_MAX_BITS - 1.

With these values mswi_hartid2data will be accessed at index

    SBI_HARTMASK_MAX_BITS + SBI_HARTMASK_MAX_BITS - 2.

We have to check the sum of mswi->first_hartid and mswi->hart_count.

Furthermore, mswi->hart_count = 0 would not make much sense.

Addresses-Coverity-ID: 1529705 ("Out-of-bounds write")
Fixes: 5a049fe1d6 ("lib: utils/ipi: Add ACLINT MSWI library")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-06-04 15:13:50 +05:30
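The corrected guard can be sketched as follows (the constant value and function name are illustrative; the real check lives inside aclint_mswi_cold_init()):

```c
/* Illustrative value; the real constant is in sbi_hartmask.h. */
#define SBI_HARTMASK_MAX_BITS 128

/* The last hartid served by an MSWI instance is
 * first_hartid + hart_count - 1, so the *sum* must stay within the
 * hartid2data table, and an empty instance is rejected. */
static int mswi_check_range(unsigned long first_hartid,
			    unsigned long hart_count)
{
	if (!hart_count ||
	    first_hartid + hart_count > SBI_HARTMASK_MAX_BITS)
		return -1; /* SBI_EINVAL in the real code */
	return 0;
}
```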
Xiang W
122f2260b3 lib: utils: Improve fdt_timer
Remove dummy driver. Optimize fdt_timer_cold_init to exit the
loop early.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-26 12:47:22 +05:30
Xiang W
9a0bdd0c84 lib: utils: Improve fdt_ipi
Remove dummy driver. Optimize fdt_ipi_cold_init to exit the loop
early.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-26 12:43:25 +05:30
Xiang W
264d0be1fd lib: utils: Improve fdt_serial_init
A final check of all DT nodes does not necessarily find a match, so
SBI_ENODEV needs to be returned. Optimize removal of current_driver.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-26 12:37:25 +05:30
Xiang W
8b99a7f7d8 lib: sbi: Fix return of sbi_console_init
The console is not a required peripheral, so sbi_console_init() should
return success when the console does not exist.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-26 12:36:54 +05:30
Filip Filmar
d4c46e0ff1 Makefile: Dereference symlinks on install
Adds the `-L` flag (follow symlinks) to the `cp` commands used to
install `libsbi.a` and `include/sbi/*`.

This should make no difference in regular compilation. However,
it does make a difference when compiling with bazel.  Namely,
bazel's sandboxing will turn all the source files into symlinks.
After installation with `cp` the destination files will be
symlinks pointing to the sandbox symlinks. As the sandbox files
are removed when compilation ends, the just-copied symlinks
become dangling symlinks.

The resulting include files will be
unusable due to the dangling symlink issues. Adding `-L` when
copying ensures that the files obtained by executing the `install`
targets are always dereferenced to files, rather than symlinks,
eliminating this issue.

Signed-off-by: Filip Filmar <fmil@google.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-22 08:52:57 +05:30
Andrew Jones
33f1722f2b lib: sbi: Document sbi_ecall_extension members
With the introduction of the register_extensions callback the
range members (extid_start and extid_end) may now change and it
has become a bit subtle as to when a probe function should be
implemented. Document all the members and their relationship to
the register_extensions callback.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-21 16:54:02 +05:30
Andrew Jones
c3e31cbf36 lib: sbi: Remove 0/1 probe implementations
When a probe implementation just returns zero for not available and
one for available, we don't need it: the extension won't be registered
at all if probe would return zero, and the Base extension probe
handling already sets out_val to 1 when no probe function is
implemented. Currently all probe functions only return zero or one,
so remove them all.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-21 16:54:00 +05:30
Xiang W
767b5fc418 lib: sbi: Optimize probe of srst/susp
No need to do a fully comprehensive count; just find one supported
reset or suspend type.

Signed-off-by: Xiang W <wxjstz@126.com>
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-21 16:53:38 +05:30
Andrew Jones
8b952d4fcd lib: sbi: Only register available extensions
When an extension implements a probe function it means there's a
chance that the extension is not available. Use this function in the
register_extensions callback to determine if the extension should be
registered at all. Where the probe implementation is simple, just
open code the check.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-21 16:53:02 +05:30
Andrew Jones
042f0c3ea2 lib: sbi: pmu: Remove unnecessary probe function
The absence of a probe implementation means that the extension is
always available. Remove the implementation for the PMU extension,
which does no checking, and indeed even has a comment saying it's
always available.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-21 16:30:29 +05:30
Andrew Jones
e307ba7d46 lib: sbi: Narrow vendor extension range
The vendor extension ID range is large, but at runtime at most
a single ID will be available. Narrow the range in the
register_extensions callback. After narrowing, we no longer
need to check that the extension ID is correct in the other
callbacks, as those callbacks will never be invoked with
anything other than the single ID.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-21 16:29:46 +05:30
Andrew Jones
f58c14090f lib: sbi: Introduce register_extensions extension callback
Rather than registering all extensions on their behalf in
sbi_ecall_init(), introduce another extension callback and
invoke that instead. For now, implement each callback by
simply registering the extension, which means this patch
has no intended functional change. In later patches, extension
callbacks will be modified to choose when to register and to
possibly narrow the extension ID range prior to registering.
When an extension range needs to remove IDs, leaving gaps, then
multiple invocations of sbi_ecall_register_extension() may be
used. In summary, later patches for current extensions and the
introductions of future extensions will use the new callback to
ensure that only valid extension IDs from the initial range,
which are also available, will be registered.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-21 16:27:38 +05:30
Xiang W
dc1c7db05e lib: sbi: Simplify BITS_PER_LONG definition
No need to use #elif ladder when defining BITS_PER_LONG.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-11 12:56:56 +05:30
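One way to express the simplification (a sketch; the exact definition OpenSBI settled on may differ) is to derive the value from the compiler's word size instead of an #if/#elif ladder over target macros:

```c
#include <limits.h>

/* __SIZEOF_LONG__ is predefined by GCC and Clang, so one line
 * replaces the per-architecture conditional ladder. */
#define BITS_PER_LONG (8 * __SIZEOF_LONG__)
```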
Xiang W
6bc02dede8 lib: sbi: Simplify sbi_ipi_process remove goto
Simplify sbi_ipi_process() by removing goto statement.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-11 12:46:42 +05:30
Xiang W
4e3353057a lib: sbi: Remove unnecessary semicolon
We have redundant semicolons at quite a few places, so let's remove them.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-05-11 12:31:34 +05:30
Tan En De
7919530308 lib: sbi: Add debug print when sbi_pmu_init fails
Since sbi_pmu_init() is called after sbi_console_init(), sbi_printf()
can be used to report a debug message when sbi_pmu_init() fails.

Signed-off-by: Tan En De <ende.tan@starfivetech.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2023-04-20 14:19:44 +05:30
Anup Patel
f5dfd99139 lib: sbi: Don't check SBI error range for legacy console getchar
The legacy console getchar SBI call returns character value in
the sbiret.error field so the "SBI_SUCCESS < ret" check in
sbi_ecall_handler() results in unwanted error prints for the
legacy console getchar SBI call. Let's suppress these unwanted
error prints.

Fixes: 67b2a40892 ("lib: sbi: sbi_ecall: Check the range of SBI error")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-04-17 09:48:13 +05:30
Alexandre Ghiti
674e0199b2 lib: sbi: Fix counter index calculation for SBI_PMU_CFG_FLAG_SKIP_MATCH
As per the SBI specification, we should "unconditionally select the first
counter from the set of counters specified by the counter_idx_base and
counter_idx_mask", so implement this behaviour.

Suggested-by: Atish Patra <atishp@atishpatra.org>
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2023-04-17 09:26:28 +05:30
Alexandre Ghiti
bdb3c42bca lib: sbi: Do not clear active_events for cycle/instret when stopping
Those events are enabled by default and should not be reset afterwards,
since when using SBI_PMU_CFG_FLAG_SKIP_MATCH this leads to inaccessible
counters after the first use.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
2023-04-17 09:26:26 +05:30
Bin Meng
e41dbb507c firmware: Change to use positive offset to access relocation entries
The code currently skips the very first relocation entry, but later
references the elements of a relocation entry using negative offsets.

Change to use positive offsets so that there is no need to skip the
first relocation entry.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-04-17 08:55:55 +05:30
Bin Meng
f692289ed4 firmware: Optimize loading relocation type
't5' already contains the relocation type, so don't bother reloading it.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-04-17 08:55:49 +05:30
Lad Prabhakar
eeab500a65 platform: generic: andes/renesas: Add SBI EXT to check for enabling IOCP errata
I/O Coherence Port (IOCP) provides an AXI interface for connecting
external non-caching masters, such as DMA controllers. The accesses
from IOCP are coherent with D-Caches and L2 Cache.

IOCP is a specification option and is disabled on the Renesas RZ/Five
SoC (which is based on the Andes AX45MP core); for this reason, IP
blocks using DMA will fail.

As a workaround for SoCs with IOCP disabled, cache management
operations (CMO) need to be handled by software. First, OpenSBI
configures the memory region as "Memory, Non-cacheable, Bufferable"
and passes this region as a global shared DMA pool via a DT node.
With DMA_GLOBAL_POOL enabled, all DMA allocations happen from this
region, and synchronization callbacks are implemented to synchronize
DMA transactions.

SBI_EXT_ANDES_IOCP_SW_WORKAROUND checks if the IOCP errata should be
applied to handle cache management.

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Yu Chien Peter Lin <peterlin@andestech.com>
2023-04-14 17:35:04 +05:30
Xiang W
bf40e07f6f lib: sbi: Optimize sbi_tlb queue waiting
When the tlb_fifo is full, the sender waits, delaying IPI updates to
other harts. This patch optimizes that case.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-04-14 13:45:30 +05:30
Xiang W
80078ab088 sbi: tlb: Simplify to tlb_process_count/tlb_process function
tlb_process_count is only used with count = 1, so refactor it into
tlb_process_once and add a return value that can be reused in
tlb_process.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-04-13 13:49:57 +05:30
Xiang W
24dde46b8d lib: sbi: Optimize sbi_ipi
Originally, sbi_ipi processed harts one by one. After this
optimization, IPIs are sent to all harts first and then waited on
together.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-04-13 12:44:50 +05:30
Xiang W
66fa925353 lib: sbi: Optimize sbi_tlb
Originally, the process and sync phases of sbi_tlb had to wait for
each other. Avoid this by using atomic addition and subtraction.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-04-13 12:37:55 +05:30
Heinrich Schuchardt
2868f26131 lib: utils: fdt_fixup: avoid buffer overrun
fdt_reserved_memory_fixup() uses filtered_order[PMP_COUNT]. The index
must not reach PMP_COUNT.

Fixes: 199189bd1c ("lib: utils: Mark only the largest region as reserved in FDT")
Addresses-Coverity-ID: 1536994 ("Out-of-bounds write")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-04-07 11:22:25 +05:30
Gabriel Somlo
ee016a7bb0 docs: Correct FW_JUMP_FDT_ADDR calculation example
When using `PLATFORM=generic` defaults, the kernel is loaded at
`FW_JUMP_ADDR`, and the FDT is loaded at `FW_JUMP_FDT_ADDR`.

Therefore, the maximum kernel size before `FW_JUMP_FDT_ADDR` must
be increased is `$(( FW_JUMP_FDT_ADDR - FW_JUMP_ADDR ))`.

The example calculation assumes `rv64`, and is wrong to boot
(off by 0x200000). Fix it and update it for the general case.

Signed-off-by: Gabriel Somlo <gsomlo@gmail.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-04-07 11:16:05 +05:30
Yu Chien Peter Lin
edc9914392 lib: sbi_pmu: Align the event type offset as per SBI specification
The bits encoded in event_idx[19:16] indicate the event type, with
an offset of 16 instead of 20.

Fixes: 13d40f21d5 ("lib: sbi: Add PMU support")
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-04-07 10:06:59 +05:30
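The corrected extraction can be sketched as follows (the macro and function names are illustrative; the point is that the shift for the event type field is 16, matching event_idx[19:16]):

```c
#include <stdint.h>

#define SBI_PMU_EVENT_IDX_TYPE_OFFSET 16
#define SBI_PMU_EVENT_IDX_TYPE_MASK \
	(0xfU << SBI_PMU_EVENT_IDX_TYPE_OFFSET)

/* Extract the event type from bits [19:16] of a 20-bit event index.
 * A shift of 20 (the old bug) would always yield 0 for valid indices. */
static uint32_t pmu_event_type(uint32_t event_idx)
{
	return (event_idx & SBI_PMU_EVENT_IDX_TYPE_MASK)
	       >> SBI_PMU_EVENT_IDX_TYPE_OFFSET;
}
```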
Sunil V L
91767d093b lib: sbi: Print the CPPC device name
If a CPPC device is registered by the platform, print its name.

Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-04-07 09:36:42 +05:30
Sunil V L
33caae8069 lib: sbi: Implement SBI CPPC extension
Implement the SBI CPPC extension. This extension is only available
when the OpenSBI platform provides a CPPC device to the generic
library.

Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-04-07 09:36:01 +05:30
Sunil V L
45ba2b203c include: Add defines for SBI CPPC extension
Add SBI CPPC extension related defines to the
SBI ecall interface header.

Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-04-07 09:35:06 +05:30
Mayuresh Chitale
8e90259da8 lib: sbi_hart: clear mip csr during hart init
If the mip.SEIP bit is not cleared, it causes spurious external
interrupts on the HiFive Unmatched board, breaking its boot-up.
Hence, bring the mip CSR to a known state during hart init to avoid
spurious interrupts.

Fixes: d9e7368 ("firmware: Not to clear all the MIP")
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-04-06 18:52:03 +05:30
Anup Patel
30b9e7ee14 lib: sbi_hsm: Fix sbi_hsm_hart_start() for platform with hart hotplug
It is possible that a platform supports hart hotplug (i.e. both
hart_start and hart_stop callbacks are available) and all harts start
simultaneously at platform boot-time. In this situation,
sbi_hsm_hart_start() will call hsm_device_hart_start() for secondary
harts at platform boot-time, which will fail because the secondary
harts were already started.

To fix the above, we call hsm_device_hart_start() from
sbi_hsm_hart_start() only when entry_count is the same as init_count
for the secondary hart.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-04-06 18:48:19 +05:30
Anup Patel
f64dfcd2b5 lib: sbi: Introduce sbi_entry_count() function
We introduce sbi_entry_count() function which counts the number
of times a HART enters OpenSBI via cold-boot or warm-boot path.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-04-06 18:48:15 +05:30
Xiang W
73ab11dfb0 lib: sbi: Fix how to check whether the domain contains fw_region
Because the firmware is split into rw/rx segments, it can no longer be
recorded as a single root_fw_region. Solve this by adding a
fw_region_inited flag to sbi_domain.

Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-04-06 16:14:35 +05:30
Xiang W
ed88a63b90 lib: sbi_scratch: Optimize the alignment code for alloc size
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-04-06 16:00:45 +05:30
Evgenii Shatokhin
c6a092cd80 lib: sbi: Clear IPIs before init_warm_startup in non-boot harts
Since commit 50d4fde1c5 ("lib: Remove redundant sbi_platform_ipi_clear()
calls"), the IPI sent from the boot hart in wake_coldboot_harts() is not
cleared in the secondary harts until they reach sbi_ipi_init(). However,
sbi_hsm_init() and sbi_hsm_hart_wait() are called earlier, so a secondary
hart might enter sbi_hsm_hart_wait() with an already pending IPI.

sbi_hsm_hart_wait() makes sure the hart leaves the loop only when it is
actually ready, so a pending unrelated IPI should not cause safety issues.
However, it might be inefficient on certain hardware, because it prevents
"wfi" from stalling the hart even if the hardware supports this, making the
hart needlessly spin in a "busy-wait" loop.

This behaviour can be observed, for example, in a QEMU VM (QEMU 7.2.0) with
"-machine virt" running a Linux guest. Inserting delays in
sbi_hsm_hart_start() allows reproducing the issue more reliably.

The comment in wait_for_coldboot() suggests that the initial IPI is needed
in the warm resume path, so let us clear it before init_warm_startup()
only.

To do this, sbi_ipi_raw_clear() was created similar to sbi_ipi_raw_send().

Signed-off-by: Evgenii Shatokhin <e.shatokhin@yadro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:13:40 +05:30
Evgenii Shatokhin
e8e9ed3790 lib: sbi: Set the state of a hart to START_PENDING after the hart is ready
When a boot hart executes sbi_hsm_hart_start() to start a secondary hart,
next_arg1, next_addr and next_mode for the latter are stored in the scratch
area after the state has been set to SBI_HSM_STATE_START_PENDING.

The secondary hart waits in the loop with wfi() in sbi_hsm_hart_wait() at
that time. However, the "wfi" instruction is not guaranteed to wait
for an interrupt to be received by the hart; it is just a hint for
the CPU.
According to RISC-V Privileged Architectures spec. v20211203, even an
implementation of "wfi" as "nop" is legal.

So, the secondary hart might leave the loop in sbi_hsm_hart_wait() as soon as
its state has been set to SBI_HSM_STATE_START_PENDING, even if it got no
IPI or it got an IPI unrelated to sbi_hsm_hart_start(). This could lead to
the following race condition when booting Linux, for example:

  Boot hart (#0)                        Secondary hart (#1)
  runs Linux startup code               waits in sbi_hsm_hart_wait()

  sbi_ecall(SBI_EXT_HSM,
            SBI_EXT_HSM_HART_START,
            ...)
  enters sbi_hsm_hart_start()
  sets state of hart #1 to START_PENDING
                                        leaves sbi_hsm_hart_wait()
                                        runs to the end of init_warmboot()
                                        returns to scratch->next_addr
                                        (next_addr can be garbage here)

  sets next_addr, etc. for hart #1
  (no good: hart #1 has already left)

  sends IPI to hart #1
  (no good either)

If this happens, the secondary hart jumps to a wrong next_addr at the end
of init_warmboot(), which leads to a system hang or crash.

To reproduce the issue more reliably, one could add a delay in
sbi_hsm_hart_start() after setting the hart's state but before sending
IPI to that hart:

    hstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_STOPPED,
                            SBI_HSM_STATE_START_PENDING);
    ...
  + sbi_timer_mdelay(10);
    init_count = sbi_init_count(hartid);
    rscratch->next_arg1 = arg1;
    rscratch->next_addr = saddr;

The issue can be reproduced, for example, in a QEMU VM with '-machine virt'
and 2 or more CPUs, with Linux as the guest OS.

This patch moves writing of next_arg1, next_addr and next_mode for the
secondary hart before setting its state to SBI_HSM_STATE_START_PENDING.

In theory, it is possible that two or more harts enter sbi_hsm_hart_start()
for the same target hart simultaneously. To make sure the current hart has
exclusive access to the scratch area of the target hart at that point, a
per-hart 'start_ticket' is used. It is initially 0. The current hart tries
to acquire the ticket first (set it to 1) at the beginning of
sbi_hsm_hart_start() and only proceeds if it has successfully acquired it.

The target hart reads next_addr, etc., and then releases the ticket
(sets it to 0) before calling sbi_hart_switch_mode(). This way, even if
some other hart manages to enter sbi_hsm_hart_start() after the ticket has
been released but before the target hart jumps to next_addr, it will not
cause problems.

atomic_cmpxchg() already has "acquire" semantics, among other things, so
no additional barriers are needed in hsm_start_ticket_acquire(). No hart
can perform or observe the update of *rscratch before setting of
'start_ticket' to 1.

atomic_write() only imposes ordering of writes, so an explicit barrier is
needed in hsm_start_ticket_release() to ensure its "release" semantics.
This guarantees that reads of scratch->next_addr, etc., in
sbi_hsm_hart_start_finish() cannot happen after 'start_ticket' has been
released.

Signed-off-by: Evgenii Shatokhin <e.shatokhin@yadro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:12:43 +05:30
Evgenii Shatokhin
d56049e299 lib: sbi: Refactor the calls to sbi_hart_switch_mode()
Move them into sbi_hsm_hart_start_finish() and sbi_hsm_hart_resume_finish()
to make them easier to manage.

This will be used by subsequent patches.

Suggested-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Evgenii Shatokhin <e.shatokhin@yadro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:00:36 +05:30
Mayuresh Chitale
c631a7da27 lib: sbi_pmu: Add hartid parameter to PMU device ops
A platform-specific firmware event handler may leverage the hartid to
program per-hart registers for a given counter.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:00:33 +05:30
Mayuresh Chitale
57d3aa3b0d lib: sbi_pmu: Introduce fw_counter_write_value API
Add fw_counter_write_value API for platform specific firmware events
which separates setting the counter's initial value from starting the
counter. This is required so that the fw_event_data array can be reused
to save the event data received.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:00:30 +05:30
Mayuresh Chitale
641d2e9f38 lib: sbi_pmu: Use dedicated event code for platform firmware events
For all platform specific firmware event operations use the dedicated
event code (0xFFFF) when matching against the input firmware event.
Furthermore save the real platform specific firmware event code received as
the event data for future use.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:00:28 +05:30
Mayuresh Chitale
b51ddffcc0 lib: sbi_pmu: Update sbi_pmu dev ops
Update fw_event_validate_code, fw_counter_match_code and fw_counter_start
ops which used a 32 bit event code to use the 64 bit event data instead.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:00:26 +05:30
Mayuresh Chitale
548e4b4b28 lib: sbi_pmu: Rename fw_counter_value
Rename and reuse fw_counter_value array to save both the counter values
for the SBI firmware events and event data for the SBI platform specific
firmware events.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-03-10 14:00:24 +05:30
Mayuresh Chitale
60c358e677 lib: sbi_pmu: Reserve space for implementation specific firmware events
We reserve space for SBI implementation specific custom firmware
events which can be used by M-mode firmwares and HS-mode hypervisors
for their own use. This reserved space is intentionally large to
ensure that the SBI implementation has enough room to accommodate
platform-specific firmware events as well.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:00:22 +05:30
Mayuresh Chitale
51951d9e9a lib: sbi_pmu: Implement sbi_pmu_counter_fw_read_hi
To support 64 bit firmware counters on RV32 systems, we implement
sbi_pmu_counter_fw_read_hi() which returns the upper 32 bits of
the firmware counter value. On RV64 (or higher) systems, this
function will always return zero.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 14:00:07 +05:30
Mayuresh Chitale
1fe8dc9955 lib: sbi_pmu: add callback for counter width
This patch adds a callback to fetch the number of bits implemented for a
custom firmware counter. If the callback fails or is not implemented then
width defaults to 63.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-10 13:46:52 +05:30
Mayuresh Chitale
506144f398 lib: serial: Cadence: Enable compatibility for cdns,uart-r1p8
The Cadence driver does not use the RX byte status feature and hence can
be advertised to be compatible with cdns,uart-r1p8 as well.

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-09 21:12:35 +05:30
Minda Chen
568ea49490 platform: starfive: add PMIC power ops in JH7110 visionfive2 board
Add reboot and poweroff support. The reboot and shutdown PM ops
shut down the JH7110 PMU device power domain and access the
on-board PMIC registers through I2C.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-09 21:11:20 +05:30
Minda Chen
e9d08bd99c lib: utils/i2c: Add minimal StarFive jh7110 I2C driver
The StarFive JH7110 I2C IP is a Synopsys DesignWare block. Add a
minimal StarFive I2C driver to read/send bytes over the I2C bus.

This allows querying information and performing operations on the
on-board PMIC, as well as power-off and reset.

Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-09 21:00:57 +05:30
Bin Meng
4b28afc98b make: Add a command line option for debugging OpenSBI
Add a new make command line option "make DEBUG=1" to disable the
default -O2 compiler optimizations.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-01 09:23:17 +05:30
minda.chen
908be1b85c gpio/starfive: add gpio driver and support gpio reset
Add a GPIO driver and a GPIO reset function for the StarFive
JH7110 SoC platform.

Signed-off-by: minda.chen <minda.chen@starfivetech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-03-01 08:59:33 +05:30
Andrew Jones
5ccebf0a7e platform: generic: Add system suspend test
When the system-suspend-test property is present in the domain config
node as shown below, implement system suspend with a simple 5 second
delay followed by a WFI. This allows testing system suspend when the
low-level firmware doesn't support it.

  / {
    chosen {
      opensbi-domains {
          compatible = "opensbi,domain,config";
          system-suspend-test;
      };

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:50:51 +05:30
Andrew Jones
37558dccbe docs: Correct opensbi-domain property name
Replace the commas with dashes to correct the name.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:49:07 +05:30
Andrew Jones
7c964e279c lib: sbi: Implement system suspend
Fill the implementation of the system suspend ecall. A platform
implementation of the suspend callbacks is still required for this
to do anything.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:47:35 +05:30
Andrew Jones
c9917b6108 lib: sbi: Add system_suspend_allowed domain property
Only privileged domains should be allowed to suspend the entire
system. Give the root domain this property by default and allow
other domains to be given the property by specifying it in the
DT.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:45:28 +05:30
Andrew Jones
73623a0aca lib: sbi: Add system suspend skeleton
Add the SUSP extension probe and ecall support, but for now the
system suspend function is just a stub.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:43:52 +05:30
Andrew Jones
8a40306371 lib: sbi_hsm: Export some functions
An upcoming patch can make use of a few internal HSM functions if
we export them.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:40:21 +05:30
Andrew Jones
07673fc063 lib: sbi_hsm: Remove unnecessary include
Also remove a superfluous semicolon and add a blank line.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:38:55 +05:30
Andrew Jones
b1ae6ef33b lib: sbi_hsm: Move misplaced comment
While non-retentive suspend is not allowed for M-mode, the comment
at the top of sbi_hsm_hart_suspend() implied suspend wasn't allowed
for M-mode at all. Move the comment above the mode check, which sits
inside the check for a non-retentive suspend type.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:38:00 +05:30
Andrew Jones
c88e039ec2 lib: sbi_hsm: Ensure errors are consistent with spec
HSM functions define when SBI_ERR_INVALID_PARAM should be returned.
Ensure it's not used for reasons that don't meet the definitions by
using the catch-all code, SBI_ERR_FAILED, for those reasons instead.
Also, in one case sbi_hart_suspend() may have returned SBI_ERR_DENIED,
which isn't defined for that function at all. Use SBI_ERR_FAILED for
that case too.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:36:58 +05:30
Andrew Jones
40f16a81d3 lib: sbi_hsm: Don't try to restore state on failed change
When a state change fails there's no need to restore the original
state as it remains the same.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:35:47 +05:30
Andrew Jones
1364d5adb2 lib: sbi_hsm: Factor out invalid state detection
Remove some redundant code by creating an invalid state detection
macro.

No functional change intended.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 19:34:41 +05:30
Bin Meng
17b3776c81 docs: domain_support: Update the DT example
commit 3e2f573e70 ("lib: utils: Disallow non-root domains from adding M-mode regions")
added an access permission check in __fdt_parse_region(). With the
existing DT example in the doc, OpenSBI won't boot anymore.

Let's update the DT example so that it can work out of the box.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 13:50:27 +05:30
Bin Meng
bc06ff65bf lib: utils/fdt/fdt_domain: Simplify region access permission check
The region access permission check in __fdt_parse_region() can be
simplified as masking SBI_DOMAIN_MEMREGION_{M,SU}_ACCESS_MASK is
enough.

While we are here, update the confusing comments to match the code.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 13:49:21 +05:30
Bin Meng
5a75f5309c lib: sbi/sbi_domain: cosmetic style fixes
Minor updates to the comments for language and style fixes.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 12:26:26 +05:30
Yu Chien Peter Lin
67b2a40892 lib: sbi: sbi_ecall: Check the range of SBI error
We should also check whether the returned error code is greater than
0 (SBI_SUCCESS), since positive values are not valid SBI error codes.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 11:46:08 +05:30
Lad Prabhakar
2491242282 platform: generic: renesas: rzfive: Configure the PMA region
On the Renesas RZ/Five SoC by default we want to configure 128MiB of memory
ranging from 0x58000000 as a non-cacheable + bufferable region in the PMA
and populate this region as PMA reserve DT node with shared DMA pool and
no-map flags set so that Linux drivers requesting any DMA'able memory go
through this region.

PMA node passed to the above stack:

        reserved-memory {
            #address-cells = <2>;
            #size-cells = <2>;
            ranges;

            pma_resv0@58000000 {
                compatible = "shared-dma-pool";
                reg = <0x0 0x58000000 0x0 0x08000000>;
                no-map;
                linux,dma-default;
            };
        };

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 11:36:06 +05:30
Lad Prabhakar
c10095132a platform: generic: renesas: rzfive: Add support to configure the PMA
I/O Coherence Port (IOCP) provides an AXI interface for connecting
external non-caching masters, such as DMA controllers. The accesses
from IOCP are coherent with D-Caches and L2 Cache.

IOCP is a specification option and is disabled on the Renesas RZ/Five
SoC; as a result, IP blocks using DMA will fail.

The Andes AX45MP core has a Programmable Physical Memory Attributes (PMA)
block that allows dynamic adjustment of memory attributes at runtime.
It contains a configurable number of PMA entries implemented as CSR
registers to control the attributes of memory locations of interest.
Below are the memory attributes supported:
* Device, Non-bufferable
* Device, bufferable
* Memory, Non-cacheable, Non-bufferable
* Memory, Non-cacheable, Bufferable
* Memory, Write-back, No-allocate
* Memory, Write-back, Read-allocate
* Memory, Write-back, Write-allocate
* Memory, Write-back, Read and Write-allocate

More info about PMA (section 10.3):
Link: http://www.andestech.com/wp-content/uploads/AX45MP-1C-Rev.-5.0.0-Datasheet.pdf

As a workaround for SoCs with IOCP disabled, cache management
operations (CMO) need to be handled by software. First, OpenSBI
configures the memory region as "Memory, Non-cacheable, Bufferable"
and passes this region as a global shared DMA pool via a DT node.
With DMA_GLOBAL_POOL enabled, all DMA allocations happen from this
region, and synchronization callbacks are implemented to synchronize
when doing DMA transactions.

Example PMA region passed as a DT node from OpenSBI:
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        pma_resv0@58000000 {
            compatible = "shared-dma-pool";
            reg = <0x0 0x58000000 0x0 0x08000000>;
            no-map;
            linux,dma-default;
        };
    };

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 11:35:01 +05:30
Anup Patel
31b82e0d50 include: sbi: Remove extid parameter from vendor_ext_provider() callback
The extid parameter of vendor_ext_provider() is redundant so let us
remove it.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-02-27 11:26:37 +05:30
Anup Patel
81adc62f45 lib: sbi: Align SBI vendor extension id with mvendorid CSR
As per the SBI specification, the lower 24 bits of the SBI vendor
extension ID are the same as the lower 24 bits of the mvendorid CSR.

We update the SBI vendor extension ID check based on the above.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-02-27 11:26:35 +05:30
Nylon Chen
30ea8069f4 lib: sbi_hart: Enable hcontext and scontext
According to the description in "riscv-state-enable" [0], to access
hcontext/scontext in S-mode, we need to set bit 57.

If it is not set, an "illegal instruction" exception will occur.

Link: a28bfae443/content.adoc [0]

Signed-off-by: Nylon Chen <nylon.chen@sifive.com>
Reviewed-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 11:22:11 +05:30
Shengyu Qu
4f2be40102 docs: fix typo in fw.md
In docs/firmware/fw.md, there's a configuration parameter called
FW_TEXT_ADDR, which actually should be FW_TEXT_START, so fix it.

Signed-off-by: Shengyu Qu <wiagn233@outlook.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 10:54:42 +05:30
Xiang W
6861ee996c lib: utils: fdt_fixup: Fix compile error
When building with GCC-10 or older versions, it throws the following
error:

 CC-DEP    platform/generic/lib/utils/fdt/fdt_fixup.dep
 CC        platform/generic/lib/utils/fdt/fdt_fixup.o
lib/utils/fdt/fdt_fixup.c: In function 'fdt_reserved_memory_fixup':
lib/utils/fdt/fdt_fixup.c:376:2: error: label at end of compound statement
  376 |  next_entry:
      |  ^~~~~~~~~~

Remove the goto statement.

Resolves: https://github.com/riscv-software-src/opensbi/issues/288

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
2023-02-27 10:49:09 +05:30
Bin Meng
99d09b601e include: fdt/fdt_helper: Change fdt_get_address() to return root.next_arg1
In sbi_domain_finalize(), when locating the coldboot hart's domain,
the coldboot hart's scratch->arg1 will be overwritten by the domain
configuration. However scratch->arg1 holds the FDT address of the
coldboot hart, and is still being accessed by fdt_get_address() in
the later boot process. scratch->arg1 could then contain complete
garbage and lead to a crash.

To fix this, we change fdt_get_address() to return root domain's
next_arg1 as the FDT pointer.

Resolves: https://github.com/riscv-software-src/opensbi/issues/281
Fixes: b1678af210 ("lib: sbi: Add initial domain support")
Reported-by: Marouene Boubakri <marouene.boubakri@nxp.com>
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 10:04:03 +05:30
Bin Meng
745aaecc64 platform: generic/andes: Fix ae350.c header dependency
The code calls various macros from riscv_asm.h which is not directly
included. Fix such dependency.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 10:02:29 +05:30
Bin Meng
aafcc90a87 platform: generic/allwinner: Fix sun20i-d1.c header dependency
The code calls various macros from riscv_asm.h and sbi_scratch.h
which are not directly included. Fix such dependency.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 10:00:15 +05:30
Bin Meng
321293c644 lib: utils/fdt: Fix fdt_pmu.c header dependency
The code calls sbi_scratch_thishart_ptr() from sbi_scratch.h which
is not directly included. Fix such dependency.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-27 09:59:01 +05:30
Anup Patel
65c2190b47 lib: sbi: Speed-up sbi_printf() and friends using nputs()
The sbi_printf() is slow for semihosting because it prints one
character at a time. To speed-up sbi_printf() for semihosting,
we use a temporary buffer and nputs().

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-02-10 10:30:18 +05:30
Anup Patel
29285aead0 lib: utils/serial: Implement console_puts() for semihosting
We implement console_puts() for semihosting serial driver to speed-up
semihosting based prints.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-02-10 10:04:59 +05:30
Anup Patel
c43903c4ea lib: sbi: Add console_puts() callback in the console device
We add a console_puts() callback in the console device which allows
console drivers (such as semihosting) to implement a specialized
way to output character strings.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-02-10 10:04:41 +05:30
Anup Patel
5a41a3884f lib: sbi: Implement SBI debug console extension
We implement SBI debug console extension as one of the replacement
SBI extensions. This extension is only available when OpenSBI platform
provides a console device to generic library.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
2023-02-10 09:55:18 +05:30
Anup Patel
eab48c33a1 lib: sbi: Add sbi_domain_check_addr_range() function
We add sbi_domain_check_addr_range() helper function to check
whether a given address range is accessible under a particular
domain.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-02-10 09:14:58 +05:30
Anup Patel
4e0572f57b lib: sbi: Add sbi_ngets() function
We add a new sbi_ngets() which helps us read characters into a
physical memory location.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
2023-02-09 22:30:06 +05:30
Anup Patel
0ee3a86fed lib: sbi: Add sbi_nputs() function
We add a new sbi_nputs() which helps us print a fixed number of
characters from a physical memory location.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-02-09 22:29:24 +05:30
Anup Patel
e3bf1afcc5 include: Add defines for SBI debug console extension
We add SBI debug console extension related defines to the
SBI ecall interface header.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-02-09 22:21:24 +05:30
Anup Patel
aa5dafcb5b include: sbi: Fix BSWAPx() macros for big-endian host
The BSWAPx() macros won't do any swapping on a big-endian host
because the EXTRACT_BYTE() macro picks up bytes in reverse
order. Also, EXTRACT_BYTE() will generate a compile error for
constants.

To fix this, we remove the EXTRACT_BYTE() macro and rewrite
BSWAPx() using simple mask and shift operations.

Fixes: 09b34d8cca ("include: Add support for byteorder/endianness
conversion")
Reported-by: Samuel Holland <samuel@sholland.org>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-02-09 09:31:10 +05:30
Rahul Pathak
b224ddb41f include: types: Add typedefs for endianness
If any variable or memory location follows a certain endianness,
it is important to annotate it properly so that the proper
conversion can be done before reading from or writing to it.

Also, use these new typedefs in libfdt_env.h for deriving its own
custom fdtX_t types.

Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 18:24:13 +05:30
Rahul Pathak
680bea02bf lib: utils/fdt: Use byteorder conversion functions in libfdt_env.h
FDT data is big-endian, while the CPU can be little- or big-endian
depending on the implementation. libfdt_env.h defines functions for
conversion between FDT and CPU byteorder according to the endianness.

Currently, libfdt_env.h defines custom byte-swapping macros and then
undefines them. Instead, use the generic endianness conversion
functions.

Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 18:17:42 +05:30
Rahul Pathak
09b34d8cca include: Add support for byteorder/endianness conversion
Define general byteorder conversion macros, and define endianness
conversion functions derived from them.

Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Sergey Matyukevich <sergey.matyukevich@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 18:10:39 +05:30
Jessica Clarke
642f3de9b9 Makefile: Add missing .dep files for fw_*.elf.ld
Since we don't currently create these, changes to fw_base.ldS do not
cause the preprocessed fw_*.elf.ld files to be rebuilt, and thus
incremental builds can end up failing with missing symbols if crossing
the recent commits that introduced _fw_rw_offset and then replaced it
with _fw_rw_start.

Reported-by: Ben Dooks <ben.dooks@sifive.com>
Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 16:36:55 +05:30
Andrew Jones
66b0e23a0c lib: sbi: Ensure domidx_to_domain_table is null-terminated
sbi_domain_for_each() requires domidx_to_domain_table[] to be
null-terminated. Allocate one extra element which will always
be null.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 13:42:45 +05:30
Himanshu Chauhan
199189bd1c lib: utils: Mark only the largest region as reserved in FDT
In commit 230278dcf, RX and RW regions were marked separately.
When the RW region grows (e.g. with more harts) and it isn't a
power-of-two, sbi_domain_memregion_init will upgrade the region
to the next power-of-two. This will make RX and RW both start
at the same base address, like so (with 64 harts):
Domain0 Region01 : 0x0000000080000000-0x000000008001ffff M: (R,X) S/U: ()
Domain0 Region02 : 0x0000000080000000-0x00000000800fffff M: (R,W) S/U: ()

This doesn't break the permission enforcement because of static
priorities in PMP but makes the kernel complain about the regions
overlapping each other. Like so:
[    0.000000] OF: reserved mem: OVERLAP DETECTED!
[    0.000000] mmode_resv0@80000000 (0x0000000080000000--0x0000000080020000) \
	overlaps with mmode_resv1@80000000 (0x0000000080000000--0x0000000080100000)

To fix this warning, among the multiple regions having same base
address but different sizes, add only the largest region as reserved
region during fdt fixup.

Fixes: 230278dcf (lib: sbi: Add separate entries for firmware RX and RW regions)
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 11:13:19 +05:30
Nick Hu
84d15f4f52 lib: sbi_hsm: Use csr_set to restore the MIP
If we use csr_write to restore the MIP, we may clear the SEIP bit.
In QEMU's generic behavior, if the pending bits of the PLIC are set
and we clear the SEIP, QEMU may not set it back immediately. This can
cause interrupts to go unhandled until new interrupts arrive and QEMU
sets the bits back.

Signed-off-by: Nick Hu <nick.hu@sifive.com>
Signed-off-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 10:39:21 +05:30
Nick Hu
8050081f68 firmware: Do not clear all the MIP bits
In QEMU's generic behavior, if the pending bits of the PLIC are still
set and we clear the SEIP, QEMU may not set the SEIP back immediately,
and the interrupt may go unhandled until new interrupts arrive and
QEMU sets the SEIP back.

Signed-off-by: Nick Hu <nick.hu@sifive.com>
Signed-off-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-02-08 10:39:20 +05:30
Jessica Clarke
c8ea836ee3 firmware: Fix fw_rw_offset computation in fw_base.S
It seems BFD just does totally nonsensical things for SHN_ABS symbols
when producing position-independent outputs (both -pie and -shared)
for various historical reasons, and so SHN_ABS symbols are still
subject to relocation as far as BFD is concerned (except AArch64,
which fixes it in limited cases that don’t apply here...).

The above affects the _fw_rw_offset provided through fw_base.ldS
linker script which results in OpenSBI firmware failing to boot
when loaded at an address different from FW_TEXT_START.

Fixes: c10e3fe5f9 ("firmware: Add RW section offset in scratch")
Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Reported-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Tested-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-01-27 08:39:49 +05:30
Samuel Holland
c45992cc2b platform: generic: allwinner: Advertise nonretentive suspend
Add D1's nonretentive suspend state to the devicetree so S-mode software
knows about it and can use it.

Latency and power measurements were taken on an Allwinner Nezha board:
 - Entry latency was measured from the beginning of sbi_ecall_handler()
   to before the call to wfi() in sun20i_d1_hart_suspend().
 - Exit latency was measured from the beginning of sbi_init() to before
   the call to sbi_hart_switch_mode() in init_warmboot().
 - There was a 17.5 mW benefit from non-retentive suspend compared to
   WFI, with a 170 mW cost during the 107 us entry/exit period. This
   provides a break-even point around 1040 us. Residency includes entry
   latency, so round this up to 1100 us.
 - The hardware power sequence latency (after the WFI) is assumed to be
   negligible, so set the wakeup latency to the exit latency.
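The break-even figure above is simple arithmetic: the residency at which the 17.5 mW steady-state saving pays back the 170 mW cost of the 107 us entry/exit window. A minimal sketch (the function name is illustrative):

```c
#include <assert.h>
#include <math.h>

/*
 * Residency t at which the saving equals the entry/exit energy cost:
 *   saving_mw * t == cost_mw * window_us  =>  t = cost_mw * window_us / saving_mw
 * With the measured values: 170 * 107 / 17.5 ~= 1039 us, rounded up to 1100 us.
 */
static double break_even_us(double cost_mw, double window_us, double saving_mw)
{
	return cost_mw * window_us / saving_mw;
}
```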

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Samuel Holland <samuel@sholland.org>
2023-01-24 17:30:21 +05:30
Samuel Holland
33bf917460 lib: utils: Add fdt_add_cpu_idle_states() helper function
Since the availability and latency properties of CPU idle states depend
on the specific SBI HSM implementation, it is appropriate that the idle
states are added to the devicetree at runtime by that implementation.

This helper function adds a platform-provided array of idle states to
the devicetree, following the SBI idle state binding.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Samuel Holland <samuel@sholland.org>
2023-01-24 17:30:21 +05:30
Lad Prabhakar
dea0922f86 platform: renesas/rzfive: Configure Local memory regions as part of root domain
Renesas RZ/Five RISC-V SoC has Instruction local memory and Data local
memory (ILM & DLM) mapped in the region 0x30000 - 0x4FFFF. When a
virtual address falls within this range, the MMU doesn't trigger a page
fault; it assumes the virtual address is a physical address, which can
cause undesired behaviour for statically linked applications/libraries.

To avoid this, add the ILM/DLM memory regions to the root domain region
of the PMPU with permissions set to 0x0 for S/U modes, so that any access
to these regions gets blocked, while M-mode is granted full access (R/W/X).

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-23 11:29:03 +05:30
Himanshu Chauhan
230278dcf1 lib: sbi: Add separate entries for firmware RX and RW regions
Add two entries for firmware in the root domain:

1. TEXT: fw_start to _fw_rw_offset with RX permissions
2. DATA: _fw_rw_offset to fw_size with RW permissions

These permissions are still not enforced from M-mode but lay the
groundwork for enforcing them for M-mode. SU-mode does not have
any access to these regions.

Sample output:
 Domain0 Region01  : 0x0000000080000000-0x000000008001ffff M: (R,X) S/U: ()
 Domain0 Region02  : 0x0000000080020000-0x000000008003ffff M: (R,W) S/U: ()

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-23 10:34:18 +05:30
Himanshu Chauhan
b666760bfa lib: sbi: Print the RW section offset
Print the RW section offset when the firmware base and size are
being printed.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-23 10:06:14 +05:30
Himanshu Chauhan
c10e3fe5f9 firmware: Add RW section offset in scratch
Add the RW section offset, provided by _fw_rw_offset symbol,
to the scratch structure. This will be used to program
separate pmp entry for RW section.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-23 10:06:14 +05:30
Himanshu Chauhan
2f40a99c9e firmware: Move dynsym and reladyn sections to RX section
Currently, the dynsym and reladyn sections are under RW data.
They are moved to the Read-only/Executable region.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-23 10:06:14 +05:30
Himanshu Chauhan
fefa548803 firmware: Split RO/RX and RW sections
Split the RO/RX and RW sections so that they can have
independent PMP entries with the required permissions. The
split size is ensured to be a power-of-2 as required by
PMP.

_fw_rw_offset symbol marks the beginning of the data
section.
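The power-of-2 requirement comes from PMP's NAPOT-style region matching, which can only describe naturally aligned power-of-2 regions. A minimal sketch of rounding the RX section size up to the next power of two (an illustrative helper, not the actual linker-script mechanism, which uses `ALIGN(1 << LOG2CEIL(...))`):

```c
#include <assert.h>
#include <stdint.h>

/* Round a section size up to the next power of two so the RX/RW split
 * lands on a boundary that a single NAPOT PMP entry can describe. */
static uint64_t napot_round_up(uint64_t size)
{
	uint64_t p = 1;

	while (p < size)
		p <<= 1;
	return p;
}
```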

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-23 10:06:14 +05:30
Mayuresh Chitale
a990309fa3 lib: utils: Fix reserved memory node for firmware memory
The commit 9e0ba090 introduced more fine grained permissions for memory
regions but did not update the fdt_reserved_memory_fixup() function. As
a result, fdt_reserved_memory_fixup() continued to use the older coarse
permissions, which caused the reserved memory node to not be inserted
into the DT.

To fix the above issue, we correct the flags used for memory region
permission checks in the fdt_reserved_memory_fixup() function.

Fixes: 9e0ba090 ("include: sbi: Fine grain the permissions for M and SU modes")
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-22 17:49:10 +05:30
Yu Chien Peter Lin
7aaeeab9e7 lib: reset/fdt_reset_atcwdt200: Use defined macros and function in atcsmu.h
Reuse the SMU-related macros and functions from atcsmu.h.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-22 17:33:13 +05:30
Yu Chien Peter Lin
787296ae92 platform: andes/ae350: Implement hart hotplug using HSM extension
Add hart_start() and hart_stop() callbacks for the multi-core ae350
platform. It utilizes the ATCSMU to put the harts into power-gated
deep sleep mode. The programming sequence is as follows:

1. Set the wakeup events to PCSm_WE
2. Set the sleep command to PCSm_CTL
3. Set the reset vector to HARTm_RESET_VECTOR_{LO|HI}
4. Write back and invalidate D-cache by executing the CCTL command L1D_WBINVAL_ALL
5. Disable I/D-cache by clearing mcache_ctl.{I|D}C_EN
6. Disable D-cache coherency by clearing mcache_ctl.DC_COHEN
7. Wait for mcache_ctl.DC_COHSTA to be cleared to ensure the previous step is completed
8. Execute WFI
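The eight steps above can be sketched as ordered calls; the register names in the comments come from the sequence above, while the helper names are placeholders (the real driver performs ATCSMU MMIO accesses). Steps are logged so the ordering can be checked:

```c
#include <assert.h>

/* One enumerator per step of the documented power-gating sequence. */
enum step {
	SET_WAKEUP_EVENTS,     /* 1. PCSm_WE */
	SET_SLEEP_CMD,         /* 2. PCSm_CTL */
	SET_RESET_VECTOR,      /* 3. HARTm_RESET_VECTOR_{LO|HI} */
	DCACHE_WBINVAL_ALL,    /* 4. CCTL command L1D_WBINVAL_ALL */
	DISABLE_ID_CACHE,      /* 5. clear mcache_ctl.{I|D}C_EN */
	DISABLE_DC_COHERENCY,  /* 6. clear mcache_ctl.DC_COHEN */
	WAIT_DC_COHSTA_CLEAR,  /* 7. poll mcache_ctl.DC_COHSTA == 0 */
	DO_WFI,                /* 8. wfi */
	NUM_STEPS
};

static enum step log_buf[NUM_STEPS];
static int log_len;

static void do_step(enum step s) { log_buf[log_len++] = s; }

/* Hypothetical driver body: each step stands in for the real MMIO/CSR work. */
static void ae350_hart_stop_sequence(void)
{
	do_step(SET_WAKEUP_EVENTS);
	do_step(SET_SLEEP_CMD);
	do_step(SET_RESET_VECTOR);
	do_step(DCACHE_WBINVAL_ALL);
	do_step(DISABLE_ID_CACHE);
	do_step(DISABLE_DC_COHERENCY);
	do_step(WAIT_DC_COHSTA_CLEAR);
	do_step(DO_WFI);
}
```

The ordering matters: the D-cache must be written back and caching disabled before coherency is dropped, or the hart could lose dirty lines on its way into the power-gated state.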

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-22 17:33:03 +05:30
Yu Chien Peter Lin
9c4eb3521e lib: utils: atcsmu: Add Andes System Management Unit support
This patch adds atcsmu support for Andes AE350 platforms. The SMU
provides system management capabilities, including clock, reset
and power control based on power domain partitions.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-22 17:32:50 +05:30
Yu Chien Peter Lin
b1818ee244 include: types: add always inline compiler attribute
Provide __always_inline to sbi_types header.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-22 17:32:35 +05:30
Yu Chien Peter Lin
8ecbe6d3fb lib: sbi_hsm: handle failure when hart_stop returns SBI_ENOTSUPP
Make use of the generic warm-boot path when the platform hart_stop
callback returns SBI_ENOTSUPP. In case a certain hart cannot turn off
its power domain, or it detects that some error occurred in the power
management unit, it can fall through to the warm-boot flow and wait for
interrupt in sbi_hsm_hart_wait().

Also improve the comment in sbi_hsm_hart_wait().

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-22 17:32:09 +05:30
Yu Chien Peter Lin
ce2a834c98 docs: generic.md: fix typo of andes-ae350
Fix the hyperlink broken by the typo.

Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-22 17:31:54 +05:30
Samuel Holland
da5594bf85 platform: generic: allwinner: Fix PLIC array bounds
The two referenced commits passed incorrect bounds to the PLIC save/
restore functions, causing out-of-bounds memory access. The functions
expect "num" to be the 1-based number of interrupt sources, equivalent
to the "riscv,ndev" devicetree property. Thus, "num" must be strictly
smaller than the 0-based size of the array storing the register values.

However, the referenced commits incorrectly passed in the unmodified
size of the array as "num". Fix this by reducing PLIC_SOURCES (matching
"riscv,ndev" on this platform), while keeping the same array sizes.
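The off-by-one can be illustrated with a bounds check: PLIC source IDs are 1-based, so saving priorities for "num" sources touches indices 1..num and needs an array of num + 1 entries. Passing the unmodified array size as "num" therefore reads one element past the end. (A sketch with a hypothetical source count, not the actual OpenSBI helpers.)

```c
#include <assert.h>

#define PLIC_SOURCES 175 /* hypothetical riscv,ndev value */

/* Save area sized for 1-based indexing: indices 1..PLIC_SOURCES are valid. */
static unsigned char priority_save[PLIC_SOURCES + 1];

/* A save over sources 1..num is in bounds only if num < array_len,
 * since the highest index touched is num itself. */
static int plic_save_in_bounds(int num, int array_len)
{
	return num < array_len;
}
```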

Addresses-Coverity-ID: 1530251 ("Out-of-bounds access")
Addresses-Coverity-ID: 1530252 ("Out-of-bounds access")
Fixes: 8509e46ca6 ("lib: utils/irqchip: plic: Ensure no out-of-bound access in priority save/restore helpers")
Fixes: 9a2eeb4aae ("lib: utils/irqchip: plic: Ensure no out-of-bound access in context save/restore helpers")
Signed-off-by: Samuel Holland <samuel@sholland.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-13 17:39:42 +05:30
Himanshu Chauhan
001106d19b docs: Update domain's region permissions and requirements
Updated the various permissions bits available for domains
defined in DT node and restrictions on them.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:30 +05:30
Himanshu Chauhan
59a08cd7d6 lib: utils: Add M-mode {R/W} flags to the MMIO regions
Add the M-mode readable/writable flags to mmio regions
of various drivers.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:28 +05:30
Himanshu Chauhan
3e2f573e70 lib: utils: Disallow non-root domains from adding M-mode regions
The M-mode regions can only be added to the root domain. The non-root
domains shouldn't be able to add them from FDT.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:25 +05:30
Himanshu Chauhan
20646e0184 lib: utils: Use SU-{R/W/X} flags for region permissions during parsing
Use the newer SU-{R/W/X} flags for checking and assigning region
permissions.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:23 +05:30
Himanshu Chauhan
44f736c96e lib: sbi: Modify the boot time region flag prints
With the finer permission semantics, the region access
permissions must be displayed separately for M and SU modes.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:19 +05:30
Himanshu Chauhan
1ac14f10f6 lib: sbi: Use finer permission sematics to decide on PMP bits
Use the fine grained permission bits to decide if the region
permissions are to be enforced on all modes. Also use the new
permission bits for deciding on R/W/X bits in pmpcfg register.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:17 +05:30
Himanshu Chauhan
22dbdb3d60 lib: sbi: Add permissions for the firmware start till end
Change the zero flag to M-mode R/W/X flag for the firmware
region.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:14 +05:30
Himanshu Chauhan
aace1e145d lib: sbi: Use finer permission semantics for address validation
Use the fine grained permission semantics for address validation
of a given region.

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:12 +05:30
Himanshu Chauhan
9e0ba09076 include: sbi: Fine grain the permissions for M and SU modes
Split the permissions for M-mode and SU-mode. This would
help if different sections of OpenSBI need to be given
different permissions, or if M-mode has different permissions
than SU-mode over a region.
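A sketch of the split permission bits, matching the six-bit layout documented for domain memory regions elsewhere in this series (M R/W/X in bits 0-2, SU R/W/X in bits 3-5); the macro names here are illustrative and OpenSBI's actual names may differ:

```c
#include <assert.h>

/* Hypothetical names for the fine-grained region permission bits. */
#define REGION_M_READABLE	(1U << 0)
#define REGION_M_WRITABLE	(1U << 1)
#define REGION_M_EXECUTABLE	(1U << 2)
#define REGION_SU_READABLE	(1U << 3)
#define REGION_SU_WRITABLE	(1U << 4)
#define REGION_SU_EXECUTABLE	(1U << 5)

/* Full access for both modes is the 0x3f value seen in the updated
 * domain examples; SU-only access occupies the upper three bits. */
#define REGION_ALL_ACCESS	(REGION_M_READABLE | REGION_M_WRITABLE |  \
				 REGION_M_EXECUTABLE | REGION_SU_READABLE | \
				 REGION_SU_WRITABLE | REGION_SU_EXECUTABLE)
```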

Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Anup Patel <anup@brainfault.org>
2023-01-09 18:04:10 +05:30
Bin Meng
9e397e3960 docs: domain_support: Use capital letter for privilege modes
The RISC-V convention for the privilege mode is capital letter, like
'M-mode', instead of 'm-mode'.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-09 16:54:29 +05:30
Bin Meng
6997552ea2 lib: sbi_hsm: Rename 'priv' argument to 'arg1'
The 'priv' argument of sbi_hsm_hart_start() and sbi_hsm_hart_suspend()
may mislead people into thinking it stands for 'privilege mode', but it
does not. Change it to 'arg1' to clearly indicate the a1 register.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Samuel Holland <samuel@sholland.org>
Tested-by: Samuel Holland <samuel@sholland.org>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-09 16:52:34 +05:30
Wei Liang Lim
8020df8733 generic/starfive: Add Starfive JH7110 platform implementation
Add Starfive JH7110 platform implementation

Signed-off-by: Wei Liang Lim <weiliang.lim@starfivetech.com>
Reviewed-by: Chee Hong Ang <cheehong.ang@starfivetech.com>
Reviewed-by: Jun Liang Tan <junliang.tan@starfivetech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-07 16:00:22 +05:30
Wei Liang Lim
cb7e7c3325 platform: generic: Allow platform_override to perform firmware init
We add a generic platform override callback to allow platform specific firmware init.

Signed-off-by: Wei Liang Lim <weiliang.lim@starfivetech.com>
Reviewed-by: Chee Hong Ang <cheehong.ang@starfivetech.com>
Reviewed-by: Jun Liang Tan <junliang.tan@starfivetech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-07 15:58:54 +05:30
Anup Patel
6957ae0e91 platform: generic: Allow platform_override to select cold boot HART
We add a generic platform override callback to allow platform specific
selection of cold boot HART.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-01-07 15:58:52 +05:30
Anup Patel
f14595a7cf lib: sbi: Allow platform to influence cold boot HART selection
We add an optional cold_boot_allowed() platform callback which allows
platform support to decide which HARTs can do cold boot initialization.

If this platform callback is not available then any HART can do cold
boot initialization.
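The optional-callback semantics described above can be sketched as follows (an illustrative struct, not the full sbi_platform_operations layout):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the platform operations table. */
struct platform_ops {
	bool (*cold_boot_allowed)(unsigned int hartid);
};

/* Example platform policy: only hart 0 may perform cold boot. */
static bool only_hart0(unsigned int hartid) { return hartid == 0; }

/* If the platform provides no callback, any HART can do cold boot init. */
static bool sbi_cold_boot_allowed(const struct platform_ops *ops,
				  unsigned int hartid)
{
	if (ops->cold_boot_allowed)
		return ops->cold_boot_allowed(hartid);
	return true;
}
```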

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
2023-01-07 15:58:49 +05:30
Bin Meng
65638f8d6b lib: utils/sys: Allow custom HTIF base address for RV32
commit 6dde43584f ("lib: utils/sys: Extend HTIF library to allow custom base address")
forgot to update the do_tohost_fromhost() code for RV32, which still
accesses the HTIF registers using the ELF symbol address directly.

Fixes: 6dde43584f ("lib: utils/sys: Extend HTIF library to allow custom base address")
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Tested-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
2023-01-06 18:01:36 +05:30
Rahul Pathak
6509127ad6 Makefile: Remove -N ldflag to prevent linker RWX warning
The -N option coalesces all sections into a single LOAD segment, which
causes data and other sections to have executable permission, triggering
a warning with new binutils ld 2.39.
The new ld emits a warning when any segment has all three RWX permissions.

ld.bfd: warning: test.elf has a LOAD segment with RWX permissions
ld.bfd: warning: fw_dynamic.elf has a LOAD segment with RWX permissions
ld.bfd: warning: fw_jump.elf has a LOAD segment with RWX permissions
ld.bfd: warning: fw_payload.elf has a LOAD segment with RWX permissions

This option was added in the commit below:
commit: eeab92f242 ("Makefile: Convert to a more standard format")

Removing the -N option allows text and rodata to be placed in one LOAD
segment and the other sections in a separate LOAD segment, which
prevents RWX permissions on a single LOAD segment. Here X == E.

Current
 LOAD           0x0000000000000120 0x0000000080000000 0x0000000080000000
                 0x000000000001d4d0 0x0000000000032ed8  RWE    0x10

-N removed
  LOAD           0x0000000000001000 0x0000000080000000 0x0000000080000000
                 0x00000000000198cc 0x00000000000198cc  R E    0x1000
  LOAD           0x000000000001b000 0x000000008001a000 0x000000008001a000
                 0x00000000000034d0 0x0000000000018ed8  RW     0x1000

Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Samuel Holland <samuel@sholland.org>
2023-01-06 17:51:15 +05:30
Bin Meng
440fa818fb treewide: Replace TRUE/FALSE with true/false
C language standard uses true/false for the boolean type.
Let's switch to that for better language compatibility.

Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Samuel Holland <samuel@sholland.org>
Tested-by: Samuel Holland <samuel@sholland.org>
2023-01-06 17:26:35 +05:30
127 changed files with 4951 additions and 1026 deletions

View File

@@ -254,6 +254,7 @@ deps-y=$(platform-objs-path-y:.o=.dep)
deps-y+=$(libsbi-objs-path-y:.o=.dep)
deps-y+=$(libsbiutils-objs-path-y:.o=.dep)
deps-y+=$(firmware-objs-path-y:.o=.dep)
deps-y+=$(firmware-elfs-path-y:=.dep)
# Setup platform ABI, ISA and Code Model
ifndef PLATFORM_RISCV_ABI
@@ -330,7 +331,12 @@ GENFLAGS += $(libsbiutils-genflags-y)
GENFLAGS += $(platform-genflags-y)
GENFLAGS += $(firmware-genflags-y)
CFLAGS = -g -Wall -Werror -ffreestanding -nostdlib -fno-stack-protector -fno-strict-aliasing -O2
CFLAGS = -g -Wall -Werror -ffreestanding -nostdlib -fno-stack-protector -fno-strict-aliasing
ifneq ($(DEBUG),)
CFLAGS += -O0
else
CFLAGS += -O2
endif
CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls -mstrict-align
# enable -m(no-)save-restore option by CC_SUPPORT_SAVE_RESTORE
ifeq ($(CC_SUPPORT_SAVE_RESTORE),y)
@@ -369,7 +375,7 @@ ASFLAGS += $(firmware-asflags-y)
ARFLAGS = rcs
ELFFLAGS += $(USE_LD_FLAG)
ELFFLAGS += -Wl,--build-id=none -Wl,-N
ELFFLAGS += -Wl,--build-id=none
ELFFLAGS += $(platform-ldflags-y)
ELFFLAGS += $(firmware-ldflags-y)
@@ -395,10 +401,10 @@ merge_deps = $(CMD_PREFIX)mkdir -p `dirname $(1)`; \
cat $(2) > $(1)
copy_file = $(CMD_PREFIX)mkdir -p `dirname $(1)`; \
echo " COPY $(subst $(build_dir)/,,$(1))"; \
cp -f $(2) $(1)
cp -L -f $(2) $(1)
inst_file = $(CMD_PREFIX)mkdir -p `dirname $(1)`; \
echo " INSTALL $(subst $(install_root_dir)/,,$(1))"; \
cp -f $(2) $(1)
cp -L -f $(2) $(1)
inst_file_list = $(CMD_PREFIX)if [ ! -z "$(4)" ]; then \
mkdir -p $(1)/$(3); \
for file in $(4) ; do \
@@ -407,12 +413,17 @@ inst_file_list = $(CMD_PREFIX)if [ ! -z "$(4)" ]; then \
dest_dir=`dirname $$dest_file`; \
echo " INSTALL "$(3)"/"`echo $$rel_file`; \
mkdir -p $$dest_dir; \
cp -f $$file $$dest_file; \
cp -L -f $$file $$dest_file; \
done \
fi
inst_header_dir = $(CMD_PREFIX)mkdir -p $(1); \
echo " INSTALL $(subst $(install_root_dir)/,,$(1))"; \
cp -rf $(2) $(1)
cp -L -rf $(2) $(1)
compile_cpp_dep = $(CMD_PREFIX)mkdir -p `dirname $(1)`; \
echo " CPP-DEP $(subst $(build_dir)/,,$(1))"; \
printf %s `dirname $(1)`/ > $(1) && \
$(CC) $(CPPFLAGS) -x c -MM $(3) \
-MT `basename $(1:.dep=$(2))` >> $(1) || rm -f $(1)
compile_cpp = $(CMD_PREFIX)mkdir -p `dirname $(1)`; \
echo " CPP $(subst $(build_dir)/,,$(1))"; \
$(CPP) $(CPPFLAGS) -x c $(2) | grep -v "\#" > $(1)
@@ -543,6 +554,9 @@ $(platform_build_dir)/%.bin: $(platform_build_dir)/%.elf
$(platform_build_dir)/%.elf: $(platform_build_dir)/%.o $(platform_build_dir)/%.elf.ld $(platform_build_dir)/lib/libplatsbi.a
$(call compile_elf,$@,$@.ld,$< $(platform_build_dir)/lib/libplatsbi.a)
$(platform_build_dir)/%.dep: $(src_dir)/%.ldS $(KCONFIG_CONFIG)
$(call compile_cpp_dep,$@,.ld,$<)
$(platform_build_dir)/%.ld: $(src_dir)/%.ldS
$(call compile_cpp,$@,$<)

View File

@@ -1,11 +1,15 @@
RISC-V Open Source Supervisor Binary Interface (OpenSBI)
========================================================
![RISC-V OpenSBI](docs/riscv_opensbi_logo_final_color.png)
Copyright and License
---------------------
The OpenSBI project is copyright (c) 2019 Western Digital Corporation
or its affiliates and other contributors.
The OpenSBI project is:
* Copyright (c) 2019 Western Digital Corporation or its affiliates
* Copyright (c) 2023 RISC-V International
It is distributed under the terms of the BSD 2-clause license
("Simplified BSD License" or "FreeBSD License", SPDX: *BSD-2-Clause*).
@@ -298,6 +302,19 @@ NOTE: Using `BUILD_INFO=y` without specifying SOURCE_DATE_EPOCH will violate
purpose, and should NOT be used in a product which follows "reproducible
builds".
Building with optimization off for debugging
--------------------------------------------
When debugging OpenSBI, we may want to turn off the compiler optimization and
make debugging produce the expected results for a better debugging experience.
To build with optimization off we can just simply add `DEBUG=1`, like:
```
make DEBUG=1
```
This definition is ONLY for development and debug purpose, and should NOT be
used in a product build.
Contributing to OpenSBI
-----------------------

View File

@@ -52,6 +52,7 @@ has following details:
* **next_mode** - Privilege mode of the next booting stage for this
domain. This can be either S-mode or U-mode.
* **system_reset_allowed** - Is domain allowed to reset the system?
* **system_suspend_allowed** - Is domain allowed to suspend the system?
The memory regions represented by **regions** in **struct sbi_domain** have
following additional constraints to align with RISC-V PMP requirements:
@@ -91,6 +92,7 @@ following manner:
* **next_mode** - Next booting stage mode in coldboot HART scratch space
is the next mode for the ROOT domain
* **system_reset_allowed** - The ROOT domain is allowed to reset the system
* **system_suspend_allowed** - The ROOT domain is allowed to suspend the system
Domain Effects
--------------
@@ -124,6 +126,9 @@ The DT properties of a domain configuration DT node are as follows:
* **compatible** (Mandatory) - The compatible string of the domain
configuration. This DT property should have value *"opensbi,domain,config"*
* **system-suspend-test** (Optional) - When present, enable a system
suspend test implementation which simply waits five seconds and issues a WFI.
### Domain Memory Region Node
The domain memory region DT node describes details of a memory region and
@@ -160,8 +165,16 @@ The DT properties of a domain instance DT node are as follows:
* **regions** (Optional) - The list of domain memory region DT node phandle
and access permissions for the domain instance. Each list entry is a pair
of DT node phandle and access permissions. The access permissions are
represented as a 32bit bitmask having bits: **readable** (BIT[0]),
**writeable** (BIT[1]), **executable** (BIT[2]), and **m-mode** (BIT[3]).
represented as a 32bit bitmask having bits: **M readable** (BIT[0]),
**M writeable** (BIT[1]), **M executable** (BIT[2]), **SU readable**
(BIT[3]), **SU writable** (BIT[4]), and **SU executable** (BIT[5]).
The enforce permission bit (BIT[6]), if set, will lock the permissions
in the PMP. This will enforce the permissions on M-mode as well which
otherwise will have unrestricted access. This bit must be used with
caution because no changes can be made to a PMP entry once its locked
until the hart is reset.
Any region of a domain defined in DT node cannot have only M-bits set
in access permissions i.e. it cannot be an m-mode only accessible region.
* **boot-hart** (Optional) - The DT node phandle of the HART booting the
domain instance. If coldboot HART is assigned to the domain instance then
this DT property is ignored and the coldboot HART is assumed to be the
@@ -180,13 +193,15 @@ The DT properties of a domain instance DT node are as follows:
is used as default value.
* **next-mode** (Optional) - The 32 bit next booting stage mode for the
domain instance. The possible values of this DT property are: **0x1**
(s-mode), and **0x0** (u-mode). If this DT property is not available
(S-mode), and **0x0** (U-mode). If this DT property is not available
and coldboot HART is not assigned to the domain instance then **0x1**
is used as default value. If this DT property is not available and
coldboot HART is assigned to the domain instance then **next booting
stage mode of coldboot HART** is used as default value.
* **system-reset-allowed** (Optional) - A boolean flag representing
whether the domain instance is allowed to do system reset.
* **system-suspend-allowed** (Optional) - A boolean flag representing
whether the domain instance is allowed to do system suspend.
### Assigning HART To Domain Instance
@@ -195,9 +210,9 @@ platform support can provide the HART to domain instance assignment using
platform specific callback.
The HART to domain instance assignment can be parsed from the device tree
using optional DT property **opensbi,domain** in each CPU DT node. The
value of DT property **opensbi,domain** is the DT phandle of the domain
instance DT node. If **opensbi,domain** DT property is not specified then
using optional DT property **opensbi-domain** in each CPU DT node. The
value of DT property **opensbi-domain** is the DT phandle of the domain
instance DT node. If **opensbi-domain** DT property is not specified then
corresponding HART is assigned to **the ROOT domain**.
### Domain Configuration Only Accessible to OpenSBI
@@ -222,6 +237,7 @@ be done:
chosen {
opensbi-domains {
compatible = "opensbi,domain,config";
system-suspend-test;
tmem: tmem {
compatible = "opensbi,domain,memregion";
@@ -246,18 +262,19 @@ be done:
tdomain: trusted-domain {
compatible = "opensbi,domain,instance";
possible-harts = <&cpu0>;
regions = <&tmem 0x7>, <&tuart 0x7>;
regions = <&tmem 0x3f>, <&tuart 0x3f>;
boot-hart = <&cpu0>;
next-arg1 = <0x0 0x0>;
next-addr = <0x0 0x80100000>;
next-mode = <0x0>;
system-reset-allowed;
system-suspend-allowed;
};
udomain: untrusted-domain {
compatible = "opensbi,domain,instance";
possible-harts = <&cpu1 &cpu2 &cpu3 &cpu4>;
regions = <&tmem 0x0>, <&tuart 0x0>, <&allmem 0x7>;
regions = <&tmem 0x0>, <&tuart 0x0>, <&allmem 0x3f>;
};
};
};

View File

@@ -61,7 +61,7 @@ Firmware Configuration and Compilation
All firmware types support the following common compile time configuration
parameters:
* **FW_TEXT_ADDR** - Defines the execution address of the OpenSBI firmware.
* **FW_TEXT_START** - Defines the execution address of the OpenSBI firmware.
This configuration parameter is mandatory.
* **FW_FDT_PATH** - Path to an external flattened device tree binary file to
be embedded in the *.rodata* section of the final firmware. If this option

View File

@@ -43,18 +43,18 @@ follows:
When using the default *FW_JUMP_FDT_ADDR* with *PLATFORM=generic*, you must
ensure *FW_JUMP_FDT_ADDR* is set high enough to avoid overwriting the kernel.
You can use the following method.
You can use the following method (e.g., using bash or zsh):
```
${CROSS_COMPILE}objdump -h $KERNEL_ELF | sort -k 5,5 | awk -n '/^ +[0-9]+ /\
{addr="0x"$3; size="0x"$5; printf "0x""%x\n",addr+size}' \
| (( `tail -1` > 0x2200000 )) && echo fdt overlaps kernel,\
increase FW_JUMP_FDT_ADDR
${CROSS_COMPILE}objdump -h $KERNEL_ELF | sort -k 5,5 | awk -n '
/^ +[0-9]+ / {addr="0x"$3; size="0x"$5; printf "0x""%x\n",addr+size}' |
(( `tail -1` > (FW_JUMP_FDT_ADDR - FW_JUMP_ADDR) )) &&
echo fdt overlaps kernel, increase FW_JUMP_FDT_ADDR
${LLVM}objdump -h --show-lma $KERNEL_ELF | sort -k 5,5 | \
awk -n '/^ +[0-9]+ / {addr="0x"$3; size="0x"$5; printf "0x""%x\n",addr+size}'\
| (( `tail -1` > 0x2200000 )) && echo fdt overlaps kernel,\
increase FW_JUMP_FDT_ADDR
${LLVM}objdump -h --show-lma $KERNEL_ELF | sort -k 5,5 | awk -n '
/^ +[0-9]+ / {addr="0x"$3; size="0x"$5; printf "0x""%x\n",addr+size}' |
(( `tail -1` > (FW_JUMP_FDT_ADDR - FW_JUMP_ADDR) )) &&
echo fdt overlaps kernel, increase FW_JUMP_FDT_ADDR
```
*FW_JUMP* Example

View File

@@ -53,7 +53,7 @@ RISC-V Platforms Using Generic Platform
* **Spike** (*[spike.md]*)
* **T-HEAD C9xx series Processors** (*[thead-c9xx.md]*)
[andes-ae350.md]: andse-ae350.md
[andes-ae350.md]: andes-ae350.md
[qemu_virt.md]: qemu_virt.md
[renesas-rzfive.md]: renesas-rzfive.md
[shakti_cclass.md]: shakti_cclass.md

Binary file not shown.

Binary file not shown.

View File

@@ -79,13 +79,12 @@ _try_lottery:
lla t0, __rel_dyn_start
lla t1, __rel_dyn_end
beq t0, t1, _relocate_done
j 5f
2:
REG_L t5, -(REGBYTES*2)(t0) /* t5 <-- relocation info:type */
REG_L t5, REGBYTES(t0) /* t5 <-- relocation info:type */
li t3, R_RISCV_RELATIVE /* reloc type R_RISCV_RELATIVE */
bne t5, t3, 3f
REG_L t3, -(REGBYTES*3)(t0)
REG_L t5, -(REGBYTES)(t0) /* t5 <-- addend */
REG_L t3, 0(t0)
REG_L t5, (REGBYTES * 2)(t0) /* t5 <-- addend */
add t5, t5, t2
add t3, t3, t2
REG_S t5, 0(t3) /* store runtime address to the GOT entry */
@@ -95,18 +94,17 @@ _try_lottery:
lla t4, __dyn_sym_start
4:
REG_L t5, -(REGBYTES*2)(t0) /* t5 <-- relocation info:type */
srli t6, t5, SYM_INDEX /* t6 <--- sym table index */
andi t5, t5, 0xFF /* t5 <--- relocation type */
li t3, RELOC_TYPE
bne t5, t3, 5f
/* address R_RISCV_64 or R_RISCV_32 cases*/
REG_L t3, -(REGBYTES*3)(t0)
REG_L t3, 0(t0)
li t5, SYM_SIZE
mul t6, t6, t5
add s5, t4, t6
REG_L t6, -(REGBYTES)(t0) /* t0 <-- addend */
REG_L t6, (REGBYTES * 2)(t0) /* t0 <-- addend */
REG_L t5, REGBYTES(s5)
add t5, t5, t6
add t5, t5, t2 /* t5 <-- location to fix up in RAM */
@@ -114,8 +112,8 @@ _try_lottery:
REG_S t5, 0(t3) /* store runtime address to the variable */
5:
addi t0, t0, (REGBYTES*3)
ble t0, t1, 2b
addi t0, t0, (REGBYTES * 3)
blt t0, t1, 2b
j _relocate_done
_wait_relocate_copy_done:
j _wait_for_boot_hart
@@ -257,20 +255,28 @@ _bss_zero:
/* Preload HART details
* s7 -> HART Count
* s8 -> HART Stack Size
* s9 -> Heap Size
* s10 -> Heap Offset
*/
lla a4, platform
#if __riscv_xlen > 32
lwu s7, SBI_PLATFORM_HART_COUNT_OFFSET(a4)
lwu s8, SBI_PLATFORM_HART_STACK_SIZE_OFFSET(a4)
lwu s9, SBI_PLATFORM_HEAP_SIZE_OFFSET(a4)
#else
lw s7, SBI_PLATFORM_HART_COUNT_OFFSET(a4)
lw s8, SBI_PLATFORM_HART_STACK_SIZE_OFFSET(a4)
lw s9, SBI_PLATFORM_HEAP_SIZE_OFFSET(a4)
#endif
/* Setup scratch space for all the HARTs*/
lla tp, _fw_end
mul a5, s7, s8
add tp, tp, a5
/* Setup heap base address */
lla s10, _fw_start
sub s10, tp, s10
add tp, tp, s9
/* Keep a copy of tp */
add t3, tp, zero
/* Counter */
@@ -285,8 +291,11 @@ _scratch_init:
* t3 -> the firmware end address
* s7 -> HART count
* s8 -> HART stack size
* s9 -> Heap Size
* s10 -> Heap Offset
*/
add tp, t3, zero
sub tp, tp, s9
mul a5, s8, t1
sub tp, tp, a5
li a5, SBI_SCRATCH_SIZE
@@ -298,6 +307,16 @@ _scratch_init:
sub a5, t3, a4
REG_S a4, SBI_SCRATCH_FW_START_OFFSET(tp)
REG_S a5, SBI_SCRATCH_FW_SIZE_OFFSET(tp)
/* Store R/W section's offset in scratch space */
lla a4, __fw_rw_offset
REG_L a5, 0(a4)
REG_S a5, SBI_SCRATCH_FW_RW_OFFSET(tp)
/* Store fw_heap_offset and fw_heap_size in scratch space */
REG_S s10, SBI_SCRATCH_FW_HEAP_OFFSET(tp)
REG_S s9, SBI_SCRATCH_FW_HEAP_SIZE_OFFSET(tp)
/* Store next arg1 in scratch space */
MOV_3R s0, a0, s1, a1, s2, a2
call fw_next_arg1
@@ -422,9 +441,8 @@ _start_warm:
li ra, 0
call _reset_regs
/* Disable and clear all interrupts */
/* Disable all interrupts */
csrw CSR_MIE, zero
csrw CSR_MIP, zero
/* Find HART count and HART stack size */
lla a4, platform
@@ -453,7 +471,6 @@ _start_warm:
add s9, s9, 4
add a4, a4, 1
blt a4, s7, 1b
li a4, -1
2: add s6, a4, zero
3: bge s6, s7, _start_hang
@@ -519,6 +536,8 @@ _link_start:
RISCV_PTR FW_TEXT_START
_link_end:
RISCV_PTR _fw_reloc_end
__fw_rw_offset:
RISCV_PTR _fw_rw_start - _fw_start
.section .entry, "ax", %progbits
.align 3

View File

@@ -30,17 +30,41 @@
/* Beginning of the read-only data sections */
PROVIDE(_rodata_start = .);
.rodata :
{
PROVIDE(_rodata_start = .);
*(.rodata .rodata.*)
. = ALIGN(8);
PROVIDE(_rodata_end = .);
}
. = ALIGN(0x1000); /* Ensure next section is page aligned */
.dynsym : {
PROVIDE(__dyn_sym_start = .);
*(.dynsym)
PROVIDE(__dyn_sym_end = .);
}
.rela.dyn : {
PROVIDE(__rel_dyn_start = .);
*(.rela*)
. = ALIGN(8);
PROVIDE(__rel_dyn_end = .);
}
PROVIDE(_rodata_end = .);
/* End of the read-only data sections */
. = ALIGN(0x1000); /* Ensure next section is page aligned */
/*
* PMP regions must be to be power-of-2. RX/RW will have separate
* regions, so ensure that the split is power-of-2.
*/
. = ALIGN(1 << LOG2CEIL((SIZEOF(.rodata) + SIZEOF(.text)
+ SIZEOF(.dynsym) + SIZEOF(.rela.dyn))));
PROVIDE(_fw_rw_start = .);
/* Beginning of the read-write data sections */
@@ -59,19 +83,6 @@
PROVIDE(_data_end = .);
}
.dynsym : {
PROVIDE(__dyn_sym_start = .);
*(.dynsym)
PROVIDE(__dyn_sym_end = .);
}
.rela.dyn : {
PROVIDE(__rel_dyn_start = .);
*(.rela*)
. = ALIGN(8);
PROVIDE(__rel_dyn_end = .);
}
. = ALIGN(0x1000); /* Ensure next section is page aligned */
.bss :


@@ -736,6 +736,8 @@
#define SMSTATEEN0_CS (_ULL(1) << SMSTATEEN0_CS_SHIFT)
#define SMSTATEEN0_FCSR_SHIFT 1
#define SMSTATEEN0_FCSR (_ULL(1) << SMSTATEEN0_FCSR_SHIFT)
#define SMSTATEEN0_CONTEXT_SHIFT 57
#define SMSTATEEN0_CONTEXT (_ULL(1) << SMSTATEEN0_CONTEXT_SHIFT)
#define SMSTATEEN0_IMSIC_SHIFT 58
#define SMSTATEEN0_IMSIC (_ULL(1) << SMSTATEEN0_IMSIC_SHIFT)
#define SMSTATEEN0_AIA_SHIFT 59


@@ -12,13 +12,7 @@
#include <sbi/sbi_types.h>
#if __SIZEOF_POINTER__ == 8
#define BITS_PER_LONG 64
#elif __SIZEOF_POINTER__ == 4
#define BITS_PER_LONG 32
#else
#error "Unexpected __SIZEOF_POINTER__"
#endif
#define BITS_PER_LONG (8 * __SIZEOF_LONG__)
#define EXTRACT_FIELD(val, which) \
(((val) & (which)) / ((which) & ~((which)-1)))


@@ -0,0 +1,61 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*/
#ifndef __SBI_BYTEORDER_H__
#define __SBI_BYTEORDER_H__
#include <sbi/sbi_types.h>
#define BSWAP16(x) ((((x) & 0x00ff) << 8) | \
(((x) & 0xff00) >> 8))
#define BSWAP32(x) ((((x) & 0x000000ff) << 24) | \
(((x) & 0x0000ff00) << 8) | \
(((x) & 0x00ff0000) >> 8) | \
(((x) & 0xff000000) >> 24))
#define BSWAP64(x) ((((x) & 0x00000000000000ffULL) << 56) | \
(((x) & 0x000000000000ff00ULL) << 40) | \
(((x) & 0x0000000000ff0000ULL) << 24) | \
(((x) & 0x00000000ff000000ULL) << 8) | \
(((x) & 0x000000ff00000000ULL) >> 8) | \
(((x) & 0x0000ff0000000000ULL) >> 24) | \
(((x) & 0x00ff000000000000ULL) >> 40) | \
(((x) & 0xff00000000000000ULL) >> 56))
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ /* CPU(little-endian) */
#define cpu_to_be16(x) ((uint16_t)BSWAP16(x))
#define cpu_to_be32(x) ((uint32_t)BSWAP32(x))
#define cpu_to_be64(x) ((uint64_t)BSWAP64(x))
#define be16_to_cpu(x) ((uint16_t)BSWAP16(x))
#define be32_to_cpu(x) ((uint32_t)BSWAP32(x))
#define be64_to_cpu(x) ((uint64_t)BSWAP64(x))
#define cpu_to_le16(x) ((uint16_t)(x))
#define cpu_to_le32(x) ((uint32_t)(x))
#define cpu_to_le64(x) ((uint64_t)(x))
#define le16_to_cpu(x) ((uint16_t)(x))
#define le32_to_cpu(x) ((uint32_t)(x))
#define le64_to_cpu(x) ((uint64_t)(x))
#else /* CPU(big-endian) */
#define cpu_to_be16(x) ((uint16_t)(x))
#define cpu_to_be32(x) ((uint32_t)(x))
#define cpu_to_be64(x) ((uint64_t)(x))
#define be16_to_cpu(x) ((uint16_t)(x))
#define be32_to_cpu(x) ((uint32_t)(x))
#define be64_to_cpu(x) ((uint64_t)(x))
#define cpu_to_le16(x) ((uint16_t)BSWAP16(x))
#define cpu_to_le32(x) ((uint32_t)BSWAP32(x))
#define cpu_to_le64(x) ((uint64_t)BSWAP64(x))
#define le16_to_cpu(x) ((uint16_t)BSWAP16(x))
#define le32_to_cpu(x) ((uint32_t)BSWAP32(x))
#define le64_to_cpu(x) ((uint64_t)BSWAP64(x))
#endif
#endif /* __SBI_BYTEORDER_H__ */


@@ -19,6 +19,9 @@ struct sbi_console_device {
/** Write a character to the console output */
void (*console_putc)(char ch);
/** Write a character string to the console output */
unsigned long (*console_puts)(const char *str, unsigned long len);
/** Read a character from the console input */
int (*console_getc)(void);
};
@@ -33,8 +36,12 @@ void sbi_putc(char ch);
void sbi_puts(const char *str);
unsigned long sbi_nputs(const char *str, unsigned long len);
void sbi_gets(char *s, int maxwidth, char endchar);
unsigned long sbi_ngets(char *str, unsigned long len);
int __printf(2, 3) sbi_sprintf(char *out, const char *format, ...);
int __printf(3, 4) sbi_snprintf(char *out, u32 out_sz, const char *format, ...);

include/sbi/sbi_cppc.h (new file)

@@ -0,0 +1,35 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*
*/
#ifndef __SBI_CPPC_H__
#define __SBI_CPPC_H__
#include <sbi/sbi_types.h>
/** CPPC device */
struct sbi_cppc_device {
/** Name of the CPPC device */
char name[32];
/** probe - returns register width if implemented, 0 otherwise */
int (*cppc_probe)(unsigned long reg);
/** read the cppc register*/
int (*cppc_read)(unsigned long reg, uint64_t *val);
/** write to the cppc register*/
int (*cppc_write)(unsigned long reg, uint64_t val);
};
int sbi_cppc_probe(unsigned long reg);
int sbi_cppc_read(unsigned long reg, uint64_t *val);
int sbi_cppc_write(unsigned long reg, uint64_t val);
const struct sbi_cppc_device *sbi_cppc_get_device(void);
void sbi_cppc_set_device(const struct sbi_cppc_device *dev);
#endif


@@ -36,11 +36,53 @@ struct sbi_domain_memregion {
*/
unsigned long base;
/** Flags representing memory region attributes */
#define SBI_DOMAIN_MEMREGION_READABLE (1UL << 0)
#define SBI_DOMAIN_MEMREGION_WRITEABLE (1UL << 1)
#define SBI_DOMAIN_MEMREGION_EXECUTABLE (1UL << 2)
#define SBI_DOMAIN_MEMREGION_MMODE (1UL << 3)
#define SBI_DOMAIN_MEMREGION_ACCESS_MASK (0xfUL)
#define SBI_DOMAIN_MEMREGION_M_READABLE (1UL << 0)
#define SBI_DOMAIN_MEMREGION_M_WRITABLE (1UL << 1)
#define SBI_DOMAIN_MEMREGION_M_EXECUTABLE (1UL << 2)
#define SBI_DOMAIN_MEMREGION_SU_READABLE (1UL << 3)
#define SBI_DOMAIN_MEMREGION_SU_WRITABLE (1UL << 4)
#define SBI_DOMAIN_MEMREGION_SU_EXECUTABLE (1UL << 5)
/** Bit to control if permissions are enforced on all modes */
#define SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS (1UL << 6)
#define SBI_DOMAIN_MEMREGION_M_RWX \
(SBI_DOMAIN_MEMREGION_M_READABLE | \
SBI_DOMAIN_MEMREGION_M_WRITABLE | \
SBI_DOMAIN_MEMREGION_M_EXECUTABLE)
#define SBI_DOMAIN_MEMREGION_SU_RWX \
(SBI_DOMAIN_MEMREGION_SU_READABLE | \
SBI_DOMAIN_MEMREGION_SU_WRITABLE | \
SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
/* Unrestricted M-mode accesses but enforced on SU-mode */
#define SBI_DOMAIN_MEMREGION_READABLE \
(SBI_DOMAIN_MEMREGION_SU_READABLE | \
SBI_DOMAIN_MEMREGION_M_RWX)
#define SBI_DOMAIN_MEMREGION_WRITEABLE \
(SBI_DOMAIN_MEMREGION_SU_WRITABLE | \
SBI_DOMAIN_MEMREGION_M_RWX)
#define SBI_DOMAIN_MEMREGION_EXECUTABLE \
(SBI_DOMAIN_MEMREGION_SU_EXECUTABLE | \
SBI_DOMAIN_MEMREGION_M_RWX)
/* Enforced accesses across all modes */
#define SBI_DOMAIN_MEMREGION_ENF_READABLE \
(SBI_DOMAIN_MEMREGION_SU_READABLE | \
SBI_DOMAIN_MEMREGION_M_READABLE)
#define SBI_DOMAIN_MEMREGION_ENF_WRITABLE \
(SBI_DOMAIN_MEMREGION_SU_WRITABLE | \
SBI_DOMAIN_MEMREGION_M_WRITABLE)
#define SBI_DOMAIN_MEMREGION_ENF_EXECUTABLE \
(SBI_DOMAIN_MEMREGION_SU_EXECUTABLE | \
SBI_DOMAIN_MEMREGION_M_EXECUTABLE)
#define SBI_DOMAIN_MEMREGION_ACCESS_MASK (0x3fUL)
#define SBI_DOMAIN_MEMREGION_M_ACCESS_MASK (0x7UL)
#define SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK (0x38UL)
#define SBI_DOMAIN_MEMREGION_SU_ACCESS_SHIFT (3)
#define SBI_DOMAIN_MEMREGION_MMIO (1UL << 31)
unsigned long flags;
@@ -78,17 +120,17 @@ struct sbi_domain {
unsigned long next_mode;
/** Is domain allowed to reset the system */
bool system_reset_allowed;
/** Is domain allowed to suspend the system */
bool system_suspend_allowed;
/** Identifies whether to include the firmware region */
bool fw_region_inited;
};
/** The root domain instance */
extern struct sbi_domain root;
/** HART id to domain table */
extern struct sbi_domain *hartid_to_domain_table[];
/** Get pointer to sbi_domain from HART id */
#define sbi_hartid_to_domain(__hartid) \
hartid_to_domain_table[__hartid]
struct sbi_domain *sbi_hartid_to_domain(u32 hartid);
/** Get pointer to sbi_domain for current HART */
#define sbi_domain_thishart_ptr() \
@@ -113,7 +155,7 @@ extern struct sbi_domain *domidx_to_domain_table[];
* Check whether given HART is assigned to specified domain
* @param dom pointer to domain
* @param hartid the HART ID
* @return TRUE if HART is assigned to domain otherwise FALSE
* @return true if HART is assigned to domain otherwise false
*/
bool sbi_domain_is_assigned_hart(const struct sbi_domain *dom, u32 hartid);
@@ -148,12 +190,27 @@ void sbi_domain_memregion_init(unsigned long addr,
* @param addr the address to be checked
* @param mode the privilege mode of access
* @param access_flags bitmask of domain access types (enum sbi_domain_access)
* @return TRUE if access allowed otherwise FALSE
* @return true if access allowed otherwise false
*/
bool sbi_domain_check_addr(const struct sbi_domain *dom,
unsigned long addr, unsigned long mode,
unsigned long access_flags);
/**
* Check whether we can access specified address range for given mode and
* memory region flags under a domain
* @param dom pointer to domain
* @param addr the start of the address range to be checked
* @param size the size of the address range to be checked
* @param mode the privilege mode of access
* @param access_flags bitmask of domain access types (enum sbi_domain_access)
* @return TRUE if access allowed otherwise FALSE
*/
bool sbi_domain_check_addr_range(const struct sbi_domain *dom,
unsigned long addr, unsigned long size,
unsigned long mode,
unsigned long access_flags);
/** Dump domain details on the console */
void sbi_domain_dump(const struct sbi_domain *dom, const char *suffix);


@@ -21,10 +21,46 @@ struct sbi_trap_regs;
struct sbi_trap_info;
struct sbi_ecall_extension {
/* head is used by the extension list */
struct sbi_dlist head;
/*
* extid_start and extid_end specify the range for this extension. As
* the initial range may be wider than the valid runtime range, the
* register_extensions callback is responsible for narrowing the range
* before other callbacks may be invoked.
*/
unsigned long extid_start;
unsigned long extid_end;
/*
* register_extensions
*
* Calls sbi_ecall_register_extension() one or more times to register
* extension ID range(s) which should be handled by this extension.
* More than one sbi_ecall_extension struct and
* sbi_ecall_register_extension() call is necessary when the supported
* extension ID ranges have gaps. Additionally, extension availability
* must be checked before registering, which means, when this callback
* returns, only valid extension IDs from the initial range, which are
* also available, have been registered.
*/
int (* register_extensions)(void);
/*
* probe
*
* Implements the Base extension's probe function for the extension. As
* the register_extensions callback ensures that no other extension
* callbacks will be invoked when the extension is not available, then
* probe can never fail. However, an extension may choose to set
* out_val to a nonzero value other than one. In those cases, it should
* implement this callback.
*/
int (* probe)(unsigned long extid, unsigned long *out_val);
/*
* handle
*
* This is the extension handler. register_extensions ensures it is
* never invoked with an invalid or unavailable extension ID.
*/
int (* handle)(unsigned long extid, unsigned long funcid,
const struct sbi_trap_regs *regs,
unsigned long *out_val,


@@ -29,6 +29,9 @@
#define SBI_EXT_HSM 0x48534D
#define SBI_EXT_SRST 0x53525354
#define SBI_EXT_PMU 0x504D55
#define SBI_EXT_DBCN 0x4442434E
#define SBI_EXT_SUSP 0x53555350
#define SBI_EXT_CPPC 0x43505043
/* SBI function IDs for BASE extension*/
#define SBI_EXT_BASE_GET_SPEC_VERSION 0x0
@@ -99,6 +102,7 @@
#define SBI_EXT_PMU_COUNTER_START 0x3
#define SBI_EXT_PMU_COUNTER_STOP 0x4
#define SBI_EXT_PMU_COUNTER_FW_READ 0x5
#define SBI_EXT_PMU_COUNTER_FW_READ_HI 0x6
/** General pmu event codes specified in SBI PMU extension */
enum sbi_pmu_hw_generic_events_t {
@@ -182,6 +186,17 @@ enum sbi_pmu_fw_event_code_id {
SBI_PMU_FW_HFENCE_VVMA_ASID_SENT = 20,
SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD = 21,
SBI_PMU_FW_MAX,
/*
* Event codes 22 to 255 are reserved for future use.
* Event codes 256 to 65534 are reserved for SBI implementation
* specific custom firmware events.
*/
SBI_PMU_FW_RESERVED_MAX = 0xFFFE,
/*
* Event code 0xFFFF is used for platform specific firmware
* events where the event data contains any event specific information.
*/
SBI_PMU_FW_PLATFORM = 0xFFFF,
};
/** SBI PMU event idx type */
@@ -200,10 +215,10 @@ enum sbi_pmu_ctr_type {
};
/* Helper macros to decode event idx */
#define SBI_PMU_EVENT_IDX_OFFSET 20
#define SBI_PMU_EVENT_IDX_MASK 0xFFFFF
#define SBI_PMU_EVENT_IDX_TYPE_OFFSET 16
#define SBI_PMU_EVENT_IDX_TYPE_MASK (0xF << SBI_PMU_EVENT_IDX_TYPE_OFFSET)
#define SBI_PMU_EVENT_IDX_CODE_MASK 0xFFFF
#define SBI_PMU_EVENT_IDX_TYPE_MASK 0xF0000
#define SBI_PMU_EVENT_RAW_IDX 0x20000
#define SBI_PMU_EVENT_IDX_INVALID 0xFFFFFFFF
@@ -230,6 +245,51 @@ enum sbi_pmu_ctr_type {
/* Flags defined for counter stop function */
#define SBI_PMU_STOP_FLAG_RESET (1 << 0)
/* SBI function IDs for DBCN extension */
#define SBI_EXT_DBCN_CONSOLE_WRITE 0x0
#define SBI_EXT_DBCN_CONSOLE_READ 0x1
#define SBI_EXT_DBCN_CONSOLE_WRITE_BYTE 0x2
/* SBI function IDs for SUSP extension */
#define SBI_EXT_SUSP_SUSPEND 0x0
#define SBI_SUSP_SLEEP_TYPE_SUSPEND 0x0
#define SBI_SUSP_SLEEP_TYPE_LAST SBI_SUSP_SLEEP_TYPE_SUSPEND
#define SBI_SUSP_PLATFORM_SLEEP_START 0x80000000
/* SBI function IDs for CPPC extension */
#define SBI_EXT_CPPC_PROBE 0x0
#define SBI_EXT_CPPC_READ 0x1
#define SBI_EXT_CPPC_READ_HI 0x2
#define SBI_EXT_CPPC_WRITE 0x3
enum sbi_cppc_reg_id {
SBI_CPPC_HIGHEST_PERF = 0x00000000,
SBI_CPPC_NOMINAL_PERF = 0x00000001,
SBI_CPPC_LOW_NON_LINEAR_PERF = 0x00000002,
SBI_CPPC_LOWEST_PERF = 0x00000003,
SBI_CPPC_GUARANTEED_PERF = 0x00000004,
SBI_CPPC_DESIRED_PERF = 0x00000005,
SBI_CPPC_MIN_PERF = 0x00000006,
SBI_CPPC_MAX_PERF = 0x00000007,
SBI_CPPC_PERF_REDUC_TOLERANCE = 0x00000008,
SBI_CPPC_TIME_WINDOW = 0x00000009,
SBI_CPPC_CTR_WRAP_TIME = 0x0000000A,
SBI_CPPC_REFERENCE_CTR = 0x0000000B,
SBI_CPPC_DELIVERED_CTR = 0x0000000C,
SBI_CPPC_PERF_LIMITED = 0x0000000D,
SBI_CPPC_ENABLE = 0x0000000E,
SBI_CPPC_AUTO_SEL_ENABLE = 0x0000000F,
SBI_CPPC_AUTO_ACT_WINDOW = 0x00000010,
SBI_CPPC_ENERGY_PERF_PREFERENCE = 0x00000011,
SBI_CPPC_REFERENCE_PERF = 0x00000012,
SBI_CPPC_LOWEST_FREQ = 0x00000013,
SBI_CPPC_NOMINAL_FREQ = 0x00000014,
SBI_CPPC_ACPI_LAST = SBI_CPPC_NOMINAL_FREQ,
SBI_CPPC_TRANSITION_LATENCY = 0x80000000,
SBI_CPPC_NON_ACPI_LAST = SBI_CPPC_TRANSITION_LATENCY,
};
/* SBI base specification related macros */
#define SBI_SPEC_VERSION_MAJOR_OFFSET 24
#define SBI_SPEC_VERSION_MAJOR_MASK 0x7f

include/sbi/sbi_heap.h (new file)

@@ -0,0 +1,44 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel<apatel@ventanamicro.com>
*/
#ifndef __SBI_HEAP_H__
#define __SBI_HEAP_H__
#include <sbi/sbi_types.h>
struct sbi_scratch;
/** Allocate from heap area */
void *sbi_malloc(size_t size);
/** Zero allocate from heap area */
void *sbi_zalloc(size_t size);
/** Allocate array from heap area */
static inline void *sbi_calloc(size_t nitems, size_t size)
{
return sbi_zalloc(nitems * size);
}
/** Free-up to heap area */
void sbi_free(void *ptr);
/** Amount (in bytes) of free space in the heap area */
unsigned long sbi_heap_free_space(void);
/** Amount (in bytes) of used space in the heap area */
unsigned long sbi_heap_used_space(void);
/** Amount (in bytes) of reserved space in the heap area */
unsigned long sbi_heap_reserved_space(void);
/** Initialize heap area */
int sbi_heap_init(struct sbi_scratch *scratch);
#endif


@@ -21,8 +21,12 @@ struct sbi_hsm_device {
int (*hart_start)(u32 hartid, ulong saddr);
/**
* Stop (or power-down) the current hart from running. This call
* doesn't expect to return if success.
* Stop (or power-down) the current hart from running.
*
* Return SBI_ENOTSUPP if the hart does not support platform-specific
* stop actions.
*
* For successful stop, the call won't return.
*/
int (*hart_stop)(void);
@@ -59,15 +63,21 @@ void __noreturn sbi_hsm_exit(struct sbi_scratch *scratch);
int sbi_hsm_hart_start(struct sbi_scratch *scratch,
const struct sbi_domain *dom,
u32 hartid, ulong saddr, ulong smode, ulong priv);
u32 hartid, ulong saddr, ulong smode, ulong arg1);
int sbi_hsm_hart_stop(struct sbi_scratch *scratch, bool exitnow);
void sbi_hsm_hart_resume_start(struct sbi_scratch *scratch);
void sbi_hsm_hart_resume_finish(struct sbi_scratch *scratch);
void __noreturn sbi_hsm_hart_resume_finish(struct sbi_scratch *scratch,
u32 hartid);
int sbi_hsm_hart_suspend(struct sbi_scratch *scratch, u32 suspend_type,
ulong raddr, ulong rmode, ulong priv);
ulong raddr, ulong rmode, ulong arg1);
bool sbi_hsm_hart_change_state(struct sbi_scratch *scratch, long oldstate,
long newstate);
int __sbi_hsm_hart_get_state(u32 hartid);
int sbi_hsm_hart_get_state(const struct sbi_domain *dom, u32 hartid);
int sbi_hsm_hart_interruptible_mask(const struct sbi_domain *dom,
ulong hbase, ulong *out_hmask);
void sbi_hsm_prepare_next_jump(struct sbi_scratch *scratch, u32 hartid);
void __sbi_hsm_suspend_non_ret_save(struct sbi_scratch *scratch);
void __noreturn sbi_hsm_hart_start_finish(struct sbi_scratch *scratch,
u32 hartid);
#endif


@@ -16,6 +16,8 @@ struct sbi_scratch;
void __noreturn sbi_init(struct sbi_scratch *scratch);
unsigned long sbi_entry_count(u32 hartid);
unsigned long sbi_init_count(u32 hartid);
void __noreturn sbi_exit(struct sbi_scratch *scratch);


@@ -30,6 +30,12 @@ struct sbi_ipi_device {
void (*ipi_clear)(u32 target_hart);
};
enum sbi_ipi_update_type {
SBI_IPI_UPDATE_SUCCESS,
SBI_IPI_UPDATE_BREAK,
SBI_IPI_UPDATE_RETRY,
};
struct sbi_scratch;
/** IPI event operations or callbacks */
@@ -41,6 +47,10 @@ struct sbi_ipi_event_ops {
* Update callback to save/enqueue data for remote HART
* Note: This is an optional callback and it is called just before
* triggering IPI to remote HART.
* @return < 0, error or failure
* @return SBI_IPI_UPDATE_SUCCESS, success
* @return SBI_IPI_UPDATE_BREAK, break IPI, done on local hart
* @return SBI_IPI_UPDATE_RETRY, need retry
*/
int (* update)(struct sbi_scratch *scratch,
struct sbi_scratch *remote_scratch,
@@ -77,6 +87,8 @@ void sbi_ipi_process(void);
int sbi_ipi_raw_send(u32 target_hart);
void sbi_ipi_raw_clear(u32 target_hart);
const struct sbi_ipi_device *sbi_ipi_get_device(void);
void sbi_ipi_set_device(const struct sbi_ipi_device *dev);


@@ -31,7 +31,7 @@ struct sbi_dlist _lname = SBI_LIST_HEAD_INIT(_lname)
#define SBI_INIT_LIST_HEAD(ptr) \
do { \
(ptr)->next = ptr; (ptr)->prev = ptr; \
} while (0);
} while (0)
static inline void __sbi_list_add(struct sbi_dlist *new,
struct sbi_dlist *prev,
@@ -47,7 +47,7 @@ static inline void __sbi_list_add(struct sbi_dlist *new,
* Checks if the list is empty or not.
* @param head List head
*
* Retruns TRUE if list is empty, FALSE otherwise.
* Returns true if list is empty, false otherwise.
*/
static inline bool sbi_list_empty(struct sbi_dlist *head)
{


@@ -29,12 +29,16 @@
#define SBI_PLATFORM_HART_COUNT_OFFSET (0x50)
/** Offset of hart_stack_size in struct sbi_platform */
#define SBI_PLATFORM_HART_STACK_SIZE_OFFSET (0x54)
/** Offset of heap_size in struct sbi_platform */
#define SBI_PLATFORM_HEAP_SIZE_OFFSET (0x58)
/** Offset of reserved in struct sbi_platform */
#define SBI_PLATFORM_RESERVED_OFFSET (0x5c)
/** Offset of platform_ops_addr in struct sbi_platform */
#define SBI_PLATFORM_OPS_OFFSET (0x58)
#define SBI_PLATFORM_OPS_OFFSET (0x60)
/** Offset of firmware_context in struct sbi_platform */
#define SBI_PLATFORM_FIRMWARE_CONTEXT_OFFSET (0x58 + __SIZEOF_POINTER__)
#define SBI_PLATFORM_FIRMWARE_CONTEXT_OFFSET (0x60 + __SIZEOF_POINTER__)
/** Offset of hart_index2id in struct sbi_platform */
#define SBI_PLATFORM_HART_INDEX2ID_OFFSET (0x58 + (__SIZEOF_POINTER__ * 2))
#define SBI_PLATFORM_HART_INDEX2ID_OFFSET (0x60 + (__SIZEOF_POINTER__ * 2))
#define SBI_PLATFORM_TLB_RANGE_FLUSH_LIMIT_DEFAULT (1UL << 12)
@@ -65,6 +69,9 @@ enum sbi_platform_features {
/** Platform functions */
struct sbi_platform_operations {
/* Check if specified HART is allowed to do cold boot */
bool (*cold_boot_allowed)(u32 hartid);
/* Platform nascent initialization */
int (*nascent_init)(void);
@@ -123,10 +130,10 @@ struct sbi_platform_operations {
/** Exit platform timer for current HART */
void (*timer_exit)(void);
/** platform specific SBI extension implementation probe function */
int (*vendor_ext_check)(long extid);
/** Check if SBI vendor extension is implemented or not */
bool (*vendor_ext_check)(void);
/** platform specific SBI extension implementation provider */
int (*vendor_ext_provider)(long extid, long funcid,
int (*vendor_ext_provider)(long funcid,
const struct sbi_trap_regs *regs,
unsigned long *out_value,
struct sbi_trap_info *out_trap);
@@ -135,6 +142,10 @@ struct sbi_platform_operations {
/** Platform default per-HART stack size for exception/interrupt handling */
#define SBI_PLATFORM_DEFAULT_HART_STACK_SIZE 8192
/** Platform default heap size */
#define SBI_PLATFORM_DEFAULT_HEAP_SIZE(__num_hart) \
(0x8000 + 0x800 * (__num_hart))
/** Representation of a platform */
struct sbi_platform {
/**
@@ -157,6 +168,10 @@ struct sbi_platform {
u32 hart_count;
/** Per-HART stack size for exception/interrupt handling */
u32 hart_stack_size;
/** Size of heap shared by all HARTs */
u32 heap_size;
/** Reserved for future use */
u32 reserved;
/** Pointer to sbi platform operations */
unsigned long platform_ops_addr;
/** Pointer to system firmware specific context */
@@ -344,16 +359,33 @@ static inline u32 sbi_platform_hart_stack_size(const struct sbi_platform *plat)
* @param plat pointer to struct sbi_platform
* @param hartid HART ID
*
* @return TRUE if HART is invalid and FALSE otherwise
* @return true if HART is invalid and false otherwise
*/
static inline bool sbi_platform_hart_invalid(const struct sbi_platform *plat,
u32 hartid)
{
if (!plat)
return TRUE;
return true;
if (plat->hart_count <= sbi_platform_hart_index(plat, hartid))
return TRUE;
return FALSE;
return true;
return false;
}
/**
* Check whether given HART is allowed to do cold boot
*
* @param plat pointer to struct sbi_platform
* @param hartid HART ID
*
* @return true if HART is allowed to do cold boot and false otherwise
*/
static inline bool sbi_platform_cold_boot_allowed(
const struct sbi_platform *plat,
u32 hartid)
{
if (plat && sbi_platform_ops(plat)->cold_boot_allowed)
return sbi_platform_ops(plat)->cold_boot_allowed(hartid);
return true;
}
/**
@@ -377,7 +409,7 @@ static inline int sbi_platform_nascent_init(const struct sbi_platform *plat)
* Early initialization for current HART
*
* @param plat pointer to struct sbi_platform
* @param cold_boot whether cold boot (TRUE) or warm_boot (FALSE)
* @param cold_boot whether cold boot (true) or warm_boot (false)
*
* @return 0 on success and negative error code on failure
*/
@@ -393,7 +425,7 @@ static inline int sbi_platform_early_init(const struct sbi_platform *plat,
* Final initialization for current HART
*
* @param plat pointer to struct sbi_platform
* @param cold_boot whether cold boot (TRUE) or warm_boot (FALSE)
* @param cold_boot whether cold boot (true) or warm_boot (false)
*
* @return 0 on success and negative error code on failure
*/
@@ -538,7 +570,7 @@ static inline int sbi_platform_console_init(const struct sbi_platform *plat)
* Initialize the platform interrupt controller for current HART
*
* @param plat pointer to struct sbi_platform
* @param cold_boot whether cold boot (TRUE) or warm_boot (FALSE)
* @param cold_boot whether cold boot (true) or warm_boot (false)
*
* @return 0 on success and negative error code on failure
*/
@@ -565,7 +597,7 @@ static inline void sbi_platform_irqchip_exit(const struct sbi_platform *plat)
* Initialize the platform IPI support for current HART
*
* @param plat pointer to struct sbi_platform
* @param cold_boot whether cold boot (TRUE) or warm_boot (FALSE)
* @param cold_boot whether cold boot (true) or warm_boot (false)
*
* @return 0 on success and negative error code on failure
*/
@@ -592,7 +624,7 @@ static inline void sbi_platform_ipi_exit(const struct sbi_platform *plat)
* Initialize the platform timer for current HART
*
* @param plat pointer to struct sbi_platform
* @param cold_boot whether cold boot (TRUE) or warm_boot (FALSE)
* @param cold_boot whether cold boot (true) or warm_boot (false)
*
* @return 0 on success and negative error code on failure
*/
@@ -616,27 +648,25 @@ static inline void sbi_platform_timer_exit(const struct sbi_platform *plat)
}
/**
* Check if a vendor extension is implemented or not.
* Check if SBI vendor extension is implemented or not.
*
* @param plat pointer to struct sbi_platform
* @param extid vendor SBI extension id
*
* @return 0 if extid is not implemented and 1 if implemented
* @return false if not implemented and true if implemented
*/
static inline int sbi_platform_vendor_ext_check(const struct sbi_platform *plat,
long extid)
static inline bool sbi_platform_vendor_ext_check(
const struct sbi_platform *plat)
{
if (plat && sbi_platform_ops(plat)->vendor_ext_check)
return sbi_platform_ops(plat)->vendor_ext_check(extid);
return sbi_platform_ops(plat)->vendor_ext_check();
return 0;
return false;
}
/**
* Invoke platform specific vendor SBI extension implementation.
*
* @param plat pointer to struct sbi_platform
* @param extid vendor SBI extension id
* @param funcid SBI function id within the extension id
* @param regs pointer to trap registers passed by the caller
* @param out_value output value that can be filled by the callee
@@ -646,14 +676,14 @@ static inline int sbi_platform_vendor_ext_check(const struct sbi_platform *plat,
*/
static inline int sbi_platform_vendor_ext_provider(
const struct sbi_platform *plat,
long extid, long funcid,
long funcid,
const struct sbi_trap_regs *regs,
unsigned long *out_value,
struct sbi_trap_info *out_trap)
{
if (plat && sbi_platform_ops(plat)->vendor_ext_provider) {
return sbi_platform_ops(plat)->vendor_ext_provider(extid,
funcid, regs,
return sbi_platform_ops(plat)->vendor_ext_provider(funcid,
regs,
out_value,
out_trap);
}


@@ -30,37 +30,48 @@ struct sbi_pmu_device {
/**
* Validate event code of custom firmware event
* Note: SBI_PMU_FW_MAX <= event_idx_code
*/
int (*fw_event_validate_code)(uint32_t event_idx_code);
int (*fw_event_validate_encoding)(uint32_t hartid, uint64_t event_data);
/**
* Match custom firmware counter with custom firmware event
* Note: 0 <= counter_index < SBI_PMU_FW_CTR_MAX
*/
bool (*fw_counter_match_code)(uint32_t counter_index,
uint32_t event_idx_code);
bool (*fw_counter_match_encoding)(uint32_t hartid,
uint32_t counter_index,
uint64_t event_data);
/**
* Fetch the max width of this counter in number of bits.
*/
int (*fw_counter_width)(void);
/**
* Read value of custom firmware counter
* Note: 0 <= counter_index < SBI_PMU_FW_CTR_MAX
*/
uint64_t (*fw_counter_read_value)(uint32_t counter_index);
uint64_t (*fw_counter_read_value)(uint32_t hartid,
uint32_t counter_index);
/**
* Write value to custom firmware counter
* Note: 0 <= counter_index < SBI_PMU_FW_CTR_MAX
*/
void (*fw_counter_write_value)(uint32_t hartid, uint32_t counter_index,
uint64_t value);
/**
* Start custom firmware counter
* Note: SBI_PMU_FW_MAX <= event_idx_code
* Note: 0 <= counter_index < SBI_PMU_FW_CTR_MAX
*/
int (*fw_counter_start)(uint32_t counter_index,
uint32_t event_idx_code,
uint64_t init_val, bool init_val_update);
int (*fw_counter_start)(uint32_t hartid, uint32_t counter_index,
uint64_t event_data);
/**
* Stop custom firmware counter
* Note: 0 <= counter_index < SBI_PMU_FW_CTR_MAX
*/
int (*fw_counter_stop)(uint32_t counter_index);
int (*fw_counter_stop)(uint32_t hartid, uint32_t counter_index);
/**
* Custom enable irq for hardware counter


@@ -18,26 +18,32 @@
#define SBI_SCRATCH_FW_START_OFFSET (0 * __SIZEOF_POINTER__)
/** Offset of fw_size member in sbi_scratch */
#define SBI_SCRATCH_FW_SIZE_OFFSET (1 * __SIZEOF_POINTER__)
/** Offset (in sbi_scratch) of the R/W Offset */
#define SBI_SCRATCH_FW_RW_OFFSET (2 * __SIZEOF_POINTER__)
/** Offset of fw_heap_offset member in sbi_scratch */
#define SBI_SCRATCH_FW_HEAP_OFFSET (3 * __SIZEOF_POINTER__)
/** Offset of fw_heap_size_offset member in sbi_scratch */
#define SBI_SCRATCH_FW_HEAP_SIZE_OFFSET (4 * __SIZEOF_POINTER__)
/** Offset of next_arg1 member in sbi_scratch */
#define SBI_SCRATCH_NEXT_ARG1_OFFSET (2 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_NEXT_ARG1_OFFSET (5 * __SIZEOF_POINTER__)
/** Offset of next_addr member in sbi_scratch */
#define SBI_SCRATCH_NEXT_ADDR_OFFSET (3 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_NEXT_ADDR_OFFSET (6 * __SIZEOF_POINTER__)
/** Offset of next_mode member in sbi_scratch */
#define SBI_SCRATCH_NEXT_MODE_OFFSET (4 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_NEXT_MODE_OFFSET (7 * __SIZEOF_POINTER__)
/** Offset of warmboot_addr member in sbi_scratch */
#define SBI_SCRATCH_WARMBOOT_ADDR_OFFSET (5 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_WARMBOOT_ADDR_OFFSET (8 * __SIZEOF_POINTER__)
/** Offset of platform_addr member in sbi_scratch */
#define SBI_SCRATCH_PLATFORM_ADDR_OFFSET (6 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_PLATFORM_ADDR_OFFSET (9 * __SIZEOF_POINTER__)
/** Offset of hartid_to_scratch member in sbi_scratch */
#define SBI_SCRATCH_HARTID_TO_SCRATCH_OFFSET (7 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_HARTID_TO_SCRATCH_OFFSET (10 * __SIZEOF_POINTER__)
/** Offset of trap_exit member in sbi_scratch */
#define SBI_SCRATCH_TRAP_EXIT_OFFSET (8 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_TRAP_EXIT_OFFSET (11 * __SIZEOF_POINTER__)
/** Offset of tmp0 member in sbi_scratch */
#define SBI_SCRATCH_TMP0_OFFSET (9 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_TMP0_OFFSET (12 * __SIZEOF_POINTER__)
/** Offset of options member in sbi_scratch */
#define SBI_SCRATCH_OPTIONS_OFFSET (10 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_OPTIONS_OFFSET (13 * __SIZEOF_POINTER__)
/** Offset of extra space in sbi_scratch */
#define SBI_SCRATCH_EXTRA_SPACE_OFFSET (11 * __SIZEOF_POINTER__)
#define SBI_SCRATCH_EXTRA_SPACE_OFFSET (14 * __SIZEOF_POINTER__)
/** Maximum size of sbi_scratch (4KB) */
#define SBI_SCRATCH_SIZE (0x1000)
@@ -53,6 +59,12 @@ struct sbi_scratch {
unsigned long fw_start;
/** Size (in bytes) of firmware linked to OpenSBI library */
unsigned long fw_size;
/** Offset (in bytes) of the R/W section */
unsigned long fw_rw_offset;
/** Offset (in bytes) of the heap area */
unsigned long fw_heap_offset;
/** Size (in bytes) of the heap area */
unsigned long fw_heap_size;
/** Arg1 (or 'a1' register) of next booting stage for this HART */
unsigned long next_arg1;
/** Address of next booting stage for this HART */
@@ -163,6 +175,9 @@ unsigned long sbi_scratch_alloc_offset(unsigned long size);
/** Free-up extra space in sbi_scratch */
void sbi_scratch_free_offset(unsigned long offset);
/** Amount (in bytes) of used space in sbi_scratch */
unsigned long sbi_scratch_used_space(void);
/** Get pointer from offset in sbi_scratch */
#define sbi_scratch_offset_ptr(scratch, offset) (void *)((char *)(scratch) + (offset))
@@ -170,6 +185,23 @@ void sbi_scratch_free_offset(unsigned long offset);
#define sbi_scratch_thishart_offset_ptr(offset) \
(void *)((char *)sbi_scratch_thishart_ptr() + (offset))
/** Allocate offset for a data type in sbi_scratch */
#define sbi_scratch_alloc_type_offset(__type) \
sbi_scratch_alloc_offset(sizeof(__type))
/** Read a data type from sbi_scratch at given offset */
#define sbi_scratch_read_type(__scratch, __type, __offset) \
({ \
*((__type *)sbi_scratch_offset_ptr((__scratch), (__offset))); \
})
/** Write a data type to sbi_scratch at given offset */
#define sbi_scratch_write_type(__scratch, __type, __offset, __ptr) \
do { \
*((__type *)sbi_scratch_offset_ptr((__scratch), (__offset))) \
= (__type)(__ptr); \
} while (0)
/** HART id to scratch table */
extern struct sbi_scratch *hartid_to_scratch_table[];


@@ -43,4 +43,38 @@ bool sbi_system_reset_supported(u32 reset_type, u32 reset_reason);
void __noreturn sbi_system_reset(u32 reset_type, u32 reset_reason);
/** System suspend device */
struct sbi_system_suspend_device {
/** Name of the system suspend device */
char name[32];
/**
* Check whether sleep type is supported by the device
*
* Returns 0 when @sleep_type supported, SBI_ERR_INVALID_PARAM
* when @sleep_type is reserved, or SBI_ERR_NOT_SUPPORTED when
* @sleep_type is not reserved and is implemented, but the
* platform doesn't support it due to missing dependencies.
*/
int (*system_suspend_check)(u32 sleep_type);
/**
* Suspend the system
*
* @sleep_type: The sleep type identifier passed to the SBI call.
* @mmode_resume_addr:
* This is the same as sbi_scratch.warmboot_addr. Some platforms
* may not be able to return from system_suspend(), so they will
* jump directly to this address instead. Platforms which can
* return from system_suspend() may ignore this parameter.
*/
int (*system_suspend)(u32 sleep_type, unsigned long mmode_resume_addr);
};
const struct sbi_system_suspend_device *sbi_system_suspend_get_device(void);
void sbi_system_suspend_set_device(struct sbi_system_suspend_device *dev);
void sbi_system_suspend_test_enable(void);
bool sbi_system_suspend_supported(u32 sleep_type);
int sbi_system_suspend(u32 sleep_type, ulong resume_addr, ulong opaque);
#endif
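To illustrate the callback contract above, here is a hedged sketch of a hypothetical suspend driver. The device name, the helpers, and the supported sleep type `0` are all invented for illustration; the struct and error code are re-stated locally instead of including the real headers:

```c
#include <assert.h>

#define SBI_ERR_INVALID_PARAM (-3)	/* re-stated from sbi_error.h */
typedef unsigned int u32;

struct sbi_system_suspend_device {
	char name[32];
	int (*system_suspend_check)(u32 sleep_type);
	int (*system_suspend)(u32 sleep_type, unsigned long mmode_resume_addr);
};

/* Hypothetical driver: only sleep type 0 is supported */
static int demo_suspend_check(u32 sleep_type)
{
	return (sleep_type == 0) ? 0 : SBI_ERR_INVALID_PARAM;
}

static int demo_suspend(u32 sleep_type, unsigned long mmode_resume_addr)
{
	(void)sleep_type;
	(void)mmode_resume_addr;
	/* A real driver would program the power controller here and
	 * either return on wakeup or resume at mmode_resume_addr. */
	return 0;
}

static struct sbi_system_suspend_device demo_susp_dev = {
	.name                 = "demo-susp",
	.system_suspend_check = demo_suspend_check,
	.system_suspend       = demo_suspend,
};
```

A platform would hand such a device to `sbi_system_suspend_set_device()` during early init, after which `sbi_system_suspend()` routes SUSP ecalls through the two callbacks.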


@@ -54,16 +54,22 @@ typedef unsigned long virtual_size_t;
typedef unsigned long physical_addr_t;
typedef unsigned long physical_size_t;
#define TRUE 1
#define FALSE 0
#define true TRUE
#define false FALSE
typedef uint16_t le16_t;
typedef uint16_t be16_t;
typedef uint32_t le32_t;
typedef uint32_t be32_t;
typedef uint64_t le64_t;
typedef uint64_t be64_t;
#define true 1
#define false 0
#define NULL ((void *)0)
#define __packed __attribute__((packed))
#define __noreturn __attribute__((noreturn))
#define __aligned(x) __attribute__((aligned(x)))
#define __always_inline inline __attribute__((always_inline))
#define likely(x) __builtin_expect((x), 1)
#define unlikely(x) __builtin_expect((x), 0)
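The new fixed-endianness typedefs are meant to be combined with `__packed` when describing wire or register layouts. A small standalone sketch (definitions re-stated locally) showing that padding between fields is suppressed, so the struct maps byte-for-byte onto the layout it describes:

```c
#include <assert.h>
#include <stdint.h>

/* re-stated from sbi_types.h for illustration */
typedef uint16_t le16_t;
typedef uint32_t le32_t;
#define __packed __attribute__((packed))

/* hypothetical 6-byte wire header: without __packed the compiler
 * would insert 2 bytes of padding after 'magic' and sizeof would be 8 */
struct demo_hdr {
	le16_t magic;
	le32_t length;
} __packed;
```

The typedefs carry no behavior by themselves; they document the byte order a field is stored in, leaving conversion to the accessors that use them.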


@@ -11,7 +11,7 @@
#define __SBI_VERSION_H__
#define OPENSBI_VERSION_MAJOR 1
#define OPENSBI_VERSION_MINOR 2
#define OPENSBI_VERSION_MINOR 3
/**
* OpenSBI 32-bit version with:


@@ -9,6 +9,29 @@
#ifndef __FDT_FIXUP_H__
#define __FDT_FIXUP_H__
struct sbi_cpu_idle_state {
const char *name;
uint32_t suspend_param;
bool local_timer_stop;
uint32_t entry_latency_us;
uint32_t exit_latency_us;
uint32_t min_residency_us;
uint32_t wakeup_latency_us;
};
/**
* Add CPU idle states to cpu nodes in the DT
*
* Add information about CPU idle states to the devicetree. This function
* assumes that CPU idle states are not already present in the devicetree, and
* that all CPU states are equally applicable to all CPUs.
*
* @param fdt: device tree blob
* @param states: array of idle state descriptions, ending with empty element
* @return zero on success and -ve on failure
*/
int fdt_add_cpu_idle_states(void *dtb, const struct sbi_cpu_idle_state *state);
/**
* Fix up the CPU node in the device tree
*
@@ -70,20 +93,6 @@ void fdt_plic_fixup(void *fdt);
*/
int fdt_reserved_memory_fixup(void *fdt);
/**
* Fix up the reserved memory subnodes in the device tree
*
* This routine adds the no-map property to the reserved memory subnodes so
* that the OS does not map those PMP protected memory regions.
*
* Platform codes must call this helper in their final_init() after fdt_fixups()
* if the OS should not map the PMP protected reserved regions.
*
* @param fdt: device tree blob
* @return zero on success and -ve on failure
*/
int fdt_reserved_memory_nomap_fixup(void *fdt);
/**
* General device tree fix-up
*


@@ -11,7 +11,7 @@
#define __FDT_HELPER_H__
#include <sbi/sbi_types.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_domain.h>
struct fdt_match {
const char *compatible;
@@ -109,7 +109,7 @@ int fdt_parse_compat_addr(void *fdt, uint64_t *addr,
static inline void *fdt_get_address(void)
{
return sbi_scratch_thishart_arg1_ptr();
return (void *)root.next_arg1;
}
#endif /* __FDT_HELPER_H__ */


@@ -0,0 +1,21 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2022 StarFive Technology Co., Ltd.
*
* Author: Minda Chen <minda.chen@starfivetech.com>
*/
#ifndef __DW_I2C_H__
#define __DW_I2C_H__
#include <sbi_utils/i2c/i2c.h>
int dw_i2c_init(struct i2c_adapter *, int nodeoff);
struct dw_i2c_adapter {
unsigned long addr;
struct i2c_adapter adapter;
};
#endif


@@ -0,0 +1,59 @@
/*
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2023 Andes Technology Corporation
*/
#ifndef _SYS_ATCSMU_H
#define _SYS_ATCSMU_H
#include <sbi/sbi_types.h>
/* clang-format off */
#define PCS0_WE_OFFSET 0x90
#define PCSm_WE_OFFSET(i) ((i + 3) * 0x20 + PCS0_WE_OFFSET)
#define PCS0_CTL_OFFSET 0x94
#define PCSm_CTL_OFFSET(i) ((i + 3) * 0x20 + PCS0_CTL_OFFSET)
#define PCS_CTL_CMD_SHIFT 0
#define PCS_CTL_PARAM_SHIFT 3
#define SLEEP_CMD 0x3
#define WAKEUP_CMD (0x0 | (1 << PCS_CTL_PARAM_SHIFT))
#define LIGHTSLEEP_MODE 0
#define DEEPSLEEP_MODE 1
#define LIGHT_SLEEP_CMD (SLEEP_CMD | (LIGHTSLEEP_MODE << PCS_CTL_PARAM_SHIFT))
#define DEEP_SLEEP_CMD (SLEEP_CMD | (DEEPSLEEP_MODE << PCS_CTL_PARAM_SHIFT))
#define PCS0_CFG_OFFSET 0x80
#define PCSm_CFG_OFFSET(i) ((i + 3) * 0x20 + PCS0_CFG_OFFSET)
#define PCS_CFG_LIGHT_SLEEP_SHIFT 2
#define PCS_CFG_LIGHT_SLEEP (1 << PCS_CFG_LIGHT_SLEEP_SHIFT)
#define PCS_CFG_DEEP_SLEEP_SHIFT 3
#define PCS_CFG_DEEP_SLEEP (1 << PCS_CFG_DEEP_SLEEP_SHIFT)
#define RESET_VEC_LO_OFFSET 0x50
#define RESET_VEC_HI_OFFSET 0x60
#define RESET_VEC_8CORE_OFFSET 0x1a0
#define HARTn_RESET_VEC_LO(n) (RESET_VEC_LO_OFFSET + \
((n) < 4 ? 0 : RESET_VEC_8CORE_OFFSET) + \
((n) * 0x4))
#define HARTn_RESET_VEC_HI(n) (RESET_VEC_HI_OFFSET + \
((n) < 4 ? 0 : RESET_VEC_8CORE_OFFSET) + \
((n) * 0x4))
#define PCS_MAX_NR 8
#define FLASH_BASE 0x80000000ULL
/* clang-format on */
struct smu_data {
unsigned long addr;
};
int smu_set_wakeup_events(struct smu_data *smu, u32 events, u32 hartid);
bool smu_support_sleep_mode(struct smu_data *smu, u32 sleep_mode, u32 hartid);
int smu_set_command(struct smu_data *smu, u32 pcs_ctl, u32 hartid);
int smu_set_reset_vector(struct smu_data *smu, ulong wakeup_addr, u32 hartid);
#endif /* _SYS_ATCSMU_H */
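The offset macros above encode the SMU register layout: PCS slots for harts start at slot index 3 and are 0x20 bytes apart, and harts 4-7 have their reset-vector registers displaced by `RESET_VEC_8CORE_OFFSET`. A standalone re-statement of the arithmetic (macro bodies copied from the header, with the index parenthesized defensively):

```c
#include <assert.h>

/* re-stated from atcsmu.h for illustration */
#define PCS0_WE_OFFSET         0x90
#define PCSm_WE_OFFSET(i)      (((i) + 3) * 0x20 + PCS0_WE_OFFSET)
#define RESET_VEC_LO_OFFSET    0x50
#define RESET_VEC_8CORE_OFFSET 0x1a0
#define HARTn_RESET_VEC_LO(n)  (RESET_VEC_LO_OFFSET + \
				((n) < 4 ? 0 : RESET_VEC_8CORE_OFFSET) + \
				((n) * 0x4))

/* hart 0 wakeup-event register: (0 + 3) * 0x20 + 0x90 = 0xf0      */
/* hart 2 reset vector (low):    0x50 + 0 + 2 * 4     = 0x58       */
/* hart 5 reset vector (low):    0x50 + 0x1a0 + 5 * 4 = 0x204      */
```

Drivers add these offsets to `smu_data.addr` to reach the per-hart registers, which is why the API takes a `hartid` alongside the `smu_data` handle.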


@@ -22,10 +22,22 @@ config SBI_ECALL_SRST
bool "System Reset extension"
default y
config SBI_ECALL_SUSP
bool "System Suspend extension"
default y
config SBI_ECALL_PMU
bool "Performance Monitoring Unit extension"
default y
config SBI_ECALL_DBCN
bool "Debug Console extension"
default y
config SBI_ECALL_CPPC
bool "CPPC extension"
default y
config SBI_ECALL_LEGACY
bool "SBI v0.1 legacy extensions"
default y


@@ -34,9 +34,18 @@ libsbi-objs-$(CONFIG_SBI_ECALL_HSM) += sbi_ecall_hsm.o
carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_SRST) += ecall_srst
libsbi-objs-$(CONFIG_SBI_ECALL_SRST) += sbi_ecall_srst.o
carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_SUSP) += ecall_susp
libsbi-objs-$(CONFIG_SBI_ECALL_SUSP) += sbi_ecall_susp.o
carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_PMU) += ecall_pmu
libsbi-objs-$(CONFIG_SBI_ECALL_PMU) += sbi_ecall_pmu.o
carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_DBCN) += ecall_dbcn
libsbi-objs-$(CONFIG_SBI_ECALL_DBCN) += sbi_ecall_dbcn.o
carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_CPPC) += ecall_cppc
libsbi-objs-$(CONFIG_SBI_ECALL_CPPC) += sbi_ecall_cppc.o
carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_LEGACY) += ecall_legacy
libsbi-objs-$(CONFIG_SBI_ECALL_LEGACY) += sbi_ecall_legacy.o
@@ -50,6 +59,7 @@ libsbi-objs-y += sbi_domain.o
libsbi-objs-y += sbi_emulate_csr.o
libsbi-objs-y += sbi_fifo.o
libsbi-objs-y += sbi_hart.o
libsbi-objs-y += sbi_heap.o
libsbi-objs-y += sbi_math.o
libsbi-objs-y += sbi_hfence.o
libsbi-objs-y += sbi_hsm.o
@@ -68,3 +78,4 @@ libsbi-objs-y += sbi_tlb.o
libsbi-objs-y += sbi_trap.o
libsbi-objs-y += sbi_unpriv.o
libsbi-objs-y += sbi_expected_trap.o
libsbi-objs-y += sbi_cppc.o


@@ -152,7 +152,7 @@ unsigned long csr_read_num(int csr_num)
default:
sbi_panic("%s: Unknown CSR %#x", __func__, csr_num);
break;
};
}
return ret;
@@ -220,7 +220,7 @@ void csr_write_num(int csr_num, unsigned long val)
default:
sbi_panic("%s: Unknown CSR %#x", __func__, csr_num);
break;
};
}
#undef switchcase_csr_write_64
#undef switchcase_csr_write_32


@@ -12,17 +12,22 @@
#include <sbi/sbi_hart.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
#define CONSOLE_TBUF_MAX 256
static const struct sbi_console_device *console_dev = NULL;
static char console_tbuf[CONSOLE_TBUF_MAX];
static u32 console_tbuf_len;
static spinlock_t console_out_lock = SPIN_LOCK_INITIALIZER;
bool sbi_isprintable(char c)
{
if (((31 < c) && (c < 127)) || (c == '\f') || (c == '\r') ||
(c == '\n') || (c == '\t')) {
return TRUE;
return true;
}
return FALSE;
return false;
}
int sbi_getc(void)
@@ -41,16 +46,49 @@ void sbi_putc(char ch)
}
}
static unsigned long nputs(const char *str, unsigned long len)
{
unsigned long i, ret;
if (console_dev && console_dev->console_puts) {
ret = console_dev->console_puts(str, len);
} else {
for (i = 0; i < len; i++)
sbi_putc(str[i]);
ret = len;
}
return ret;
}
static void nputs_all(const char *str, unsigned long len)
{
unsigned long p = 0;
while (p < len)
p += nputs(&str[p], len - p);
}
void sbi_puts(const char *str)
{
unsigned long len = sbi_strlen(str);
spin_lock(&console_out_lock);
while (*str) {
sbi_putc(*str);
str++;
}
nputs_all(str, len);
spin_unlock(&console_out_lock);
}
unsigned long sbi_nputs(const char *str, unsigned long len)
{
unsigned long ret;
spin_lock(&console_out_lock);
ret = nputs(str, len);
spin_unlock(&console_out_lock);
return ret;
}
void sbi_gets(char *s, int maxwidth, char endchar)
{
int ch;
@@ -64,6 +102,21 @@ void sbi_gets(char *s, int maxwidth, char endchar)
*retval = '\0';
}
unsigned long sbi_ngets(char *str, unsigned long len)
{
int ch;
unsigned long i;
for (i = 0; i < len; i++) {
ch = sbi_getc();
if (ch < 0)
break;
str[i] = ch;
}
return i;
}
#define PAD_RIGHT 1
#define PAD_ZERO 2
#define PAD_ALTERNATE 4
@@ -183,12 +236,30 @@ static int printi(char **out, u32 *out_len, long long i, int b, int sg,
static int print(char **out, u32 *out_len, const char *format, va_list args)
{
int width, flags;
int pc = 0;
char scr[2];
int width, flags, pc = 0;
char scr[2], *tout;
bool use_tbuf = (!out) ? true : false;
unsigned long long tmp;
/*
* The console_tbuf is protected by console_out_lock and
* print() is always called with console_out_lock held
* when out == NULL.
*/
if (use_tbuf) {
console_tbuf_len = CONSOLE_TBUF_MAX;
tout = console_tbuf;
out = &tout;
out_len = &console_tbuf_len;
}
for (; *format != 0; ++format) {
if (use_tbuf && !console_tbuf_len) {
nputs_all(console_tbuf, CONSOLE_TBUF_MAX);
console_tbuf_len = CONSOLE_TBUF_MAX;
tout = console_tbuf;
}
if (*format == '%') {
++format;
width = flags = 0;
@@ -314,6 +385,9 @@ literal:
}
}
if (use_tbuf && console_tbuf_len < CONSOLE_TBUF_MAX)
nputs_all(console_tbuf, CONSOLE_TBUF_MAX - console_tbuf_len);
return pc;
}
@@ -407,5 +481,11 @@ void sbi_console_set_device(const struct sbi_console_device *dev)
int sbi_console_init(struct sbi_scratch *scratch)
{
return sbi_platform_console_init(sbi_platform_ptr(scratch));
int rc = sbi_platform_console_init(sbi_platform_ptr(scratch));
/* console is not a necessary device */
if (rc == SBI_ENODEV)
return 0;
return rc;
}

lib/sbi/sbi_cppc.c (new file, 110 lines)

@@ -0,0 +1,110 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*
*/
#include <sbi/sbi_error.h>
#include <sbi/sbi_cppc.h>
static const struct sbi_cppc_device *cppc_dev = NULL;
const struct sbi_cppc_device *sbi_cppc_get_device(void)
{
return cppc_dev;
}
void sbi_cppc_set_device(const struct sbi_cppc_device *dev)
{
if (!dev || cppc_dev)
return;
cppc_dev = dev;
}
static bool sbi_cppc_is_reserved(unsigned long reg)
{
if ((reg > SBI_CPPC_ACPI_LAST && reg < SBI_CPPC_TRANSITION_LATENCY) ||
reg > SBI_CPPC_NON_ACPI_LAST)
return true;
return false;
}
static bool sbi_cppc_readable(unsigned long reg)
{
/* there are no write-only cppc registers currently */
return true;
}
static bool sbi_cppc_writable(unsigned long reg)
{
switch (reg) {
case SBI_CPPC_HIGHEST_PERF:
case SBI_CPPC_NOMINAL_PERF:
case SBI_CPPC_LOW_NON_LINEAR_PERF:
case SBI_CPPC_LOWEST_PERF:
case SBI_CPPC_GUARANTEED_PERF:
case SBI_CPPC_CTR_WRAP_TIME:
case SBI_CPPC_REFERENCE_CTR:
case SBI_CPPC_DELIVERED_CTR:
case SBI_CPPC_REFERENCE_PERF:
case SBI_CPPC_LOWEST_FREQ:
case SBI_CPPC_NOMINAL_FREQ:
case SBI_CPPC_TRANSITION_LATENCY:
return false;
}
return true;
}
int sbi_cppc_probe(unsigned long reg)
{
if (!cppc_dev || !cppc_dev->cppc_probe)
return SBI_EFAIL;
/* Check whether register is reserved */
if (sbi_cppc_is_reserved(reg))
return SBI_ERR_INVALID_PARAM;
return cppc_dev->cppc_probe(reg);
}
int sbi_cppc_read(unsigned long reg, uint64_t *val)
{
int ret;
if (!cppc_dev || !cppc_dev->cppc_read)
return SBI_EFAIL;
/* Check whether register is implemented */
ret = sbi_cppc_probe(reg);
if (ret <= 0)
return SBI_ERR_NOT_SUPPORTED;
/* Check whether the register is write-only */
if (!sbi_cppc_readable(reg))
return SBI_ERR_DENIED;
return cppc_dev->cppc_read(reg, val);
}
int sbi_cppc_write(unsigned long reg, uint64_t val)
{
int ret;
if (!cppc_dev || !cppc_dev->cppc_write)
return SBI_EFAIL;
/* Check whether register is implemented */
ret = sbi_cppc_probe(reg);
if (ret <= 0)
return SBI_ERR_NOT_SUPPORTED;
/* Check whether the register is read-only */
if (!sbi_cppc_writable(reg))
return SBI_ERR_DENIED;
return cppc_dev->cppc_write(reg, val);
}


@@ -11,37 +11,63 @@
#include <sbi/sbi_console.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_math.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
struct sbi_domain *hartid_to_domain_table[SBI_HARTMASK_MAX_BITS] = { 0 };
struct sbi_domain *domidx_to_domain_table[SBI_DOMAIN_MAX_INDEX] = { 0 };
/*
* We allocate an extra element because sbi_domain_for_each() expects
* the array to be null-terminated.
*/
struct sbi_domain *domidx_to_domain_table[SBI_DOMAIN_MAX_INDEX + 1] = { 0 };
static u32 domain_count = 0;
static bool domain_finalized = false;
static struct sbi_hartmask root_hmask = { 0 };
#define ROOT_REGION_MAX 16
static u32 root_memregs_count = 0;
static struct sbi_domain_memregion root_fw_region;
static struct sbi_domain_memregion root_memregs[ROOT_REGION_MAX + 1] = { 0 };
struct sbi_domain root = {
.name = "root",
.possible_harts = &root_hmask,
.regions = root_memregs,
.system_reset_allowed = TRUE,
.possible_harts = NULL,
.regions = NULL,
.system_reset_allowed = true,
.system_suspend_allowed = true,
.fw_region_inited = false,
};
static unsigned long domain_hart_ptr_offset;
struct sbi_domain *sbi_hartid_to_domain(u32 hartid)
{
struct sbi_scratch *scratch;
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch || !domain_hart_ptr_offset)
return NULL;
return sbi_scratch_read_type(scratch, void *, domain_hart_ptr_offset);
}
static void update_hartid_to_domain(u32 hartid, struct sbi_domain *dom)
{
struct sbi_scratch *scratch;
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
return;
sbi_scratch_write_type(scratch, void *, domain_hart_ptr_offset, dom);
}
bool sbi_domain_is_assigned_hart(const struct sbi_domain *dom, u32 hartid)
{
if (dom)
return sbi_hartmask_test_hart(hartid, &dom->assigned_harts);
return FALSE;
return false;
}
ulong sbi_domain_get_assigned_hartmask(const struct sbi_domain *dom,
@@ -64,14 +90,6 @@ ulong sbi_domain_get_assigned_hartmask(const struct sbi_domain *dom,
return ret;
}
static void domain_memregion_initfw(struct sbi_domain_memregion *reg)
{
if (!reg)
return;
sbi_memcpy(reg, &root_fw_region, sizeof(*reg));
}
void sbi_domain_memregion_init(unsigned long addr,
unsigned long size,
unsigned long flags,
@@ -105,54 +123,64 @@ bool sbi_domain_check_addr(const struct sbi_domain *dom,
unsigned long addr, unsigned long mode,
unsigned long access_flags)
{
bool rmmio, mmio = FALSE;
bool rmmio, mmio = false;
struct sbi_domain_memregion *reg;
unsigned long rstart, rend, rflags, rwx = 0;
unsigned long rstart, rend, rflags, rwx = 0, rrwx = 0;
if (!dom)
return FALSE;
return false;
/*
* Use M_{R/W/X} bits because the SU-bits are at the
* same relative offsets. If the mode is not M, the SU
* bits will fall at same offsets after the shift.
*/
if (access_flags & SBI_DOMAIN_READ)
rwx |= SBI_DOMAIN_MEMREGION_READABLE;
rwx |= SBI_DOMAIN_MEMREGION_M_READABLE;
if (access_flags & SBI_DOMAIN_WRITE)
rwx |= SBI_DOMAIN_MEMREGION_WRITEABLE;
rwx |= SBI_DOMAIN_MEMREGION_M_WRITABLE;
if (access_flags & SBI_DOMAIN_EXECUTE)
rwx |= SBI_DOMAIN_MEMREGION_EXECUTABLE;
rwx |= SBI_DOMAIN_MEMREGION_M_EXECUTABLE;
if (access_flags & SBI_DOMAIN_MMIO)
mmio = TRUE;
mmio = true;
sbi_domain_for_each_memregion(dom, reg) {
rflags = reg->flags;
if (mode == PRV_M && !(rflags & SBI_DOMAIN_MEMREGION_MMODE))
continue;
rrwx = (mode == PRV_M ?
(rflags & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK) :
(rflags & SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK)
>> SBI_DOMAIN_MEMREGION_SU_ACCESS_SHIFT);
rstart = reg->base;
rend = (reg->order < __riscv_xlen) ?
rstart + ((1UL << reg->order) - 1) : -1UL;
if (rstart <= addr && addr <= rend) {
rmmio = (rflags & SBI_DOMAIN_MEMREGION_MMIO) ? TRUE : FALSE;
rmmio = (rflags & SBI_DOMAIN_MEMREGION_MMIO) ? true : false;
if (mmio != rmmio)
return FALSE;
return ((rflags & rwx) == rwx) ? TRUE : FALSE;
return false;
return ((rrwx & rwx) == rwx) ? true : false;
}
}
return (mode == PRV_M) ? TRUE : FALSE;
return (mode == PRV_M) ? true : false;
}
/* Check if region complies with constraints */
static bool is_region_valid(const struct sbi_domain_memregion *reg)
{
if (reg->order < 3 || __riscv_xlen < reg->order)
return FALSE;
return false;
if (reg->order == __riscv_xlen && reg->base != 0)
return FALSE;
return false;
if (reg->order < __riscv_xlen && (reg->base & (BIT(reg->order) - 1)))
return FALSE;
return false;
return TRUE;
return true;
}
/** Check if regionA is sub-region of regionB */
@@ -168,9 +196,9 @@ static bool is_region_subset(const struct sbi_domain_memregion *regA,
(regA_start < regB_end) &&
(regB_start < regA_end) &&
(regA_end <= regB_end))
return TRUE;
return true;
return FALSE;
return false;
}
/** Check if regionA conflicts regionB */
@@ -179,9 +207,9 @@ static bool is_region_conflict(const struct sbi_domain_memregion *regA,
{
if ((is_region_subset(regA, regB) || is_region_subset(regB, regA)) &&
regA->flags == regB->flags)
return TRUE;
return true;
return FALSE;
return false;
}
/** Check if regionA should be placed before regionB */
@@ -189,20 +217,57 @@ static bool is_region_before(const struct sbi_domain_memregion *regA,
const struct sbi_domain_memregion *regB)
{
if (regA->order < regB->order)
return TRUE;
return true;
if ((regA->order == regB->order) &&
(regA->base < regB->base))
return TRUE;
return true;
return FALSE;
return false;
}
static const struct sbi_domain_memregion *find_region(
const struct sbi_domain *dom,
unsigned long addr)
{
unsigned long rstart, rend;
struct sbi_domain_memregion *reg;
sbi_domain_for_each_memregion(dom, reg) {
rstart = reg->base;
rend = (reg->order < __riscv_xlen) ?
rstart + ((1UL << reg->order) - 1) : -1UL;
if (rstart <= addr && addr <= rend)
return reg;
}
return NULL;
}
static const struct sbi_domain_memregion *find_next_subset_region(
const struct sbi_domain *dom,
const struct sbi_domain_memregion *reg,
unsigned long addr)
{
struct sbi_domain_memregion *sreg, *ret = NULL;
sbi_domain_for_each_memregion(dom, sreg) {
if (sreg == reg || (sreg->base <= addr) ||
!is_region_subset(sreg, reg))
continue;
if (!ret || (sreg->base < ret->base) ||
((sreg->base == ret->base) && (sreg->order < ret->order)))
ret = sreg;
}
return ret;
}
static int sanitize_domain(const struct sbi_platform *plat,
struct sbi_domain *dom)
{
u32 i, j, count;
bool have_fw_reg;
struct sbi_domain_memregion treg, *reg, *reg1;
/* Check possible HARTs */
@@ -217,7 +282,7 @@ static int sanitize_domain(const struct sbi_platform *plat,
"hart %d\n", __func__, dom->name, i);
return SBI_EINVAL;
}
};
}
/* Check memory regions */
if (!dom->regions) {
@@ -235,17 +300,13 @@ static int sanitize_domain(const struct sbi_platform *plat,
}
}
/* Count memory regions and check presence of firmware region */
/* Count memory regions */
count = 0;
have_fw_reg = FALSE;
sbi_domain_for_each_memregion(dom, reg) {
if (reg->order == root_fw_region.order &&
reg->base == root_fw_region.base &&
reg->flags == root_fw_region.flags)
have_fw_reg = TRUE;
sbi_domain_for_each_memregion(dom, reg)
count++;
}
if (!have_fw_reg) {
/* Check presence of firmware regions */
if (!dom->fw_region_inited) {
sbi_printf("%s: %s does not have firmware region\n",
__func__, dom->name);
return SBI_EINVAL;
@@ -285,7 +346,7 @@ static int sanitize_domain(const struct sbi_platform *plat,
/*
* Check next mode
*
* We only allow next mode to be S-mode or U-mode.so that we can
* We only allow next mode to be S-mode or U-mode, so that we can
* protect M-mode context and enforce checks on memory accesses.
*/
if (dom->next_mode != PRV_S &&
@@ -295,7 +356,7 @@ static int sanitize_domain(const struct sbi_platform *plat,
return SBI_EINVAL;
}
/* Check next address and next mode*/
/* Check next address and next mode */
if (!sbi_domain_check_addr(dom, dom->next_addr, dom->next_mode,
SBI_DOMAIN_EXECUTE)) {
sbi_printf("%s: %s next booting stage address 0x%lx can't "
@@ -306,6 +367,37 @@ static int sanitize_domain(const struct sbi_platform *plat,
return 0;
}
bool sbi_domain_check_addr_range(const struct sbi_domain *dom,
unsigned long addr, unsigned long size,
unsigned long mode,
unsigned long access_flags)
{
unsigned long max = addr + size;
const struct sbi_domain_memregion *reg, *sreg;
if (!dom)
return false;
while (addr < max) {
reg = find_region(dom, addr);
if (!reg)
return false;
if (!sbi_domain_check_addr(dom, addr, mode, access_flags))
return false;
sreg = find_next_subset_region(dom, reg, addr);
if (sreg)
addr = sreg->base;
else if (reg->order < __riscv_xlen)
addr = reg->base + (1UL << reg->order);
else
break;
}
return true;
}
void sbi_domain_dump(const struct sbi_domain *dom, const char *suffix)
{
u32 i, k;
@@ -335,15 +427,25 @@ void sbi_domain_dump(const struct sbi_domain *dom, const char *suffix)
dom->index, i, suffix, rstart, rend);
k = 0;
if (reg->flags & SBI_DOMAIN_MEMREGION_MMODE)
sbi_printf("%cM", (k++) ? ',' : '(');
sbi_printf("M: ");
if (reg->flags & SBI_DOMAIN_MEMREGION_MMIO)
sbi_printf("%cI", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_READABLE)
if (reg->flags & SBI_DOMAIN_MEMREGION_M_READABLE)
sbi_printf("%cR", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_WRITEABLE)
if (reg->flags & SBI_DOMAIN_MEMREGION_M_WRITABLE)
sbi_printf("%cW", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_EXECUTABLE)
if (reg->flags & SBI_DOMAIN_MEMREGION_M_EXECUTABLE)
sbi_printf("%cX", (k++) ? ',' : '(');
sbi_printf("%s ", (k++) ? ")" : "()");
k = 0;
sbi_printf("S/U: ");
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
sbi_printf("%cR", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
sbi_printf("%cW", (k++) ? ',' : '(');
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
sbi_printf("%cX", (k++) ? ',' : '(');
sbi_printf("%s\n", (k++) ? ")" : "()");
@@ -370,10 +472,13 @@ void sbi_domain_dump(const struct sbi_domain *dom, const char *suffix)
default:
sbi_printf("Unknown\n");
break;
};
}
sbi_printf("Domain%d SysReset %s: %s\n",
dom->index, suffix, (dom->system_reset_allowed) ? "yes" : "no");
sbi_printf("Domain%d SysSuspend %s: %s\n",
dom->index, suffix, (dom->system_suspend_allowed) ? "yes" : "no");
}
void sbi_domain_dump_all(const char *suffix)
@@ -437,11 +542,11 @@ int sbi_domain_register(struct sbi_domain *dom,
if (!sbi_hartmask_test_hart(i, dom->possible_harts))
continue;
tdom = hartid_to_domain_table[i];
tdom = sbi_hartid_to_domain(i);
if (tdom)
sbi_hartmask_clear_hart(i,
&tdom->assigned_harts);
hartid_to_domain_table[i] = dom;
update_hartid_to_domain(i, dom);
sbi_hartmask_set_hart(i, &dom->assigned_harts);
/*
@@ -467,8 +572,7 @@ int sbi_domain_root_add_memregion(const struct sbi_domain_memregion *reg)
const struct sbi_platform *plat = sbi_platform_thishart_ptr();
/* Sanity checks */
if (!reg || domain_finalized ||
(root.regions != root_memregs) ||
if (!reg || domain_finalized || !root.regions ||
(ROOT_REGION_MAX <= root_memregs_count))
return SBI_EINVAL;
@@ -483,10 +587,10 @@ int sbi_domain_root_add_memregion(const struct sbi_domain_memregion *reg)
}
/* Append the memregion to root memregions */
nreg = &root_memregs[root_memregs_count];
nreg = &root.regions[root_memregs_count];
sbi_memcpy(nreg, reg, sizeof(*reg));
root_memregs_count++;
root_memregs[root_memregs_count].order = 0;
root.regions[root_memregs_count].order = 0;
/* Sort and optimize root regions */
do {
@@ -616,12 +720,57 @@ int sbi_domain_finalize(struct sbi_scratch *scratch, u32 cold_hartid)
int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid)
{
u32 i;
int rc;
struct sbi_hartmask *root_hmask;
struct sbi_domain_memregion *root_memregs;
const struct sbi_platform *plat = sbi_platform_ptr(scratch);
if (scratch->fw_rw_offset == 0 ||
(scratch->fw_rw_offset & (scratch->fw_rw_offset - 1)) != 0) {
sbi_printf("%s: fw_rw_offset is not a power of 2 (0x%lx)\n",
__func__, scratch->fw_rw_offset);
return SBI_EINVAL;
}
if ((scratch->fw_start & (scratch->fw_rw_offset - 1)) != 0) {
sbi_printf("%s: fw_start and fw_rw_offset not aligned\n",
__func__);
return SBI_EINVAL;
}
domain_hart_ptr_offset = sbi_scratch_alloc_type_offset(void *);
if (!domain_hart_ptr_offset)
return SBI_ENOMEM;
root_memregs = sbi_calloc(sizeof(*root_memregs), ROOT_REGION_MAX + 1);
if (!root_memregs) {
sbi_printf("%s: no memory for root regions\n", __func__);
rc = SBI_ENOMEM;
goto fail_free_domain_hart_ptr_offset;
}
root.regions = root_memregs;
root_hmask = sbi_zalloc(sizeof(*root_hmask));
if (!root_hmask) {
sbi_printf("%s: no memory for root hartmask\n", __func__);
rc = SBI_ENOMEM;
goto fail_free_root_memregs;
}
root.possible_harts = root_hmask;
/* Root domain firmware memory region */
sbi_domain_memregion_init(scratch->fw_start, scratch->fw_size, 0,
&root_fw_region);
domain_memregion_initfw(&root_memregs[root_memregs_count++]);
sbi_domain_memregion_init(scratch->fw_start, scratch->fw_rw_offset,
(SBI_DOMAIN_MEMREGION_M_READABLE |
SBI_DOMAIN_MEMREGION_M_EXECUTABLE),
&root_memregs[root_memregs_count++]);
sbi_domain_memregion_init((scratch->fw_start + scratch->fw_rw_offset),
(scratch->fw_size - scratch->fw_rw_offset),
(SBI_DOMAIN_MEMREGION_M_READABLE |
SBI_DOMAIN_MEMREGION_M_WRITABLE),
&root_memregs[root_memregs_count++]);
root.fw_region_inited = true;
/* Root domain allow everything memory region */
sbi_domain_memregion_init(0, ~0UL,
@@ -645,8 +794,21 @@ int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid)
for (i = 0; i < SBI_HARTMASK_MAX_BITS; i++) {
if (sbi_platform_hart_invalid(plat, i))
continue;
sbi_hartmask_set_hart(i, &root_hmask);
sbi_hartmask_set_hart(i, root_hmask);
}
return sbi_domain_register(&root, &root_hmask);
/* Finally register the root domain */
rc = sbi_domain_register(&root, root_hmask);
if (rc)
goto fail_free_root_hmask;
return 0;
fail_free_root_hmask:
sbi_free(root_hmask);
fail_free_root_memregs:
sbi_free(root_memregs);
fail_free_domain_hart_ptr_offset:
sbi_scratch_free_offset(domain_hart_ptr_offset);
return rc;
}
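The firmware-region setup in `sbi_domain_init()` above splits `[fw_start, fw_start + fw_size)` at `fw_rw_offset` into an M-mode R+X text region and an R+W data region, after checking that the offset is a nonzero power of two and that `fw_start` is aligned to it. The checks and the split can be sketched standalone (`validate_fw_split()` is a hypothetical helper mirroring that logic, not an OpenSBI function):

```c
#include <assert.h>

struct fw_region { unsigned long base, size; };

/* Hypothetical helper mirroring the checks in sbi_domain_init():
 * fw_rw_offset must be a nonzero power of two and fw_start must be
 * aligned to it; on success the RX/RW split is written out. */
static int validate_fw_split(unsigned long fw_start, unsigned long fw_size,
			     unsigned long fw_rw_offset,
			     struct fw_region *rx, struct fw_region *rw)
{
	if (!fw_rw_offset || (fw_rw_offset & (fw_rw_offset - 1)))
		return -1;	/* not a power of two */
	if (fw_start & (fw_rw_offset - 1))
		return -1;	/* fw_start not aligned to fw_rw_offset */

	rx->base = fw_start;			/* R+X: text/rodata */
	rx->size = fw_rw_offset;
	rw->base = fw_start + fw_rw_offset;	/* R+W: data/bss/heap */
	rw->size = fw_size - fw_rw_offset;
	return 0;
}
```

The power-of-two and alignment constraints exist because each half must later be expressible as (naturally aligned) PMP regions.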


@@ -78,7 +78,7 @@ int sbi_ecall_register_extension(struct sbi_ecall_extension *ext)
void sbi_ecall_unregister_extension(struct sbi_ecall_extension *ext)
{
bool found = FALSE;
bool found = false;
struct sbi_ecall_extension *t;
if (!ext)
@@ -86,7 +86,7 @@ void sbi_ecall_unregister_extension(struct sbi_ecall_extension *ext)
sbi_list_for_each_entry(t, &ecall_exts_list, head) {
if (t == ext) {
found = TRUE;
found = true;
break;
}
}
@@ -120,7 +120,9 @@ int sbi_ecall_handler(struct sbi_trap_regs *regs)
trap.epc = regs->mepc;
sbi_trap_redirect(regs, &trap);
} else {
if (ret < SBI_LAST_ERR) {
if (ret < SBI_LAST_ERR ||
(extension_id != SBI_EXT_0_1_CONSOLE_GETCHAR &&
SBI_SUCCESS < ret)) {
sbi_printf("%s: Invalid error %d for ext=0x%lx "
"func=0x%lx\n", __func__, ret,
extension_id, func_id);
@@ -152,7 +154,10 @@ int sbi_ecall_init(void)
for (i = 0; i < sbi_ecall_exts_size; i++) {
ext = sbi_ecall_exts[i];
ret = sbi_ecall_register_extension(ext);
ret = SBI_ENODEV;
if (ext->register_extensions)
ret = ext->register_extensions();
if (ret)
return ret;
}


@@ -72,8 +72,16 @@ static int sbi_ecall_base_handler(unsigned long extid, unsigned long funcid,
return ret;
}
struct sbi_ecall_extension ecall_base;
static int sbi_ecall_base_register_extensions(void)
{
return sbi_ecall_register_extension(&ecall_base);
}
struct sbi_ecall_extension ecall_base = {
.extid_start = SBI_EXT_BASE,
.extid_end = SBI_EXT_BASE,
.handle = sbi_ecall_base_handler,
.extid_start = SBI_EXT_BASE,
.extid_end = SBI_EXT_BASE,
.register_extensions = sbi_ecall_base_register_extensions,
.handle = sbi_ecall_base_handler,
};

lib/sbi/sbi_ecall_cppc.c (new file, 67 lines)

@@ -0,0 +1,67 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*
*/
#include <sbi/sbi_ecall.h>
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_trap.h>
#include <sbi/sbi_cppc.h>
static int sbi_ecall_cppc_handler(unsigned long extid, unsigned long funcid,
const struct sbi_trap_regs *regs,
unsigned long *out_val,
struct sbi_trap_info *out_trap)
{
int ret = 0;
uint64_t temp;
switch (funcid) {
case SBI_EXT_CPPC_READ:
ret = sbi_cppc_read(regs->a0, &temp);
*out_val = temp;
break;
case SBI_EXT_CPPC_READ_HI:
#if __riscv_xlen == 32
ret = sbi_cppc_read(regs->a0, &temp);
*out_val = temp >> 32;
#else
*out_val = 0;
#endif
break;
case SBI_EXT_CPPC_WRITE:
ret = sbi_cppc_write(regs->a0, regs->a1);
break;
case SBI_EXT_CPPC_PROBE:
ret = sbi_cppc_probe(regs->a0);
if (ret >= 0) {
*out_val = ret;
ret = 0;
}
break;
default:
ret = SBI_ENOTSUPP;
}
return ret;
}
struct sbi_ecall_extension ecall_cppc;
static int sbi_ecall_cppc_register_extensions(void)
{
if (!sbi_cppc_get_device())
return 0;
return sbi_ecall_register_extension(&ecall_cppc);
}
struct sbi_ecall_extension ecall_cppc = {
.extid_start = SBI_EXT_CPPC,
.extid_end = SBI_EXT_CPPC,
.register_extensions = sbi_ecall_cppc_register_extensions,
.handle = sbi_ecall_cppc_handler,
};

lib/sbi/sbi_ecall_dbcn.c (new file, 79 lines)

@@ -0,0 +1,79 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2022 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#include <sbi/sbi_console.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_ecall.h>
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_trap.h>
#include <sbi/riscv_asm.h>
static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid,
const struct sbi_trap_regs *regs,
unsigned long *out_val,
struct sbi_trap_info *out_trap)
{
ulong smode = (csr_read(CSR_MSTATUS) & MSTATUS_MPP) >>
MSTATUS_MPP_SHIFT;
switch (funcid) {
case SBI_EXT_DBCN_CONSOLE_WRITE:
case SBI_EXT_DBCN_CONSOLE_READ:
/*
* On RV32, the M-mode can only access the first 4GB of
* the physical address space because M-mode does not have
* MMU to access full 34-bit physical address space.
*
* Based on above, we simply fail if the upper 32bits of
* the physical address (i.e. a2 register) is non-zero on
* RV32.
*
* Analogously, we fail if the upper 64bit of the
* physical address (i.e. a2 register) is non-zero on
* RV64.
*/
if (regs->a2)
return SBI_ERR_FAILED;
if (!sbi_domain_check_addr_range(sbi_domain_thishart_ptr(),
regs->a1, regs->a0, smode,
SBI_DOMAIN_READ|SBI_DOMAIN_WRITE))
return SBI_ERR_INVALID_PARAM;
if (funcid == SBI_EXT_DBCN_CONSOLE_WRITE)
*out_val = sbi_nputs((const char *)regs->a1, regs->a0);
else
*out_val = sbi_ngets((char *)regs->a1, regs->a0);
return 0;
case SBI_EXT_DBCN_CONSOLE_WRITE_BYTE:
sbi_putc(regs->a0);
return 0;
default:
break;
}
return SBI_ENOTSUPP;
}
struct sbi_ecall_extension ecall_dbcn;
static int sbi_ecall_dbcn_register_extensions(void)
{
if (!sbi_console_get_device())
return 0;
return sbi_ecall_register_extension(&ecall_dbcn);
}
struct sbi_ecall_extension ecall_dbcn = {
.extid_start = SBI_EXT_DBCN,
.extid_end = SBI_EXT_DBCN,
.register_extensions = sbi_ecall_dbcn_register_extensions,
.handle = sbi_ecall_dbcn_handler,
};


@@ -12,7 +12,6 @@
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_trap.h>
#include <sbi/sbi_version.h>
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_scratch.h>
#include <sbi/riscv_asm.h>
@@ -33,7 +32,7 @@ static int sbi_ecall_hsm_handler(unsigned long extid, unsigned long funcid,
regs->a0, regs->a1, smode, regs->a2);
break;
case SBI_EXT_HSM_HART_STOP:
ret = sbi_hsm_hart_stop(scratch, TRUE);
ret = sbi_hsm_hart_stop(scratch, true);
break;
case SBI_EXT_HSM_HART_GET_STATUS:
ret = sbi_hsm_hart_get_state(sbi_domain_thishart_ptr(),
@@ -45,7 +44,8 @@ static int sbi_ecall_hsm_handler(unsigned long extid, unsigned long funcid,
break;
default:
ret = SBI_ENOTSUPP;
};
}
if (ret >= 0) {
*out_val = ret;
ret = 0;
@@ -54,8 +54,16 @@ static int sbi_ecall_hsm_handler(unsigned long extid, unsigned long funcid,
return ret;
}
struct sbi_ecall_extension ecall_hsm;
static int sbi_ecall_hsm_register_extensions(void)
{
return sbi_ecall_register_extension(&ecall_hsm);
}
struct sbi_ecall_extension ecall_hsm = {
.extid_start = SBI_EXT_HSM,
.extid_end = SBI_EXT_HSM,
.handle = sbi_ecall_hsm_handler,
.extid_start = SBI_EXT_HSM,
.extid_end = SBI_EXT_HSM,
.register_extensions = sbi_ecall_hsm_register_extensions,
.handle = sbi_ecall_hsm_handler,
};


@@ -29,8 +29,16 @@ static int sbi_ecall_ipi_handler(unsigned long extid, unsigned long funcid,
return ret;
}
struct sbi_ecall_extension ecall_ipi;
static int sbi_ecall_ipi_register_extensions(void)
{
return sbi_ecall_register_extension(&ecall_ipi);
}
struct sbi_ecall_extension ecall_ipi = {
.extid_start = SBI_EXT_IPI,
.extid_end = SBI_EXT_IPI,
.handle = sbi_ecall_ipi_handler,
.extid_start = SBI_EXT_IPI,
.extid_end = SBI_EXT_IPI,
.register_extensions = sbi_ecall_ipi_register_extensions,
.handle = sbi_ecall_ipi_handler,
};


@@ -112,13 +112,21 @@ static int sbi_ecall_legacy_handler(unsigned long extid, unsigned long funcid,
break;
default:
ret = SBI_ENOTSUPP;
};
}
return ret;
}
struct sbi_ecall_extension ecall_legacy;
static int sbi_ecall_legacy_register_extensions(void)
{
return sbi_ecall_register_extension(&ecall_legacy);
}
struct sbi_ecall_extension ecall_legacy = {
.extid_start = SBI_EXT_0_1_SET_TIMER,
.extid_end = SBI_EXT_0_1_SHUTDOWN,
.handle = sbi_ecall_legacy_handler,
.extid_start = SBI_EXT_0_1_SET_TIMER,
.extid_end = SBI_EXT_0_1_SHUTDOWN,
.register_extensions = sbi_ecall_legacy_register_extensions,
.handle = sbi_ecall_legacy_handler,
};


@@ -54,6 +54,14 @@ static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid,
ret = sbi_pmu_ctr_fw_read(regs->a0, &temp);
*out_val = temp;
break;
case SBI_EXT_PMU_COUNTER_FW_READ_HI:
#if __riscv_xlen == 32
ret = sbi_pmu_ctr_fw_read(regs->a0, &temp);
*out_val = temp >> 32;
#else
*out_val = 0;
#endif
break;
case SBI_EXT_PMU_COUNTER_START:
#if __riscv_xlen == 32
@@ -68,21 +76,21 @@ static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid,
break;
default:
ret = SBI_ENOTSUPP;
};
}
return ret;
}
static int sbi_ecall_pmu_probe(unsigned long extid, unsigned long *out_val)
struct sbi_ecall_extension ecall_pmu;
static int sbi_ecall_pmu_register_extensions(void)
{
/* PMU extension is always enabled */
*out_val = 1;
return 0;
return sbi_ecall_register_extension(&ecall_pmu);
}
struct sbi_ecall_extension ecall_pmu = {
.extid_start = SBI_EXT_PMU,
.extid_end = SBI_EXT_PMU,
.handle = sbi_ecall_pmu_handler,
.probe = sbi_ecall_pmu_probe,
.extid_start = SBI_EXT_PMU,
.extid_end = SBI_EXT_PMU,
.register_extensions = sbi_ecall_pmu_register_extensions,
.handle = sbi_ecall_pmu_handler,
};


@@ -74,13 +74,21 @@ static int sbi_ecall_rfence_handler(unsigned long extid, unsigned long funcid,
break;
default:
ret = SBI_ENOTSUPP;
};
}
return ret;
}
struct sbi_ecall_extension ecall_rfence;
static int sbi_ecall_rfence_register_extensions(void)
{
return sbi_ecall_register_extension(&ecall_rfence);
}
struct sbi_ecall_extension ecall_rfence = {
.extid_start = SBI_EXT_RFENCE,
.extid_end = SBI_EXT_RFENCE,
.handle = sbi_ecall_rfence_handler,
.extid_start = SBI_EXT_RFENCE,
.extid_end = SBI_EXT_RFENCE,
.register_extensions = sbi_ecall_rfence_register_extensions,
.handle = sbi_ecall_rfence_handler,
};


@@ -48,28 +48,36 @@ static int sbi_ecall_srst_handler(unsigned long extid, unsigned long funcid,
return SBI_ENOTSUPP;
}
static int sbi_ecall_srst_probe(unsigned long extid, unsigned long *out_val)
static bool srst_available(void)
{
u32 type, count = 0;
u32 type;
/*
* At least one standard reset type should be supported by
* the platform for the SBI SRST extension to be usable.
*/
for (type = 0; type <= SBI_SRST_RESET_TYPE_LAST; type++) {
if (sbi_system_reset_supported(type,
SBI_SRST_RESET_REASON_NONE))
count++;
return true;
}
*out_val = (count) ? 1 : 0;
return 0;
return false;
}
struct sbi_ecall_extension ecall_srst;
static int sbi_ecall_srst_register_extensions(void)
{
if (!srst_available())
return 0;
return sbi_ecall_register_extension(&ecall_srst);
}
struct sbi_ecall_extension ecall_srst = {
.extid_start = SBI_EXT_SRST,
.extid_end = SBI_EXT_SRST,
.handle = sbi_ecall_srst_handler,
.probe = sbi_ecall_srst_probe,
.extid_start = SBI_EXT_SRST,
.extid_end = SBI_EXT_SRST,
.register_extensions = sbi_ecall_srst_register_extensions,
.handle = sbi_ecall_srst_handler,
};

lib/sbi/sbi_ecall_susp.c (new file)

@@ -0,0 +1,57 @@
// SPDX-License-Identifier: BSD-2-Clause
#include <sbi/sbi_ecall.h>
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_trap.h>
#include <sbi/sbi_system.h>
static int sbi_ecall_susp_handler(unsigned long extid, unsigned long funcid,
const struct sbi_trap_regs *regs,
unsigned long *out_val,
struct sbi_trap_info *out_trap)
{
int ret = SBI_ENOTSUPP;
if (funcid == SBI_EXT_SUSP_SUSPEND)
ret = sbi_system_suspend(regs->a0, regs->a1, regs->a2);
if (ret >= 0) {
*out_val = ret;
ret = 0;
}
return ret;
}
static bool susp_available(void)
{
u32 type;
/*
* At least one suspend type should be supported by the
* platform for the SBI SUSP extension to be usable.
*/
for (type = 0; type <= SBI_SUSP_SLEEP_TYPE_LAST; type++) {
if (sbi_system_suspend_supported(type))
return true;
}
return false;
}
struct sbi_ecall_extension ecall_susp;
static int sbi_ecall_susp_register_extensions(void)
{
if (!susp_available())
return 0;
return sbi_ecall_register_extension(&ecall_susp);
}
struct sbi_ecall_extension ecall_susp = {
.extid_start = SBI_EXT_SUSP,
.extid_end = SBI_EXT_SUSP,
.register_extensions = sbi_ecall_susp_register_extensions,
.handle = sbi_ecall_susp_handler,
};


@@ -33,8 +33,16 @@ static int sbi_ecall_time_handler(unsigned long extid, unsigned long funcid,
return ret;
}
struct sbi_ecall_extension ecall_time;
static int sbi_ecall_time_register_extensions(void)
{
return sbi_ecall_register_extension(&ecall_time);
}
struct sbi_ecall_extension ecall_time = {
.extid_start = SBI_EXT_TIME,
.extid_end = SBI_EXT_TIME,
.handle = sbi_ecall_time_handler,
.extid_start = SBI_EXT_TIME,
.extid_end = SBI_EXT_TIME,
.register_extensions = sbi_ecall_time_register_extensions,
.handle = sbi_ecall_time_handler,
};


@@ -13,13 +13,13 @@
#include <sbi/sbi_error.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_trap.h>
#include <sbi/riscv_asm.h>
static int sbi_ecall_vendor_probe(unsigned long extid,
unsigned long *out_val)
static inline unsigned long sbi_ecall_vendor_id(void)
{
*out_val = sbi_platform_vendor_ext_check(sbi_platform_thishart_ptr(),
extid);
return 0;
return SBI_EXT_VENDOR_START +
(csr_read(CSR_MVENDORID) &
(SBI_EXT_VENDOR_END - SBI_EXT_VENDOR_START));
}
static int sbi_ecall_vendor_handler(unsigned long extid, unsigned long funcid,
@@ -28,13 +28,28 @@ static int sbi_ecall_vendor_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_info *out_trap)
{
return sbi_platform_vendor_ext_provider(sbi_platform_thishart_ptr(),
extid, funcid, regs,
funcid, regs,
out_val, out_trap);
}
struct sbi_ecall_extension ecall_vendor;
static int sbi_ecall_vendor_register_extensions(void)
{
unsigned long extid = sbi_ecall_vendor_id();
if (!sbi_platform_vendor_ext_check(sbi_platform_thishart_ptr()))
return 0;
ecall_vendor.extid_start = extid;
ecall_vendor.extid_end = extid;
return sbi_ecall_register_extension(&ecall_vendor);
}
struct sbi_ecall_extension ecall_vendor = {
.extid_start = SBI_EXT_VENDOR_START,
.extid_end = SBI_EXT_VENDOR_END,
.probe = sbi_ecall_vendor_probe,
.handle = sbi_ecall_vendor_handler,
.extid_start = SBI_EXT_VENDOR_START,
.extid_end = SBI_EXT_VENDOR_END,
.register_extensions = sbi_ecall_vendor_register_extensions,
.handle = sbi_ecall_vendor_handler,
};


@@ -39,7 +39,7 @@ static bool hpm_allowed(int hpm_num, ulong prev_mode, bool virt)
cen = 0;
}
return ((cen >> hpm_num) & 1) ? TRUE : FALSE;
return ((cen >> hpm_num) & 1) ? true : false;
}
int sbi_emulate_csr_read(int csr_num, struct sbi_trap_regs *regs,
@@ -49,9 +49,9 @@ int sbi_emulate_csr_read(int csr_num, struct sbi_trap_regs *regs,
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
ulong prev_mode = (regs->mstatus & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT;
#if __riscv_xlen == 32
bool virt = (regs->mstatusH & MSTATUSH_MPV) ? TRUE : FALSE;
bool virt = (regs->mstatusH & MSTATUSH_MPV) ? true : false;
#else
bool virt = (regs->mstatus & MSTATUS_MPV) ? TRUE : FALSE;
bool virt = (regs->mstatus & MSTATUS_MPV) ? true : false;
#endif
switch (csr_num) {
@@ -149,7 +149,7 @@ int sbi_emulate_csr_read(int csr_num, struct sbi_trap_regs *regs,
default:
ret = SBI_ENOTSUPP;
break;
};
}
if (ret)
sbi_dprintf("%s: hartid%d: invalid csr_num=0x%x\n",
@@ -164,9 +164,9 @@ int sbi_emulate_csr_write(int csr_num, struct sbi_trap_regs *regs,
int ret = 0;
ulong prev_mode = (regs->mstatus & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT;
#if __riscv_xlen == 32
bool virt = (regs->mstatusH & MSTATUSH_MPV) ? TRUE : FALSE;
bool virt = (regs->mstatusH & MSTATUSH_MPV) ? true : false;
#else
bool virt = (regs->mstatus & MSTATUS_MPV) ? TRUE : FALSE;
bool virt = (regs->mstatus & MSTATUS_MPV) ? true : false;
#endif
switch (csr_num) {
@@ -187,7 +187,7 @@ int sbi_emulate_csr_write(int csr_num, struct sbi_trap_regs *regs,
default:
ret = SBI_ENOTSUPP;
break;
};
}
if (ret)
sbi_dprintf("%s: hartid%d: invalid csr_num=0x%x\n",


@@ -26,7 +26,7 @@ void sbi_fifo_init(struct sbi_fifo *fifo, void *queue_mem, u16 entries,
/* Note: must be called with fifo->qlock held */
static inline bool __sbi_fifo_is_full(struct sbi_fifo *fifo)
{
return (fifo->avail == fifo->num_entries) ? TRUE : FALSE;
return (fifo->avail == fifo->num_entries) ? true : false;
}
u16 sbi_fifo_avail(struct sbi_fifo *fifo)
@@ -75,7 +75,7 @@ static inline void __sbi_fifo_enqueue(struct sbi_fifo *fifo, void *data)
/* Note: must be called with fifo->qlock held */
static inline bool __sbi_fifo_is_empty(struct sbi_fifo *fifo)
{
return (fifo->avail == 0) ? TRUE : FALSE;
return (fifo->avail == 0) ? true : false;
}
int sbi_fifo_is_empty(struct sbi_fifo *fifo)
@@ -105,13 +105,13 @@ static inline void __sbi_fifo_reset(struct sbi_fifo *fifo)
bool sbi_fifo_reset(struct sbi_fifo *fifo)
{
if (!fifo)
return FALSE;
return false;
spin_lock(&fifo->qlock);
__sbi_fifo_reset(fifo);
spin_unlock(&fifo->qlock);
return TRUE;
return true;
}
/**


@@ -90,6 +90,7 @@ static void mstatus_init(struct sbi_scratch *scratch)
mstateen_val |= ((uint64_t)csr_read(CSR_MSTATEEN0H)) << 32;
#endif
mstateen_val |= SMSTATEEN_STATEN;
mstateen_val |= SMSTATEEN0_CONTEXT;
mstateen_val |= SMSTATEEN0_HSENVCFG;
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMAIA))
@@ -303,15 +304,21 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
break;
pmp_flags = 0;
if (reg->flags & SBI_DOMAIN_MEMREGION_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_WRITEABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_EXECUTABLE)
pmp_flags |= PMP_X;
if (reg->flags & SBI_DOMAIN_MEMREGION_MMODE)
/*
* If permissions are to be enforced for all modes on this
* region, the lock bit should be set.
*/
if (reg->flags & SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS)
pmp_flags |= PMP_L;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
pmp_flags |= PMP_X;
pmp_addr = reg->base >> PMP_SHIFT;
if (pmp_gran_log2 <= reg->order && pmp_addr < pmp_addr_max)
pmp_set(pmp_idx++, pmp_flags, reg->base, reg->order);
@@ -726,6 +733,12 @@ int sbi_hart_init(struct sbi_scratch *scratch, bool cold_boot)
{
int rc;
/*
* Clear mip CSR before proceeding with init to avoid any spurious
* external interrupts in S-mode.
*/
csr_write(CSR_MIP, 0);
if (cold_boot) {
if (misa_extension('H'))
sbi_hart_expected_trap = &__sbi_expected_trap_hext;

lib/sbi/sbi_heap.c (new file)

@@ -0,0 +1,206 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#include <sbi/riscv_locks.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_list.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
/* Alignment of heap base address and size */
#define HEAP_BASE_ALIGN 1024
/* Minimum size and alignment of heap allocations */
#define HEAP_ALLOC_ALIGN 64
#define HEAP_HOUSEKEEPING_FACTOR 16
struct heap_node {
struct sbi_dlist head;
unsigned long addr;
unsigned long size;
};
struct heap_control {
spinlock_t lock;
unsigned long base;
unsigned long size;
unsigned long hkbase;
unsigned long hksize;
struct sbi_dlist free_node_list;
struct sbi_dlist free_space_list;
struct sbi_dlist used_space_list;
};
static struct heap_control hpctrl;
void *sbi_malloc(size_t size)
{
void *ret = NULL;
struct heap_node *n, *np;
if (!size)
return NULL;
size += HEAP_ALLOC_ALIGN - 1;
size &= ~((unsigned long)HEAP_ALLOC_ALIGN - 1);
spin_lock(&hpctrl.lock);
np = NULL;
sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
if (size <= n->size) {
np = n;
break;
}
}
if (np) {
if ((size < np->size) &&
!sbi_list_empty(&hpctrl.free_node_list)) {
n = sbi_list_first_entry(&hpctrl.free_node_list,
struct heap_node, head);
sbi_list_del(&n->head);
n->addr = np->addr + np->size - size;
n->size = size;
np->size -= size;
sbi_list_add_tail(&n->head, &hpctrl.used_space_list);
ret = (void *)n->addr;
} else if (size == np->size) {
sbi_list_del(&np->head);
sbi_list_add_tail(&np->head, &hpctrl.used_space_list);
ret = (void *)np->addr;
}
}
spin_unlock(&hpctrl.lock);
return ret;
}
void *sbi_zalloc(size_t size)
{
void *ret = sbi_malloc(size);
if (ret)
sbi_memset(ret, 0, size);
return ret;
}
void sbi_free(void *ptr)
{
struct heap_node *n, *np;
if (!ptr)
return;
spin_lock(&hpctrl.lock);
np = NULL;
sbi_list_for_each_entry(n, &hpctrl.used_space_list, head) {
if ((n->addr <= (unsigned long)ptr) &&
((unsigned long)ptr < (n->addr + n->size))) {
np = n;
break;
}
}
if (!np) {
spin_unlock(&hpctrl.lock);
return;
}
sbi_list_del(&np->head);
sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
if ((np->addr + np->size) == n->addr) {
n->addr = np->addr;
n->size += np->size;
sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
np = NULL;
break;
} else if (np->addr == (n->addr + n->size)) {
n->size += np->size;
sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
np = NULL;
break;
} else if ((n->addr + n->size) < np->addr) {
sbi_list_add(&np->head, &n->head);
np = NULL;
break;
}
}
if (np)
sbi_list_add_tail(&np->head, &hpctrl.free_space_list);
spin_unlock(&hpctrl.lock);
}
unsigned long sbi_heap_free_space(void)
{
struct heap_node *n;
unsigned long ret = 0;
spin_lock(&hpctrl.lock);
sbi_list_for_each_entry(n, &hpctrl.free_space_list, head)
ret += n->size;
spin_unlock(&hpctrl.lock);
return ret;
}
unsigned long sbi_heap_used_space(void)
{
return hpctrl.size - hpctrl.hksize - sbi_heap_free_space();
}
unsigned long sbi_heap_reserved_space(void)
{
return hpctrl.hksize;
}
int sbi_heap_init(struct sbi_scratch *scratch)
{
unsigned long i;
struct heap_node *n;
/* Sanity checks on heap offset and size */
if (!scratch->fw_heap_size ||
(scratch->fw_heap_size & (HEAP_BASE_ALIGN - 1)) ||
(scratch->fw_heap_offset < scratch->fw_rw_offset) ||
(scratch->fw_size < (scratch->fw_heap_offset + scratch->fw_heap_size)) ||
(scratch->fw_heap_offset & (HEAP_BASE_ALIGN - 1)))
return SBI_EINVAL;
/* Initialize heap control */
SPIN_LOCK_INIT(hpctrl.lock);
hpctrl.base = scratch->fw_start + scratch->fw_heap_offset;
hpctrl.size = scratch->fw_heap_size;
hpctrl.hkbase = hpctrl.base;
hpctrl.hksize = hpctrl.size / HEAP_HOUSEKEEPING_FACTOR;
hpctrl.hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
SBI_INIT_LIST_HEAD(&hpctrl.free_node_list);
SBI_INIT_LIST_HEAD(&hpctrl.free_space_list);
SBI_INIT_LIST_HEAD(&hpctrl.used_space_list);
/* Prepare free node list */
for (i = 0; i < (hpctrl.hksize / sizeof(*n)); i++) {
n = (struct heap_node *)(hpctrl.hkbase + (sizeof(*n) * i));
SBI_INIT_LIST_HEAD(&n->head);
n->addr = n->size = 0;
sbi_list_add_tail(&n->head, &hpctrl.free_node_list);
}
/* Prepare free space list */
n = sbi_list_first_entry(&hpctrl.free_node_list,
struct heap_node, head);
sbi_list_del(&n->head);
n->addr = hpctrl.hkbase + hpctrl.hksize;
n->size = hpctrl.size - hpctrl.hksize;
sbi_list_add_tail(&n->head, &hpctrl.free_space_list);
return 0;
}


@@ -26,6 +26,15 @@
#include <sbi/sbi_timer.h>
#include <sbi/sbi_console.h>
#define __sbi_hsm_hart_change_state(hdata, oldstate, newstate) \
({ \
long state = atomic_cmpxchg(&(hdata)->state, oldstate, newstate); \
if (state != (oldstate)) \
sbi_printf("%s: ERR: The hart is in invalid state [%lu]\n", \
__func__, state); \
state == (oldstate); \
})
static const struct sbi_hsm_device *hsm_dev = NULL;
static unsigned long hart_data_offset;
@@ -35,9 +44,18 @@ struct sbi_hsm_data {
unsigned long suspend_type;
unsigned long saved_mie;
unsigned long saved_mip;
atomic_t start_ticket;
};
static inline int __sbi_hsm_hart_get_state(u32 hartid)
bool sbi_hsm_hart_change_state(struct sbi_scratch *scratch, long oldstate,
long newstate)
{
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
return __sbi_hsm_hart_change_state(hdata, oldstate, newstate);
}
int __sbi_hsm_hart_get_state(u32 hartid)
{
struct sbi_hsm_data *hdata;
struct sbi_scratch *scratch;
@@ -58,6 +76,32 @@ int sbi_hsm_hart_get_state(const struct sbi_domain *dom, u32 hartid)
return __sbi_hsm_hart_get_state(hartid);
}
/*
* Try to acquire the ticket for the given target hart to make sure only
* one hart prepares the start of the target hart.
* Returns true if the ticket has been acquired, false otherwise.
*
* The function has "acquire" semantics: no memory operations following it
* in the current hart can be seen before it by other harts.
* atomic_cmpxchg() provides the memory barriers needed for that.
*/
static bool hsm_start_ticket_acquire(struct sbi_hsm_data *hdata)
{
return (atomic_cmpxchg(&hdata->start_ticket, 0, 1) == 0);
}
/*
* Release the ticket for the given target hart.
*
* The function has "release" semantics: no memory operations preceding it
* in the current hart can be seen after it by other harts.
*/
static void hsm_start_ticket_release(struct sbi_hsm_data *hdata)
{
RISCV_FENCE(rw, w);
atomic_write(&hdata->start_ticket, 0);
}
/**
* Get ulong HART mask for given HART base ID
* @param dom the domain to be used for output HART mask
@@ -93,16 +137,25 @@ int sbi_hsm_hart_interruptible_mask(const struct sbi_domain *dom,
return 0;
}
void sbi_hsm_prepare_next_jump(struct sbi_scratch *scratch, u32 hartid)
void __noreturn sbi_hsm_hart_start_finish(struct sbi_scratch *scratch,
u32 hartid)
{
u32 oldstate;
unsigned long next_arg1;
unsigned long next_addr;
unsigned long next_mode;
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
oldstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_START_PENDING,
SBI_HSM_STATE_STARTED);
if (oldstate != SBI_HSM_STATE_START_PENDING)
if (!__sbi_hsm_hart_change_state(hdata, SBI_HSM_STATE_START_PENDING,
SBI_HSM_STATE_STARTED))
sbi_hart_hang();
next_arg1 = scratch->next_arg1;
next_addr = scratch->next_addr;
next_mode = scratch->next_mode;
hsm_start_ticket_release(hdata);
sbi_hart_switch_mode(hartid, next_arg1, next_addr, next_mode, false);
}
static void sbi_hsm_hart_wait(struct sbi_scratch *scratch, u32 hartid)
@@ -116,10 +169,10 @@ static void sbi_hsm_hart_wait(struct sbi_scratch *scratch, u32 hartid)
/* Set MSIE and MEIE bits to receive IPI */
csr_set(CSR_MIE, MIP_MSIP | MIP_MEIP);
/* Wait for hart_add call*/
/* Wait for state transition requested by sbi_hsm_hart_start() */
while (atomic_read(&hdata->state) != SBI_HSM_STATE_START_PENDING) {
wfi();
};
}
/* Restore MIE CSR */
csr_write(CSR_MIE, saved_mie);
@@ -207,6 +260,7 @@ int sbi_hsm_init(struct sbi_scratch *scratch, u32 hartid, bool cold_boot)
(i == hartid) ?
SBI_HSM_STATE_START_PENDING :
SBI_HSM_STATE_STOPPED);
ATOMIC_INIT(&hdata->start_ticket, 0);
}
} else {
sbi_hsm_hart_wait(scratch, hartid);
@@ -217,20 +271,17 @@ int sbi_hsm_init(struct sbi_scratch *scratch, u32 hartid, bool cold_boot)
void __noreturn sbi_hsm_exit(struct sbi_scratch *scratch)
{
u32 hstate;
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
void (*jump_warmboot)(void) = (void (*)(void))scratch->warmboot_addr;
hstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_STOP_PENDING,
SBI_HSM_STATE_STOPPED);
if (hstate != SBI_HSM_STATE_STOP_PENDING)
if (!__sbi_hsm_hart_change_state(hdata, SBI_HSM_STATE_STOP_PENDING,
SBI_HSM_STATE_STOPPED))
goto fail_exit;
if (hsm_device_has_hart_hotplug()) {
hsm_device_hart_stop();
/* It should never reach here */
goto fail_exit;
if (hsm_device_hart_stop() != SBI_ENOTSUPP)
goto fail_exit;
}
/**
@@ -248,12 +299,13 @@ fail_exit:
int sbi_hsm_hart_start(struct sbi_scratch *scratch,
const struct sbi_domain *dom,
u32 hartid, ulong saddr, ulong smode, ulong priv)
u32 hartid, ulong saddr, ulong smode, ulong arg1)
{
unsigned long init_count;
unsigned long init_count, entry_count;
unsigned int hstate;
struct sbi_scratch *rscratch;
struct sbi_hsm_data *hdata;
int rc;
/* For now, we only allow start mode to be S-mode or U-mode. */
if (smode != PRV_S && smode != PRV_U)
@@ -267,39 +319,55 @@ int sbi_hsm_hart_start(struct sbi_scratch *scratch,
rscratch = sbi_hartid_to_scratch(hartid);
if (!rscratch)
return SBI_EINVAL;
hdata = sbi_scratch_offset_ptr(rscratch, hart_data_offset);
if (!hsm_start_ticket_acquire(hdata))
return SBI_EINVAL;
init_count = sbi_init_count(hartid);
entry_count = sbi_entry_count(hartid);
rscratch->next_arg1 = arg1;
rscratch->next_addr = saddr;
rscratch->next_mode = smode;
/*
* atomic_cmpxchg() is an implicit barrier. It makes sure that
* other harts see reading of init_count and writing to *rscratch
* before hdata->state is set to SBI_HSM_STATE_START_PENDING.
*/
hstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_STOPPED,
SBI_HSM_STATE_START_PENDING);
if (hstate == SBI_HSM_STATE_STARTED)
return SBI_EALREADY;
if (hstate == SBI_HSM_STATE_STARTED) {
rc = SBI_EALREADY;
goto err;
}
/**
* If a hart is already transitioning to start or stop, another
* start call is considered an invalid request.
*/
if (hstate != SBI_HSM_STATE_STOPPED)
return SBI_EINVAL;
init_count = sbi_init_count(hartid);
rscratch->next_arg1 = priv;
rscratch->next_addr = saddr;
rscratch->next_mode = smode;
if (hsm_device_has_hart_hotplug() ||
(hsm_device_has_hart_secondary_boot() && !init_count)) {
return hsm_device_hart_start(hartid, scratch->warmboot_addr);
} else {
int rc = sbi_ipi_raw_send(hartid);
if (rc)
return rc;
if (hstate != SBI_HSM_STATE_STOPPED) {
rc = SBI_EINVAL;
goto err;
}
return 0;
if ((hsm_device_has_hart_hotplug() && (entry_count == init_count)) ||
(hsm_device_has_hart_secondary_boot() && !init_count)) {
rc = hsm_device_hart_start(hartid, scratch->warmboot_addr);
} else {
rc = sbi_ipi_raw_send(hartid);
}
if (!rc)
return 0;
err:
hsm_start_ticket_release(hdata);
return rc;
}
int sbi_hsm_hart_stop(struct sbi_scratch *scratch, bool exitnow)
{
int oldstate;
const struct sbi_domain *dom = sbi_domain_thishart_ptr();
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
@@ -307,13 +375,9 @@ int sbi_hsm_hart_stop(struct sbi_scratch *scratch, bool exitnow)
if (!dom)
return SBI_EFAIL;
oldstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_STARTED,
SBI_HSM_STATE_STOP_PENDING);
if (oldstate != SBI_HSM_STATE_STARTED) {
sbi_printf("%s: ERR: The hart is in invalid state [%u]\n",
__func__, oldstate);
if (!__sbi_hsm_hart_change_state(hdata, SBI_HSM_STATE_STARTED,
SBI_HSM_STATE_STOP_PENDING))
return SBI_EFAIL;
}
if (exitnow)
sbi_exit(scratch);
@@ -329,7 +393,7 @@ static int __sbi_hsm_suspend_default(struct sbi_scratch *scratch)
return 0;
}
static void __sbi_hsm_suspend_non_ret_save(struct sbi_scratch *scratch)
void __sbi_hsm_suspend_non_ret_save(struct sbi_scratch *scratch)
{
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
@@ -358,62 +422,55 @@ static void __sbi_hsm_suspend_non_ret_restore(struct sbi_scratch *scratch)
hart_data_offset);
csr_write(CSR_MIE, hdata->saved_mie);
csr_write(CSR_MIP, (hdata->saved_mip & (MIP_SSIP | MIP_STIP)));
csr_set(CSR_MIP, (hdata->saved_mip & (MIP_SSIP | MIP_STIP)));
}
void sbi_hsm_hart_resume_start(struct sbi_scratch *scratch)
{
int oldstate;
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
/* If current HART was SUSPENDED then set RESUME_PENDING state */
oldstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_SUSPENDED,
SBI_HSM_STATE_RESUME_PENDING);
if (oldstate != SBI_HSM_STATE_SUSPENDED) {
sbi_printf("%s: ERR: The hart is in invalid state [%u]\n",
__func__, oldstate);
if (!__sbi_hsm_hart_change_state(hdata, SBI_HSM_STATE_SUSPENDED,
SBI_HSM_STATE_RESUME_PENDING))
sbi_hart_hang();
}
hsm_device_hart_resume();
}
void sbi_hsm_hart_resume_finish(struct sbi_scratch *scratch)
void __noreturn sbi_hsm_hart_resume_finish(struct sbi_scratch *scratch,
u32 hartid)
{
u32 oldstate;
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
/* If current HART was RESUME_PENDING then set STARTED state */
oldstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_RESUME_PENDING,
SBI_HSM_STATE_STARTED);
if (oldstate != SBI_HSM_STATE_RESUME_PENDING) {
sbi_printf("%s: ERR: The hart is in invalid state [%u]\n",
__func__, oldstate);
if (!__sbi_hsm_hart_change_state(hdata, SBI_HSM_STATE_RESUME_PENDING,
SBI_HSM_STATE_STARTED))
sbi_hart_hang();
}
/*
* Restore some of the M-mode CSRs which were re-configured by
* the warm-boot sequence.
*/
__sbi_hsm_suspend_non_ret_restore(scratch);
sbi_hart_switch_mode(hartid, scratch->next_arg1,
scratch->next_addr,
scratch->next_mode, false);
}
int sbi_hsm_hart_suspend(struct sbi_scratch *scratch, u32 suspend_type,
ulong raddr, ulong rmode, ulong priv)
ulong raddr, ulong rmode, ulong arg1)
{
int oldstate, ret;
int ret;
const struct sbi_domain *dom = sbi_domain_thishart_ptr();
struct sbi_hsm_data *hdata = sbi_scratch_offset_ptr(scratch,
hart_data_offset);
/* For now, we only allow suspend from S-mode or U-mode. */
/* Sanity check on domain assigned to current HART */
if (!dom)
return SBI_EINVAL;
return SBI_EFAIL;
/* Sanity check on suspend type */
if (SBI_HSM_SUSPEND_RET_DEFAULT < suspend_type &&
@@ -425,27 +482,26 @@ int sbi_hsm_hart_suspend(struct sbi_scratch *scratch, u32 suspend_type,
/* Additional sanity check for non-retentive suspend */
if (suspend_type & SBI_HSM_SUSP_NON_RET_BIT) {
/*
* For now, we only allow non-retentive suspend from
* S-mode or U-mode.
*/
if (rmode != PRV_S && rmode != PRV_U)
return SBI_EINVAL;
return SBI_EFAIL;
if (dom && !sbi_domain_check_addr(dom, raddr, rmode,
SBI_DOMAIN_EXECUTE))
return SBI_EINVALID_ADDR;
}
/* Save the resume address and resume mode */
scratch->next_arg1 = priv;
scratch->next_arg1 = arg1;
scratch->next_addr = raddr;
scratch->next_mode = rmode;
/* Directly move from STARTED to SUSPENDED state */
oldstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_STARTED,
SBI_HSM_STATE_SUSPENDED);
if (oldstate != SBI_HSM_STATE_STARTED) {
sbi_printf("%s: ERR: The hart is in invalid state [%u]\n",
__func__, oldstate);
ret = SBI_EDENIED;
goto fail_restore_state;
}
if (!__sbi_hsm_hart_change_state(hdata, SBI_HSM_STATE_STARTED,
SBI_HSM_STATE_SUSPENDED))
return SBI_EFAIL;
/* Save the suspend type */
hdata->suspend_type = suspend_type;
@@ -480,18 +536,13 @@ int sbi_hsm_hart_suspend(struct sbi_scratch *scratch, u32 suspend_type,
jump_warmboot();
}
fail_restore_state:
/*
* We might have successfully resumed from retentive suspend,
* or the suspend failed. In both cases, we restore the hart's state.
*/
oldstate = atomic_cmpxchg(&hdata->state, SBI_HSM_STATE_SUSPENDED,
SBI_HSM_STATE_STARTED);
if (oldstate != SBI_HSM_STATE_SUSPENDED) {
sbi_printf("%s: ERR: The hart is in invalid state [%u]\n",
__func__, oldstate);
if (!__sbi_hsm_hart_change_state(hdata, SBI_HSM_STATE_SUSPENDED,
SBI_HSM_STATE_STARTED))
sbi_hart_hang();
}
return ret;
}


@@ -90,7 +90,7 @@ static int system_opcode_insn(ulong insn, struct sbi_trap_regs *regs)
break;
default:
return truly_illegal_insn(insn, regs);
};
}
if (do_write && sbi_emulate_csr_write(csr_num, regs, new_csr_val))
return truly_illegal_insn(insn, regs);


@@ -12,10 +12,12 @@
#include <sbi/riscv_barrier.h>
#include <sbi/riscv_locks.h>
#include <sbi/sbi_console.h>
#include <sbi/sbi_cppc.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_ecall.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_ipi.h>
#include <sbi/sbi_irqchip.h>
@@ -69,6 +71,8 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
const struct sbi_timer_device *tdev;
const struct sbi_console_device *cdev;
const struct sbi_system_reset_device *srdev;
const struct sbi_system_suspend_device *susp_dev;
const struct sbi_cppc_device *cppc_dev;
const struct sbi_platform *plat = sbi_platform_ptr(scratch);
if (scratch->options & SBI_SCRATCH_NO_BOOT_PRINTS)
@@ -103,11 +107,32 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
srdev = sbi_system_reset_get_device(SBI_SRST_RESET_TYPE_SHUTDOWN, 0);
sbi_printf("Platform Shutdown Device : %s\n",
(srdev) ? srdev->name : "---");
susp_dev = sbi_system_suspend_get_device();
sbi_printf("Platform Suspend Device : %s\n",
(susp_dev) ? susp_dev->name : "---");
cppc_dev = sbi_cppc_get_device();
sbi_printf("Platform CPPC Device : %s\n",
(cppc_dev) ? cppc_dev->name : "---");
/* Firmware details */
sbi_printf("Firmware Base : 0x%lx\n", scratch->fw_start);
sbi_printf("Firmware Size : %d KB\n",
(u32)(scratch->fw_size / 1024));
sbi_printf("Firmware RW Offset : 0x%lx\n", scratch->fw_rw_offset);
sbi_printf("Firmware RW Size : %d KB\n",
(u32)((scratch->fw_size - scratch->fw_rw_offset) / 1024));
sbi_printf("Firmware Heap Offset : 0x%lx\n", scratch->fw_heap_offset);
sbi_printf("Firmware Heap Size : "
"%d KB (total), %d KB (reserved), %d KB (used), %d KB (free)\n",
(u32)(scratch->fw_heap_size / 1024),
(u32)(sbi_heap_reserved_space() / 1024),
(u32)(sbi_heap_used_space() / 1024),
(u32)(sbi_heap_free_space() / 1024));
sbi_printf("Firmware Scratch Size : "
"%d B (total), %d B (used), %d B (free)\n",
SBI_SCRATCH_SIZE,
(u32)sbi_scratch_used_space(),
(u32)(SBI_SCRATCH_SIZE - sbi_scratch_used_space()));
/* SBI details */
sbi_printf("Runtime SBI Version : %d.%d\n",
@@ -190,7 +215,7 @@ static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
wfi();
cmip = csr_read(CSR_MIP);
} while (!(cmip & (MIP_MSIP | MIP_MEIP)));
};
}
/* Acquire coldboot lock */
spin_lock(&coldboot_lock);
@@ -233,12 +258,13 @@ static void wake_coldboot_harts(struct sbi_scratch *scratch, u32 hartid)
spin_unlock(&coldboot_lock);
}
static unsigned long entry_count_offset;
static unsigned long init_count_offset;
static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
{
int rc;
unsigned long *init_count;
unsigned long *count;
const struct sbi_platform *plat = sbi_platform_ptr(scratch);
/* Note: This has to be first thing in coldboot init sequence */
@@ -247,23 +273,35 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
sbi_hart_hang();
/* Note: This has to be second thing in coldboot init sequence */
rc = sbi_heap_init(scratch);
if (rc)
sbi_hart_hang();
/* Note: This has to be the third thing in coldboot init sequence */
rc = sbi_domain_init(scratch, hartid);
if (rc)
sbi_hart_hang();
entry_count_offset = sbi_scratch_alloc_offset(__SIZEOF_POINTER__);
if (!entry_count_offset)
sbi_hart_hang();
init_count_offset = sbi_scratch_alloc_offset(__SIZEOF_POINTER__);
if (!init_count_offset)
sbi_hart_hang();
rc = sbi_hsm_init(scratch, hartid, TRUE);
count = sbi_scratch_offset_ptr(scratch, entry_count_offset);
(*count)++;
rc = sbi_hsm_init(scratch, hartid, true);
if (rc)
sbi_hart_hang();
rc = sbi_platform_early_init(plat, TRUE);
rc = sbi_platform_early_init(plat, true);
if (rc)
sbi_hart_hang();
rc = sbi_hart_init(scratch, TRUE);
rc = sbi_hart_init(scratch, true);
if (rc)
sbi_hart_hang();
@@ -271,43 +309,40 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
if (rc)
sbi_hart_hang();
rc = sbi_pmu_init(scratch, TRUE);
if (rc)
rc = sbi_pmu_init(scratch, true);
if (rc) {
sbi_printf("%s: pmu init failed (error %d)\n",
__func__, rc);
sbi_hart_hang();
}
sbi_boot_print_banner(scratch);
rc = sbi_irqchip_init(scratch, TRUE);
rc = sbi_irqchip_init(scratch, true);
if (rc) {
sbi_printf("%s: irqchip init failed (error %d)\n",
__func__, rc);
sbi_hart_hang();
}
rc = sbi_ipi_init(scratch, TRUE);
rc = sbi_ipi_init(scratch, true);
if (rc) {
sbi_printf("%s: ipi init failed (error %d)\n", __func__, rc);
sbi_hart_hang();
}
rc = sbi_tlb_init(scratch, TRUE);
rc = sbi_tlb_init(scratch, true);
if (rc) {
sbi_printf("%s: tlb init failed (error %d)\n", __func__, rc);
sbi_hart_hang();
}
rc = sbi_timer_init(scratch, TRUE);
rc = sbi_timer_init(scratch, true);
if (rc) {
sbi_printf("%s: timer init failed (error %d)\n", __func__, rc);
sbi_hart_hang();
}
rc = sbi_ecall_init();
if (rc) {
sbi_printf("%s: ecall init failed (error %d)\n", __func__, rc);
sbi_hart_hang();
}
/*
* Note: Finalize domains after HSM initialization so that we
* can startup non-root domains.
@@ -329,16 +364,28 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
}
/*
* Note: Platform final initialization should be last so that
* it sees correct domain assignment and PMP configuration.
* Note: Platform final initialization should be after finalizing
* domains so that it sees correct domain assignment and PMP
* configuration for FDT fixups.
*/
rc = sbi_platform_final_init(plat, TRUE);
rc = sbi_platform_final_init(plat, true);
if (rc) {
sbi_printf("%s: platform final init failed (error %d)\n",
__func__, rc);
sbi_hart_hang();
}
/*
* Note: Ecall initialization should be after platform final
* initialization so that all available platform devices are
* already registered.
*/
rc = sbi_ecall_init();
if (rc) {
sbi_printf("%s: ecall init failed (error %d)\n", __func__, rc);
sbi_hart_hang();
}
sbi_boot_print_general(scratch);
sbi_boot_print_domains(scratch);
@@ -347,52 +394,54 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
wake_coldboot_harts(scratch, hartid);
init_count = sbi_scratch_offset_ptr(scratch, init_count_offset);
(*init_count)++;
count = sbi_scratch_offset_ptr(scratch, init_count_offset);
(*count)++;
sbi_hsm_prepare_next_jump(scratch, hartid);
sbi_hart_switch_mode(hartid, scratch->next_arg1, scratch->next_addr,
scratch->next_mode, FALSE);
sbi_hsm_hart_start_finish(scratch, hartid);
}
static void init_warm_startup(struct sbi_scratch *scratch, u32 hartid)
static void __noreturn init_warm_startup(struct sbi_scratch *scratch,
u32 hartid)
{
int rc;
unsigned long *init_count;
unsigned long *count;
const struct sbi_platform *plat = sbi_platform_ptr(scratch);
if (!init_count_offset)
if (!entry_count_offset || !init_count_offset)
sbi_hart_hang();
rc = sbi_hsm_init(scratch, hartid, FALSE);
count = sbi_scratch_offset_ptr(scratch, entry_count_offset);
(*count)++;
rc = sbi_hsm_init(scratch, hartid, false);
if (rc)
sbi_hart_hang();
rc = sbi_platform_early_init(plat, FALSE);
rc = sbi_platform_early_init(plat, false);
if (rc)
sbi_hart_hang();
rc = sbi_hart_init(scratch, FALSE);
rc = sbi_hart_init(scratch, false);
if (rc)
sbi_hart_hang();
rc = sbi_pmu_init(scratch, FALSE);
rc = sbi_pmu_init(scratch, false);
if (rc)
sbi_hart_hang();
rc = sbi_irqchip_init(scratch, FALSE);
rc = sbi_irqchip_init(scratch, false);
if (rc)
sbi_hart_hang();
rc = sbi_ipi_init(scratch, FALSE);
rc = sbi_ipi_init(scratch, false);
if (rc)
sbi_hart_hang();
rc = sbi_tlb_init(scratch, FALSE);
rc = sbi_tlb_init(scratch, false);
if (rc)
sbi_hart_hang();
rc = sbi_timer_init(scratch, FALSE);
rc = sbi_timer_init(scratch, false);
if (rc)
sbi_hart_hang();
@@ -400,17 +449,18 @@ static void init_warm_startup(struct sbi_scratch *scratch, u32 hartid)
if (rc)
sbi_hart_hang();
rc = sbi_platform_final_init(plat, FALSE);
rc = sbi_platform_final_init(plat, false);
if (rc)
sbi_hart_hang();
init_count = sbi_scratch_offset_ptr(scratch, init_count_offset);
(*init_count)++;
count = sbi_scratch_offset_ptr(scratch, init_count_offset);
(*count)++;
sbi_hsm_prepare_next_jump(scratch, hartid);
sbi_hsm_hart_start_finish(scratch, hartid);
}
static void init_warm_resume(struct sbi_scratch *scratch)
static void __noreturn init_warm_resume(struct sbi_scratch *scratch,
u32 hartid)
{
int rc;
@@ -424,7 +474,7 @@ static void init_warm_resume(struct sbi_scratch *scratch)
if (rc)
sbi_hart_hang();
sbi_hsm_hart_resume_finish(scratch);
sbi_hsm_hart_resume_finish(scratch, hartid);
}
static void __noreturn init_warmboot(struct sbi_scratch *scratch, u32 hartid)
@@ -437,14 +487,12 @@ static void __noreturn init_warmboot(struct sbi_scratch *scratch, u32 hartid)
if (hstate < 0)
sbi_hart_hang();
if (hstate == SBI_HSM_STATE_SUSPENDED)
init_warm_resume(scratch);
else
if (hstate == SBI_HSM_STATE_SUSPENDED) {
init_warm_resume(scratch, hartid);
} else {
sbi_ipi_raw_clear(hartid);
init_warm_startup(scratch, hartid);
sbi_hart_switch_mode(hartid, scratch->next_arg1,
scratch->next_addr,
scratch->next_mode, FALSE);
}
}
static atomic_t coldboot_lottery = ATOMIC_INITIALIZER(0);
@@ -463,8 +511,8 @@ static atomic_t coldboot_lottery = ATOMIC_INITIALIZER(0);
*/
void __noreturn sbi_init(struct sbi_scratch *scratch)
{
bool next_mode_supported = FALSE;
bool coldboot = FALSE;
bool next_mode_supported = false;
bool coldboot = false;
u32 hartid = current_hartid();
const struct sbi_platform *plat = sbi_platform_ptr(scratch);
@@ -474,15 +522,15 @@ void __noreturn sbi_init(struct sbi_scratch *scratch)
switch (scratch->next_mode) {
case PRV_M:
next_mode_supported = TRUE;
next_mode_supported = true;
break;
case PRV_S:
if (misa_extension('S'))
next_mode_supported = TRUE;
next_mode_supported = true;
break;
case PRV_U:
if (misa_extension('U'))
next_mode_supported = TRUE;
next_mode_supported = true;
break;
default:
sbi_hart_hang();
@@ -498,8 +546,11 @@ void __noreturn sbi_init(struct sbi_scratch *scratch)
* HARTs which satisfy above condition.
*/
if (next_mode_supported && atomic_xchg(&coldboot_lottery, 1) == 0)
coldboot = TRUE;
if (sbi_platform_cold_boot_allowed(plat, hartid)) {
if (next_mode_supported &&
atomic_xchg(&coldboot_lottery, 1) == 0)
coldboot = true;
}
/*
* Do platform specific nascent (very early) initialization so
@@ -515,6 +566,23 @@ void __noreturn sbi_init(struct sbi_scratch *scratch)
init_warmboot(scratch, hartid);
}
unsigned long sbi_entry_count(u32 hartid)
{
struct sbi_scratch *scratch;
unsigned long *entry_count;
if (!entry_count_offset)
return 0;
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
return 0;
entry_count = sbi_scratch_offset_ptr(scratch, entry_count_offset);
return *entry_count;
}
unsigned long sbi_init_count(u32 hartid)
{
struct sbi_scratch *scratch;


@@ -53,7 +53,7 @@ static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartid,
if (ipi_ops->update) {
ret = ipi_ops->update(scratch, remote_scratch,
remote_hartid, data);
if (ret < 0)
if (ret != SBI_IPI_UPDATE_SUCCESS)
return ret;
}
@@ -69,6 +69,18 @@ static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartid,
sbi_pmu_ctr_incr_fw(SBI_PMU_FW_IPI_SENT);
return 0;
}
static int sbi_ipi_sync(struct sbi_scratch *scratch, u32 event)
{
const struct sbi_ipi_event_ops *ipi_ops;
if ((SBI_IPI_EVENT_MAX <= event) ||
!ipi_ops_array[event])
return SBI_EINVAL;
ipi_ops = ipi_ops_array[event];
if (ipi_ops->sync)
ipi_ops->sync(scratch);
@@ -83,33 +95,49 @@ static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartid,
int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data)
{
int rc;
bool retry_needed;
ulong i, m;
struct sbi_hartmask target_mask = {0};
struct sbi_domain *dom = sbi_domain_thishart_ptr();
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
/* Find the target harts */
if (hbase != -1UL) {
rc = sbi_hsm_hart_interruptible_mask(dom, hbase, &m);
if (rc)
return rc;
m &= hmask;
/* Send IPIs */
for (i = hbase; m; i++, m >>= 1) {
if (m & 1UL)
sbi_ipi_send(scratch, i, event, data);
sbi_hartmask_set_hart(i, &target_mask);
}
} else {
hbase = 0;
while (!sbi_hsm_hart_interruptible_mask(dom, hbase, &m)) {
/* Send IPIs */
for (i = hbase; m; i++, m >>= 1) {
if (m & 1UL)
sbi_ipi_send(scratch, i, event, data);
sbi_hartmask_set_hart(i, &target_mask);
}
hbase += BITS_PER_LONG;
}
}
/* Send IPIs */
do {
retry_needed = false;
sbi_hartmask_for_each_hart(i, &target_mask) {
rc = sbi_ipi_send(scratch, i, event, data);
if (rc == SBI_IPI_UPDATE_RETRY)
retry_needed = true;
else
sbi_hartmask_clear_hart(i, &target_mask);
}
} while (retry_needed);
/* Sync IPIs */
sbi_ipi_sync(scratch, event);
return 0;
}
@@ -163,7 +191,7 @@ void sbi_ipi_clear_smode(void)
static void sbi_ipi_process_halt(struct sbi_scratch *scratch)
{
sbi_hsm_hart_stop(scratch, TRUE);
sbi_hsm_hart_stop(scratch, true);
}
static struct sbi_ipi_event_ops ipi_halt_ops = {
@@ -195,17 +223,14 @@ void sbi_ipi_process(void)
ipi_type = atomic_raw_xchg_ulong(&ipi_data->ipi_type, 0);
ipi_event = 0;
while (ipi_type) {
if (!(ipi_type & 1UL))
goto skip;
ipi_ops = ipi_ops_array[ipi_event];
if (ipi_ops && ipi_ops->process)
ipi_ops->process(scratch);
skip:
if (ipi_type & 1UL) {
ipi_ops = ipi_ops_array[ipi_event];
if (ipi_ops && ipi_ops->process)
ipi_ops->process(scratch);
}
ipi_type = ipi_type >> 1;
ipi_event++;
};
}
}
int sbi_ipi_raw_send(u32 target_hart)
@@ -217,6 +242,12 @@ int sbi_ipi_raw_send(u32 target_hart)
return 0;
}
void sbi_ipi_raw_clear(u32 target_hart)
{
if (ipi_dev && ipi_dev->ipi_clear)
ipi_dev->ipi_clear(target_hart);
}
const struct sbi_ipi_device *sbi_ipi_get_device(void)
{
return ipi_dev;


@@ -12,7 +12,7 @@
#include <sbi/sbi_console.h>
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_pmu.h>
#include <sbi/sbi_scratch.h>
@@ -50,23 +50,43 @@ union sbi_pmu_ctr_info {
};
};
#if SBI_PMU_FW_CTR_MAX >= BITS_PER_LONG
#error "Can't handle firmware counters beyond BITS_PER_LONG"
#endif
/** Per-HART state of the PMU counters */
struct sbi_pmu_hart_state {
/* HART to which this state belongs */
uint32_t hartid;
/* Counter to enabled event mapping */
uint32_t active_events[SBI_PMU_HW_CTR_MAX + SBI_PMU_FW_CTR_MAX];
/* Bitmap of firmware counters started */
unsigned long fw_counters_started;
/*
* Counter values for SBI firmware events and event codes
* for platform firmware events. Both are mutually exclusive
* and hence can optimally share the same memory.
*/
uint64_t fw_counters_data[SBI_PMU_FW_CTR_MAX];
};
/** Offset of pointer to PMU HART state in scratch space */
static unsigned long phs_ptr_offset;
#define pmu_get_hart_state_ptr(__scratch) \
sbi_scratch_read_type((__scratch), void *, phs_ptr_offset)
#define pmu_thishart_state_ptr() \
pmu_get_hart_state_ptr(sbi_scratch_thishart_ptr())
#define pmu_set_hart_state_ptr(__scratch, __phs) \
sbi_scratch_write_type((__scratch), void *, phs_ptr_offset, (__phs))
/* Platform specific PMU device */
static const struct sbi_pmu_device *pmu_dev = NULL;
/* Mapping between event range and possible counters */
static struct sbi_pmu_hw_event hw_event_map[SBI_PMU_HW_EVENT_MAX] = {0};
/* counter to enabled event mapping */
static uint32_t active_events[SBI_HARTMASK_MAX_BITS][SBI_PMU_HW_CTR_MAX + SBI_PMU_FW_CTR_MAX];
/* Bitmap of firmware counters started on each HART */
#if SBI_PMU_FW_CTR_MAX >= BITS_PER_LONG
#error "Can't handle firmware counters beyond BITS_PER_LONG"
#endif
static unsigned long fw_counters_started[SBI_HARTMASK_MAX_BITS];
/* Values of firmwares counters on each HART */
static uint64_t fw_counters_value[SBI_HARTMASK_MAX_BITS][SBI_PMU_FW_CTR_MAX] = {0};
static struct sbi_pmu_hw_event *hw_event_map;
/* Maximum number of hardware events available */
static uint32_t num_hw_events;
@@ -77,7 +97,8 @@ static uint32_t num_hw_ctrs;
static uint32_t total_ctrs;
/* Helper macros to retrieve event idx and code type */
#define get_cidx_type(x) ((x & SBI_PMU_EVENT_IDX_TYPE_MASK) >> 16)
#define get_cidx_type(x) \
(((x) & SBI_PMU_EVENT_IDX_TYPE_MASK) >> SBI_PMU_EVENT_IDX_TYPE_OFFSET)
#define get_cidx_code(x) (x & SBI_PMU_EVENT_IDX_CODE_MASK)
/**
@@ -85,7 +106,7 @@ static uint32_t total_ctrs;
* @param evtA Pointer to the existing hw event structure
* @param evtB Pointer to the new hw event structure
*
* Return FALSE if the range doesn't overlap, TRUE otherwise
* Return false if the range doesn't overlap, true otherwise
*/
static bool pmu_event_range_overlap(struct sbi_pmu_hw_event *evtA,
struct sbi_pmu_hw_event *evtB)
@@ -93,20 +114,21 @@ static bool pmu_event_range_overlap(struct sbi_pmu_hw_event *evtA,
/* check if the range of events overlap with a previous entry */
if (((evtA->end_idx < evtB->start_idx) && (evtA->end_idx < evtB->end_idx)) ||
((evtA->start_idx > evtB->start_idx) && (evtA->start_idx > evtB->end_idx)))
return FALSE;
return TRUE;
return false;
return true;
}
static bool pmu_event_select_overlap(struct sbi_pmu_hw_event *evt,
uint64_t select_val, uint64_t select_mask)
{
if ((evt->select == select_val) && (evt->select_mask == select_mask))
return TRUE;
return true;
return FALSE;
return false;
}
static int pmu_event_validate(unsigned long event_idx)
static int pmu_event_validate(struct sbi_pmu_hart_state *phs,
unsigned long event_idx, uint64_t edata)
{
uint32_t event_idx_type = get_cidx_type(event_idx);
uint32_t event_idx_code = get_cidx_code(event_idx);
@@ -118,9 +140,15 @@ static int pmu_event_validate(unsigned long event_idx)
event_idx_code_max = SBI_PMU_HW_GENERAL_MAX;
break;
case SBI_PMU_EVENT_TYPE_FW:
if (SBI_PMU_FW_MAX <= event_idx_code &&
pmu_dev && pmu_dev->fw_event_validate_code)
return pmu_dev->fw_event_validate_code(event_idx_code);
if ((event_idx_code >= SBI_PMU_FW_MAX &&
event_idx_code <= SBI_PMU_FW_RESERVED_MAX) ||
event_idx_code > SBI_PMU_FW_PLATFORM)
return SBI_EINVAL;
if (SBI_PMU_FW_PLATFORM == event_idx_code &&
pmu_dev && pmu_dev->fw_event_validate_encoding)
return pmu_dev->fw_event_validate_encoding(phs->hartid,
edata);
else
event_idx_code_max = SBI_PMU_FW_MAX;
break;
@@ -153,16 +181,16 @@ static int pmu_event_validate(unsigned long event_idx)
return SBI_EINVAL;
}
static int pmu_ctr_validate(uint32_t cidx, uint32_t *event_idx_code)
static int pmu_ctr_validate(struct sbi_pmu_hart_state *phs,
uint32_t cidx, uint32_t *event_idx_code)
{
uint32_t event_idx_val;
uint32_t event_idx_type;
u32 hartid = current_hartid();
if (cidx >= total_ctrs)
return SBI_EINVAL;
event_idx_val = active_events[hartid][cidx];
event_idx_val = phs->active_events[cidx];
event_idx_type = get_cidx_type(event_idx_val);
if (event_idx_val == SBI_PMU_EVENT_IDX_INVALID ||
event_idx_type >= SBI_PMU_EVENT_TYPE_MAX)
@@ -177,18 +205,26 @@ int sbi_pmu_ctr_fw_read(uint32_t cidx, uint64_t *cval)
{
int event_idx_type;
uint32_t event_code;
u32 hartid = current_hartid();
struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
event_idx_type = pmu_ctr_validate(cidx, &event_code);
event_idx_type = pmu_ctr_validate(phs, cidx, &event_code);
if (event_idx_type != SBI_PMU_EVENT_TYPE_FW)
return SBI_EINVAL;
if (SBI_PMU_FW_MAX <= event_code &&
pmu_dev && pmu_dev->fw_counter_read_value)
fw_counters_value[hartid][cidx - num_hw_ctrs] =
pmu_dev->fw_counter_read_value(cidx - num_hw_ctrs);
if ((event_code >= SBI_PMU_FW_MAX &&
event_code <= SBI_PMU_FW_RESERVED_MAX) ||
event_code > SBI_PMU_FW_PLATFORM)
return SBI_EINVAL;
*cval = fw_counters_value[hartid][cidx - num_hw_ctrs];
if (SBI_PMU_FW_PLATFORM == event_code) {
if (pmu_dev && pmu_dev->fw_counter_read_value)
*cval = pmu_dev->fw_counter_read_value(phs->hartid,
cidx -
num_hw_ctrs);
else
*cval = 0;
} else
*cval = phs->fw_counters_data[cidx - num_hw_ctrs];
return 0;
}
@@ -356,24 +392,37 @@ int sbi_pmu_irq_bit(void)
return 0;
}
static int pmu_ctr_start_fw(uint32_t cidx, uint32_t event_code,
uint64_t ival, bool ival_update)
static int pmu_ctr_start_fw(struct sbi_pmu_hart_state *phs,
uint32_t cidx, uint32_t event_code,
uint64_t event_data, uint64_t ival,
bool ival_update)
{
int ret;
u32 hartid = current_hartid();
if ((event_code >= SBI_PMU_FW_MAX &&
event_code <= SBI_PMU_FW_RESERVED_MAX) ||
event_code > SBI_PMU_FW_PLATFORM)
return SBI_EINVAL;
if (SBI_PMU_FW_MAX <= event_code &&
pmu_dev && pmu_dev->fw_counter_start) {
ret = pmu_dev->fw_counter_start(cidx - num_hw_ctrs,
event_code,
ival, ival_update);
if (ret)
return ret;
if (SBI_PMU_FW_PLATFORM == event_code) {
if (!pmu_dev ||
!pmu_dev->fw_counter_write_value ||
!pmu_dev->fw_counter_start) {
return SBI_EINVAL;
}
if (ival_update)
pmu_dev->fw_counter_write_value(phs->hartid,
cidx - num_hw_ctrs,
ival);
return pmu_dev->fw_counter_start(phs->hartid,
cidx - num_hw_ctrs,
event_data);
} else {
if (ival_update)
phs->fw_counters_data[cidx - num_hw_ctrs] = ival;
}
if (ival_update)
fw_counters_value[hartid][cidx - num_hw_ctrs] = ival;
fw_counters_started[hartid] |= BIT(cidx - num_hw_ctrs);
phs->fw_counters_started |= BIT(cidx - num_hw_ctrs);
return 0;
}
@@ -381,26 +430,33 @@ static int pmu_ctr_start_fw(uint32_t cidx, uint32_t event_code,
int sbi_pmu_ctr_start(unsigned long cbase, unsigned long cmask,
unsigned long flags, uint64_t ival)
{
struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
int event_idx_type;
uint32_t event_code;
int ret = SBI_EINVAL;
bool bUpdate = FALSE;
bool bUpdate = false;
int i, cidx;
uint64_t edata;
if ((cbase + sbi_fls(cmask)) >= total_ctrs)
return ret;
if (flags & SBI_PMU_START_FLAG_SET_INIT_VALUE)
bUpdate = TRUE;
bUpdate = true;
for_each_set_bit(i, &cmask, total_ctrs) {
cidx = i + cbase;
event_idx_type = pmu_ctr_validate(cidx, &event_code);
event_idx_type = pmu_ctr_validate(phs, cidx, &event_code);
if (event_idx_type < 0)
/* Continue the start operation for other counters */
continue;
else if (event_idx_type == SBI_PMU_EVENT_TYPE_FW)
ret = pmu_ctr_start_fw(cidx, event_code, ival, bUpdate);
else if (event_idx_type == SBI_PMU_EVENT_TYPE_FW) {
edata = (event_code == SBI_PMU_FW_PLATFORM) ?
phs->fw_counters_data[cidx - num_hw_ctrs]
: 0x0;
ret = pmu_ctr_start_fw(phs, cidx, event_code, edata,
ival, bUpdate);
}
else
ret = pmu_ctr_start_hw(cidx, ival, bUpdate);
}
@@ -430,18 +486,24 @@ static int pmu_ctr_stop_hw(uint32_t cidx)
return SBI_EALREADY_STOPPED;
}
static int pmu_ctr_stop_fw(uint32_t cidx, uint32_t event_code)
static int pmu_ctr_stop_fw(struct sbi_pmu_hart_state *phs,
uint32_t cidx, uint32_t event_code)
{
int ret;
if (SBI_PMU_FW_MAX <= event_code &&
if ((event_code >= SBI_PMU_FW_MAX &&
event_code <= SBI_PMU_FW_RESERVED_MAX) ||
event_code > SBI_PMU_FW_PLATFORM)
return SBI_EINVAL;
if (SBI_PMU_FW_PLATFORM == event_code &&
pmu_dev && pmu_dev->fw_counter_stop) {
ret = pmu_dev->fw_counter_stop(cidx - num_hw_ctrs);
ret = pmu_dev->fw_counter_stop(phs->hartid, cidx - num_hw_ctrs);
if (ret)
return ret;
}
fw_counters_started[current_hartid()] &= ~BIT(cidx - num_hw_ctrs);
phs->fw_counters_started &= ~BIT(cidx - num_hw_ctrs);
return 0;
}
@@ -465,7 +527,7 @@ static int pmu_reset_hw_mhpmevent(int ctr_idx)
int sbi_pmu_ctr_stop(unsigned long cbase, unsigned long cmask,
unsigned long flag)
{
u32 hartid = current_hartid();
struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
int ret = SBI_EINVAL;
int event_idx_type;
uint32_t event_code;
@@ -476,18 +538,18 @@ int sbi_pmu_ctr_stop(unsigned long cbase, unsigned long cmask,
for_each_set_bit(i, &cmask, total_ctrs) {
cidx = i + cbase;
event_idx_type = pmu_ctr_validate(cidx, &event_code);
event_idx_type = pmu_ctr_validate(phs, cidx, &event_code);
if (event_idx_type < 0)
/* Continue the stop operation for other counters */
continue;
else if (event_idx_type == SBI_PMU_EVENT_TYPE_FW)
ret = pmu_ctr_stop_fw(cidx, event_code);
ret = pmu_ctr_stop_fw(phs, cidx, event_code);
else
ret = pmu_ctr_stop_hw(cidx);
if (flag & SBI_PMU_STOP_FLAG_RESET) {
active_events[hartid][cidx] = SBI_PMU_EVENT_IDX_INVALID;
if (cidx > (CSR_INSTRET - CSR_CYCLE) && flag & SBI_PMU_STOP_FLAG_RESET) {
phs->active_events[cidx] = SBI_PMU_EVENT_IDX_INVALID;
pmu_reset_hw_mhpmevent(cidx);
}
}
@@ -558,14 +620,15 @@ static int pmu_ctr_find_fixed_fw(unsigned long evt_idx_code)
return SBI_EINVAL;
}
static int pmu_ctr_find_hw(unsigned long cbase, unsigned long cmask, unsigned long flags,
static int pmu_ctr_find_hw(struct sbi_pmu_hart_state *phs,
unsigned long cbase, unsigned long cmask,
unsigned long flags,
unsigned long event_idx, uint64_t data)
{
unsigned long ctr_mask;
int i, ret = 0, fixed_ctr, ctr_idx = SBI_ENOTSUPP;
struct sbi_pmu_hw_event *temp;
unsigned long mctr_inhbt = 0;
u32 hartid = current_hartid();
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
if (cbase >= num_hw_ctrs)
@@ -604,7 +667,7 @@ static int pmu_ctr_find_hw(unsigned long cbase, unsigned long cmask, unsigned lo
* Some of the platform may not support mcountinhibit.
* Checking the active_events is enough for them
*/
if (active_events[hartid][cbase] != SBI_PMU_EVENT_IDX_INVALID)
if (phs->active_events[cbase] != SBI_PMU_EVENT_IDX_INVALID)
continue;
/* If mcountinhibit is supported, the bit must be enabled */
if ((sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_11) &&
@@ -639,21 +702,28 @@ static int pmu_ctr_find_hw(unsigned long cbase, unsigned long cmask, unsigned lo
* Thus, select the first available fw counter after sanity
* check.
*/
static int pmu_ctr_find_fw(unsigned long cbase, unsigned long cmask,
uint32_t event_code, u32 hartid)
static int pmu_ctr_find_fw(struct sbi_pmu_hart_state *phs,
unsigned long cbase, unsigned long cmask,
uint32_t event_code, uint64_t edata)
{
int i, cidx;
if ((event_code >= SBI_PMU_FW_MAX &&
event_code <= SBI_PMU_FW_RESERVED_MAX) ||
event_code > SBI_PMU_FW_PLATFORM)
return SBI_EINVAL;
for_each_set_bit(i, &cmask, BITS_PER_LONG) {
cidx = i + cbase;
if (cidx < num_hw_ctrs || total_ctrs <= cidx)
continue;
if (active_events[hartid][i] != SBI_PMU_EVENT_IDX_INVALID)
if (phs->active_events[i] != SBI_PMU_EVENT_IDX_INVALID)
continue;
if (SBI_PMU_FW_MAX <= event_code &&
pmu_dev && pmu_dev->fw_counter_match_code) {
if (!pmu_dev->fw_counter_match_code(cidx - num_hw_ctrs,
event_code))
if (SBI_PMU_FW_PLATFORM == event_code &&
pmu_dev && pmu_dev->fw_counter_match_encoding) {
if (!pmu_dev->fw_counter_match_encoding(phs->hartid,
cidx - num_hw_ctrs,
edata))
continue;
}
@@ -667,15 +737,15 @@ int sbi_pmu_ctr_cfg_match(unsigned long cidx_base, unsigned long cidx_mask,
unsigned long flags, unsigned long event_idx,
uint64_t event_data)
{
int ret, ctr_idx = SBI_ENOTSUPP;
u32 event_code, hartid = current_hartid();
int event_type;
struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
int ret, event_type, ctr_idx = SBI_ENOTSUPP;
u32 event_code;
/* Do a basic sanity check of counter base & mask */
if ((cidx_base + sbi_fls(cidx_mask)) >= total_ctrs)
return SBI_EINVAL;
event_type = pmu_event_validate(event_idx);
event_type = pmu_event_validate(phs, event_idx, event_data);
if (event_type < 0)
return SBI_EINVAL;
event_code = get_cidx_code(event_idx);
@@ -684,25 +754,34 @@ int sbi_pmu_ctr_cfg_match(unsigned long cidx_base, unsigned long cidx_mask,
/* The caller wants to skip the match because it already knows the
* counter idx for the given event. Verify that the counter idx
* is still valid.
* As per the specification, we should "unconditionally select
* the first counter from the set of counters specified by the
* counter_idx_base and counter_idx_mask".
*/
if (active_events[hartid][cidx_base] == SBI_PMU_EVENT_IDX_INVALID)
unsigned long cidx_first = cidx_base + sbi_ffs(cidx_mask);
if (phs->active_events[cidx_first] == SBI_PMU_EVENT_IDX_INVALID)
return SBI_EINVAL;
ctr_idx = cidx_base;
ctr_idx = cidx_first;
goto skip_match;
}
if (event_type == SBI_PMU_EVENT_TYPE_FW) {
/* Any firmware counter can be used track any firmware event */
ctr_idx = pmu_ctr_find_fw(cidx_base, cidx_mask, event_code, hartid);
ctr_idx = pmu_ctr_find_fw(phs, cidx_base, cidx_mask,
event_code, event_data);
if (event_code == SBI_PMU_FW_PLATFORM)
phs->fw_counters_data[ctr_idx - num_hw_ctrs] =
event_data;
} else {
ctr_idx = pmu_ctr_find_hw(cidx_base, cidx_mask, flags, event_idx,
event_data);
ctr_idx = pmu_ctr_find_hw(phs, cidx_base, cidx_mask, flags,
event_idx, event_data);
}
if (ctr_idx < 0)
return SBI_ENOTSUPP;
active_events[hartid][ctr_idx] = event_idx;
phs->active_events[ctr_idx] = event_idx;
skip_match:
if (event_type == SBI_PMU_EVENT_TYPE_HW) {
if (flags & SBI_PMU_CFG_FLAG_CLEAR_VALUE)
@@ -711,18 +790,17 @@ skip_match:
pmu_ctr_start_hw(ctr_idx, 0, false);
} else if (event_type == SBI_PMU_EVENT_TYPE_FW) {
if (flags & SBI_PMU_CFG_FLAG_CLEAR_VALUE)
fw_counters_value[hartid][ctr_idx - num_hw_ctrs] = 0;
phs->fw_counters_data[ctr_idx - num_hw_ctrs] = 0;
if (flags & SBI_PMU_CFG_FLAG_AUTO_START) {
if (SBI_PMU_FW_MAX <= event_code &&
if (SBI_PMU_FW_PLATFORM == event_code &&
pmu_dev && pmu_dev->fw_counter_start) {
ret = pmu_dev->fw_counter_start(
ctr_idx - num_hw_ctrs, event_code,
fw_counters_value[hartid][ctr_idx - num_hw_ctrs],
true);
phs->hartid,
ctr_idx - num_hw_ctrs, event_data);
if (ret)
return ret;
}
fw_counters_started[hartid] |= BIT(ctr_idx - num_hw_ctrs);
phs->fw_counters_started |= BIT(ctr_idx - num_hw_ctrs);
}
}
@@ -731,19 +809,20 @@ skip_match:
int sbi_pmu_ctr_incr_fw(enum sbi_pmu_fw_event_code_id fw_id)
{
u32 cidx, hartid = current_hartid();
u32 cidx;
uint64_t *fcounter = NULL;
struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
if (likely(!fw_counters_started[hartid]))
if (likely(!phs->fw_counters_started))
return 0;
if (unlikely(fw_id >= SBI_PMU_FW_MAX))
return SBI_EINVAL;
for (cidx = num_hw_ctrs; cidx < total_ctrs; cidx++) {
if (get_cidx_code(active_events[hartid][cidx]) == fw_id &&
(fw_counters_started[hartid] & BIT(cidx - num_hw_ctrs))) {
fcounter = &fw_counters_value[hartid][cidx - num_hw_ctrs];
if (get_cidx_code(phs->active_events[cidx]) == fw_id &&
(phs->fw_counters_started & BIT(cidx - num_hw_ctrs))) {
fcounter = &phs->fw_counters_data[cidx - num_hw_ctrs];
break;
}
}
@@ -761,6 +840,7 @@ unsigned long sbi_pmu_num_ctr(void)
int sbi_pmu_ctr_get_info(uint32_t cidx, unsigned long *ctr_info)
{
int width;
union sbi_pmu_ctr_info cinfo = {0};
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
@@ -782,6 +862,11 @@ int sbi_pmu_ctr_get_info(uint32_t cidx, unsigned long *ctr_info)
cinfo.type = SBI_PMU_CTR_TYPE_FW;
/* Firmware counters are always 64 bits wide */
cinfo.width = 63;
if (pmu_dev && pmu_dev->fw_counter_width) {
width = pmu_dev->fw_counter_width();
if (width)
cinfo.width = width - 1;
}
}
*ctr_info = cinfo.value;
@@ -789,16 +874,16 @@ int sbi_pmu_ctr_get_info(uint32_t cidx, unsigned long *ctr_info)
return 0;
}
static void pmu_reset_event_map(u32 hartid)
static void pmu_reset_event_map(struct sbi_pmu_hart_state *phs)
{
int j;
/* Initialize the counter to event mapping table */
for (j = 3; j < total_ctrs; j++)
active_events[hartid][j] = SBI_PMU_EVENT_IDX_INVALID;
phs->active_events[j] = SBI_PMU_EVENT_IDX_INVALID;
for (j = 0; j < SBI_PMU_FW_CTR_MAX; j++)
fw_counters_value[hartid][j] = 0;
fw_counters_started[hartid] = 0;
phs->fw_counters_data[j] = 0;
phs->fw_counters_started = 0;
}
const struct sbi_pmu_device *sbi_pmu_get_device(void)
@@ -816,39 +901,60 @@ void sbi_pmu_set_device(const struct sbi_pmu_device *dev)
void sbi_pmu_exit(struct sbi_scratch *scratch)
{
u32 hartid = current_hartid();
if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_11)
csr_write(CSR_MCOUNTINHIBIT, 0xFFFFFFF8);
if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_10)
csr_write(CSR_MCOUNTEREN, -1);
pmu_reset_event_map(hartid);
pmu_reset_event_map(pmu_get_hart_state_ptr(scratch));
}
int sbi_pmu_init(struct sbi_scratch *scratch, bool cold_boot)
{
struct sbi_pmu_hart_state *phs;
const struct sbi_platform *plat;
u32 hartid = current_hartid();
if (cold_boot) {
hw_event_map = sbi_calloc(sizeof(*hw_event_map),
SBI_PMU_HW_EVENT_MAX);
if (!hw_event_map)
return SBI_ENOMEM;
phs_ptr_offset = sbi_scratch_alloc_type_offset(void *);
if (!phs_ptr_offset) {
sbi_free(hw_event_map);
return SBI_ENOMEM;
}
plat = sbi_platform_ptr(scratch);
/* Initialize hw pmu events */
sbi_platform_pmu_init(plat);
/* mcycle & minstret is available always */
num_hw_ctrs = sbi_hart_mhpm_count(scratch) + 3;
if (num_hw_ctrs > SBI_PMU_HW_CTR_MAX)
return SBI_EINVAL;
total_ctrs = num_hw_ctrs + SBI_PMU_FW_CTR_MAX;
}
pmu_reset_event_map(hartid);
phs = pmu_get_hart_state_ptr(scratch);
if (!phs) {
phs = sbi_zalloc(sizeof(*phs));
if (!phs)
return SBI_ENOMEM;
phs->hartid = current_hartid();
pmu_set_hart_state_ptr(scratch, phs);
}
pmu_reset_event_map(phs);
/* First three counters are fixed by the priv spec and we enable it by default */
active_events[hartid][0] = SBI_PMU_EVENT_TYPE_HW << SBI_PMU_EVENT_IDX_OFFSET |
SBI_PMU_HW_CPU_CYCLES;
active_events[hartid][1] = SBI_PMU_EVENT_IDX_INVALID;
active_events[hartid][2] = SBI_PMU_EVENT_TYPE_HW << SBI_PMU_EVENT_IDX_OFFSET |
SBI_PMU_HW_INSTRUCTIONS;
phs->active_events[0] = (SBI_PMU_EVENT_TYPE_HW << SBI_PMU_EVENT_IDX_TYPE_OFFSET) |
SBI_PMU_HW_CPU_CYCLES;
phs->active_events[1] = SBI_PMU_EVENT_IDX_INVALID;
phs->active_events[2] = (SBI_PMU_EVENT_TYPE_HW << SBI_PMU_EVENT_IDX_TYPE_OFFSET) |
SBI_PMU_HW_INSTRUCTIONS;
return 0;
}


@@ -59,8 +59,8 @@ unsigned long sbi_scratch_alloc_offset(unsigned long size)
if (!size)
return 0;
if (size & (__SIZEOF_POINTER__ - 1))
size = (size & ~(__SIZEOF_POINTER__ - 1)) + __SIZEOF_POINTER__;
size += __SIZEOF_POINTER__ - 1;
size &= ~((unsigned long)__SIZEOF_POINTER__ - 1);
spin_lock(&extra_lock);
@@ -97,3 +97,14 @@ void sbi_scratch_free_offset(unsigned long offset)
* brain-dead allocator.
*/
}
unsigned long sbi_scratch_used_space(void)
{
unsigned long ret = 0;
spin_lock(&extra_lock);
ret = extra_offset;
spin_unlock(&extra_lock);
return ret;
}


@@ -17,6 +17,7 @@
#include <sbi/sbi_system.h>
#include <sbi/sbi_ipi.h>
#include <sbi/sbi_init.h>
#include <sbi/sbi_timer.h>
static SBI_LIST_HEAD(reset_devices_list);
@@ -79,7 +80,7 @@ void __noreturn sbi_system_reset(u32 reset_type, u32 reset_reason)
}
/* Stop current HART */
sbi_hsm_hart_stop(scratch, FALSE);
sbi_hsm_hart_stop(scratch, false);
/* Platform specific reset if domain allowed system reset */
if (dom->system_reset_allowed) {
@@ -92,3 +93,116 @@ void __noreturn sbi_system_reset(u32 reset_type, u32 reset_reason)
/* If platform specific reset did not work then do sbi_exit() */
sbi_exit(scratch);
}
static const struct sbi_system_suspend_device *suspend_dev = NULL;
const struct sbi_system_suspend_device *sbi_system_suspend_get_device(void)
{
return suspend_dev;
}
void sbi_system_suspend_set_device(struct sbi_system_suspend_device *dev)
{
if (!dev || suspend_dev)
return;
suspend_dev = dev;
}
static int sbi_system_suspend_test_check(u32 sleep_type)
{
return sleep_type == SBI_SUSP_SLEEP_TYPE_SUSPEND ? 0 : SBI_EINVAL;
}
static int sbi_system_suspend_test_suspend(u32 sleep_type,
unsigned long mmode_resume_addr)
{
if (sleep_type != SBI_SUSP_SLEEP_TYPE_SUSPEND)
return SBI_EINVAL;
sbi_timer_mdelay(5000);
/* Wait for interrupt */
wfi();
return SBI_OK;
}
static struct sbi_system_suspend_device sbi_system_suspend_test = {
.name = "system-suspend-test",
.system_suspend_check = sbi_system_suspend_test_check,
.system_suspend = sbi_system_suspend_test_suspend,
};
void sbi_system_suspend_test_enable(void)
{
sbi_system_suspend_set_device(&sbi_system_suspend_test);
}
bool sbi_system_suspend_supported(u32 sleep_type)
{
return suspend_dev && suspend_dev->system_suspend_check &&
suspend_dev->system_suspend_check(sleep_type) == 0;
}
int sbi_system_suspend(u32 sleep_type, ulong resume_addr, ulong opaque)
{
const struct sbi_domain *dom = sbi_domain_thishart_ptr();
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
void (*jump_warmboot)(void) = (void (*)(void))scratch->warmboot_addr;
unsigned int hartid = current_hartid();
unsigned long prev_mode;
unsigned long i;
int ret;
if (!dom || !dom->system_suspend_allowed)
return SBI_EFAIL;
if (!suspend_dev || !suspend_dev->system_suspend ||
!suspend_dev->system_suspend_check)
return SBI_EFAIL;
ret = suspend_dev->system_suspend_check(sleep_type);
if (ret != SBI_OK)
return ret;
prev_mode = (csr_read(CSR_MSTATUS) & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT;
if (prev_mode != PRV_S && prev_mode != PRV_U)
return SBI_EFAIL;
sbi_hartmask_for_each_hart(i, &dom->assigned_harts) {
if (i == hartid)
continue;
if (__sbi_hsm_hart_get_state(i) != SBI_HSM_STATE_STOPPED)
return SBI_EFAIL;
}
if (!sbi_domain_check_addr(dom, resume_addr, prev_mode,
SBI_DOMAIN_EXECUTE))
return SBI_EINVALID_ADDR;
if (!sbi_hsm_hart_change_state(scratch, SBI_HSM_STATE_STARTED,
SBI_HSM_STATE_SUSPENDED))
return SBI_EFAIL;
/* Prepare for resume */
scratch->next_mode = prev_mode;
scratch->next_addr = resume_addr;
scratch->next_arg1 = opaque;
__sbi_hsm_suspend_non_ret_save(scratch);
/* Suspend */
ret = suspend_dev->system_suspend(sleep_type, scratch->warmboot_addr);
if (ret != SBI_OK) {
if (!sbi_hsm_hart_change_state(scratch, SBI_HSM_STATE_SUSPENDED,
SBI_HSM_STATE_STARTED))
sbi_hart_hang();
return ret;
}
/* Resume */
jump_warmboot();
__builtin_unreachable();
}
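The `prev_mode` check above gates system suspend on the caller's privilege mode. A small sketch of that extraction, using the architectural MPP field (bits 12:11 of `mstatus`) and the standard RISC-V privilege encodings (U=0, S=1, M=3); the helper name is hypothetical:

```c
#include <assert.h>

#define MSTATUS_MPP_SHIFT 11
#define MSTATUS_MPP       (3UL << MSTATUS_MPP_SHIFT)
#define PRV_U             0UL
#define PRV_S             1UL

/* System suspend may only be requested from S-mode or U-mode. */
static int suspend_mode_ok(unsigned long mstatus)
{
	unsigned long prev = (mstatus & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT;

	return prev == PRV_S || prev == PRV_U;
}
```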

View File

@@ -215,7 +215,7 @@ static void tlb_entry_process(struct sbi_tlb_info *tinfo)
{
u32 rhartid;
struct sbi_scratch *rscratch = NULL;
unsigned long *rtlb_sync = NULL;
atomic_t *rtlb_sync = NULL;
tinfo->local_fn(tinfo);
@@ -225,47 +225,40 @@ static void tlb_entry_process(struct sbi_tlb_info *tinfo)
continue;
rtlb_sync = sbi_scratch_offset_ptr(rscratch, tlb_sync_off);
while (atomic_raw_xchg_ulong(rtlb_sync, 1)) ;
atomic_sub_return(rtlb_sync, 1);
}
}
static void tlb_process_count(struct sbi_scratch *scratch, int count)
static bool tlb_process_once(struct sbi_scratch *scratch)
{
struct sbi_tlb_info tinfo;
unsigned int deq_count = 0;
struct sbi_fifo *tlb_fifo =
sbi_scratch_offset_ptr(scratch, tlb_fifo_off);
while (!sbi_fifo_dequeue(tlb_fifo, &tinfo)) {
if (!sbi_fifo_dequeue(tlb_fifo, &tinfo)) {
tlb_entry_process(&tinfo);
deq_count++;
if (deq_count > count)
break;
return true;
}
return false;
}
static void tlb_process(struct sbi_scratch *scratch)
{
struct sbi_tlb_info tinfo;
struct sbi_fifo *tlb_fifo =
sbi_scratch_offset_ptr(scratch, tlb_fifo_off);
while (!sbi_fifo_dequeue(tlb_fifo, &tinfo))
tlb_entry_process(&tinfo);
while (tlb_process_once(scratch));
}
static void tlb_sync(struct sbi_scratch *scratch)
{
unsigned long *tlb_sync =
atomic_t *tlb_sync =
sbi_scratch_offset_ptr(scratch, tlb_sync_off);
while (!atomic_raw_xchg_ulong(tlb_sync, 0)) {
while (atomic_read(tlb_sync) > 0) {
/*
* While we are waiting for remote hart to set the sync,
* consume fifo requests to avoid deadlock.
*/
tlb_process_count(scratch, 1);
tlb_process_once(scratch);
}
return;
@@ -343,6 +336,7 @@ static int tlb_update(struct sbi_scratch *scratch,
u32 remote_hartid, void *data)
{
int ret;
atomic_t *tlb_sync;
struct sbi_fifo *tlb_fifo_r;
struct sbi_tlb_info *tinfo = data;
u32 curr_hartid = current_hartid();
@@ -363,17 +357,14 @@ static int tlb_update(struct sbi_scratch *scratch,
*/
if (remote_hartid == curr_hartid) {
tinfo->local_fn(tinfo);
return -1;
return SBI_IPI_UPDATE_BREAK;
}
tlb_fifo_r = sbi_scratch_offset_ptr(remote_scratch, tlb_fifo_off);
ret = sbi_fifo_inplace_update(tlb_fifo_r, data, tlb_update_cb);
if (ret != SBI_FIFO_UNCHANGED) {
return 1;
}
while (sbi_fifo_enqueue(tlb_fifo_r, data) < 0) {
if (ret == SBI_FIFO_UNCHANGED && sbi_fifo_enqueue(tlb_fifo_r, data) < 0) {
/*
* For now, busy loop until there is space in the fifo.
* There may be cases where the target hart is also
@@ -382,12 +373,16 @@ static int tlb_update(struct sbi_scratch *scratch,
* TODO: Introduce a wait/wakeup event mechanism to handle
* this properly.
*/
tlb_process_count(scratch, 1);
tlb_process_once(scratch);
sbi_dprintf("hart%d: hart%d tlb fifo full\n",
curr_hartid, remote_hartid);
return SBI_IPI_UPDATE_RETRY;
}
return 0;
tlb_sync = sbi_scratch_offset_ptr(scratch, tlb_sync_off);
atomic_add_return(tlb_sync, 1);
return SBI_IPI_UPDATE_SUCCESS;
}
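The switch from a raw-xchg flag to an atomic counter changes the sync protocol: the sender increments `tlb_sync` once per request it posts, each remote hart decrements after processing, and `tlb_sync()` simply waits for the counter to drain to zero. A single-threaded sketch of the counting protocol using C11 atomics (names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int tlb_sync_count;

/* Sender: one increment per request posted to a remote fifo. */
static void sender_enqueue(void)   { atomic_fetch_add(&tlb_sync_count, 1); }

/* Remote hart: one decrement per request processed. */
static void remote_process(void)   { atomic_fetch_sub(&tlb_sync_count, 1); }

/* Sender: keep draining its own fifo while this stays true. */
static int  sender_must_wait(void) { return atomic_load(&tlb_sync_count) > 0; }
```

A counter (unlike a one-bit flag) correctly tracks multiple outstanding requests to different harts.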
static struct sbi_ipi_event_ops tlb_ops = {
@@ -413,7 +408,7 @@ int sbi_tlb_init(struct sbi_scratch *scratch, bool cold_boot)
{
int ret;
void *tlb_mem;
unsigned long *tlb_sync;
atomic_t *tlb_sync;
struct sbi_fifo *tlb_q;
const struct sbi_platform *plat = sbi_platform_ptr(scratch);
@@ -455,7 +450,7 @@ int sbi_tlb_init(struct sbi_scratch *scratch, bool cold_boot)
tlb_q = sbi_scratch_offset_ptr(scratch, tlb_fifo_off);
tlb_mem = sbi_scratch_offset_ptr(scratch, tlb_fifo_mem_off);
*tlb_sync = 0;
ATOMIC_INIT(tlb_sync, 0);
sbi_fifo_init(tlb_q, tlb_mem,
SBI_TLB_FIFO_NUM_ENTRIES, SBI_TLB_INFO_SIZE);

View File

@@ -88,12 +88,12 @@ int sbi_trap_redirect(struct sbi_trap_regs *regs,
{
ulong hstatus, vsstatus, prev_mode;
#if __riscv_xlen == 32
bool prev_virt = (regs->mstatusH & MSTATUSH_MPV) ? TRUE : FALSE;
bool prev_virt = (regs->mstatusH & MSTATUSH_MPV) ? true : false;
#else
bool prev_virt = (regs->mstatus & MSTATUS_MPV) ? TRUE : FALSE;
bool prev_virt = (regs->mstatus & MSTATUS_MPV) ? true : false;
#endif
/* By default, we redirect to HS-mode */
bool next_virt = FALSE;
bool next_virt = false;
/* Sanity check on previous mode */
prev_mode = (regs->mstatus & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT;
@@ -106,7 +106,7 @@ int sbi_trap_redirect(struct sbi_trap_regs *regs,
if (misa_extension('H') && prev_virt) {
if ((trap->cause < __riscv_xlen) &&
(csr_read(CSR_HEDELEG) & BIT(trap->cause))) {
next_virt = TRUE;
next_virt = true;
}
}
@@ -212,7 +212,7 @@ static int sbi_trap_nonaia_irq(struct sbi_trap_regs *regs, ulong mcause)
return sbi_irqchip_process(regs);
default:
return SBI_ENOENT;
};
}
return 0;
}
@@ -320,7 +320,7 @@ struct sbi_trap_regs *sbi_trap_handler(struct sbi_trap_regs *regs)
rc = sbi_trap_redirect(regs, &trap);
break;
};
}
trap_error:
if (rc)

View File

@@ -163,7 +163,7 @@ ulong sbi_get_insn(ulong mepc, struct sbi_trap_info *trap)
break;
default:
break;
};
}
return insn;
}

View File

@@ -13,6 +13,7 @@
#include <sbi/sbi_domain.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_scratch.h>
#include <sbi_utils/fdt/fdt_domain.h>
#include <sbi_utils/fdt/fdt_helper.h>
@@ -219,14 +220,13 @@ skip_device_disable:
fdt_nop_node(fdt, poffset);
}
#define FDT_DOMAIN_MAX_COUNT 8
#define FDT_DOMAIN_REGION_MAX_COUNT 16
static u32 fdt_domains_count;
static struct sbi_domain fdt_domains[FDT_DOMAIN_MAX_COUNT];
static struct sbi_hartmask fdt_masks[FDT_DOMAIN_MAX_COUNT];
static struct sbi_domain_memregion
fdt_regions[FDT_DOMAIN_MAX_COUNT][FDT_DOMAIN_REGION_MAX_COUNT + 1];
struct parse_region_data {
struct sbi_domain *dom;
u32 region_count;
u32 max_regions;
};
static int __fdt_parse_region(void *fdt, int domain_offset,
int region_offset, u32 region_access,
@@ -236,13 +236,25 @@ static int __fdt_parse_region(void *fdt, int domain_offset,
u32 val32;
u64 val64;
const u32 *val;
u32 *region_count = opaque;
struct parse_region_data *preg = opaque;
struct sbi_domain_memregion *region;
/* Find next region of the domain */
if (FDT_DOMAIN_REGION_MAX_COUNT <= *region_count)
/*
* Non-root domains cannot add a region with only M-mode
* access permissions. M-mode regions can only be part of
* root domain.
*
* SU permission bits can't be all zeroes when M-mode permission
* bits have at least one bit set.
*/
if (!(region_access & SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK)
&& (region_access & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK))
return SBI_EINVAL;
region = &fdt_regions[fdt_domains_count][*region_count];
/* Find next region of the domain */
if (preg->max_regions <= preg->region_count)
return SBI_ENOSPC;
region = &preg->dom->regions[preg->region_count];
/* Read "base" DT property */
val = fdt_getprop(fdt, region_offset, "base", &len);
@@ -266,7 +278,7 @@ static int __fdt_parse_region(void *fdt, int domain_offset,
if (fdt_get_property(fdt, region_offset, "mmio", NULL))
region->flags |= SBI_DOMAIN_MEMREGION_MMIO;
(*region_count)++;
preg->region_count++;
return 0;
}
@@ -279,16 +291,30 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
struct sbi_domain *dom;
struct sbi_hartmask *mask;
struct sbi_hartmask assign_mask;
struct parse_region_data preg;
int *cold_domain_offset = opaque;
struct sbi_domain_memregion *reg, *regions;
int i, err, len, cpus_offset, cpu_offset, doffset;
struct sbi_domain_memregion *reg;
int i, err = 0, len, cpus_offset, cpu_offset, doffset;
/* Sanity check on maximum domains we can handle */
if (FDT_DOMAIN_MAX_COUNT <= fdt_domains_count)
return SBI_EINVAL;
dom = &fdt_domains[fdt_domains_count];
mask = &fdt_masks[fdt_domains_count];
regions = &fdt_regions[fdt_domains_count][0];
dom = sbi_zalloc(sizeof(*dom));
if (!dom)
return SBI_ENOMEM;
dom->regions = sbi_calloc(sizeof(*dom->regions),
FDT_DOMAIN_REGION_MAX_COUNT + 1);
if (!dom->regions) {
err = SBI_ENOMEM;
goto fail_free_domain;
}
preg.dom = dom;
preg.region_count = 0;
preg.max_regions = FDT_DOMAIN_REGION_MAX_COUNT;
mask = sbi_zalloc(sizeof(*mask));
if (!mask) {
err = SBI_ENOMEM;
goto fail_free_regions;
}
/* Read DT node name */
strncpy(dom->name, fdt_get_name(fdt, domain_offset, NULL),
@@ -304,12 +330,14 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
for (i = 0; i < len; i++) {
cpu_offset = fdt_node_offset_by_phandle(fdt,
fdt32_to_cpu(val[i]));
if (cpu_offset < 0)
return cpu_offset;
if (cpu_offset < 0) {
err = cpu_offset;
goto fail_free_all;
}
err = fdt_parse_hart_id(fdt, cpu_offset, &val32);
if (err)
return err;
goto fail_free_all;
if (!fdt_node_is_enabled(fdt, cpu_offset))
continue;
@@ -319,14 +347,10 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
}
/* Setup memregions from DT */
val32 = 0;
memset(regions, 0,
sizeof(*regions) * (FDT_DOMAIN_REGION_MAX_COUNT + 1));
dom->regions = regions;
err = fdt_iterate_each_memregion(fdt, domain_offset, &val32,
err = fdt_iterate_each_memregion(fdt, domain_offset, &preg,
__fdt_parse_region);
if (err)
return err;
goto fail_free_all;
/*
* Copy over root domain memregions which don't allow
@@ -338,14 +362,17 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
* 2) mmio regions protecting M-mode only mmio devices
*/
sbi_domain_for_each_memregion(&root, reg) {
if ((reg->flags & SBI_DOMAIN_MEMREGION_READABLE) ||
(reg->flags & SBI_DOMAIN_MEMREGION_WRITEABLE) ||
(reg->flags & SBI_DOMAIN_MEMREGION_EXECUTABLE))
if ((reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE) ||
(reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE) ||
(reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE))
continue;
if (FDT_DOMAIN_REGION_MAX_COUNT <= val32)
return SBI_EINVAL;
memcpy(&regions[val32++], reg, sizeof(*reg));
if (preg.max_regions <= preg.region_count) {
err = SBI_EINVAL;
goto fail_free_all;
}
memcpy(&dom->regions[preg.region_count++], reg, sizeof(*reg));
}
dom->fw_region_inited = root.fw_region_inited;
/* Read "boot-hart" DT property */
val32 = -1U;
@@ -401,14 +428,23 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
/* Read "system-reset-allowed" DT property */
if (fdt_get_property(fdt, domain_offset,
"system-reset-allowed", NULL))
dom->system_reset_allowed = TRUE;
dom->system_reset_allowed = true;
else
dom->system_reset_allowed = FALSE;
dom->system_reset_allowed = false;
/* Read "system-suspend-allowed" DT property */
if (fdt_get_property(fdt, domain_offset,
"system-suspend-allowed", NULL))
dom->system_suspend_allowed = true;
else
dom->system_suspend_allowed = false;
/* Find /cpus DT node */
cpus_offset = fdt_path_offset(fdt, "/cpus");
if (cpus_offset < 0)
return cpus_offset;
if (cpus_offset < 0) {
err = cpus_offset;
goto fail_free_all;
}
/* HART to domain assignment mask based on CPU DT nodes */
sbi_hartmask_clear_all(&assign_mask);
@@ -424,22 +460,35 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
continue;
val = fdt_getprop(fdt, cpu_offset, "opensbi-domain", &len);
if (!val || len < 4)
return SBI_EINVAL;
if (!val || len < 4) {
err = SBI_EINVAL;
goto fail_free_all;
}
doffset = fdt_node_offset_by_phandle(fdt, fdt32_to_cpu(*val));
if (doffset < 0)
return doffset;
if (doffset < 0) {
err = doffset;
goto fail_free_all;
}
if (doffset == domain_offset)
sbi_hartmask_set_hart(val32, &assign_mask);
}
/* Increment domains count */
fdt_domains_count++;
/* Register the domain */
return sbi_domain_register(dom, &assign_mask);
err = sbi_domain_register(dom, &assign_mask);
if (err)
goto fail_free_all;
return 0;
fail_free_all:
sbi_free(mask);
fail_free_regions:
sbi_free(dom->regions);
fail_free_domain:
sbi_free(dom);
return err;
}
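The heap-backed error path above follows the staged goto-cleanup idiom: each allocation gets a matching label, and a failure unwinds only what was already set up. A reduced sketch (the struct and sizes are illustrative, not OpenSBI's):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct domain { int *regions; };

static int parse_domain(struct domain **out)
{
	int err;
	struct domain *dom = calloc(1, sizeof(*dom));

	if (!dom)
		return -ENOMEM;

	dom->regions = calloc(16, sizeof(*dom->regions));
	if (!dom->regions) {
		err = -ENOMEM;
		goto fail_free_domain;	/* dom was allocated, free it */
	}

	*out = dom;
	return 0;

fail_free_domain:
	free(dom);
	return err;
}
```

The labels are ordered inversely to the allocations, so jumping to any one of them frees exactly the objects that exist at that point.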
int fdt_domains_populate(void *fdt)

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: BSD-2-Clause
/*
* fdt_fixup.c - Flat Device Tree parsing helper routines
@@ -14,10 +15,96 @@
#include <sbi/sbi_hart.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
#include <sbi/sbi_error.h>
#include <sbi_utils/fdt/fdt_fixup.h>
#include <sbi_utils/fdt/fdt_pmu.h>
#include <sbi_utils/fdt/fdt_helper.h>
int fdt_add_cpu_idle_states(void *fdt, const struct sbi_cpu_idle_state *state)
{
int cpu_node, cpus_node, err, idle_states_node;
uint32_t count, phandle;
err = fdt_open_into(fdt, fdt, fdt_totalsize(fdt) + 1024);
if (err < 0)
return err;
err = fdt_find_max_phandle(fdt, &phandle);
if (err < 0)
return err;
phandle++;
cpus_node = fdt_path_offset(fdt, "/cpus");
if (cpus_node < 0)
return cpus_node;
/* Do nothing if the idle-states node already exists. */
idle_states_node = fdt_subnode_offset(fdt, cpus_node, "idle-states");
if (idle_states_node >= 0)
return 0;
/* Create the idle-states node and its child nodes. */
idle_states_node = fdt_add_subnode(fdt, cpus_node, "idle-states");
if (idle_states_node < 0)
return idle_states_node;
for (count = 0; state->name; count++, phandle++, state++) {
int idle_state_node;
idle_state_node = fdt_add_subnode(fdt, idle_states_node,
state->name);
if (idle_state_node < 0)
return idle_state_node;
fdt_setprop_string(fdt, idle_state_node, "compatible",
"riscv,idle-state");
fdt_setprop_u32(fdt, idle_state_node,
"riscv,sbi-suspend-param",
state->suspend_param);
if (state->local_timer_stop)
fdt_setprop_empty(fdt, idle_state_node,
"local-timer-stop");
fdt_setprop_u32(fdt, idle_state_node, "entry-latency-us",
state->entry_latency_us);
fdt_setprop_u32(fdt, idle_state_node, "exit-latency-us",
state->exit_latency_us);
fdt_setprop_u32(fdt, idle_state_node, "min-residency-us",
state->min_residency_us);
if (state->wakeup_latency_us)
fdt_setprop_u32(fdt, idle_state_node,
"wakeup-latency-us",
state->wakeup_latency_us);
fdt_setprop_u32(fdt, idle_state_node, "phandle", phandle);
}
if (count == 0)
return 0;
/* Link each cpu node to the idle state nodes. */
fdt_for_each_subnode(cpu_node, fdt, cpus_node) {
const char *device_type;
fdt32_t *value;
/* Only process child nodes with device_type = "cpu". */
device_type = fdt_getprop(fdt, cpu_node, "device_type", NULL);
if (!device_type || strcmp(device_type, "cpu"))
continue;
/* Allocate space for the list of phandles. */
err = fdt_setprop_placeholder(fdt, cpu_node, "cpu-idle-states",
count * sizeof(phandle),
(void **)&value);
if (err < 0)
return err;
/* Fill in the phandles of the idle state nodes. */
for (uint32_t i = 0; i < count; ++i)
value[i] = cpu_to_fdt32(phandle - count + i);
}
return 0;
}
void fdt_cpu_fixup(void *fdt)
{
struct sbi_domain *dom = sbi_domain_thishart_ptr();
@@ -123,7 +210,7 @@ void fdt_plic_fixup(void *fdt)
static int fdt_resv_memory_update_node(void *fdt, unsigned long addr,
unsigned long size, int index,
int parent, bool no_map)
int parent)
{
int na = fdt_address_cells(fdt, 0);
int ns = fdt_size_cells(fdt, 0);
@@ -152,16 +239,14 @@ static int fdt_resv_memory_update_node(void *fdt, unsigned long addr,
if (subnode < 0)
return subnode;
if (no_map) {
/*
* Tell operating system not to create a virtual
* mapping of the region as part of its standard
* mapping of system memory.
*/
err = fdt_setprop_empty(fdt, subnode, "no-map");
if (err < 0)
return err;
}
/*
* Tell operating system not to create a virtual
* mapping of the region as part of its standard
* mapping of system memory.
*/
err = fdt_setprop_empty(fdt, subnode, "no-map");
if (err < 0)
return err;
/* encode the <reg> property value */
val = reg;
@@ -199,9 +284,10 @@ int fdt_reserved_memory_fixup(void *fdt)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
unsigned long filtered_base[PMP_COUNT] = { 0 };
unsigned char filtered_order[PMP_COUNT] = { 0 };
unsigned long addr, size;
int err, parent, i;
int err, parent, i, j;
int na = fdt_address_cells(fdt, 0);
int ns = fdt_size_cells(fdt, 0);
@@ -259,42 +345,41 @@ int fdt_reserved_memory_fixup(void *fdt)
/* Ignore MMIO or READABLE or WRITABLE or EXECUTABLE regions */
if (reg->flags & SBI_DOMAIN_MEMREGION_MMIO)
continue;
if (reg->flags & SBI_DOMAIN_MEMREGION_READABLE)
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
continue;
if (reg->flags & SBI_DOMAIN_MEMREGION_WRITEABLE)
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
continue;
if (reg->flags & SBI_DOMAIN_MEMREGION_EXECUTABLE)
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
continue;
if (i >= PMP_COUNT) {
sbi_printf("%s: Too many memory regions to fixup.\n",
__func__);
return SBI_ENOSPC;
}
bool overlap = false;
addr = reg->base;
size = 1UL << reg->order;
fdt_resv_memory_update_node(fdt, addr, size, i, parent,
(sbi_hart_pmp_count(scratch)) ? false : true);
i++;
for (j = 0; j < i; j++) {
if (addr == filtered_base[j]
&& filtered_order[j] < reg->order) {
overlap = true;
filtered_order[j] = reg->order;
break;
}
}
if (!overlap) {
filtered_base[i] = reg->base;
filtered_order[i] = reg->order;
i++;
}
}
return 0;
}
int fdt_reserved_memory_nomap_fixup(void *fdt)
{
int parent, subnode;
int err;
/* Locate the reserved memory node */
parent = fdt_path_offset(fdt, "/reserved-memory");
if (parent < 0)
return parent;
fdt_for_each_subnode(subnode, fdt, parent) {
/*
* Tell operating system not to create a virtual
* mapping of the region as part of its standard
* mapping of system memory.
*/
err = fdt_setprop_empty(fdt, subnode, "no-map");
if (err < 0)
return err;
for (j = 0; j < i; j++) {
addr = filtered_base[j];
size = 1UL << filtered_order[j];
fdt_resv_memory_update_node(fdt, addr, size, j, parent);
}
return 0;
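The filtering loop above collapses duplicate reserved regions before emitting nodes: entries sharing a base address keep only the largest order, everything else is appended. A standalone sketch of that pass (function and parameter names are hypothetical):

```c
#include <assert.h>

/* Keep one entry per base address, retaining the largest order seen. */
static int filter_regions(const unsigned long *base, const unsigned char *order,
			  int n, unsigned long *fbase, unsigned char *forder)
{
	int count = 0;

	for (int k = 0; k < n; k++) {
		int j, merged = 0;

		for (j = 0; j < count; j++) {
			if (base[k] == fbase[j]) {
				if (order[k] > forder[j])
					forder[j] = order[k];
				merged = 1;
				break;
			}
		}
		if (!merged) {
			fbase[count] = base[k];
			forder[count++] = order[k];
		}
	}
	return count;
}
```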

View File

@@ -689,7 +689,7 @@ int fdt_parse_imsic_node(void *fdt, int nodeoff, struct imsic_data *imsic)
break;
regs->addr = reg_addr;
regs->size = reg_size;
};
}
if (!imsic->regs[0].size)
return SBI_EINVAL;

View File

@@ -12,6 +12,7 @@
#include <sbi/sbi_hart.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_pmu.h>
#include <sbi/sbi_scratch.h>
#include <sbi_utils/fdt/fdt_helper.h>
#define FDT_PMU_HW_EVENT_MAX (SBI_PMU_HW_EVENT_MAX * 2)

View File

@@ -10,10 +10,17 @@ config FDT_GPIO
if FDT_GPIO
config FDT_GPIO_DESIGNWARE
bool "DesignWare GPIO driver"
default n
config FDT_GPIO_SIFIVE
bool "SiFive GPIO FDT driver"
default n
config FDT_GPIO_STARFIVE
bool "StarFive GPIO FDT driver"
default n
endif
config GPIO

View File

@@ -0,0 +1,140 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2022 SiFive
*
* GPIO driver for Synopsys DesignWare APB GPIO
*
* Authors:
* Ben Dooks <ben.dooks@sifive.com>
*/
#include <libfdt.h>
#include <sbi/riscv_io.h>
#include <sbi/sbi_error.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/gpio/fdt_gpio.h>
#define DW_GPIO_CHIP_MAX 4 /* need 1 per bank in use */
#define DW_GPIO_PINS_MAX 32
#define DW_GPIO_DDR 0x4
#define DW_GPIO_DR 0x0
#define DW_GPIO_BIT(_b) (1UL << (_b))
struct dw_gpio_chip {
void *dr;
void *ext;
struct gpio_chip chip;
};
extern struct fdt_gpio fdt_gpio_designware;
static unsigned int dw_gpio_chip_count;
static struct dw_gpio_chip dw_gpio_chip_array[DW_GPIO_CHIP_MAX];
#define pin_to_chip(__p) container_of((__p)->chip, struct dw_gpio_chip, chip)
static int dw_gpio_direction_output(struct gpio_pin *gp, int value)
{
struct dw_gpio_chip *chip = pin_to_chip(gp);
unsigned long v;
v = readl(chip->dr + DW_GPIO_DR);
if (!value)
v &= ~DW_GPIO_BIT(gp->offset);
else
v |= DW_GPIO_BIT(gp->offset);
writel(v, chip->dr + DW_GPIO_DR);
/* The DR register only affects outputs, so write it before the DDR
* (data-direction register) to avoid glitches when switching direction.
*/
v = readl(chip->dr + DW_GPIO_DDR);
v |= DW_GPIO_BIT(gp->offset);
writel(v, chip->dr + DW_GPIO_DDR);
return 0;
}
static void dw_gpio_set(struct gpio_pin *gp, int value)
{
struct dw_gpio_chip *chip = pin_to_chip(gp);
unsigned long v;
v = readl(chip->dr + DW_GPIO_DR);
if (!value)
v &= ~DW_GPIO_BIT(gp->offset);
else
v |= DW_GPIO_BIT(gp->offset);
writel(v, chip->dr + DW_GPIO_DR);
}
/* Notes:
* Each subnode is a bank with an "ngpios" or "snps,nr-gpios" property and
* a "reg" property, and carries the compatible string "snps,dw-apb-gpio-port".
* Bank A is the only bank with IRQ support, but we do not use IRQs here.
*/
static int dw_gpio_init_bank(void *fdt, int nodeoff, u32 phandle,
const struct fdt_match *match)
{
struct dw_gpio_chip *chip;
const fdt32_t *val;
uint64_t addr;
int rc, poff, nr_pins, bank, len;
if (dw_gpio_chip_count >= DW_GPIO_CHIP_MAX)
return SBI_ENOSPC;
/* need to get parent for the address property */
poff = fdt_parent_offset(fdt, nodeoff);
if (poff < 0)
return SBI_EINVAL;
rc = fdt_get_node_addr_size(fdt, poff, 0, &addr, NULL);
if (rc)
return rc;
val = fdt_getprop(fdt, nodeoff, "reg", &len);
if (!val || len <= 0)
return SBI_EINVAL;
bank = fdt32_to_cpu(*val);
val = fdt_getprop(fdt, nodeoff, "snps,nr-gpios", &len);
if (!val)
val = fdt_getprop(fdt, nodeoff, "ngpios", &len);
if (!val || len <= 0)
return SBI_EINVAL;
nr_pins = fdt32_to_cpu(*val);
chip = &dw_gpio_chip_array[dw_gpio_chip_count];
chip->dr = (void *)(uintptr_t)addr + (bank * 0xc);
chip->ext = (void *)(uintptr_t)addr + (bank * 4) + 0x50;
chip->chip.driver = &fdt_gpio_designware;
chip->chip.id = phandle;
chip->chip.ngpio = nr_pins;
chip->chip.set = dw_gpio_set;
chip->chip.direction_output = dw_gpio_direction_output;
rc = gpio_chip_add(&chip->chip);
if (rc)
return rc;
dw_gpio_chip_count++;
return 0;
}
/* Since we're only probed when used, match on the port node, not the main controller node. */
static const struct fdt_match dw_gpio_match[] = {
{ .compatible = "snps,dw-apb-gpio-port" },
{ },
};
struct fdt_gpio fdt_gpio_designware = {
.match_table = dw_gpio_match,
.xlate = fdt_gpio_simple_xlate,
.init = dw_gpio_init_bank,
};
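The bank register math in `dw_gpio_init_bank` is exactly what the RV32 fix in the log targets: `addr` arrives as a 64-bit value from the device tree, so it must be narrowed to the native pointer width (`uintptr_t` here, `unsigned long` in the upstream fix) before conversion to a pointer. A sketch of the per-bank layout (DR blocks every 0xc bytes from 0, EXT registers every 4 bytes from 0x50):

```c
#include <assert.h>
#include <stdint.h>

static void dw_gpio_bank_regs(uint64_t addr, int bank, void **dr, void **ext)
{
	/* Narrow to the native pointer width before converting,
	 * otherwise RV32 builds warn about int-to-pointer casts. */
	*dr  = (void *)(uintptr_t)(addr + bank * 0xc);
	*ext = (void *)(uintptr_t)(addr + bank * 4 + 0x50);
}
```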

View File

@@ -9,11 +9,10 @@
#include <sbi/riscv_io.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/gpio/fdt_gpio.h>
#define SIFIVE_GPIO_CHIP_MAX 2
#define SIFIVE_GPIO_PINS_MIN 1
#define SIFIVE_GPIO_PINS_MAX 32
#define SIFIVE_GPIO_PINS_DEF 16
@@ -27,9 +26,6 @@ struct sifive_gpio_chip {
struct gpio_chip chip;
};
static unsigned int sifive_gpio_chip_count;
static struct sifive_gpio_chip sifive_gpio_chip_array[SIFIVE_GPIO_CHIP_MAX];
static int sifive_gpio_direction_output(struct gpio_pin *gp, int value)
{
unsigned int v;
@@ -73,13 +69,15 @@ static int sifive_gpio_init(void *fdt, int nodeoff, u32 phandle,
struct sifive_gpio_chip *chip;
uint64_t addr;
if (SIFIVE_GPIO_CHIP_MAX <= sifive_gpio_chip_count)
return SBI_ENOSPC;
chip = &sifive_gpio_chip_array[sifive_gpio_chip_count];
chip = sbi_zalloc(sizeof(*chip));
if (!chip)
return SBI_ENOMEM;
rc = fdt_get_node_addr_size(fdt, nodeoff, 0, &addr, NULL);
if (rc)
if (rc) {
sbi_free(chip);
return rc;
}
chip->addr = addr;
chip->chip.driver = &fdt_gpio_sifive;
@@ -88,10 +86,11 @@ static int sifive_gpio_init(void *fdt, int nodeoff, u32 phandle,
chip->chip.direction_output = sifive_gpio_direction_output;
chip->chip.set = sifive_gpio_set;
rc = gpio_chip_add(&chip->chip);
if (rc)
if (rc) {
sbi_free(chip);
return rc;
}
sifive_gpio_chip_count++;
return 0;
}

View File

@@ -0,0 +1,117 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2022 Starfive
*
* Authors:
* Minda.chen <Minda.chen@starfivetech.com>
*/
#include <sbi/riscv_io.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_console.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/gpio/fdt_gpio.h>
#define STARFIVE_GPIO_PINS_DEF 64
#define STARFIVE_GPIO_OUTVAL 0x40
#define STARFIVE_GPIO_MASK 0xff
#define STARFIVE_GPIO_REG_SHIFT_MASK 0x3
#define STARFIVE_GPIO_SHIFT_BITS 0x3
struct starfive_gpio_chip {
unsigned long addr;
struct gpio_chip chip;
};
static int starfive_gpio_direction_output(struct gpio_pin *gp, int value)
{
u32 val;
unsigned long reg_addr;
u32 bit_mask, shift_bits;
struct starfive_gpio_chip *chip =
container_of(gp->chip, struct starfive_gpio_chip, chip);
/* Set output enable */
reg_addr = chip->addr + gp->offset;
reg_addr &= ~(STARFIVE_GPIO_REG_SHIFT_MASK);
shift_bits = (gp->offset & STARFIVE_GPIO_REG_SHIFT_MASK)
<< STARFIVE_GPIO_SHIFT_BITS;
bit_mask = STARFIVE_GPIO_MASK << shift_bits;
val = readl((void *)reg_addr);
val &= ~bit_mask;
writel(val, (void *)reg_addr);
return 0;
}
static void starfive_gpio_set(struct gpio_pin *gp, int value)
{
u32 val;
unsigned long reg_addr;
u32 bit_mask, shift_bits;
struct starfive_gpio_chip *chip =
container_of(gp->chip, struct starfive_gpio_chip, chip);
reg_addr = chip->addr + gp->offset;
reg_addr &= ~(STARFIVE_GPIO_REG_SHIFT_MASK);
shift_bits = (gp->offset & STARFIVE_GPIO_REG_SHIFT_MASK)
<< STARFIVE_GPIO_SHIFT_BITS;
bit_mask = STARFIVE_GPIO_MASK << shift_bits;
/* set output value */
val = readl((void *)(reg_addr + STARFIVE_GPIO_OUTVAL));
val &= ~bit_mask;
val |= value << shift_bits;
writel(val, (void *)(reg_addr + STARFIVE_GPIO_OUTVAL));
}
extern struct fdt_gpio fdt_gpio_starfive;
static int starfive_gpio_init(void *fdt, int nodeoff, u32 phandle,
const struct fdt_match *match)
{
int rc;
struct starfive_gpio_chip *chip;
u64 addr;
chip = sbi_zalloc(sizeof(*chip));
if (!chip)
return SBI_ENOMEM;
rc = fdt_get_node_addr_size(fdt, nodeoff, 0, &addr, NULL);
if (rc) {
sbi_free(chip);
return rc;
}
chip->addr = addr;
chip->chip.driver = &fdt_gpio_starfive;
chip->chip.id = phandle;
chip->chip.ngpio = STARFIVE_GPIO_PINS_DEF;
chip->chip.direction_output = starfive_gpio_direction_output;
chip->chip.set = starfive_gpio_set;
rc = gpio_chip_add(&chip->chip);
if (rc) {
sbi_free(chip);
return rc;
}
return 0;
}
static const struct fdt_match starfive_gpio_match[] = {
{ .compatible = "starfive,jh7110-sys-pinctrl" },
{ .compatible = "starfive,iomux-pinctrl" },
{ },
};
struct fdt_gpio fdt_gpio_starfive = {
.match_table = starfive_gpio_match,
.xlate = fdt_gpio_simple_xlate,
.init = starfive_gpio_init,
};
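The StarFive controller packs four 8-bit pin-config fields per 32-bit register, which is what the mask/shift constants above implement: the register address is the pin's byte address rounded down to a 4-byte boundary, and the field shift is `(offset % 4) * 8`. A sketch of that address math (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

static void starfive_pin_field(unsigned long base, unsigned int offset,
			       unsigned long *reg, uint32_t *shift,
			       uint32_t *mask)
{
	*reg   = (base + offset) & ~0x3UL;	/* 4-byte-aligned register */
	*shift = (offset & 0x3) << 3;		/* byte lane in the register */
	*mask  = 0xffU << *shift;		/* 8-bit field for this pin */
}
```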

View File

@@ -10,7 +10,13 @@
libsbiutils-objs-$(CONFIG_FDT_GPIO) += gpio/fdt_gpio.o
libsbiutils-objs-$(CONFIG_FDT_GPIO) += gpio/fdt_gpio_drivers.o
carray-fdt_gpio_drivers-$(CONFIG_FDT_GPIO_DESIGNWARE) += fdt_gpio_designware
libsbiutils-objs-$(CONFIG_FDT_GPIO_DESIGNWARE) += gpio/fdt_gpio_designware.o
carray-fdt_gpio_drivers-$(CONFIG_FDT_GPIO_SIFIVE) += fdt_gpio_sifive
libsbiutils-objs-$(CONFIG_FDT_GPIO_SIFIVE) += gpio/fdt_gpio_sifive.o
carray-fdt_gpio_drivers-$(CONFIG_FDT_GPIO_STARFIVE) += fdt_gpio_starfive
libsbiutils-objs-$(CONFIG_FDT_GPIO_STARFIVE) += gpio/fdt_gpio_starfive.o
libsbiutils-objs-$(CONFIG_GPIO) += gpio/gpio.o

View File

@@ -14,8 +14,16 @@ config FDT_I2C_SIFIVE
bool "SiFive I2C FDT driver"
default n
config FDT_I2C_DW
bool "Synopsys Designware I2C FDT driver"
select I2C_DW
default n
endif
config I2C_DW
bool "Synopsys Designware I2C support"
default n
config I2C
bool "I2C support"
default n

lib/utils/i2c/dw_i2c.c (new file, 190 lines)
View File

@@ -0,0 +1,190 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2022 starfivetech.com
*
* Authors:
* Minda Chen <minda.chen@starfivetech.com>
*/
#include <sbi/riscv_io.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_timer.h>
#include <sbi/sbi_console.h>
#include <sbi/sbi_string.h>
#include <sbi/sbi_bitops.h>
#include <sbi_utils/i2c/dw_i2c.h>
#define DW_IC_CON 0x00
#define DW_IC_TAR 0x04
#define DW_IC_SAR 0x08
#define DW_IC_DATA_CMD 0x10
#define DW_IC_SS_SCL_HCNT 0x14
#define DW_IC_SS_SCL_LCNT 0x18
#define DW_IC_FS_SCL_HCNT 0x1c
#define DW_IC_FS_SCL_LCNT 0x20
#define DW_IC_HS_SCL_HCNT 0x24
#define DW_IC_HS_SCL_LCNT 0x28
#define DW_IC_INTR_STAT 0x2c
#define DW_IC_INTR_MASK 0x30
#define DW_IC_RAW_INTR_STAT 0x34
#define DW_IC_RX_TL 0x38
#define DW_IC_TX_TL 0x3c
#define DW_IC_CLR_INTR 0x40
#define DW_IC_CLR_RX_UNDER 0x44
#define DW_IC_CLR_RX_OVER 0x48
#define DW_IC_CLR_TX_OVER 0x4c
#define DW_IC_CLR_RD_REQ 0x50
#define DW_IC_CLR_TX_ABRT 0x54
#define DW_IC_CLR_RX_DONE 0x58
#define DW_IC_CLR_ACTIVITY 0x5c
#define DW_IC_CLR_STOP_DET 0x60
#define DW_IC_CLR_START_DET 0x64
#define DW_IC_CLR_GEN_CALL 0x68
#define DW_IC_ENABLE 0x6c
#define DW_IC_STATUS 0x70
#define DW_IC_TXFLR 0x74
#define DW_IC_RXFLR 0x78
#define DW_IC_SDA_HOLD 0x7c
#define DW_IC_TX_ABRT_SOURCE 0x80
#define DW_IC_ENABLE_STATUS 0x9c
#define DW_IC_CLR_RESTART_DET 0xa8
#define DW_IC_COMP_PARAM_1 0xf4
#define DW_IC_COMP_VERSION 0xf8
#define DW_I2C_STATUS_TXFIFO_EMPTY BIT(2)
#define DW_I2C_STATUS_RXFIFO_NOT_EMPTY BIT(3)
#define IC_DATA_CMD_READ BIT(8)
#define IC_DATA_CMD_STOP BIT(9)
#define IC_DATA_CMD_RESTART BIT(10)
#define IC_INT_STATUS_STOPDET BIT(9)
static inline void dw_i2c_setreg(struct dw_i2c_adapter *adap,
u8 reg, u32 value)
{
writel(value, (void *)adap->addr + reg);
}
static inline u32 dw_i2c_getreg(struct dw_i2c_adapter *adap,
u32 reg)
{
return readl((void *)adap->addr + reg);
}
static int dw_i2c_adapter_poll(struct dw_i2c_adapter *adap,
u32 mask, u32 addr,
bool inverted)
{
unsigned int timeout = 10; /* msec */
int count = 0;
u32 val;
do {
val = dw_i2c_getreg(adap, addr);
if (inverted) {
if (!(val & mask))
return 0;
} else {
if (val & mask)
return 0;
}
sbi_timer_udelay(2);
count += 1;
if (count == (timeout * 1000))
return SBI_ETIMEDOUT;
} while (1);
}
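`dw_i2c_adapter_poll` above is a bounded busy-wait: read a status register, test a mask (optionally inverted), and give up after roughly 10 ms of 2 µs steps. A self-contained sketch with the register read stubbed out via a function pointer (`-ETIMEDOUT` stands in for `SBI_ETIMEDOUT`):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

static int poll_mask(uint32_t (*read_reg)(void), uint32_t mask, bool inverted)
{
	/* 10 ms budget at 2 us per step. */
	for (int count = 0; count < 10 * 1000 / 2; count++) {
		uint32_t val = read_reg();

		if (inverted ? !(val & mask) : (val & mask))
			return 0;
		/* a real driver would sbi_timer_udelay(2) here */
	}
	return -ETIMEDOUT;
}

/* Stub register reads for demonstration. */
static uint32_t stub_ready(void) { return 0x4; }
static uint32_t stub_stuck(void) { return 0x0; }
```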
#define dw_i2c_adapter_poll_rxrdy(adap) \
dw_i2c_adapter_poll(adap, DW_I2C_STATUS_RXFIFO_NOT_EMPTY, DW_IC_STATUS, 0)
#define dw_i2c_adapter_poll_txfifo_ready(adap) \
dw_i2c_adapter_poll(adap, DW_I2C_STATUS_TXFIFO_EMPTY, DW_IC_STATUS, 0)
static int dw_i2c_write_addr(struct dw_i2c_adapter *adap, u8 addr)
{
dw_i2c_setreg(adap, DW_IC_ENABLE, 0);
dw_i2c_setreg(adap, DW_IC_TAR, addr);
dw_i2c_setreg(adap, DW_IC_ENABLE, 1);
return 0;
}
static int dw_i2c_adapter_read(struct i2c_adapter *ia, u8 addr,
u8 reg, u8 *buffer, int len)
{
struct dw_i2c_adapter *adap =
container_of(ia, struct dw_i2c_adapter, adapter);
int rc;
dw_i2c_write_addr(adap, addr);
rc = dw_i2c_adapter_poll_txfifo_ready(adap);
if (rc)
return rc;
/* set register address */
dw_i2c_setreg(adap, DW_IC_DATA_CMD, reg);
/* set value */
while (len) {
if (len == 1)
dw_i2c_setreg(adap, DW_IC_DATA_CMD,
IC_DATA_CMD_READ | IC_DATA_CMD_STOP);
else
dw_i2c_setreg(adap, DW_IC_DATA_CMD, IC_DATA_CMD_READ);
rc = dw_i2c_adapter_poll_rxrdy(adap);
if (rc)
return rc;
*buffer = dw_i2c_getreg(adap, DW_IC_DATA_CMD) & 0xff;
buffer++;
len--;
}
return 0;
}
static int dw_i2c_adapter_write(struct i2c_adapter *ia, u8 addr,
u8 reg, u8 *buffer, int len)
{
struct dw_i2c_adapter *adap =
container_of(ia, struct dw_i2c_adapter, adapter);
int rc;
dw_i2c_write_addr(adap, addr);
rc = dw_i2c_adapter_poll_txfifo_ready(adap);
if (rc)
return rc;
/* set register address */
dw_i2c_setreg(adap, DW_IC_DATA_CMD, reg);
while (len) {
rc = dw_i2c_adapter_poll_txfifo_ready(adap);
if (rc)
return rc;
if (len == 1)
dw_i2c_setreg(adap, DW_IC_DATA_CMD, *buffer | IC_DATA_CMD_STOP);
else
dw_i2c_setreg(adap, DW_IC_DATA_CMD, *buffer);
buffer++;
len--;
}
rc = dw_i2c_adapter_poll_txfifo_ready(adap);
return rc;
}
int dw_i2c_init(struct i2c_adapter *adapter, int nodeoff)
{
adapter->id = nodeoff;
adapter->write = dw_i2c_adapter_write;
adapter->read = dw_i2c_adapter_read;
return i2c_adapter_add(adapter);
}


@@ -0,0 +1,58 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2022 starfivetech.com
*
* Authors:
* Minda Chen <minda.chen@starfivetech.com>
*/
#include <sbi/riscv_io.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_string.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/i2c/dw_i2c.h>
#include <sbi_utils/i2c/fdt_i2c.h>
extern struct fdt_i2c_adapter fdt_i2c_adapter_dw;
static int fdt_dw_i2c_init(void *fdt, int nodeoff,
const struct fdt_match *match)
{
int rc;
struct dw_i2c_adapter *adapter;
u64 addr;
adapter = sbi_zalloc(sizeof(*adapter));
if (!adapter)
return SBI_ENOMEM;
rc = fdt_get_node_addr_size(fdt, nodeoff, 0, &addr, NULL);
if (rc) {
sbi_free(adapter);
return rc;
}
adapter->addr = addr;
adapter->adapter.driver = &fdt_i2c_adapter_dw;
rc = dw_i2c_init(&adapter->adapter, nodeoff);
if (rc) {
sbi_free(adapter);
return rc;
}
return 0;
}
static const struct fdt_match fdt_dw_i2c_match[] = {
{ .compatible = "snps,designware-i2c" },
{ .compatible = "starfive,jh7110-i2c" },
{ },
};
struct fdt_i2c_adapter fdt_i2c_adapter_dw = {
.match_table = fdt_dw_i2c_match,
.init = fdt_dw_i2c_init,
};


@@ -9,12 +9,11 @@
#include <sbi/riscv_io.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_timer.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/i2c/fdt_i2c.h>
#define SIFIVE_I2C_ADAPTER_MAX 2
#define SIFIVE_I2C_PRELO 0x00
#define SIFIVE_I2C_PREHI 0x04
#define SIFIVE_I2C_CTR 0x08
@@ -47,10 +46,6 @@ struct sifive_i2c_adapter {
struct i2c_adapter adapter;
};
static unsigned int sifive_i2c_adapter_count;
static struct sifive_i2c_adapter
sifive_i2c_adapter_array[SIFIVE_I2C_ADAPTER_MAX];
extern struct fdt_i2c_adapter fdt_i2c_adapter_sifive;
static inline void sifive_i2c_setreg(struct sifive_i2c_adapter *adap,
@@ -244,14 +239,15 @@ static int sifive_i2c_init(void *fdt, int nodeoff,
struct sifive_i2c_adapter *adapter;
uint64_t addr;
if (sifive_i2c_adapter_count >= SIFIVE_I2C_ADAPTER_MAX)
return SBI_ENOSPC;
adapter = &sifive_i2c_adapter_array[sifive_i2c_adapter_count];
adapter = sbi_zalloc(sizeof(*adapter));
if (!adapter)
return SBI_ENOMEM;
rc = fdt_get_node_addr_size(fdt, nodeoff, 0, &addr, NULL);
if (rc)
if (rc) {
sbi_free(adapter);
return rc;
}
adapter->addr = addr;
adapter->adapter.driver = &fdt_i2c_adapter_sifive;
@@ -259,10 +255,11 @@ static int sifive_i2c_init(void *fdt, int nodeoff,
adapter->adapter.write = sifive_i2c_adapter_write;
adapter->adapter.read = sifive_i2c_adapter_read;
rc = i2c_adapter_add(&adapter->adapter);
if (rc)
if (rc) {
sbi_free(adapter);
return rc;
}
sifive_i2c_adapter_count++;
return 0;
}


@@ -14,3 +14,8 @@ libsbiutils-objs-$(CONFIG_FDT_I2C) += i2c/fdt_i2c_adapter_drivers.o
carray-fdt_i2c_adapter_drivers-$(CONFIG_FDT_I2C_SIFIVE) += fdt_i2c_adapter_sifive
libsbiutils-objs-$(CONFIG_FDT_I2C_SIFIVE) += i2c/fdt_i2c_sifive.o
carray-fdt_i2c_adapter_drivers-$(CONFIG_FDT_I2C_DW) += fdt_i2c_adapter_dw
libsbiutils-objs-$(CONFIG_FDT_I2C_DW) += i2c/fdt_i2c_dw.o
libsbiutils-objs-$(CONFIG_I2C_DW) += i2c/dw_i2c.o


@@ -12,21 +12,30 @@
#include <sbi/riscv_io.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_ipi.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_timer.h>
#include <sbi_utils/ipi/aclint_mswi.h>
static struct aclint_mswi_data *mswi_hartid2data[SBI_HARTMASK_MAX_BITS];
static unsigned long mswi_ptr_offset;
#define mswi_get_hart_data_ptr(__scratch) \
sbi_scratch_read_type((__scratch), void *, mswi_ptr_offset)
#define mswi_set_hart_data_ptr(__scratch, __mswi) \
sbi_scratch_write_type((__scratch), void *, mswi_ptr_offset, (__mswi))
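The two macros above are instances of OpenSBI's scratch-space pattern: a driver allocates a typed slot once (offset 0 doubles as the failure value, hence checks like `if (!mswi_ptr_offset)`), then stores and fetches a per-hart device pointer through that offset. A toy, self-contained model of the idea — the names, sizes, and allocator here are illustrative, not the real sbi_scratch API:

```c
#include <stddef.h>

#define TOY_SCRATCH_SIZE 128

/* Each hart owns one scratch block; drivers reserve typed slots in it. */
struct toy_scratch {
	_Alignas(void *) unsigned char extra[TOY_SCRATCH_SIZE];
};

static size_t toy_scratch_used = sizeof(void *);	/* slot 0 reserved */

/* Hand out a slot; returning 0 signals "out of scratch space". */
static size_t toy_scratch_alloc(size_t size)
{
	size_t off = toy_scratch_used;

	if (off + size > TOY_SCRATCH_SIZE)
		return 0;
	toy_scratch_used += size;
	return off;
}

#define toy_read_type(s, type, off)	(*(type *)&(s)->extra[(off)])
#define toy_write_type(s, type, off, v)	(*(type *)&(s)->extra[(off)] = (v))
```

In the ACLINT driver, cold init writes the MSWI device pointer into each hart's slot once, and mswi_ipi_send() later fetches it for the target hart through the same offset.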
static void mswi_ipi_send(u32 target_hart)
{
u32 *msip;
struct sbi_scratch *scratch;
struct aclint_mswi_data *mswi;
if (SBI_HARTMASK_MAX_BITS <= target_hart)
scratch = sbi_hartid_to_scratch(target_hart);
if (!scratch)
return;
mswi = mswi_hartid2data[target_hart];
mswi = mswi_get_hart_data_ptr(scratch);
if (!mswi)
return;
@@ -38,11 +47,14 @@ static void mswi_ipi_send(u32 target_hart)
static void mswi_ipi_clear(u32 target_hart)
{
u32 *msip;
struct sbi_scratch *scratch;
struct aclint_mswi_data *mswi;
if (SBI_HARTMASK_MAX_BITS <= target_hart)
scratch = sbi_hartid_to_scratch(target_hart);
if (!scratch)
return;
mswi = mswi_hartid2data[target_hart];
mswi = mswi_get_hart_data_ptr(scratch);
if (!mswi)
return;
@@ -69,26 +81,45 @@ int aclint_mswi_cold_init(struct aclint_mswi_data *mswi)
{
u32 i;
int rc;
struct sbi_scratch *scratch;
unsigned long pos, region_size;
struct sbi_domain_memregion reg;
/* Sanity checks */
if (!mswi || (mswi->addr & (ACLINT_MSWI_ALIGN - 1)) ||
(mswi->size < (mswi->hart_count * sizeof(u32))) ||
(mswi->first_hartid >= SBI_HARTMASK_MAX_BITS) ||
(mswi->hart_count > ACLINT_MSWI_MAX_HARTS))
(!mswi->hart_count || mswi->hart_count > ACLINT_MSWI_MAX_HARTS))
return SBI_EINVAL;
/* Update MSWI hartid table */
for (i = 0; i < mswi->hart_count; i++)
mswi_hartid2data[mswi->first_hartid + i] = mswi;
/* Allocate scratch space pointer */
if (!mswi_ptr_offset) {
mswi_ptr_offset = sbi_scratch_alloc_type_offset(void *);
if (!mswi_ptr_offset)
return SBI_ENOMEM;
}
/* Update MSWI pointer in scratch space */
for (i = 0; i < mswi->hart_count; i++) {
scratch = sbi_hartid_to_scratch(mswi->first_hartid + i);
/*
* We don't need to fail if the scratch pointer is not available
* because we might be dealing with the hartid of a HART disabled
* in the device tree.
*/
if (!scratch)
continue;
mswi_set_hart_data_ptr(scratch, mswi);
}
/* Add MSWI regions to the root domain */
for (pos = 0; pos < mswi->size; pos += ACLINT_MSWI_ALIGN) {
region_size = ((mswi->size - pos) < ACLINT_MSWI_ALIGN) ?
(mswi->size - pos) : ACLINT_MSWI_ALIGN;
sbi_domain_memregion_init(mswi->addr + pos, region_size,
SBI_DOMAIN_MEMREGION_MMIO, &reg);
(SBI_DOMAIN_MEMREGION_MMIO |
SBI_DOMAIN_MEMREGION_M_READABLE |
SBI_DOMAIN_MEMREGION_M_WRITABLE),
&reg);
rc = sbi_domain_root_add_memregion(&reg);
if (rc)
return rc;


@@ -16,24 +16,17 @@
extern struct fdt_ipi *fdt_ipi_drivers[];
extern unsigned long fdt_ipi_drivers_size;
static struct fdt_ipi dummy = {
.match_table = NULL,
.cold_init = NULL,
.warm_init = NULL,
.exit = NULL,
};
static struct fdt_ipi *current_driver = &dummy;
static struct fdt_ipi *current_driver = NULL;
void fdt_ipi_exit(void)
{
if (current_driver->exit)
if (current_driver && current_driver->exit)
current_driver->exit();
}
static int fdt_ipi_warm_init(void)
{
if (current_driver->warm_init)
if (current_driver && current_driver->warm_init)
return current_driver->warm_init();
return 0;
}
@@ -51,20 +44,28 @@ static int fdt_ipi_cold_init(void)
noff = -1;
while ((noff = fdt_find_match(fdt, noff,
drv->match_table, &match)) >= 0) {
if (drv->cold_init) {
rc = drv->cold_init(fdt, noff, match);
if (rc == SBI_ENODEV)
continue;
if (rc)
return rc;
}
current_driver = drv;
}
/* drv->cold_init must not be NULL */
if (drv->cold_init == NULL)
return SBI_EFAIL;
if (current_driver != &dummy)
break;
rc = drv->cold_init(fdt, noff, match);
if (rc == SBI_ENODEV)
continue;
if (rc)
return rc;
current_driver = drv;
/*
* We will have multiple IPI devices on multi-die or
* multi-socket systems so we cannot break here.
*/
}
}
/*
* On some single-hart systems there is no need for an IPI device,
* so we cannot return a failure here.
*/
return 0;
}


@@ -8,15 +8,11 @@
*/
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/ipi/fdt_ipi.h>
#include <sbi_utils/ipi/aclint_mswi.h>
#define MSWI_MAX_NR 16
static unsigned long mswi_count = 0;
static struct aclint_mswi_data mswi[MSWI_MAX_NR];
static int ipi_mswi_cold_init(void *fdt, int nodeoff,
const struct fdt_match *match)
{
@@ -24,15 +20,17 @@ static int ipi_mswi_cold_init(void *fdt, int nodeoff,
unsigned long offset;
struct aclint_mswi_data *ms;
if (MSWI_MAX_NR <= mswi_count)
return SBI_ENOSPC;
ms = &mswi[mswi_count];
ms = sbi_zalloc(sizeof(*ms));
if (!ms)
return SBI_ENOMEM;
rc = fdt_parse_aclint_node(fdt, nodeoff, false,
&ms->addr, &ms->size, NULL, NULL,
&ms->first_hartid, &ms->hart_count);
if (rc)
if (rc) {
sbi_free(ms);
return rc;
}
if (match->data) {
/* Adjust MSWI address and size for CLINT device */
@@ -44,10 +42,11 @@ static int ipi_mswi_cold_init(void *fdt, int nodeoff,
}
rc = aclint_mswi_cold_init(ms);
if (rc)
if (rc) {
sbi_free(ms);
return rc;
}
mswi_count++;
return 0;
}


@@ -269,7 +269,10 @@ int aplic_cold_irqchip_init(struct aplic_data *aplic)
(last_deleg_irq == aplic->num_source) &&
(first_deleg_irq == 1))) {
sbi_domain_memregion_init(aplic->addr, aplic->size,
SBI_DOMAIN_MEMREGION_MMIO, &reg);
(SBI_DOMAIN_MEMREGION_MMIO |
SBI_DOMAIN_MEMREGION_M_READABLE |
SBI_DOMAIN_MEMREGION_M_WRITABLE),
&reg);
rc = sbi_domain_root_add_memregion(&reg);
if (rc)
return rc;


@@ -11,15 +11,11 @@
#include <libfdt.h>
#include <sbi/riscv_asm.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/irqchip/fdt_irqchip.h>
#include <sbi_utils/irqchip/aplic.h>
#define APLIC_MAX_NR 16
static unsigned long aplic_count = 0;
static struct aplic_data aplic[APLIC_MAX_NR];
static int irqchip_aplic_warm_init(void)
{
/* Nothing to do here. */
@@ -32,15 +28,23 @@ static int irqchip_aplic_cold_init(void *fdt, int nodeoff,
int rc;
struct aplic_data *pd;
if (APLIC_MAX_NR <= aplic_count)
return SBI_ENOSPC;
pd = &aplic[aplic_count++];
pd = sbi_zalloc(sizeof(*pd));
if (!pd)
return SBI_ENOMEM;
rc = fdt_parse_aplic_node(fdt, nodeoff, pd);
if (rc)
return rc;
goto fail_free_data;
return aplic_cold_irqchip_init(pd);
rc = aplic_cold_irqchip_init(pd);
if (rc)
goto fail_free_data;
return 0;
fail_free_data:
sbi_free(pd);
return rc;
}
static const struct fdt_match irqchip_aplic_match[] = {


@@ -11,16 +11,11 @@
#include <libfdt.h>
#include <sbi/riscv_asm.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_heap.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/irqchip/fdt_irqchip.h>
#include <sbi_utils/irqchip/imsic.h>
#define IMSIC_MAX_NR 16
static unsigned long imsic_count = 0;
static struct imsic_data imsic[IMSIC_MAX_NR];
static int irqchip_imsic_update_hartid_table(void *fdt, int nodeoff,
struct imsic_data *id)
{
@@ -48,8 +43,6 @@ static int irqchip_imsic_update_hartid_table(void *fdt, int nodeoff,
err = fdt_parse_hart_id(fdt, cpu_offset, &hartid);
if (err)
return SBI_EINVAL;
if (SBI_HARTMASK_MAX_BITS <= hartid)
return SBI_EINVAL;
switch (hwirq) {
case IRQ_M_EXT:
@@ -71,27 +64,27 @@ static int irqchip_imsic_cold_init(void *fdt, int nodeoff,
int rc;
struct imsic_data *id;
if (IMSIC_MAX_NR <= imsic_count)
return SBI_ENOSPC;
id = &imsic[imsic_count];
id = sbi_zalloc(sizeof(*id));
if (!id)
return SBI_ENOMEM;
rc = fdt_parse_imsic_node(fdt, nodeoff, id);
if (rc)
return rc;
if (!id->targets_mmode)
return 0;
rc = irqchip_imsic_update_hartid_table(fdt, nodeoff, id);
if (rc)
return rc;
if (rc || !id->targets_mmode)
goto fail_free_data;
rc = imsic_cold_irqchip_init(id);
if (rc)
return rc;
goto fail_free_data;
imsic_count++;
rc = irqchip_imsic_update_hartid_table(fdt, nodeoff, id);
if (rc)
goto fail_free_data;
return 0;
fail_free_data:
sbi_free(id);
return rc;
}
static const struct fdt_match irqchip_imsic_match[] = {


@@ -11,59 +11,78 @@
#include <sbi/riscv_asm.h>
#include <sbi/riscv_io.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_scratch.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/irqchip/fdt_irqchip.h>
#include <sbi_utils/irqchip/plic.h>
#define PLIC_MAX_NR 16
static unsigned long plic_ptr_offset;
static unsigned long plic_count = 0;
static struct plic_data plic[PLIC_MAX_NR];
#define plic_get_hart_data_ptr(__scratch) \
sbi_scratch_read_type((__scratch), void *, plic_ptr_offset)
static struct plic_data *plic_hartid2data[SBI_HARTMASK_MAX_BITS];
static int plic_hartid2context[SBI_HARTMASK_MAX_BITS][2];
#define plic_set_hart_data_ptr(__scratch, __plic) \
sbi_scratch_write_type((__scratch), void *, plic_ptr_offset, (__plic))
static unsigned long plic_mcontext_offset;
#define plic_get_hart_mcontext(__scratch) \
(sbi_scratch_read_type((__scratch), long, plic_mcontext_offset) - 1)
#define plic_set_hart_mcontext(__scratch, __mctx) \
sbi_scratch_write_type((__scratch), long, plic_mcontext_offset, (__mctx) + 1)
static unsigned long plic_scontext_offset;
#define plic_get_hart_scontext(__scratch) \
(sbi_scratch_read_type((__scratch), long, plic_scontext_offset) - 1)
#define plic_set_hart_scontext(__scratch, __sctx) \
sbi_scratch_write_type((__scratch), long, plic_scontext_offset, (__sctx) + 1)
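The `+ 1` / `- 1` in the context macros above is a small encoding trick: scratch slots start out zeroed, so storing `ctx + 1` makes a never-written slot read back as `-1` ("no context assigned") without a separate initialization pass. A minimal stand-alone illustration (names are made up):

```c
/* Store ctx + 1 so that a zero (never-written) slot decodes to -1. */
static long slot;	/* zero-initialized, like fresh scratch space */

static void ctx_set(long ctx)
{
	slot = ctx + 1;
}

static long ctx_get(void)
{
	return slot - 1;
}
```

This is also why the patch can drop the old loop that pre-set every `plic_hartid2context[][]` entry to -1: the biased encoding gives the same "unassigned" default for free.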
void fdt_plic_priority_save(u8 *priority, u32 num)
{
struct plic_data *plic = plic_hartid2data[current_hartid()];
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
plic_priority_save(plic, priority, num);
plic_priority_save(plic_get_hart_data_ptr(scratch), priority, num);
}
void fdt_plic_priority_restore(const u8 *priority, u32 num)
{
struct plic_data *plic = plic_hartid2data[current_hartid()];
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
plic_priority_restore(plic, priority, num);
plic_priority_restore(plic_get_hart_data_ptr(scratch), priority, num);
}
void fdt_plic_context_save(bool smode, u32 *enable, u32 *threshold, u32 num)
{
u32 hartid = current_hartid();
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
plic_context_save(plic_hartid2data[hartid],
plic_hartid2context[hartid][smode],
plic_context_save(plic_get_hart_data_ptr(scratch),
smode ? plic_get_hart_scontext(scratch) :
plic_get_hart_mcontext(scratch),
enable, threshold, num);
}
void fdt_plic_context_restore(bool smode, const u32 *enable, u32 threshold,
u32 num)
{
u32 hartid = current_hartid();
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
plic_context_restore(plic_hartid2data[hartid],
plic_hartid2context[hartid][smode],
plic_context_restore(plic_get_hart_data_ptr(scratch),
smode ? plic_get_hart_scontext(scratch) :
plic_get_hart_mcontext(scratch),
enable, threshold, num);
}
static int irqchip_plic_warm_init(void)
{
u32 hartid = current_hartid();
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
return plic_warm_irqchip_init(plic_hartid2data[hartid],
plic_hartid2context[hartid][0],
plic_hartid2context[hartid][1]);
return plic_warm_irqchip_init(plic_get_hart_data_ptr(scratch),
plic_get_hart_mcontext(scratch),
plic_get_hart_scontext(scratch));
}
static int irqchip_plic_update_hartid_table(void *fdt, int nodeoff,
@@ -71,6 +90,7 @@ static int irqchip_plic_update_hartid_table(void *fdt, int nodeoff,
{
const fdt32_t *val;
u32 phandle, hwirq, hartid;
struct sbi_scratch *scratch;
int i, err, count, cpu_offset, cpu_intc_offset;
val = fdt_getprop(fdt, nodeoff, "interrupts-extended", &count);
@@ -94,16 +114,17 @@ static int irqchip_plic_update_hartid_table(void *fdt, int nodeoff,
if (err)
continue;
if (SBI_HARTMASK_MAX_BITS <= hartid)
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
continue;
plic_hartid2data[hartid] = pd;
plic_set_hart_data_ptr(scratch, pd);
switch (hwirq) {
case IRQ_M_EXT:
plic_hartid2context[hartid][0] = i / 2;
plic_set_hart_mcontext(scratch, i / 2);
break;
case IRQ_S_EXT:
plic_hartid2context[hartid][1] = i / 2;
plic_set_hart_scontext(scratch, i / 2);
break;
}
}
@@ -114,16 +135,34 @@ static int irqchip_plic_update_hartid_table(void *fdt, int nodeoff,
static int irqchip_plic_cold_init(void *fdt, int nodeoff,
const struct fdt_match *match)
{
int i, rc;
int rc;
struct plic_data *pd;
if (PLIC_MAX_NR <= plic_count)
return SBI_ENOSPC;
pd = &plic[plic_count++];
if (!plic_ptr_offset) {
plic_ptr_offset = sbi_scratch_alloc_type_offset(void *);
if (!plic_ptr_offset)
return SBI_ENOMEM;
}
if (!plic_mcontext_offset) {
plic_mcontext_offset = sbi_scratch_alloc_type_offset(long);
if (!plic_mcontext_offset)
return SBI_ENOMEM;
}
if (!plic_scontext_offset) {
plic_scontext_offset = sbi_scratch_alloc_type_offset(long);
if (!plic_scontext_offset)
return SBI_ENOMEM;
}
pd = sbi_zalloc(sizeof(*pd));
if (!pd)
return SBI_ENOMEM;
rc = fdt_parse_plic_node(fdt, nodeoff, pd);
if (rc)
return rc;
goto fail_free_data;
if (match->data) {
void (*plic_plat_init)(struct plic_data *) = match->data;
@@ -132,17 +171,17 @@ static int irqchip_plic_cold_init(void *fdt, int nodeoff,
rc = plic_cold_irqchip_init(pd);
if (rc)
return rc;
goto fail_free_data;
if (plic_count == 1) {
for (i = 0; i < SBI_HARTMASK_MAX_BITS; i++) {
plic_hartid2data[i] = NULL;
plic_hartid2context[i][0] = -1;
plic_hartid2context[i][1] = -1;
}
}
rc = irqchip_plic_update_hartid_table(fdt, nodeoff, pd);
if (rc)
goto fail_free_data;
return irqchip_plic_update_hartid_table(fdt, nodeoff, pd);
return 0;
fail_free_data:
sbi_free(pd);
return rc;
}
#define THEAD_PLIC_CTRL_REG 0x1ffffc
@@ -154,7 +193,8 @@ static void thead_plic_plat_init(struct plic_data *pd)
void thead_plic_restore(void)
{
struct plic_data *plic = plic_hartid2data[current_hartid()];
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
struct plic_data *plic = plic_get_hart_data_ptr(scratch);
thead_plic_plat_init(plic);
}


@@ -13,10 +13,10 @@
#include <sbi/riscv_encoding.h>
#include <sbi/sbi_console.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_ipi.h>
#include <sbi/sbi_irqchip.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_scratch.h>
#include <sbi_utils/irqchip/imsic.h>
#define IMSIC_MMIO_PAGE_LE 0x00
@@ -79,33 +79,65 @@ do { \
csr_clear(CSR_MIREG, __v); \
} while (0)
static struct imsic_data *imsic_hartid2data[SBI_HARTMASK_MAX_BITS];
static int imsic_hartid2file[SBI_HARTMASK_MAX_BITS];
static unsigned long imsic_ptr_offset;
#define imsic_get_hart_data_ptr(__scratch) \
sbi_scratch_read_type((__scratch), void *, imsic_ptr_offset)
#define imsic_set_hart_data_ptr(__scratch, __imsic) \
sbi_scratch_write_type((__scratch), void *, imsic_ptr_offset, (__imsic))
static unsigned long imsic_file_offset;
#define imsic_get_hart_file(__scratch) \
sbi_scratch_read_type((__scratch), long, imsic_file_offset)
#define imsic_set_hart_file(__scratch, __file) \
sbi_scratch_write_type((__scratch), long, imsic_file_offset, (__file))
int imsic_map_hartid_to_data(u32 hartid, struct imsic_data *imsic, int file)
{
if (!imsic || !imsic->targets_mmode ||
(SBI_HARTMASK_MAX_BITS <= hartid))
struct sbi_scratch *scratch;
if (!imsic || !imsic->targets_mmode)
return SBI_EINVAL;
imsic_hartid2data[hartid] = imsic;
imsic_hartid2file[hartid] = file;
/*
* We don't need to fail if the scratch pointer is not available
* because we might be dealing with the hartid of a HART disabled
* in the device tree. For HARTs disabled in the device tree,
* imsic_get_data() and imsic_get_target_file() will fail anyway.
*/
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
return 0;
imsic_set_hart_data_ptr(scratch, imsic);
imsic_set_hart_file(scratch, file);
return 0;
}
struct imsic_data *imsic_get_data(u32 hartid)
{
if (SBI_HARTMASK_MAX_BITS <= hartid)
struct sbi_scratch *scratch;
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
return NULL;
return imsic_hartid2data[hartid];
return imsic_get_hart_data_ptr(scratch);
}
int imsic_get_target_file(u32 hartid)
{
if ((SBI_HARTMASK_MAX_BITS <= hartid) ||
!imsic_hartid2data[hartid])
struct sbi_scratch *scratch;
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
return SBI_ENOENT;
return imsic_hartid2file[hartid];
return imsic_get_hart_file(scratch);
}
static int imsic_external_irqfn(struct sbi_trap_regs *regs)
@@ -133,9 +165,16 @@ static void imsic_ipi_send(u32 target_hart)
{
unsigned long reloff;
struct imsic_regs *regs;
struct imsic_data *data = imsic_hartid2data[target_hart];
int file = imsic_hartid2file[target_hart];
struct imsic_data *data;
struct sbi_scratch *scratch;
int file;
scratch = sbi_hartid_to_scratch(target_hart);
if (!scratch)
return;
data = imsic_get_hart_data_ptr(scratch);
file = imsic_get_hart_file(scratch);
if (!data || !data->targets_mmode)
return;
@@ -204,7 +243,7 @@ void imsic_local_irqchip_init(void)
int imsic_warm_irqchip_init(void)
{
struct imsic_data *imsic = imsic_hartid2data[current_hartid()];
struct imsic_data *imsic = imsic_get_data(current_hartid());
/* Sanity checks */
if (!imsic || !imsic->targets_mmode)
@@ -306,6 +345,20 @@ int imsic_cold_irqchip_init(struct imsic_data *imsic)
if (!imsic->targets_mmode)
return SBI_EINVAL;
/* Allocate scratch space pointer */
if (!imsic_ptr_offset) {
imsic_ptr_offset = sbi_scratch_alloc_type_offset(void *);
if (!imsic_ptr_offset)
return SBI_ENOMEM;
}
/* Allocate scratch space file */
if (!imsic_file_offset) {
imsic_file_offset = sbi_scratch_alloc_type_offset(long);
if (!imsic_file_offset)
return SBI_ENOMEM;
}
/* Setup external interrupt function for IMSIC */
sbi_irqchip_set_irqfn(imsic_external_irqfn);
@@ -313,7 +366,10 @@ int imsic_cold_irqchip_init(struct imsic_data *imsic)
for (i = 0; i < IMSIC_MAX_REGS && imsic->regs[i].size; i++) {
sbi_domain_memregion_init(imsic->regs[i].addr,
imsic->regs[i].size,
SBI_DOMAIN_MEMREGION_MMIO, &reg);
(SBI_DOMAIN_MEMREGION_MMIO |
SBI_DOMAIN_MEMREGION_M_READABLE |
SBI_DOMAIN_MEMREGION_M_WRITABLE),
&reg);
rc = sbi_domain_root_add_memregion(&reg);
if (rc)
return rc;


@@ -404,7 +404,7 @@ static int overlay_fixup_one_phandle(void *fdt, void *fdto,
name, name_len, poffset,
&phandle_prop,
sizeof(phandle_prop));
};
}
/**
* overlay_fixup_phandle - Set an overlay phandle to the base one


@@ -9,6 +9,7 @@
#include <sbi/sbi_string.h>
#include <sbi/sbi_types.h>
#include <sbi/sbi_byteorder.h>
#define INT_MAX ((int)(~0U >> 1))
#define UINT_MAX ((unsigned int)~0U)
@@ -37,49 +38,39 @@
#define strlen sbi_strlen
#define strnlen sbi_strnlen
typedef uint16_t FDT_BITWISE fdt16_t;
typedef uint32_t FDT_BITWISE fdt32_t;
typedef uint64_t FDT_BITWISE fdt64_t;
#define EXTRACT_BYTE(x, n) ((unsigned long long)((uint8_t *)&x)[n])
#define CPU_TO_FDT16(x) ((EXTRACT_BYTE(x, 0) << 8) | EXTRACT_BYTE(x, 1))
#define CPU_TO_FDT32(x) ((EXTRACT_BYTE(x, 0) << 24) | (EXTRACT_BYTE(x, 1) << 16) | \
(EXTRACT_BYTE(x, 2) << 8) | EXTRACT_BYTE(x, 3))
#define CPU_TO_FDT64(x) ((EXTRACT_BYTE(x, 0) << 56) | (EXTRACT_BYTE(x, 1) << 48) | \
(EXTRACT_BYTE(x, 2) << 40) | (EXTRACT_BYTE(x, 3) << 32) | \
(EXTRACT_BYTE(x, 4) << 24) | (EXTRACT_BYTE(x, 5) << 16) | \
(EXTRACT_BYTE(x, 6) << 8) | EXTRACT_BYTE(x, 7))
typedef be16_t FDT_BITWISE fdt16_t;
typedef be32_t FDT_BITWISE fdt32_t;
typedef be64_t FDT_BITWISE fdt64_t;
static inline uint16_t fdt16_to_cpu(fdt16_t x)
{
return (FDT_FORCE uint16_t)CPU_TO_FDT16(x);
return (FDT_FORCE uint16_t)be16_to_cpu(x);
}
static inline fdt16_t cpu_to_fdt16(uint16_t x)
{
return (FDT_FORCE fdt16_t)CPU_TO_FDT16(x);
return (FDT_FORCE fdt16_t)cpu_to_be16(x);
}
static inline uint32_t fdt32_to_cpu(fdt32_t x)
{
return (FDT_FORCE uint32_t)CPU_TO_FDT32(x);
return (FDT_FORCE uint32_t)be32_to_cpu(x);
}
static inline fdt32_t cpu_to_fdt32(uint32_t x)
{
return (FDT_FORCE fdt32_t)CPU_TO_FDT32(x);
return (FDT_FORCE fdt32_t)cpu_to_be32(x);
}
static inline uint64_t fdt64_to_cpu(fdt64_t x)
{
return (FDT_FORCE uint64_t)CPU_TO_FDT64(x);
return (FDT_FORCE uint64_t)be64_to_cpu(x);
}
static inline fdt64_t cpu_to_fdt64(uint64_t x)
{
return (FDT_FORCE fdt64_t)CPU_TO_FDT64(x);
return (FDT_FORCE fdt64_t)cpu_to_be64(x);
}
#undef CPU_TO_FDT64
#undef CPU_TO_FDT32
#undef CPU_TO_FDT16
#undef EXTRACT_BYTE
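The deleted EXTRACT_BYTE/CPU_TO_FDT* macros built big-endian values byte by byte, which works identically on little- and big-endian hosts; the patch replaces them with the common be*_to_cpu helpers. A function form of the deleted 32-bit macro, for illustration only:

```c
#include <stdint.h>

/* Reassemble a value from its in-memory bytes, MSB first; the result's
 * memory representation is big-endian on any host. */
static uint32_t to_fdt32(uint32_t x)
{
	const uint8_t *b = (const uint8_t *)&x;

	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       ((uint32_t)b[2] << 8) | (uint32_t)b[3];
}
```

On a little-endian host this byte-swaps; on a big-endian host it is the identity. Either way, applying it twice returns the original value.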
#ifdef __APPLE__
#include <AvailabilityMacros.h>


@@ -11,6 +11,7 @@ if FDT_RESET
config FDT_RESET_ATCWDT200
bool "Andes WDT FDT reset driver"
depends on SYS_ATCSMU
default n
config FDT_RESET_GPIO


@@ -16,6 +16,7 @@
#include <sbi/sbi_system.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/reset/fdt_reset.h>
#include <sbi_utils/sys/atcsmu.h>
#define ATCWDT200_WP_NUM 0x5aa5
#define WREN_REG 0x18
@@ -41,14 +42,8 @@
#define CLK_PCLK (1 << 1)
#define WDT_EN (1 << 0)
#define FLASH_BASE 0x80000000ULL
#define SMU_RESET_VEC_LO_OFF 0x50
#define SMU_RESET_VEC_HI_OFF 0x60
#define SMU_HARTn_RESET_VEC_LO(n) (SMU_RESET_VEC_LO_OFF + (n * 0x4))
#define SMU_HARTn_RESET_VEC_HI(n) (SMU_RESET_VEC_HI_OFF + (n * 0x4))
static volatile char *wdt_addr;
static volatile char *smu_addr;
static volatile char *wdt_addr = NULL;
static struct smu_data smu = { 0 };
static int ae350_system_reset_check(u32 type, u32 reason)
{
@@ -66,16 +61,16 @@ static void ae350_system_reset(u32 type, u32 reason)
{
const struct sbi_platform *plat = sbi_platform_thishart_ptr();
for (int i = 0; i < sbi_platform_hart_count(plat); i++) {
writel(FLASH_BASE, smu_addr + SMU_HARTn_RESET_VEC_LO(i));
writel(FLASH_BASE >> 32, smu_addr + SMU_HARTn_RESET_VEC_HI(i));
}
for (int i = 0; i < sbi_platform_hart_count(plat); i++)
if (smu_set_reset_vector(&smu, FLASH_BASE, i))
goto fail;
/* Program WDT control register */
writew(ATCWDT200_WP_NUM, wdt_addr + WREN_REG);
writel(INT_CLK_32768 | INT_EN | RST_CLK_128 | RST_EN | WDT_EN,
wdt_addr + CTRL_REG);
fail:
sbi_hart_hang();
}
@@ -104,7 +99,7 @@ static int atcwdt200_reset_init(void *fdt, int nodeoff,
if (fdt_parse_compat_addr(fdt, &reg_addr, "andestech,atcsmu"))
return SBI_ENODEV;
smu_addr = (volatile char *)(unsigned long)reg_addr;
smu.addr = (unsigned long)reg_addr;
sbi_system_reset_add_device(&atcwdt200_reset);


@@ -77,7 +77,7 @@ static void gpio_reset_exec(struct gpio_reset *reset)
static int gpio_system_poweroff_check(u32 type, u32 reason)
{
if (gpio_reset_get(FALSE, type))
if (gpio_reset_get(false, type))
return 128;
return 0;
@@ -85,7 +85,7 @@ static int gpio_system_poweroff_check(u32 type, u32 reason)
static void gpio_system_poweroff(u32 type, u32 reason)
{
gpio_reset_exec(gpio_reset_get(FALSE, type));
gpio_reset_exec(gpio_reset_get(false, type));
}
static struct sbi_system_reset_device gpio_poweroff = {
@@ -96,7 +96,7 @@ static struct sbi_system_reset_device gpio_poweroff = {
static int gpio_system_restart_check(u32 type, u32 reason)
{
if (gpio_reset_get(TRUE, type))
if (gpio_reset_get(true, type))
return 128;
return 0;
@@ -104,7 +104,7 @@ static int gpio_system_restart_check(u32 type, u32 reason)
static void gpio_system_restart(u32 type, u32 reason)
{
gpio_reset_exec(gpio_reset_get(TRUE, type));
gpio_reset_exec(gpio_reset_get(true, type));
}
static struct sbi_system_reset_device gpio_restart = {
@@ -149,7 +149,7 @@ static int gpio_reset_init(void *fdt, int nodeoff,
}
static const struct fdt_match gpio_poweroff_match[] = {
{ .compatible = "gpio-poweroff", .data = (const void *)FALSE },
{ .compatible = "gpio-poweroff", .data = (const void *)false },
{ },
};
@@ -159,7 +159,7 @@ struct fdt_reset fdt_poweroff_gpio = {
};
static const struct fdt_match gpio_reset_match[] = {
{ .compatible = "gpio-restart", .data = (const void *)TRUE },
{ .compatible = "gpio-restart", .data = (const void *)true },
{ },
};


@@ -17,13 +17,6 @@
extern struct fdt_serial *fdt_serial_drivers[];
extern unsigned long fdt_serial_drivers_size;
static struct fdt_serial dummy = {
.match_table = NULL,
.init = NULL,
};
static struct fdt_serial *current_driver = &dummy;
int fdt_serial_init(void)
{
const void *prop;
@@ -57,20 +50,15 @@ int fdt_serial_init(void)
if (!match)
continue;
if (drv->init) {
rc = drv->init(fdt, noff, match);
if (rc == SBI_ENODEV)
continue;
if (rc)
return rc;
}
current_driver = drv;
break;
}
/* drv->init must not be NULL */
if (drv->init == NULL)
return SBI_EFAIL;
/* Check if we found desired driver */
if (current_driver != &dummy)
goto done;
rc = drv->init(fdt, noff, match);
if (rc == SBI_ENODEV)
continue;
return rc;
}
/* Lastly check all DT nodes */
for (pos = 0; pos < fdt_serial_drivers_size; pos++) {
@@ -80,17 +68,15 @@ int fdt_serial_init(void)
if (noff < 0)
continue;
if (drv->init) {
rc = drv->init(fdt, noff, match);
if (rc == SBI_ENODEV)
continue;
if (rc)
return rc;
}
current_driver = drv;
break;
/* drv->init must not be NULL */
if (drv->init == NULL)
return SBI_EFAIL;
rc = drv->init(fdt, noff, match);
if (rc == SBI_ENODEV)
continue;
return rc;
}
done:
return 0;
return SBI_ENODEV;
}


@@ -24,6 +24,7 @@ static int serial_cadence_init(void *fdt, int nodeoff,
}
static const struct fdt_match serial_cadence_match[] = {
{ .compatible = "cdns,uart-r1p8", },
{ .compatible = "cdns,uart-r1p12" },
{ .compatible = "starfive,jh8100-uart" },
{ },


@@ -15,6 +15,7 @@
#define SYSOPEN 0x01
#define SYSWRITEC 0x03
#define SYSWRITE 0x05
#define SYSREAD 0x06
#define SYSREADC 0x07
#define SYSERRNO 0x13
@@ -93,6 +94,7 @@ static int semihosting_errno(void)
}
static int semihosting_infd = SBI_ENODEV;
static int semihosting_outfd = SBI_ENODEV;
static long semihosting_open(const char *fname, enum semihosting_open_mode mode)
{
@@ -141,6 +143,21 @@ static long semihosting_read(long fd, void *memp, size_t len)
return len - ret;
}
static long semihosting_write(long fd, const void *memp, size_t len)
{
long ret;
struct semihosting_rdwr_s write;
write.fd = fd;
write.memp = (void *)memp;
write.len = len;
ret = semihosting_trap(SYSWRITE, &write);
if (ret < 0)
return semihosting_errno();
return len - ret;
}
/* clang-format on */
static void semihosting_putc(char ch)
@@ -148,6 +165,24 @@ static void semihosting_putc(char ch)
semihosting_trap(SYSWRITEC, &ch);
}
static unsigned long semihosting_puts(const char *str, unsigned long len)
{
char ch;
long ret;
unsigned long i;
if (semihosting_outfd < 0) {
for (i = 0; i < len; i++) {
ch = str[i];
semihosting_trap(SYSWRITEC, &ch);
}
ret = len;
} else
ret = semihosting_write(semihosting_outfd, str, len);
return (ret < 0) ? 0 : ret;
}
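semihosting_puts() above falls back to per-character SYSWRITEC traps when the `:tt` console could not be opened, and otherwise does one bulk SYSWRITE. The same fallback shape, sketched generically — the struct and callbacks here are illustrative stand-ins, not the semihosting ABI:

```c
#include <stddef.h>

/* Write through a bulk callback when a descriptor is open, else fall
 * back to a per-character callback; returns bytes written. */
struct out {
	int fd;				/* < 0: no open descriptor */
	long (*write)(int fd, const char *s, size_t len);
	void (*putc)(char c);
};

static size_t out_puts(struct out *o, const char *s, size_t len)
{
	long ret;
	size_t i;

	if (o->fd < 0) {
		for (i = 0; i < len; i++)
			o->putc(s[i]);
		return len;
	}
	ret = o->write(o->fd, s, len);
	return (ret < 0) ? 0 : (size_t)ret;
}

/* Sample sinks that just count invocations, for demonstration. */
static size_t bulk_calls, char_calls;

static long sink_write(int fd, const char *s, size_t len)
{
	(void)fd; (void)s;
	bulk_calls++;
	return (long)len;
}

static void sink_putc(char c)
{
	(void)c;
	char_calls++;
}
```

The per-character path is slower (one trap per byte) but always available, which is why the driver registers console_puts only as an optimization over console_putc.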
static int semihosting_getc(void)
{
char ch = 0;
@@ -165,12 +200,14 @@ static int semihosting_getc(void)
static struct sbi_console_device semihosting_console = {
.name = "semihosting",
.console_putc = semihosting_putc,
.console_puts = semihosting_puts,
.console_getc = semihosting_getc
};
int semihosting_init(void)
{
semihosting_infd = semihosting_open(":tt", MODE_READ);
semihosting_outfd = semihosting_open(":tt", MODE_WRITE);
sbi_console_set_device(&semihosting_console);


@@ -2,6 +2,10 @@
menu "System Device Support"
config SYS_ATCSMU
bool "Andes System Management Unit (SMU) support"
default n
config SYS_HTIF
bool "Host transfer interface (HTIF) support"
default n

lib/utils/sys/atcsmu.c Normal file

@@ -0,0 +1,92 @@
/*
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2023 Andes Technology Corporation
*
* Authors:
* Yu Chien Peter Lin <peterlin@andestech.com>
*/
#include <sbi_utils/sys/atcsmu.h>
#include <sbi/riscv_io.h>
#include <sbi/sbi_console.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_bitops.h>
inline int smu_set_wakeup_events(struct smu_data *smu, u32 events, u32 hartid)
{
if (smu) {
writel(events, (void *)(smu->addr + PCSm_WE_OFFSET(hartid)));
return 0;
} else
return SBI_EINVAL;
}
inline bool smu_support_sleep_mode(struct smu_data *smu, u32 sleep_mode,
u32 hartid)
{
u32 pcs_cfg;
if (!smu) {
sbi_printf("%s(): Failed to access smu_data\n", __func__);
return false;
}
pcs_cfg = readl((void *)(smu->addr + PCSm_CFG_OFFSET(hartid)));
switch (sleep_mode) {
case LIGHTSLEEP_MODE:
if (EXTRACT_FIELD(pcs_cfg, PCS_CFG_LIGHT_SLEEP) == 0) {
sbi_printf(
"SMU: hart%d (PCS%d) does not support light sleep mode\n",
hartid, hartid + 3);
return false;
}
break;
case DEEPSLEEP_MODE:
if (EXTRACT_FIELD(pcs_cfg, PCS_CFG_DEEP_SLEEP) == 0) {
sbi_printf(
"SMU: hart%d (PCS%d) does not support deep sleep mode\n",
hartid, hartid + 3);
return false;
}
break;
}
return true;
}
inline int smu_set_command(struct smu_data *smu, u32 pcs_ctl, u32 hartid)
{
if (smu) {
writel(pcs_ctl, (void *)(smu->addr + PCSm_CTL_OFFSET(hartid)));
return 0;
} else
return SBI_EINVAL;
}
inline int smu_set_reset_vector(struct smu_data *smu, ulong wakeup_addr,
u32 hartid)
{
u32 vec_lo, vec_hi;
u64 reset_vector;
if (!smu)
return SBI_EINVAL;
writel(wakeup_addr, (void *)(smu->addr + HARTn_RESET_VEC_LO(hartid)));
writel((u64)wakeup_addr >> 32,
(void *)(smu->addr + HARTn_RESET_VEC_HI(hartid)));
vec_lo = readl((void *)(smu->addr + HARTn_RESET_VEC_LO(hartid)));
vec_hi = readl((void *)(smu->addr + HARTn_RESET_VEC_HI(hartid)));
reset_vector = ((u64)vec_hi << 32) | vec_lo;
if (reset_vector != (u64)wakeup_addr) {
sbi_printf(
"hart%d (PCS%d): Failed to program the reset vector.\n",
hartid, hartid + 3);
return SBI_EFAIL;
} else
return 0;
}
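smu_set_reset_vector() splits the 64-bit wakeup address across two 32-bit registers and then reads both halves back to verify the write took effect. The same split/recombine/verify logic, modeled on plain memory instead of MMIO registers (the two-element register layout is illustrative):

```c
#include <stdint.h>

/* Program a 64-bit vector into lo/hi 32-bit "registers" and verify by
 * reading back and recombining, mirroring the driver's check.
 * Returns 0 on success, -1 (an SBI_EFAIL stand-in) on mismatch. */
static int set_reset_vector(volatile uint32_t regs[2], uint64_t addr)
{
	uint64_t readback;

	regs[0] = (uint32_t)addr;		/* RESET_VEC_LO */
	regs[1] = (uint32_t)(addr >> 32);	/* RESET_VEC_HI */

	readback = ((uint64_t)regs[1] << 32) | regs[0];
	return (readback == addr) ? 0 : -1;
}
```

On real hardware the readback can differ from the written value (e.g. reserved bits reading as zero), which is exactly the failure the driver reports before hanging the hart.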


@@ -135,11 +135,11 @@ static void do_tohost_fromhost(uint64_t dev, uint64_t cmd, uint64_t data)
__set_tohost(HTIF_DEV_SYSTEM, cmd, data);
while (1) {
uint64_t fh = fromhost;
uint64_t fh = __read_fromhost();
if (fh) {
if (FROMHOST_DEV(fh) == HTIF_DEV_SYSTEM &&
FROMHOST_CMD(fh) == cmd) {
fromhost = 0;
__write_fromhost(0);
break;
}
__check_fromhost();


@@ -9,3 +9,4 @@
libsbiutils-objs-$(CONFIG_SYS_HTIF) += sys/htif.o
libsbiutils-objs-$(CONFIG_SYS_SIFIVE_TEST) += sys/sifive_test.o
libsbiutils-objs-$(CONFIG_SYS_ATCSMU) += sys/atcsmu.o

Some files were not shown because too many files have changed in this diff.