Commits

Author SHA1 Message Date
Anup Patel
057eb10b6d lib: utils/gpio: Fix RV32 compile error for designware GPIO driver
Currently, we see the following compile error in the DesignWare GPIO
driver for RV32 systems:

lib/utils/gpio/fdt_gpio_designware.c:115:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
  115 |         chip->dr = (void *)addr + (bank * 0xc);
      |                    ^
lib/utils/gpio/fdt_gpio_designware.c:116:21: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
  116 |         chip->ext = (void *)addr + (bank * 4) + 0x50;

We fix the above error by adding an explicit cast to 'unsigned long'.
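The shape of the fix can be sketched in isolation. Everything below is illustrative (the struct and function names are made up, and the offset arithmetic is folded in before the cast for simplicity); it only demonstrates why narrowing through 'unsigned long' satisfies both RV32 and RV64:

```c
#include <stdint.h>

/* Illustrative sketch only -- 'struct chip_regs' and 'map_bank' are
 * hypothetical names, not the driver's real code. The address arrives
 * as a fixed-width 64-bit value, so casting it straight to a pointer
 * fails with -Werror=int-to-pointer-cast on RV32; narrowing through
 * 'unsigned long' (which always matches the pointer width) compiles
 * cleanly on both RV32 and RV64. */
struct chip_regs {
	void *dr;
	void *ext;
};

static void map_bank(struct chip_regs *chip, uint64_t addr, unsigned int bank)
{
	chip->dr  = (void *)(unsigned long)(addr + (bank * 0xc));
	chip->ext = (void *)(unsigned long)(addr + (bank * 4) + 0x50);
}
```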

Fixes: 7828eebaaa ("gpio/desginware: add Synopsys DesignWare APB GPIO support")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Xiang W <wxjstz@126.com>
2023-07-19 11:51:59 +05:30
154 changed files with 1809 additions and 3435 deletions

.gitignore
View File

@@ -1,10 +1,3 @@
# ignore anything begin with dot
.*
# exceptions we need even begin with dot
!.clang-format
!.gitignore
# Object files
*.o
*.a
@@ -17,3 +10,4 @@ install/
# Development friendly files
tags
cscope*
*.swp

View File

@@ -168,7 +168,7 @@ endif
OPENSBI_LD_PIE := $(shell $(CC) $(CLANG_TARGET) $(RELAX_FLAG) $(USE_LD_FLAG) -fPIE -nostdlib -Wl,-pie -x c /dev/null -o /dev/null >/dev/null 2>&1 && echo y || echo n)
# Check whether the compiler supports -m(no-)save-restore
-CC_SUPPORT_SAVE_RESTORE := $(shell $(CC) $(CLANG_TARGET) $(RELAX_FLAG) -nostdlib -mno-save-restore -x c /dev/null -o /dev/null 2>&1 | grep -e "-save-restore" >/dev/null && echo n || echo y)
+CC_SUPPORT_SAVE_RESTORE := $(shell $(CC) $(CLANG_TARGET) $(RELAX_FLAG) -nostdlib -mno-save-restore -x c /dev/null -o /dev/null 2>&1 | grep "\-save\-restore" >/dev/null && echo n || echo y)
# Check whether the assembler and the compiler support the Zicsr and Zifencei extensions
CC_SUPPORT_ZICSR_ZIFENCEI := $(shell $(CC) $(CLANG_TARGET) $(RELAX_FLAG) -nostdlib -march=rv$(OPENSBI_CC_XLEN)imafd_zicsr_zifencei -x c /dev/null -o /dev/null 2>&1 | grep "zicsr\|zifencei" > /dev/null && echo n || echo y)
@@ -375,7 +375,6 @@ ASFLAGS += $(firmware-asflags-y)
ARFLAGS = rcs
ELFFLAGS += $(USE_LD_FLAG)
ELFFLAGS += -Wl,--exclude-libs,ALL
ELFFLAGS += -Wl,--build-id=none
ELFFLAGS += $(platform-ldflags-y)
ELFFLAGS += $(firmware-ldflags-y)

View File

@@ -36,7 +36,7 @@ options. These configuration parameters can be defined using either the top
level `make` command line or the target platform *objects.mk* configuration
file. The parameters currently defined are as follows:
-* **FW_PAYLOAD_OFFSET** - Offset from *FW_TEXT_START* where the payload binary
+* **FW_PAYLOAD_OFFSET** - Offset from *FW_TEXT_BASE* where the payload binary
will be linked in the final *FW_PAYLOAD* firmware binary image. This
configuration parameter is mandatory if *FW_PAYLOAD_ALIGN* is not defined.
Compilation errors will result from an incorrect definition of

View File

@@ -1,7 +1,7 @@
T-HEAD C9xx Series Processors
=============================
-The C9xx series processors are high-performance RISC-V architecture
+The **C9xx** series processors are high-performance RISC-V architecture
multi-core processors with AI vector acceleration engine.
For more details, refer [T-HEAD.CN](https://www.t-head.cn/)
@@ -12,16 +12,185 @@ To build the platform-specific library and firmware images, provide the
Platform Options
----------------
-The T-HEAD C9xx does not have any platform-specific compile options
+The *T-HEAD C9xx* does not have any platform-specific compile options
because it uses generic platform.
```
-CROSS_COMPILE=riscv64-linux-gnu- PLATFORM=generic make
+CROSS_COMPILE=riscv64-linux-gnu- PLATFORM=generic /usr/bin/make
```
-Here is the simplest boot flow for a fpga prototype:
+The *T-HEAD C9xx* DTB provided to OpenSBI generic firmwares will usually have
"riscv,clint0", "riscv,plic0", "thead,reset-sample" compatible strings.
-(Jtag gdbinit) -> (zsb) -> (opensbi) -> (linux)
+DTS Example1: (Single core, eg: Allwinner D1 - c906)
----------------------------------------------------
-For more details, refer:
-[zero stage boot](https://github.com/c-sky/zero_stage_boot)
+```
+cpus {
#address-cells = <1>;
#size-cells = <0>;
timebase-frequency = <3000000>;
cpu@0 {
device_type = "cpu";
reg = <0>;
status = "okay";
compatible = "riscv";
riscv,isa = "rv64imafdcv";
mmu-type = "riscv,sv39";
cpu0_intc: interrupt-controller {
#interrupt-cells = <1>;
compatible = "riscv,cpu-intc";
interrupt-controller;
};
};
};
soc {
#address-cells = <2>;
#size-cells = <2>;
compatible = "simple-bus";
ranges;
clint0: clint@14000000 {
compatible = "allwinner,sun20i-d1-clint";
interrupts-extended = <
&cpu0_intc 3 &cpu0_intc 7
>;
reg = <0x0 0x14000000 0x0 0x04000000>;
};
intc: interrupt-controller@10000000 {
#interrupt-cells = <1>;
compatible = "allwinner,sun20i-d1-plic",
"thead,c900-plic";
interrupt-controller;
interrupts-extended = <
&cpu0_intc 0xffffffff &cpu0_intc 9
>;
reg = <0x0 0x10000000 0x0 0x04000000>;
reg-names = "control";
riscv,max-priority = <7>;
riscv,ndev = <200>;
};
}
```
DTS Example2: (Multi cores with soc reset-regs)
-----------------------------------------------
```
cpus {
#address-cells = <1>;
#size-cells = <0>;
timebase-frequency = <3000000>;
cpu@0 {
device_type = "cpu";
reg = <0>;
status = "okay";
compatible = "riscv";
riscv,isa = "rv64imafdc";
mmu-type = "riscv,sv39";
cpu0_intc: interrupt-controller {
#interrupt-cells = <1>;
compatible = "riscv,cpu-intc";
interrupt-controller;
};
};
cpu@1 {
device_type = "cpu";
reg = <1>;
status = "fail";
compatible = "riscv";
riscv,isa = "rv64imafdc";
mmu-type = "riscv,sv39";
cpu1_intc: interrupt-controller {
#interrupt-cells = <1>;
compatible = "riscv,cpu-intc";
interrupt-controller;
};
};
cpu@2 {
device_type = "cpu";
reg = <2>;
status = "fail";
compatible = "riscv";
riscv,isa = "rv64imafdc";
mmu-type = "riscv,sv39";
cpu2_intc: interrupt-controller {
#interrupt-cells = <1>;
compatible = "riscv,cpu-intc";
interrupt-controller;
};
};
cpu@3 {
device_type = "cpu";
reg = <3>;
status = "fail";
compatible = "riscv";
riscv,isa = "rv64imafdc";
mmu-type = "riscv,sv39";
cpu3_intc: interrupt-controller {
#interrupt-cells = <1>;
compatible = "riscv,cpu-intc";
interrupt-controller;
};
};
};
soc {
#address-cells = <2>;
#size-cells = <2>;
compatible = "simple-bus";
ranges;
reset: reset-sample {
compatible = "thead,reset-sample";
entry-reg = <0xff 0xff019050>;
entry-cnt = <4>;
control-reg = <0xff 0xff015004>;
control-val = <0x1c>;
csr-copy = <0x7f3 0x7c0 0x7c1 0x7c2 0x7c3 0x7c5 0x7cc>;
};
clint0: clint@ffdc000000 {
compatible = "riscv,clint0";
interrupts-extended = <
&cpu0_intc 3 &cpu0_intc 7
&cpu1_intc 3 &cpu1_intc 7
&cpu2_intc 3 &cpu2_intc 7
&cpu3_intc 3 &cpu3_intc 7
>;
reg = <0xff 0xdc000000 0x0 0x04000000>;
};
intc: interrupt-controller@ffd8000000 {
#interrupt-cells = <1>;
compatible = "thead,c900-plic";
interrupt-controller;
interrupts-extended = <
&cpu0_intc 0xffffffff &cpu0_intc 9
&cpu1_intc 0xffffffff &cpu1_intc 9
&cpu2_intc 0xffffffff &cpu2_intc 9
&cpu3_intc 0xffffffff &cpu3_intc 9
>;
reg = <0xff 0xd8000000 0x0 0x04000000>;
reg-names = "control";
riscv,max-priority = <7>;
riscv,ndev = <80>;
};
}
```
DTS Example3: (Multi cores with old reset csrs)
-----------------------------------------------
```
reset: reset-sample {
compatible = "thead,reset-sample";
using-csr-reset;
csr-copy = <0x7c0 0x7c1 0x7c2 0x7c3 0x7c5 0x7cc
0x3b0 0x3b1 0x3b2 0x3b3
0x3b4 0x3b5 0x3b6 0x3b7
0x3a0>;
};
```

View File

@@ -18,7 +18,7 @@ Base Platform Requirements
The base RISC-V platform requirements for OpenSBI are as follows:
-1. At least rv32ima_zicsr or rv64ima_zicsr required on all HARTs
+1. At least rv32ima or rv64ima required on all HARTs
2. At least one HART should have S-mode support because:
* SBI calls are meant for RISC-V S-mode (Supervisor mode)
@@ -33,7 +33,7 @@ The base RISC-V platform requirements for OpenSBI are as follows:
6. Hardware support for injecting M-mode software interrupts on
a multi-HART platform
-The RISC-V extensions not covered by rv32ima_zicsr or rv64ima_zicsr are optional
+The RISC-V extensions not covered by rv32ima or rv64ima are optional
for OpenSBI. Although, OpenSBI will detect and handle some of these
optional RISC-V extensions at runtime.

View File

@@ -125,85 +125,3 @@ pmu {
<0x0 0x2 0xffffffff 0xffffe0ff 0x18>;
};
```
### Example 3
```
/*
* For Andes 45-series platforms. The encodings can be found in the
* "Machine Performance Monitoring Event Selector" section
* http://www.andestech.com/wp-content/uploads/AX45MP-1C-Rev.-5.0.0-Datasheet.pdf
*/
pmu {
compatible = "riscv,pmu";
riscv,event-to-mhpmevent =
<0x1 0x0000 0x10>, /* CPU_CYCLES -> Cycle count */
<0x2 0x0000 0x20>, /* INSTRUCTIONS -> Retired instruction count */
<0x3 0x0000 0x41>, /* CACHE_REFERENCES -> D-Cache access */
<0x4 0x0000 0x51>, /* CACHE_MISSES -> D-Cache miss */
<0x5 0x0000 0x80>, /* BRANCH_INSTRUCTIONS -> Conditional branch instruction count */
<0x6 0x0000 0x02>, /* BRANCH_MISSES -> Misprediction of conditional branches */
<0x10000 0x0000 0x61>, /* L1D_READ_ACCESS -> D-Cache load access */
<0x10001 0x0000 0x71>, /* L1D_READ_MISS -> D-Cache load miss */
<0x10002 0x0000 0x81>, /* L1D_WRITE_ACCESS -> D-Cache store access */
<0x10003 0x0000 0x91>, /* L1D_WRITE_MISS -> D-Cache store miss */
<0x10008 0x0000 0x21>, /* L1I_READ_ACCESS -> I-Cache access */
<0x10009 0x0000 0x31>; /* L1I_READ_MISS -> I-Cache miss */
riscv,event-to-mhpmcounters = <0x1 0x6 0x78>,
<0x10000 0x10003 0x78>,
<0x10008 0x10009 0x78>;
riscv,raw-event-to-mhpmcounters =
<0x0 0x10 0xffffffff 0xffffffff 0x78>, /* Cycle count */
<0x0 0x20 0xffffffff 0xffffffff 0x78>, /* Retired instruction count */
<0x0 0x30 0xffffffff 0xffffffff 0x78>, /* Integer load instruction count */
<0x0 0x40 0xffffffff 0xffffffff 0x78>, /* Integer store instruction count */
<0x0 0x50 0xffffffff 0xffffffff 0x78>, /* Atomic instruction count */
<0x0 0x60 0xffffffff 0xffffffff 0x78>, /* System instruction count */
<0x0 0x70 0xffffffff 0xffffffff 0x78>, /* Integer computational instruction count */
<0x0 0x80 0xffffffff 0xffffffff 0x78>, /* Conditional branch instruction count */
<0x0 0x90 0xffffffff 0xffffffff 0x78>, /* Taken conditional branch instruction count */
<0x0 0xA0 0xffffffff 0xffffffff 0x78>, /* JAL instruction count */
<0x0 0xB0 0xffffffff 0xffffffff 0x78>, /* JALR instruction count */
<0x0 0xC0 0xffffffff 0xffffffff 0x78>, /* Return instruction count */
<0x0 0xD0 0xffffffff 0xffffffff 0x78>, /* Control transfer instruction count */
<0x0 0xE0 0xffffffff 0xffffffff 0x78>, /* EXEC.IT instruction count */
<0x0 0xF0 0xffffffff 0xffffffff 0x78>, /* Integer multiplication instruction count */
<0x0 0x100 0xffffffff 0xffffffff 0x78>, /* Integer division instruction count */
<0x0 0x110 0xffffffff 0xffffffff 0x78>, /* Floating-point load instruction count */
<0x0 0x120 0xffffffff 0xffffffff 0x78>, /* Floating-point store instruction count */
<0x0 0x130 0xffffffff 0xffffffff 0x78>, /* Floating-point addition/subtraction instruction count */
<0x0 0x140 0xffffffff 0xffffffff 0x78>, /* Floating-point multiplication instruction count */
<0x0 0x150 0xffffffff 0xffffffff 0x78>, /* Floating-point fused multiply-add instruction count */
<0x0 0x160 0xffffffff 0xffffffff 0x78>, /* Floating-point division or square-root instruction count */
<0x0 0x170 0xffffffff 0xffffffff 0x78>, /* Other floating-point instruction count */
<0x0 0x180 0xffffffff 0xffffffff 0x78>, /* Integer multiplication and add/sub instruction count */
<0x0 0x190 0xffffffff 0xffffffff 0x78>, /* Retired operation count */
<0x0 0x01 0xffffffff 0xffffffff 0x78>, /* ILM access */
<0x0 0x11 0xffffffff 0xffffffff 0x78>, /* DLM access */
<0x0 0x21 0xffffffff 0xffffffff 0x78>, /* I-Cache access */
<0x0 0x31 0xffffffff 0xffffffff 0x78>, /* I-Cache miss */
<0x0 0x41 0xffffffff 0xffffffff 0x78>, /* D-Cache access */
<0x0 0x51 0xffffffff 0xffffffff 0x78>, /* D-Cache miss */
<0x0 0x61 0xffffffff 0xffffffff 0x78>, /* D-Cache load access */
<0x0 0x71 0xffffffff 0xffffffff 0x78>, /* D-Cache load miss */
<0x0 0x81 0xffffffff 0xffffffff 0x78>, /* D-Cache store access */
<0x0 0x91 0xffffffff 0xffffffff 0x78>, /* D-Cache store miss */
<0x0 0xA1 0xffffffff 0xffffffff 0x78>, /* D-Cache writeback */
<0x0 0xB1 0xffffffff 0xffffffff 0x78>, /* Cycles waiting for I-Cache fill data */
<0x0 0xC1 0xffffffff 0xffffffff 0x78>, /* Cycles waiting for D-Cache fill data */
<0x0 0xD1 0xffffffff 0xffffffff 0x78>, /* Uncached fetch data access from bus */
<0x0 0xE1 0xffffffff 0xffffffff 0x78>, /* Uncached load data access from bus */
<0x0 0xF1 0xffffffff 0xffffffff 0x78>, /* Cycles waiting for uncached fetch data from bus */
<0x0 0x101 0xffffffff 0xffffffff 0x78>, /* Cycles waiting for uncached load data from bus */
<0x0 0x111 0xffffffff 0xffffffff 0x78>, /* Main ITLB access */
<0x0 0x121 0xffffffff 0xffffffff 0x78>, /* Main ITLB miss */
<0x0 0x131 0xffffffff 0xffffffff 0x78>, /* Main DTLB access */
<0x0 0x141 0xffffffff 0xffffffff 0x78>, /* Main DTLB miss */
<0x0 0x151 0xffffffff 0xffffffff 0x78>, /* Cycles waiting for Main ITLB fill data */
<0x0 0x161 0xffffffff 0xffffffff 0x78>, /* Pipeline stall cycles caused by Main DTLB miss */
<0x0 0x171 0xffffffff 0xffffffff 0x78>, /* Hardware prefetch bus access */
<0x0 0x02 0xffffffff 0xffffffff 0x78>, /* Misprediction of conditional branches */
<0x0 0x12 0xffffffff 0xffffffff 0x78>, /* Misprediction of taken conditional branches */
<0x0 0x22 0xffffffff 0xffffffff 0x78>; /* Misprediction of targets of Return instructions */
};
```

View File

@@ -88,8 +88,30 @@ _try_lottery:
add t5, t5, t2
add t3, t3, t2
REG_S t5, 0(t3) /* store runtime address to the GOT entry */
j 5f
3:
lla t4, __dyn_sym_start
4:
srli t6, t5, SYM_INDEX /* t6 <--- sym table index */
andi t5, t5, 0xFF /* t5 <--- relocation type */
li t3, RELOC_TYPE
bne t5, t3, 5f
/* address R_RISCV_64 or R_RISCV_32 cases*/
REG_L t3, 0(t0)
li t5, SYM_SIZE
mul t6, t6, t5
add s5, t4, t6
REG_L t6, (REGBYTES * 2)(t0) /* t0 <-- addend */
REG_L t5, REGBYTES(s5)
add t5, t5, t6
add t5, t5, t2 /* t5 <-- location to fix up in RAM */
add t3, t3, t2 /* t3 <-- location to fix up in RAM */
REG_S t5, 0(t3) /* store runtime address to the variable */
5:
addi t0, t0, (REGBYTES * 3)
blt t0, t1, 2b
j _relocate_done
@@ -287,8 +309,8 @@ _scratch_init:
REG_S a5, SBI_SCRATCH_FW_SIZE_OFFSET(tp)
/* Store R/W section's offset in scratch space */
-lla a5, _fw_rw_start
-sub a5, a5, a4
+lla a4, __fw_rw_offset
+REG_L a5, 0(a4)
REG_S a5, SBI_SCRATCH_FW_RW_OFFSET(tp)
/* Store fw_heap_offset and fw_heap_size in scratch space */
@@ -399,8 +421,8 @@ _fdt_reloc_done:
/* mark boot hart done */
li t0, BOOT_STATUS_BOOT_HART_DONE
lla t1, _boot_status
fence rw, rw
REG_S t0, 0(t1)
fence rw, rw
j _start_warm
/* waiting for boot hart to be done (_boot_status == 2) */
@@ -409,9 +431,9 @@ _wait_for_boot_hart:
lla t1, _boot_status
REG_L t1, 0(t1)
/* Reduce the bus traffic so that boot hart may proceed faster */
-div t2, t2, zero
-div t2, t2, zero
-div t2, t2, zero
+nop
+nop
+nop
bne t0, t1, _wait_for_boot_hart
_start_warm:
@@ -514,6 +536,8 @@ _link_start:
RISCV_PTR FW_TEXT_START
_link_end:
RISCV_PTR _fw_reloc_end
__fw_rw_offset:
RISCV_PTR _fw_rw_start - _fw_start
.section .entry, "ax", %progbits
.align 3

View File

@@ -40,9 +40,16 @@
. = ALIGN(0x1000); /* Ensure next section is page aligned */
.dynsym : {
PROVIDE(__dyn_sym_start = .);
*(.dynsym)
PROVIDE(__dyn_sym_end = .);
}
.rela.dyn : {
PROVIDE(__rel_dyn_start = .);
*(.rela*)
. = ALIGN(8);
PROVIDE(__rel_dyn_end = .);
}

View File

@@ -129,7 +129,7 @@ fw_options:
REG_L a0, (a0)
ret
-.section .data
+.section .entry, "ax", %progbits
.align 3
_dynamic_next_arg1:
RISCV_PTR 0x0

View File

@@ -90,7 +90,7 @@ fw_options:
#error "Must define FW_JUMP_ADDR"
#endif
-.section .rodata
+.section .entry, "ax", %progbits
.align 3
_jump_addr:
RISCV_PTR FW_JUMP_ADDR

View File

@@ -78,7 +78,7 @@ _start_hang:
wfi
j _start_hang
-.section .data
+.section .entry, "ax", %progbits
.align 3
_hart_lottery:
RISCV_PTR 0

View File

@@ -8,42 +8,31 @@
*/
#include <sbi/sbi_ecall_interface.h>
-#include <sbi/sbi_string.h>
-struct sbiret {
-	unsigned long error;
-	unsigned long value;
-};
-struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
-			unsigned long arg1, unsigned long arg2,
-			unsigned long arg3, unsigned long arg4,
-			unsigned long arg5)
-{
-	struct sbiret ret;
-	register unsigned long a0 asm ("a0") = (unsigned long)(arg0);
-	register unsigned long a1 asm ("a1") = (unsigned long)(arg1);
-	register unsigned long a2 asm ("a2") = (unsigned long)(arg2);
-	register unsigned long a3 asm ("a3") = (unsigned long)(arg3);
-	register unsigned long a4 asm ("a4") = (unsigned long)(arg4);
-	register unsigned long a5 asm ("a5") = (unsigned long)(arg5);
-	register unsigned long a6 asm ("a6") = (unsigned long)(fid);
-	register unsigned long a7 asm ("a7") = (unsigned long)(ext);
-	asm volatile ("ecall"
-		      : "+r" (a0), "+r" (a1)
-		      : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
-		      : "memory");
-	ret.error = a0;
-	ret.value = a1;
-	return ret;
-}
+#define SBI_ECALL(__eid, __fid, __a0, __a1, __a2) \
+({ \
+	register unsigned long a0 asm("a0") = (unsigned long)(__a0); \
+	register unsigned long a1 asm("a1") = (unsigned long)(__a1); \
+	register unsigned long a2 asm("a2") = (unsigned long)(__a2); \
+	register unsigned long a6 asm("a6") = (unsigned long)(__fid); \
+	register unsigned long a7 asm("a7") = (unsigned long)(__eid); \
+	asm volatile("ecall" \
+		     : "+r"(a0) \
+		     : "r"(a1), "r"(a2), "r"(a6), "r"(a7) \
+		     : "memory"); \
+	a0; \
+})
+#define SBI_ECALL_0(__eid, __fid) SBI_ECALL(__eid, __fid, 0, 0, 0)
+#define SBI_ECALL_1(__eid, __fid, __a0) SBI_ECALL(__eid, __fid, __a0, 0, 0)
+#define SBI_ECALL_2(__eid, __fid, __a0, __a1) SBI_ECALL(__eid, __fid, __a0, __a1, 0)
+#define sbi_ecall_console_putc(c) SBI_ECALL_1(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, (c))
static inline void sbi_ecall_console_puts(const char *str)
{
-	sbi_ecall(SBI_EXT_DBCN, SBI_EXT_DBCN_CONSOLE_WRITE,
-		  sbi_strlen(str), (unsigned long)str, 0, 0, 0, 0);
+	while (str && *str)
+		sbi_ecall_console_putc(*str++);
}
#define wfi() \
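The macro-based, putchar-style variant of this console code emits one ecall per character. Its loop can be modeled host-side by swapping the ecall for a buffer write; everything below is an illustrative stand-in, not OpenSBI code:

```c
#include <stddef.h>

/* Host-side model of the putchar-based console loop: the
 * SBI_EXT_0_1_CONSOLE_PUTCHAR ecall is replaced by a buffer write so
 * the loop's behaviour can be observed without an SBI implementation. */
static char console_buf[64];
static size_t console_len;

static void console_putc_model(char c)
{
	if (console_len < sizeof(console_buf) - 1)
		console_buf[console_len++] = c;
}

static void console_puts_model(const char *str)
{
	/* Same guard as sbi_ecall_console_puts: tolerate a NULL string. */
	while (str && *str)
		console_putc_model(*str++);
}
```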

View File

@@ -181,12 +181,6 @@ int misa_xlen(void);
/* Get RISC-V ISA string representation */
void misa_string(int xlen, char *out, unsigned int out_sz);
/* Disable pmp entry at a given index */
int pmp_disable(unsigned int n);
/* Check if the matching field is set */
int is_pmp_entry_mapped(unsigned long entry);
int pmp_set(unsigned int n, unsigned long prot, unsigned long addr,
	    unsigned long log2len);

View File

@@ -39,14 +39,14 @@ unsigned int atomic_raw_xchg_uint(volatile unsigned int *ptr,
unsigned long atomic_raw_xchg_ulong(volatile unsigned long *ptr,
				    unsigned long newval);
/**
-* Set a bit in an atomic variable and return the value of bit before modify.
+* Set a bit in an atomic variable and return the new value.
* @nr : Bit to set.
* @atom: atomic variable to modify
*/
int atomic_set_bit(int nr, atomic_t *atom);
/**
-* Clear a bit in an atomic variable and return the value of bit before modify.
+* Clear a bit in an atomic variable and return the new value.
* @nr : Bit to set.
* @atom: atomic variable to modify
*/
@@ -54,14 +54,14 @@ int atomic_set_bit(int nr, atomic_t *atom);
int atomic_clear_bit(int nr, atomic_t *atom);
/**
-* Set a bit in any address and return the value of bit before modify.
+* Set a bit in any address and return the new value.
* @nr : Bit to set.
* @addr: Address to modify
*/
int atomic_raw_set_bit(int nr, volatile unsigned long *addr);
/**
-* Clear a bit in any address and return the value of bit before modify.
+* Clear a bit in any address and return the new value.
* @nr : Bit to set.
* @addr: Address to modify
*/
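The two sides of this hunk disagree on whether the helpers return the bit's value before the modification or the new value. The test-and-set style ("value before modify") can be modeled non-atomically as follows; this is an illustrative model with made-up names, not the OpenSBI implementation:

```c
#include <limits.h>

#define MODEL_BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/* Non-atomic model of 'set a bit and return its value before the
 * modification' (test-and-set semantics). A real helper would do the
 * read-modify-write atomically, e.g. with an AMO instruction. */
static int raw_set_bit_model(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % MODEL_BITS_PER_LONG);
	unsigned long *word = addr + nr / MODEL_BITS_PER_LONG;
	int old = (*word & mask) != 0;

	*word |= mask;
	return old;
}
```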

View File

@@ -1,6 +1,14 @@
#ifndef __RISCV_ELF_H__
#define __RISCV_ELF_H__
#include <sbi/riscv_asm.h>
#define R_RISCV_32 1
#define R_RISCV_64 2
#define R_RISCV_RELATIVE 3
#define RELOC_TYPE __REG_SEL(R_RISCV_64, R_RISCV_32)
#define SYM_INDEX __REG_SEL(0x20, 0x8)
#define SYM_SIZE __REG_SEL(0x18, 0x10)
#endif
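These constants follow the standard ELF packing of a relocation's r_info word: the symbol-table index sits above the relocation type, shifted by 32 bits for ELF64 and 8 bits for ELF32 (the SYM_INDEX values 0x20/0x8), and SYM_SIZE is sizeof(Elf64_Sym) = 24 or sizeof(Elf32_Sym) = 16. A host-side sketch of the ELF64 decode that the relocation assembly performs (macro names here are illustrative):

```c
#include <stdint.h>

#define MODEL_SYM_INDEX  0x20	/* ELF64: symbol index lives above bit 32 */
#define MODEL_R_RISCV_64 2	/* absolute 64-bit relocation type */

/* Extract the symbol-table index from r_info (mirrors 'srli t6, t5,
 * SYM_INDEX' in the relocation code earlier in this diff). */
static uint64_t reloc_sym_index(uint64_t r_info)
{
	return r_info >> MODEL_SYM_INDEX;
}

/* Extract the relocation type. Masking with 0xff (like the 'andi t5,
 * t5, 0xFF' in the assembly) is sufficient because the R_RISCV_*
 * values used here all fit in 8 bits. */
static uint64_t reloc_type(uint64_t r_info)
{
	return r_info & 0xff;
}
```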

View File

@@ -207,8 +207,13 @@
#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
#if __riscv_xlen > 32
#define ENVCFG_STCE (_ULL(1) << 63)
#define ENVCFG_PBMTE (_ULL(1) << 62)
#else
#define ENVCFGH_STCE (_UL(1) << 31)
#define ENVCFGH_PBMTE (_UL(1) << 30)
#endif
#define ENVCFG_CBZE (_UL(1) << 7)
#define ENVCFG_CBCFE (_UL(1) << 6)
#define ENVCFG_CBIE_SHIFT 4
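The `#if __riscv_xlen > 32` split exists because menvcfg is a 32-bit CSR on RV32, so bits 62/63 are only reachable through the separate high-half CSR menvcfgh, where they appear as bits 30/31. The two views select the same logical bit, which a quick model (with illustrative macro names) shows:

```c
#include <stdint.h>

/* STCE as seen by RV64 (bit 63 of menvcfg) and by RV32 (bit 31 of
 * the upper-half CSR menvcfgh). Shifting the 64-bit view down by 32
 * must land exactly on the RV32 high-half definition. */
#define MODEL_ENVCFG_STCE  (1ULL << 63)
#define MODEL_ENVCFGH_STCE (1UL << 31)
```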
@@ -425,7 +430,6 @@
#define CSR_MARCHID 0xf12
#define CSR_MIMPID 0xf13
#define CSR_MHARTID 0xf14
#define CSR_MCONFIGPTR 0xf15
/* Machine Trap Setup */
#define CSR_MSTATUS 0x300
@@ -598,8 +602,6 @@
/* Machine Counter Setup */
#define CSR_MCOUNTINHIBIT 0x320
#define CSR_MCYCLECFG 0x321
#define CSR_MINSTRETCFG 0x322
#define CSR_MHPMEVENT3 0x323
#define CSR_MHPMEVENT4 0x324
#define CSR_MHPMEVENT5 0x325
@@ -631,8 +633,6 @@
#define CSR_MHPMEVENT31 0x33f
/* For RV32 */
#define CSR_MCYCLECFGH 0x721
#define CSR_MINSTRETCFGH 0x722
#define CSR_MHPMEVENT3H 0x723
#define CSR_MHPMEVENT4H 0x724
#define CSR_MHPMEVENT5H 0x725
@@ -663,21 +663,6 @@
#define CSR_MHPMEVENT30H 0x73e
#define CSR_MHPMEVENT31H 0x73f
/* Machine Security Configuration CSR (mseccfg) */
#define CSR_MSECCFG 0x747
#define CSR_MSECCFGH 0x757
#define MSECCFG_MML_SHIFT (0)
#define MSECCFG_MML (_UL(1) << MSECCFG_MML_SHIFT)
#define MSECCFG_MMWP_SHIFT (1)
#define MSECCFG_MMWP (_UL(1) << MSECCFG_MMWP_SHIFT)
#define MSECCFG_RLB_SHIFT (2)
#define MSECCFG_RLB (_UL(1) << MSECCFG_RLB_SHIFT)
#define MSECCFG_USEED_SHIFT (8)
#define MSECCFG_USEED (_UL(1) << MSECCFG_USEED_SHIFT)
#define MSECCFG_SSEED_SHIFT (9)
#define MSECCFG_SSEED (_UL(1) << MSECCFG_SSEED_SHIFT)
/* Counter Overflow CSR */
#define CSR_SCOUNTOVF 0xda0

View File

@@ -84,7 +84,7 @@
#define GET_FFLAGS() csr_read(CSR_FFLAGS)
#define SET_FFLAGS(value) csr_write(CSR_FFLAGS, (value))
-#define SET_FS_DIRTY(regs) (regs->mstatus |= MSTATUS_FS)
+#define SET_FS_DIRTY() ((void)0)
#define GET_F32_RS1(insn, regs) (GET_F32_REG(insn, 15, regs))
#define GET_F32_RS2(insn, regs) (GET_F32_REG(insn, 20, regs))
@@ -93,9 +93,9 @@
#define GET_F64_RS2(insn, regs) (GET_F64_REG(insn, 20, regs))
#define GET_F64_RS3(insn, regs) (GET_F64_REG(insn, 27, regs))
#define SET_F32_RD(insn, regs, val) \
-	(SET_F32_REG(insn, 7, regs, val), SET_FS_DIRTY(regs))
+	(SET_F32_REG(insn, 7, regs, val), SET_FS_DIRTY())
#define SET_F64_RD(insn, regs, val) \
-	(SET_F64_REG(insn, 7, regs, val), SET_FS_DIRTY(regs))
+	(SET_F64_REG(insn, 7, regs, val), SET_FS_DIRTY())
#define GET_F32_RS2C(insn, regs) (GET_F32_REG(insn, 2, regs))
#define GET_F32_RS2S(insn, regs) (GET_F32_REG(RVC_RS2S(insn), 0, regs))

View File

@@ -26,7 +26,6 @@
#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(bit) ((bit) / BITS_PER_LONG)
#define BIT_WORD_OFFSET(bit) ((bit) & (BITS_PER_LONG - 1))
#define BIT_ALIGN(bit, align) (((bit) + ((align) - 1)) & ~((align) - 1))
#define GENMASK(h, l) \
	(((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
@@ -113,22 +112,6 @@ static inline unsigned long sbi_fls(unsigned long word)
return num;
}
/**
* sbi_popcount - find the number of set bit in a long word
* @word: the word to search
*/
static inline unsigned long sbi_popcount(unsigned long word)
{
unsigned long count = 0;
while (word) {
word &= word - 1;
count++;
}
return count;
}
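The sbi_popcount helper dropped in this hunk uses Kernighan's trick, which is worth noting because it runs once per set bit rather than once per bit position. A standalone copy (reproduced from the hunk, renamed to make clear it is only a model):

```c
/* Kernighan's method: 'word &= word - 1' clears the lowest set bit,
 * so the loop body executes exactly once per set bit in 'word'. */
static unsigned long popcount_model(unsigned long word)
{
	unsigned long count = 0;

	while (word) {
		word &= word - 1;
		count++;
	}
	return count;
}
```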
#define for_each_set_bit(bit, addr, size) \
	for ((bit) = find_first_bit((addr), (size)); \
	     (bit) < (size); \

View File

@@ -43,80 +43,6 @@ struct sbi_domain_memregion {
#define SBI_DOMAIN_MEMREGION_SU_WRITABLE (1UL << 4)
#define SBI_DOMAIN_MEMREGION_SU_EXECUTABLE (1UL << 5)
#define SBI_DOMAIN_MEMREGION_ACCESS_MASK (0x3fUL)
#define SBI_DOMAIN_MEMREGION_M_ACCESS_MASK (0x7UL)
#define SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK (0x38UL)
#define SBI_DOMAIN_MEMREGION_SU_ACCESS_SHIFT (3)
#define SBI_DOMAIN_MEMREGION_SHARED_RDONLY \
(SBI_DOMAIN_MEMREGION_M_READABLE | \
SBI_DOMAIN_MEMREGION_SU_READABLE)
#define SBI_DOMAIN_MEMREGION_SHARED_SUX_MRX \
(SBI_DOMAIN_MEMREGION_M_READABLE | \
SBI_DOMAIN_MEMREGION_M_EXECUTABLE | \
SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
#define SBI_DOMAIN_MEMREGION_SHARED_SUX_MX \
(SBI_DOMAIN_MEMREGION_M_EXECUTABLE | \
SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
#define SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW \
(SBI_DOMAIN_MEMREGION_M_READABLE | \
SBI_DOMAIN_MEMREGION_M_WRITABLE | \
SBI_DOMAIN_MEMREGION_SU_READABLE| \
SBI_DOMAIN_MEMREGION_SU_WRITABLE)
#define SBI_DOMAIN_MEMREGION_SHARED_SUR_MRW \
(SBI_DOMAIN_MEMREGION_M_READABLE | \
SBI_DOMAIN_MEMREGION_M_WRITABLE | \
SBI_DOMAIN_MEMREGION_SU_READABLE)
/* Shared read-only region between M and SU mode */
#define SBI_DOMAIN_MEMREGION_IS_SUR_MR(__flags) \
((__flags & SBI_DOMAIN_MEMREGION_ACCESS_MASK) == \
SBI_DOMAIN_MEMREGION_SHARED_RDONLY)
/* Shared region: SU execute-only and M read/execute */
#define SBI_DOMAIN_MEMREGION_IS_SUX_MRX(__flags) \
((__flags & SBI_DOMAIN_MEMREGION_ACCESS_MASK) == \
SBI_DOMAIN_MEMREGION_SHARED_SUX_MRX)
/* Shared region: SU and M execute-only */
#define SBI_DOMAIN_MEMREGION_IS_SUX_MX(__flags) \
((__flags & SBI_DOMAIN_MEMREGION_ACCESS_MASK) == \
SBI_DOMAIN_MEMREGION_SHARED_SUX_MX)
/* Shared region: SU and M read/write */
#define SBI_DOMAIN_MEMREGION_IS_SURW_MRW(__flags) \
((__flags & SBI_DOMAIN_MEMREGION_ACCESS_MASK) == \
SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW)
/* Shared region: SU read-only and M read/write */
#define SBI_DOMAIN_MEMREGION_IS_SUR_MRW(__flags) \
((__flags & SBI_DOMAIN_MEMREGION_ACCESS_MASK) == \
SBI_DOMAIN_MEMREGION_SHARED_SUR_MRW)
/*
* Check if region flags match with any of the above
* mentioned shared region type
*/
#define SBI_DOMAIN_MEMREGION_IS_SHARED(_flags) \
(SBI_DOMAIN_MEMREGION_IS_SUR_MR(_flags) || \
SBI_DOMAIN_MEMREGION_IS_SUX_MRX(_flags) || \
SBI_DOMAIN_MEMREGION_IS_SUX_MX(_flags) || \
SBI_DOMAIN_MEMREGION_IS_SURW_MRW(_flags)|| \
SBI_DOMAIN_MEMREGION_IS_SUR_MRW(_flags))
#define SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(__flags) \
((__flags & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK) && \
!(__flags & SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK))
#define SBI_DOMAIN_MEMREGION_SU_ONLY_ACCESS(__flags) \
((__flags & SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK) && \
!(__flags & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK))
/** Bit to control if permissions are enforced on all modes */
#define SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS (1UL << 6)
@@ -152,6 +78,12 @@ struct sbi_domain_memregion {
(SBI_DOMAIN_MEMREGION_SU_EXECUTABLE | \
SBI_DOMAIN_MEMREGION_M_EXECUTABLE)
#define SBI_DOMAIN_MEMREGION_ACCESS_MASK (0x3fUL)
#define SBI_DOMAIN_MEMREGION_M_ACCESS_MASK (0x7UL)
#define SBI_DOMAIN_MEMREGION_SU_ACCESS_MASK (0x38UL)
#define SBI_DOMAIN_MEMREGION_SU_ACCESS_SHIFT (3)
#define SBI_DOMAIN_MEMREGION_MMIO (1UL << 31) #define SBI_DOMAIN_MEMREGION_MMIO (1UL << 31)
unsigned long flags; unsigned long flags;
}; };
@@ -197,12 +129,12 @@ struct sbi_domain {
/** The root domain instance */ /** The root domain instance */
extern struct sbi_domain root; extern struct sbi_domain root;
/** Get pointer to sbi_domain from HART index */ /** Get pointer to sbi_domain from HART id */
struct sbi_domain *sbi_hartindex_to_domain(u32 hartindex); struct sbi_domain *sbi_hartid_to_domain(u32 hartid);
/** Get pointer to sbi_domain for current HART */ /** Get pointer to sbi_domain for current HART */
#define sbi_domain_thishart_ptr() \ #define sbi_domain_thishart_ptr() \
sbi_hartindex_to_domain(sbi_hartid_to_hartindex(current_hartid())) sbi_hartid_to_domain(current_hartid())
/** Index to domain table */ /** Index to domain table */
extern struct sbi_domain *domidx_to_domain_table[]; extern struct sbi_domain *domidx_to_domain_table[];
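The access-mask macros in this hunk encode M-mode permissions in bits [2:0] and SU-mode permissions in bits [5:3]. A minimal sketch of that encoding, with local stand-in names (the individual `*_READABLE`-style bit names below are assumptions, not taken from this diff):

```c
#include <assert.h>

/* Bit layout implied by the masks above:
 * bits [2:0] = M-mode R/W/X, bits [5:3] = SU-mode R/W/X. */
#define MEMREGION_M_READABLE	(1UL << 0)
#define MEMREGION_M_WRITABLE	(1UL << 1)
#define MEMREGION_M_EXECUTABLE	(1UL << 2)
#define MEMREGION_SU_READABLE	(1UL << 3)
#define MEMREGION_SU_WRITABLE	(1UL << 4)
#define MEMREGION_SU_EXECUTABLE	(1UL << 5)

#define MEMREGION_M_ACCESS_MASK		(0x7UL)
#define MEMREGION_SU_ACCESS_MASK	(0x38UL)

/* M-mode-only region: some M-mode permission set, no SU-mode permission */
static int memregion_m_only(unsigned long flags)
{
	return (flags & MEMREGION_M_ACCESS_MASK) &&
	       !(flags & MEMREGION_SU_ACCESS_MASK);
}
```

This mirrors the `SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS` test above: any M-mode bit set while all SU-mode bits are clear.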
@@ -13,20 +13,13 @@
 #include <sbi/sbi_types.h>
 #include <sbi/sbi_list.h>

-#define SBI_ECALL_VERSION_MAJOR		2
+#define SBI_ECALL_VERSION_MAJOR		1
 #define SBI_ECALL_VERSION_MINOR		0
 #define SBI_OPENSBI_IMPID		1

 struct sbi_trap_regs;
 struct sbi_trap_info;

-struct sbi_ecall_return {
-	/* Return flag to skip register update */
-	bool skip_regs_update;
-	/* Return value */
-	unsigned long value;
-};
-
 struct sbi_ecall_extension {
 	/* head is used by the extension list */
 	struct sbi_dlist head;

@@ -69,8 +62,9 @@ struct sbi_ecall_extension {
 	 * never invoked with an invalid or unavailable extension ID.
 	 */
 	int (* handle)(unsigned long extid, unsigned long funcid,
-		       struct sbi_trap_regs *regs,
-		       struct sbi_ecall_return *out);
+		       const struct sbi_trap_regs *regs,
+		       unsigned long *out_val,
+		       struct sbi_trap_info *out_trap);
 };

 u16 sbi_ecall_version_major(void);
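The handler signature change above replaces the `out_val`/`out_trap` pair with a single `struct sbi_ecall_return`. A minimal sketch of a handler in the newer convention; for brevity this drops the `regs` parameter and uses a made-up extension, so it is an illustration of the out-parameter shape only, not OpenSBI's actual dispatch code:

```c
#include <assert.h>
#include <stdbool.h>

/* Same two fields as the struct sbi_ecall_return removed in the diff */
struct sbi_ecall_return {
	bool skip_regs_update;	/* handler already updated regs itself */
	unsigned long value;	/* value returned to the caller in a1 */
};

/* A trivial handler: success status via the return code, the payload
 * via the out-parameter instead of separate out_val/out_trap pointers. */
static int demo_handle(unsigned long extid, unsigned long funcid,
		       struct sbi_ecall_return *out)
{
	(void)extid;
	(void)funcid;
	out->skip_regs_update = false;
	out->value = 42;
	return 0; /* SBI_SUCCESS */
}
```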
@@ -103,7 +103,6 @@
 #define SBI_EXT_PMU_COUNTER_STOP	0x4
 #define SBI_EXT_PMU_COUNTER_FW_READ	0x5
 #define SBI_EXT_PMU_COUNTER_FW_READ_HI	0x6
-#define SBI_EXT_PMU_SNAPSHOT_SET_SHMEM	0x7

 /** General pmu event codes specified in SBI PMU extension */
 enum sbi_pmu_hw_generic_events_t {

@@ -242,11 +241,9 @@ enum sbi_pmu_ctr_type {
 /* Flags defined for counter start function */
 #define SBI_PMU_START_FLAG_SET_INIT_VALUE (1 << 0)
-#define SBI_PMU_START_FLAG_INIT_FROM_SNAPSHOT (1 << 1)

 /* Flags defined for counter stop function */
 #define SBI_PMU_STOP_FLAG_RESET (1 << 0)
-#define SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT (1 << 1)

 /* SBI function IDs for DBCN extension */
 #define SBI_EXT_DBCN_CONSOLE_WRITE		0x0

@@ -312,9 +309,8 @@ enum sbi_cppc_reg_id {
 #define SBI_ERR_ALREADY_AVAILABLE	-6
 #define SBI_ERR_ALREADY_STARTED		-7
 #define SBI_ERR_ALREADY_STOPPED		-8
-#define SBI_ERR_NO_SHMEM		-9

-#define SBI_LAST_ERR			SBI_ERR_NO_SHMEM
+#define SBI_LAST_ERR			SBI_ERR_ALREADY_STOPPED

 /* clang-format on */
@@ -23,7 +23,6 @@
 #define SBI_EALREADY		SBI_ERR_ALREADY_AVAILABLE
 #define SBI_EALREADY_STARTED	SBI_ERR_ALREADY_STARTED
 #define SBI_EALREADY_STOPPED	SBI_ERR_ALREADY_STOPPED
-#define SBI_ENO_SHMEM		SBI_ERR_NO_SHMEM

 #define SBI_ENODEV		-1000
 #define SBI_ENOSYS		-1001

@@ -32,8 +31,9 @@
 #define SBI_EILL		-1004
 #define SBI_ENOSPC		-1005
 #define SBI_ENOMEM		-1006
-#define SBI_EUNKNOWN		-1007
-#define SBI_ENOENT		-1008
+#define SBI_ETRAP		-1007
+#define SBI_EUNKNOWN		-1008
+#define SBI_ENOENT		-1009

 /* clang-format on */
@@ -11,7 +11,6 @@
 #define __SBI_HART_H__

 #include <sbi/sbi_types.h>
-#include <sbi/sbi_bitops.h>

 /** Possible privileged specification versions of a hart */
 enum sbi_hart_priv_versions {

@@ -27,67 +26,29 @@ enum sbi_hart_priv_versions {
 /** Possible ISA extensions of a hart */
 enum sbi_hart_extensions {
-	/** Hart has Sscofpmt extension */
-	SBI_HART_EXT_SSCOFPMF = 0,
-	/** HART has HW time CSR (extension name not available) */
-	SBI_HART_EXT_TIME,
 	/** HART has AIA M-mode CSRs */
-	SBI_HART_EXT_SMAIA,
-	/** HART has Smepmp */
-	SBI_HART_EXT_SMEPMP,
+	SBI_HART_EXT_SMAIA = 0,
 	/** HART has Smstateen CSR **/
 	SBI_HART_EXT_SMSTATEEN,
+	/** Hart has Sscofpmt extension */
+	SBI_HART_EXT_SSCOFPMF,
 	/** HART has Sstc extension */
 	SBI_HART_EXT_SSTC,
-	/** HART has Zicntr extension (i.e. HW cycle, time & instret CSRs) */
-	SBI_HART_EXT_ZICNTR,
-	/** HART has Zihpm extension */
-	SBI_HART_EXT_ZIHPM,
-	/** HART has Zkr extension */
-	SBI_HART_EXT_ZKR,
-	/** Hart has Smcntrpmf extension */
-	SBI_HART_EXT_SMCNTRPMF,
-	/** Hart has Xandespmu extension */
-	SBI_HART_EXT_XANDESPMU,
-	/** Hart has Zicboz extension */
-	SBI_HART_EXT_ZICBOZ,
-	/** Hart has Zicbom extension */
-	SBI_HART_EXT_ZICBOM,
-	/** Hart has Svpbmt extension */
-	SBI_HART_EXT_SVPBMT,

 	/** Maximum index of Hart extension */
 	SBI_HART_EXT_MAX,
 };

-struct sbi_hart_ext_data {
-	const unsigned int id;
-	const char *name;
-};
-
-extern const struct sbi_hart_ext_data sbi_hart_ext[];
-
-/*
- * Smepmp enforces access boundaries between M-mode and
- * S/U-mode. When it is enabled, the PMPs are programmed
- * such that M-mode doesn't have access to S/U-mode memory.
- *
- * To give M-mode R/W access to the shared memory between M and
- * S/U-mode, first entry is reserved. It is disabled at boot.
- * When shared memory access is required, the physical address
- * should be programmed into the first PMP entry with R/W
- * permissions to the M-mode. Once the work is done, it should be
- * unmapped. sbi_hart_map_saddr/sbi_hart_unmap_saddr function
- * pair should be used to map/unmap the shared memory.
- */
-#define SBI_SMEPMP_RESV_ENTRY		0
-
 struct sbi_hart_features {
 	bool detected;
 	int priv_version;
-	unsigned long extensions[BITS_TO_LONGS(SBI_HART_EXT_MAX)];
+	unsigned long extensions;
 	unsigned int pmp_count;
 	unsigned int pmp_addr_bits;
-	unsigned int pmp_log2gran;
-	unsigned int mhpm_mask;
+	unsigned long pmp_gran;
+	unsigned int mhpm_count;
 	unsigned int mhpm_bits;
 };

@@ -102,16 +63,14 @@ static inline ulong sbi_hart_expected_trap_addr(void)
 	return (ulong)sbi_hart_expected_trap;
 }

-unsigned int sbi_hart_mhpm_mask(struct sbi_scratch *scratch);
+unsigned int sbi_hart_mhpm_count(struct sbi_scratch *scratch);
 void sbi_hart_delegation_dump(struct sbi_scratch *scratch,
			      const char *prefix, const char *suffix);
 unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch);
-unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch);
+unsigned long sbi_hart_pmp_granularity(struct sbi_scratch *scratch);
 unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
 unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch);
 int sbi_hart_pmp_configure(struct sbi_scratch *scratch);
-int sbi_hart_map_saddr(unsigned long base, unsigned long size);
-int sbi_hart_unmap_saddr(void);
 int sbi_hart_priv_version(struct sbi_scratch *scratch);
 void sbi_hart_get_priv_version_str(struct sbi_scratch *scratch,
				   char *version_str, int nvstr);
@@ -11,7 +11,6 @@
 #define __SBI_HARTMASK_H__

 #include <sbi/sbi_bitmap.h>
-#include <sbi/sbi_scratch.h>

 /**
  * Maximum number of bits in a hartmask

@@ -33,10 +32,7 @@ struct sbi_hartmask {
 /** Initialize hartmask to zero except a particular HART id */
 #define SBI_HARTMASK_INIT_EXCEPT(__m, __h)	\
-	do { \
-		u32 __i = sbi_hartid_to_hartindex(__h); \
-		bitmap_zero_except(((__m)->bits), __i, SBI_HARTMASK_MAX_BITS); \
-	} while(0)
+	bitmap_zero_except(((__m)->bits), (__h), SBI_HARTMASK_MAX_BITS)

 /**
  * Get underlying bitmap of hartmask

@@ -45,68 +41,37 @@ struct sbi_hartmask {
 #define sbi_hartmask_bits(__m)	((__m)->bits)

 /**
- * Set a HART index in hartmask
- * @param i HART index to set
- * @param m the hartmask pointer
- */
-static inline void sbi_hartmask_set_hartindex(u32 i, struct sbi_hartmask *m)
-{
-	if (i < SBI_HARTMASK_MAX_BITS)
-		__set_bit(i, m->bits);
-}
-
-/**
- * Set a HART id in hartmask
+ * Set a HART in hartmask
  * @param h HART id to set
  * @param m the hartmask pointer
  */
-static inline void sbi_hartmask_set_hartid(u32 h, struct sbi_hartmask *m)
+static inline void sbi_hartmask_set_hart(u32 h, struct sbi_hartmask *m)
 {
-	sbi_hartmask_set_hartindex(sbi_hartid_to_hartindex(h), m);
+	if (h < SBI_HARTMASK_MAX_BITS)
+		__set_bit(h, m->bits);
 }

 /**
- * Clear a HART index in hartmask
- * @param i HART index to clear
- * @param m the hartmask pointer
- */
-static inline void sbi_hartmask_clear_hartindex(u32 i, struct sbi_hartmask *m)
-{
-	if (i < SBI_HARTMASK_MAX_BITS)
-		__clear_bit(i, m->bits);
-}
-
-/**
- * Clear a HART id in hartmask
+ * Clear a HART in hartmask
  * @param h HART id to clear
  * @param m the hartmask pointer
  */
-static inline void sbi_hartmask_clear_hartid(u32 h, struct sbi_hartmask *m)
+static inline void sbi_hartmask_clear_hart(u32 h, struct sbi_hartmask *m)
 {
-	sbi_hartmask_clear_hartindex(sbi_hartid_to_hartindex(h), m);
+	if (h < SBI_HARTMASK_MAX_BITS)
+		__clear_bit(h, m->bits);
 }

 /**
- * Test a HART index in hartmask
- * @param i HART index to test
- * @param m the hartmask pointer
- */
-static inline int sbi_hartmask_test_hartindex(u32 i,
-					      const struct sbi_hartmask *m)
-{
-	if (i < SBI_HARTMASK_MAX_BITS)
-		return __test_bit(i, m->bits);
-	return 0;
-}
-
-/**
- * Test a HART id in hartmask
+ * Test a HART in hartmask
  * @param h HART id to test
  * @param m the hartmask pointer
  */
-static inline int sbi_hartmask_test_hartid(u32 h, const struct sbi_hartmask *m)
+static inline int sbi_hartmask_test_hart(u32 h, const struct sbi_hartmask *m)
 {
-	return sbi_hartmask_test_hartindex(sbi_hartid_to_hartindex(h), m);
+	if (h < SBI_HARTMASK_MAX_BITS)
+		return __test_bit(h, m->bits);
+	return 0;
 }

 /**

@@ -169,14 +134,8 @@ static inline void sbi_hartmask_xor(struct sbi_hartmask *dstp,
			  sbi_hartmask_bits(src2p), SBI_HARTMASK_MAX_BITS);
 }

-/**
- * Iterate over each HART index in hartmask
- * __i hart index
- * __m hartmask
- */
-#define sbi_hartmask_for_each_hartindex(__i, __m) \
-	for((__i) = find_first_bit((__m)->bits, SBI_HARTMASK_MAX_BITS); \
-	    (__i) < SBI_HARTMASK_MAX_BITS; \
-	    (__i) = find_next_bit((__m)->bits, SBI_HARTMASK_MAX_BITS, (__i) + 1))
+/** Iterate over each HART in hartmask */
+#define sbi_hartmask_for_each_hart(__h, __m) \
+	for_each_set_bit(__h, (__m)->bits, SBI_HARTMASK_MAX_BITS)

 #endif
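Both hartmask variants above reduce to bounds-checked bitmap operations. A self-contained sketch of those semantics, with plain bit arithmetic standing in for `__set_bit`/`__test_bit` and a fixed 128-bit size chosen here for illustration:

```c
#include <assert.h>
#include <limits.h>

#define HARTMASK_MAX_BITS 128
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

struct hartmask {
	unsigned long bits[HARTMASK_MAX_BITS / BITS_PER_LONG];
};

/* Out-of-range indices are silently ignored, as in the header above */
static void hartmask_set(unsigned int i, struct hartmask *m)
{
	if (i < HARTMASK_MAX_BITS)
		m->bits[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
}

/* Out-of-range indices read back as clear */
static int hartmask_test(unsigned int i, const struct hartmask *m)
{
	if (i < HARTMASK_MAX_BITS)
		return (m->bits[i / BITS_PER_LONG] >> (i % BITS_PER_LONG)) & 1;
	return 0;
}
```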
@@ -12,9 +12,6 @@
 #include <sbi/sbi_types.h>

-/* Alignment of heap base address and size */
-#define HEAP_BASE_ALIGN		1024
-
 struct sbi_scratch;

 /** Allocate from heap area */
@@ -14,7 +14,7 @@
 /* clang-format off */

-#define SBI_IPI_EVENT_MAX		(8 * __SIZEOF_LONG__)
+#define SBI_IPI_EVENT_MAX		__riscv_xlen

 /* clang-format on */

@@ -23,11 +23,11 @@ struct sbi_ipi_device {
 	/** Name of the IPI device */
 	char name[32];

-	/** Send IPI to a target HART index */
-	void (*ipi_send)(u32 hart_index);
+	/** Send IPI to a target HART */
+	void (*ipi_send)(u32 target_hart);

-	/** Clear IPI for a target HART index */
-	void (*ipi_clear)(u32 hart_index);
+	/** Clear IPI for a target HART */
+	void (*ipi_clear)(u32 target_hart);
 };

 enum sbi_ipi_update_type {

@@ -54,7 +54,7 @@ struct sbi_ipi_event_ops {
 	 */
 	int (* update)(struct sbi_scratch *scratch,
		       struct sbi_scratch *remote_scratch,
-		       u32 remote_hartindex, void *data);
+		       u32 remote_hartid, void *data);

 	/**
 	 * Sync callback to wait for remote HART

@@ -85,9 +85,9 @@ int sbi_ipi_send_halt(ulong hmask, ulong hbase);

 void sbi_ipi_process(void);

-int sbi_ipi_raw_send(u32 hartindex);
-void sbi_ipi_raw_clear(u32 hartindex);
+int sbi_ipi_raw_send(u32 target_hart);
+void sbi_ipi_raw_clear(u32 target_hart);

 const struct sbi_ipi_device *sbi_ipi_get_device(void);
@@ -50,7 +50,7 @@
 #include <sbi/sbi_version.h>

 struct sbi_domain_memregion;
-struct sbi_ecall_return;
+struct sbi_trap_info;
 struct sbi_trap_regs;
 struct sbi_hart_features;

@@ -125,9 +125,6 @@ struct sbi_platform_operations {
 	/** Get tlb flush limit value **/
 	u64 (*get_tlbr_flush_limit)(void);

-	/** Get tlb fifo num entries*/
-	u32 (*get_tlb_num_entries)(void);
-
 	/** Initialize platform timer for current HART */
 	int (*timer_init)(bool cold_boot);
 	/** Exit platform timer for current HART */

@@ -137,8 +134,9 @@ struct sbi_platform_operations {
 	bool (*vendor_ext_check)(void);
 	/** platform specific SBI extension implementation provider */
 	int (*vendor_ext_provider)(long funcid,
-				   struct sbi_trap_regs *regs,
-				   struct sbi_ecall_return *out);
+				   const struct sbi_trap_regs *regs,
+				   unsigned long *out_value,
+				   struct sbi_trap_info *out_trap);
 };

 /** Platform default per-HART stack size for exception/interrupt handling */

@@ -260,6 +258,16 @@ _Static_assert(
 #define sbi_platform_has_mfaults_delegation(__p)	\
 	((__p)->features & SBI_PLATFORM_HAS_MFAULTS_DELEGATION)

+/**
+ * Get HART index for the given HART
+ *
+ * @param plat pointer to struct sbi_platform
+ * @param hartid HART ID
+ *
+ * @return 0 <= value < hart_count for valid HART otherwise -1U
+ */
+u32 sbi_platform_hart_index(const struct sbi_platform *plat, u32 hartid);
+
 /**
  * Get the platform features in string format
  *

@@ -317,20 +325,6 @@ static inline u64 sbi_platform_tlbr_flush_limit(const struct sbi_platform *plat)
 	return SBI_PLATFORM_TLB_RANGE_FLUSH_LIMIT_DEFAULT;
 }

-/**
- * Get platform specific tlb fifo num entries.
- *
- * @param plat pointer to struct sbi_platform
- *
- * @return number of tlb fifo entries
- */
-static inline u32 sbi_platform_tlb_fifo_num_entries(const struct sbi_platform *plat)
-{
-	if (plat && sbi_platform_ops(plat)->get_tlb_num_entries)
-		return sbi_platform_ops(plat)->get_tlb_num_entries();
-	return sbi_scratch_last_hartindex() + 1;
-}
-
 /**
  * Get total number of HARTs supported by the platform
  *

@@ -359,6 +353,24 @@ static inline u32 sbi_platform_hart_stack_size(const struct sbi_platform *plat)
 	return 0;
 }

+/**
+ * Check whether given HART is invalid
+ *
+ * @param plat pointer to struct sbi_platform
+ * @param hartid HART ID
+ *
+ * @return true if HART is invalid and false otherwise
+ */
+static inline bool sbi_platform_hart_invalid(const struct sbi_platform *plat,
+					     u32 hartid)
+{
+	if (!plat)
+		return true;
+	if (plat->hart_count <= sbi_platform_hart_index(plat, hartid))
+		return true;
+	return false;
+}
+
 /**
  * Check whether given HART is allowed to do cold boot
  *

@@ -665,12 +677,16 @@ static inline bool sbi_platform_vendor_ext_check(
 static inline int sbi_platform_vendor_ext_provider(
					const struct sbi_platform *plat,
					long funcid,
-					struct sbi_trap_regs *regs,
-					struct sbi_ecall_return *out)
+					const struct sbi_trap_regs *regs,
+					unsigned long *out_value,
+					struct sbi_trap_info *out_trap)
 {
-	if (plat && sbi_platform_ops(plat)->vendor_ext_provider)
+	if (plat && sbi_platform_ops(plat)->vendor_ext_provider) {
		return sbi_platform_ops(plat)->vendor_ext_provider(funcid,
-								regs, out);
+								regs,
+								out_value,
+								out_trap);
+	}

	return SBI_ENOTSUPP;
 }
@@ -23,7 +23,6 @@ struct sbi_scratch;
 #define SBI_PMU_HW_CTR_MAX	32
 #define SBI_PMU_CTR_MAX		(SBI_PMU_HW_CTR_MAX + SBI_PMU_FW_CTR_MAX)
 #define SBI_PMU_FIXED_CTR_MASK	0x07
-#define SBI_PMU_CY_IR_MASK	0x05

 struct sbi_pmu_device {
 	/** Name of the PMU platform device */

@@ -90,12 +89,6 @@ struct sbi_pmu_device {
 	 * Custom function returning the machine-specific irq-bit.
 	 */
 	int (*hw_counter_irq_bit)(void);
-
-	/**
-	 * Custom function to inhibit counting of events while in
-	 * specified mode.
-	 */
-	void (*hw_counter_filter_mode)(unsigned long flags, int counter_index);
 };

 /** Get the PMU platform device */
@@ -202,51 +202,18 @@ do { \
 	= (__type)(__ptr); \
 } while (0)

-/** Last HART index having a sbi_scratch pointer */
-extern u32 last_hartindex_having_scratch;
-
-/** Get last HART index having a sbi_scratch pointer */
-#define sbi_scratch_last_hartindex()	last_hartindex_having_scratch
-
-/** Check whether a particular HART index is valid or not */
-#define sbi_hartindex_valid(__hartindex) \
-	(((__hartindex) <= sbi_scratch_last_hartindex()) ? true : false)
-
-/** HART index to HART id table */
-extern u32 hartindex_to_hartid_table[];
-
-/** Get sbi_scratch from HART index */
-#define sbi_hartindex_to_hartid(__hartindex)		\
-({							\
-	((__hartindex) <= sbi_scratch_last_hartindex()) ?\
-	hartindex_to_hartid_table[__hartindex] : -1U;	\
-})
-
-/** HART index to scratch table */
-extern struct sbi_scratch *hartindex_to_scratch_table[];
-
-/** Get sbi_scratch from HART index */
-#define sbi_hartindex_to_scratch(__hartindex)		\
-({							\
-	((__hartindex) <= sbi_scratch_last_hartindex()) ?\
-	hartindex_to_scratch_table[__hartindex] : NULL;\
-})
-
-/**
- * Get logical index for given HART id
- * @param hartid physical HART id
- * @returns value between 0 to SBI_HARTMASK_MAX_BITS upon success and
- *	     SBI_HARTMASK_MAX_BITS upon failure.
- */
-u32 sbi_hartid_to_hartindex(u32 hartid);
+/** HART id to scratch table */
+extern struct sbi_scratch *hartid_to_scratch_table[];

 /** Get sbi_scratch from HART id */
 #define sbi_hartid_to_scratch(__hartid) \
-	sbi_hartindex_to_scratch(sbi_hartid_to_hartindex(__hartid))
+	hartid_to_scratch_table[__hartid]

-/** Check whether particular HART id is valid or not */
-#define sbi_hartid_valid(__hartid) \
-	sbi_hartindex_valid(sbi_hartid_to_hartindex(__hartid))
+/** Last HART id having a sbi_scratch pointer */
+extern u32 last_hartid_having_scratch;

+/** Get last HART id having a sbi_scratch pointer */
+#define sbi_scratch_last_hartid()	last_hartid_having_scratch

 #endif
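The removed ("-") side above interposes a hartid-to-hartindex translation so that sparse physical HART ids map to a dense 0..N-1 index before any table lookup. A sketch of that indirection with a made-up three-hart platform (the real code returns `SBI_HARTMASK_MAX_BITS` on failure; `-1U` is used here for simplicity):

```c
#include <assert.h>

#define DEMO_HART_COUNT	3
#define INVALID_INDEX	(-1U)

/* Dense hart index -> physical hart id, e.g. a platform with ids 0, 4, 5 */
static const unsigned int hartindex_to_hartid_table[DEMO_HART_COUNT] = { 0, 4, 5 };

/* Reverse lookup: physical hart id -> dense index, or INVALID_INDEX */
static unsigned int hartid_to_hartindex(unsigned int hartid)
{
	for (unsigned int i = 0; i < DEMO_HART_COUNT; i++)
		if (hartindex_to_hartid_table[i] == hartid)
			return i;
	return INVALID_INDEX;
}
```

The point of the indirection is that per-hart tables (scratch, domains) can be sized by hart count instead of by the largest physical hart id.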
@@ -20,35 +20,34 @@
 /* clang-format on */

-struct sbi_scratch;
-
-enum sbi_tlb_type {
-	SBI_TLB_FENCE_I = 0,
-	SBI_TLB_SFENCE_VMA,
-	SBI_TLB_SFENCE_VMA_ASID,
-	SBI_TLB_HFENCE_GVMA_VMID,
-	SBI_TLB_HFENCE_GVMA,
-	SBI_TLB_HFENCE_VVMA_ASID,
-	SBI_TLB_HFENCE_VVMA,
-	SBI_TLB_TYPE_MAX,
-};
+#define SBI_TLB_FIFO_NUM_ENTRIES	8
+
+struct sbi_scratch;

 struct sbi_tlb_info {
 	unsigned long start;
 	unsigned long size;
-	uint16_t asid;
-	uint16_t vmid;
-	enum sbi_tlb_type type;
+	unsigned long asid;
+	unsigned long vmid;
+	void (*local_fn)(struct sbi_tlb_info *tinfo);
 	struct sbi_hartmask smask;
 };

-#define SBI_TLB_INFO_INIT(__p, __start, __size, __asid, __vmid, __type, __src) \
+void sbi_tlb_local_hfence_vvma(struct sbi_tlb_info *tinfo);
+void sbi_tlb_local_hfence_gvma(struct sbi_tlb_info *tinfo);
+void sbi_tlb_local_sfence_vma(struct sbi_tlb_info *tinfo);
+void sbi_tlb_local_hfence_vvma_asid(struct sbi_tlb_info *tinfo);
+void sbi_tlb_local_hfence_gvma_vmid(struct sbi_tlb_info *tinfo);
+void sbi_tlb_local_sfence_vma_asid(struct sbi_tlb_info *tinfo);
+void sbi_tlb_local_fence_i(struct sbi_tlb_info *tinfo);
+
+#define SBI_TLB_INFO_INIT(__p, __start, __size, __asid, __vmid, __lfn, __src) \
 do { \
	(__p)->start = (__start); \
	(__p)->size = (__size); \
	(__p)->asid = (__asid); \
	(__p)->vmid = (__vmid); \
-	(__p)->type = (__type); \
+	(__p)->local_fn = (__lfn); \
	SBI_HARTMASK_INIT_EXCEPT(&(__p)->smask, (__src)); \
 } while (0)
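The `SBI_TLB_INFO_INIT` macro above is a plain field-by-field initializer wrapped in `do { } while (0)` so it behaves like a single statement. A minimal mock of the "-" side variant (enum tag instead of function pointer); the hartmask member is dropped here and all names are local stand-ins:

```c
#include <assert.h>

enum tlb_type { TLB_FENCE_I, TLB_SFENCE_VMA, TLB_SFENCE_VMA_ASID };

struct tlb_info {
	unsigned long start, size;
	unsigned short asid, vmid;
	enum tlb_type type;
};

/* do/while(0) makes the multi-statement macro safe after an if() */
#define TLB_INFO_INIT(__p, __start, __size, __asid, __vmid, __type) \
do { \
	(__p)->start = (__start); \
	(__p)->size  = (__size); \
	(__p)->asid  = (__asid); \
	(__p)->vmid  = (__vmid); \
	(__p)->type  = (__type); \
} while (0)
```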
@@ -11,7 +11,7 @@
 #define __SBI_VERSION_H__

 #define OPENSBI_VERSION_MAJOR	1
-#define OPENSBI_VERSION_MINOR	4
+#define OPENSBI_VERSION_MINOR	3

 /**
  * OpenSBI 32-bit version with:
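The comment cut off above describes OpenSBI's packed 32-bit version word: in OpenSBI it carries the major number in the upper 16 bits and the minor in the lower 16 bits. Shown here with local macro names for the 1.3 values in this diff:

```c
#include <assert.h>

#define DEMO_VERSION_MAJOR	1
#define DEMO_VERSION_MINOR	3

/* major in bits [31:16], minor in bits [15:0] */
#define DEMO_VERSION ((DEMO_VERSION_MAJOR << 16) | DEMO_VERSION_MINOR)
```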
@@ -48,9 +48,6 @@ int fdt_parse_phandle_with_args(void *fdt, int nodeoff,
 int fdt_get_node_addr_size(void *fdt, int node, int index,
			   uint64_t *addr, uint64_t *size);

-int fdt_get_node_addr_size_by_name(void *fdt, int node, const char *name,
-				   uint64_t *addr, uint64_t *size);
-
 bool fdt_node_is_enabled(void *fdt, int nodeoff);

 int fdt_parse_hart_id(void *fdt, int cpu_offset, u32 *hartid);

@@ -59,9 +56,6 @@ int fdt_parse_max_enabled_hart_id(void *fdt, u32 *max_hartid);

 int fdt_parse_timebase_frequency(void *fdt, unsigned long *freq);

-int fdt_parse_isa_extensions(void *fdt, unsigned int hard_id,
-			     unsigned long *extensions);
-
 int fdt_parse_gaisler_uart_node(void *fdt, int nodeoffset,
				struct platform_uart_data *uart);

@@ -99,8 +93,7 @@ int fdt_parse_plic_node(void *fdt, int nodeoffset, struct plic_data *plic);
 int fdt_parse_plic(void *fdt, struct plic_data *plic, const char *compat);

-int fdt_parse_aclint_node(void *fdt, int nodeoffset,
-			  bool for_timer, bool allow_regname,
+int fdt_parse_aclint_node(void *fdt, int nodeoffset, bool for_timer,
			  unsigned long *out_addr1, unsigned long *out_size1,
			  unsigned long *out_addr2, unsigned long *out_size2,
			  u32 *out_first_hartid, u32 *out_hart_count);
@@ -13,23 +13,6 @@
 #include <sbi/sbi_types.h>

-struct fdt_pmu_hw_event_select_map {
-	uint32_t eidx;
-	uint64_t select;
-};
-
-struct fdt_pmu_hw_event_counter_map {
-	uint32_t eidx_start;
-	uint32_t eidx_end;
-	uint32_t ctr_map;
-};
-
-struct fdt_pmu_raw_event_counter_map {
-	uint64_t select;
-	uint64_t select_mask;
-	uint32_t ctr_map;
-};
-
 #ifdef CONFIG_FDT_PMU

 /**

@@ -43,7 +26,7 @@ struct fdt_pmu_raw_event_counter_map {
  *
  * @param fdt device tree blob
  */
-int fdt_pmu_fixup(void *fdt);
+void fdt_pmu_fixup(void *fdt);

 /**
  * Setup PMU data from device tree

@@ -62,11 +45,6 @@ int fdt_pmu_setup(void *fdt);
  */
 uint64_t fdt_pmu_get_select_value(uint32_t event_idx);

-/** The event index to selector value table instance */
-extern struct fdt_pmu_hw_event_select_map fdt_pmu_evt_select[];
-
-/** The number of valid entries in fdt_pmu_evt_select[] */
-extern uint32_t hw_event_count;
-
 #else

 static inline void fdt_pmu_fixup(void *fdt) { }
@@ -15,6 +15,9 @@
 /** Representation of a I2C adapter */
 struct i2c_adapter {
-	/** Pointer to I2C driver owning this I2C adapter */
-	void *driver;
-
 	/** Unique ID of the I2C adapter assigned by the driver */
 	int id;
@@ -16,6 +16,7 @@
 #define PLICSW_PRIORITY_BASE		0x4

 #define PLICSW_PENDING_BASE		0x1000
+#define PLICSW_PENDING_STRIDE		0x8

 #define PLICSW_ENABLE_BASE		0x2000
 #define PLICSW_ENABLE_STRIDE		0x80

@@ -24,12 +25,18 @@
 #define PLICSW_CONTEXT_STRIDE		0x1000
 #define PLICSW_CONTEXT_CLAIM		0x4

+#define PLICSW_HART_MASK		0x01010101
 #define PLICSW_HART_MAX_NR		8

 #define PLICSW_REGION_ALIGN		0x1000

 struct plicsw_data {
	unsigned long addr;
	unsigned long size;
	uint32_t hart_count;
-	/* hart id to source id table */
-	uint32_t source_id[PLICSW_HART_MAX_NR];
 };

 int plicsw_warm_ipi_init(void);
@@ -14,7 +14,6 @@
 struct plic_data {
	unsigned long addr;
-	unsigned long size;
	unsigned long num_src;
 };
@@ -1,31 +0,0 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#ifndef __FDT_REGMAP_H__
#define __FDT_REGMAP_H__
#include <sbi_utils/regmap/regmap.h>
struct fdt_phandle_args;
/** FDT based regmap driver */
struct fdt_regmap {
const struct fdt_match *match_table;
int (*init)(void *fdt, int nodeoff, u32 phandle,
const struct fdt_match *match);
};
/** Get regmap instance based on phandle */
int fdt_regmap_get_by_phandle(void *fdt, u32 phandle,
struct regmap **out_rmap);
/** Get regmap instance based on "regmap' property of the specified DT node */
int fdt_regmap_get(void *fdt, int nodeoff, struct regmap **out_rmap);
#endif
@@ -1,67 +0,0 @@
/*
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2023 Ventana Micro Systems Inc.
*
* Authors:
* Anup Patel <apatel@ventanamicro.com>
*/
#ifndef __REGMAP_H__
#define __REGMAP_H__
#include <sbi/sbi_types.h>
#include <sbi/sbi_list.h>
/** Representation of a regmap instance */
struct regmap {
/** Uniquie ID of the regmap instance assigned by the driver */
unsigned int id;
/** Configuration of regmap registers */
int reg_shift;
int reg_stride;
unsigned int reg_base;
unsigned int reg_max;
/** Read a regmap register */
int (*reg_read)(struct regmap *rmap, unsigned int reg,
unsigned int *val);
/** Write a regmap register */
int (*reg_write)(struct regmap *rmap, unsigned int reg,
unsigned int val);
/** Read-modify-write a regmap register */
int (*reg_update_bits)(struct regmap *rmap, unsigned int reg,
unsigned int mask, unsigned int val);
/** List */
struct sbi_dlist node;
};
static inline struct regmap *to_regmap(struct sbi_dlist *node)
{
return container_of(node, struct regmap, node);
}
/** Find a registered regmap instance */
struct regmap *regmap_find(unsigned int id);
/** Register a regmap instance */
int regmap_add(struct regmap *rmap);
/** Un-register a regmap instance */
void regmap_remove(struct regmap *rmap);
/** Read a register in a regmap instance */
int regmap_read(struct regmap *rmap, unsigned int reg, unsigned int *val);
/** Write a register in a regmap instance */
int regmap_write(struct regmap *rmap, unsigned int reg, unsigned int val);
/** Read-modify-write a register in a regmap instance */
int regmap_update_bits(struct regmap *rmap, unsigned int reg,
unsigned int mask, unsigned int val);
#endif
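The deleted regmap interface above centers on three callbacks, of which `reg_update_bits` is a read-modify-write built from the other two. A sketch of those semantics backed by an in-memory register file instead of a real driver (all names here are local stand-ins):

```c
#include <assert.h>

#define NREGS 4
static unsigned int demo_regs[NREGS];

static int reg_read(unsigned int reg, unsigned int *val)
{
	if (reg >= NREGS)
		return -1;
	*val = demo_regs[reg];
	return 0;
}

static int reg_write(unsigned int reg, unsigned int val)
{
	if (reg >= NREGS)
		return -1;
	demo_regs[reg] = val;
	return 0;
}

/* Change only the bits selected by mask; leave the rest untouched */
static int reg_update_bits(unsigned int reg, unsigned int mask,
			   unsigned int val)
{
	unsigned int tmp;
	int rc = reg_read(reg, &tmp);

	if (rc)
		return rc;
	return reg_write(reg, (tmp & ~mask) | (val & mask));
}
```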
@@ -0,0 +1,17 @@
/*
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2020 Western Digital Corporation or its affiliates.
*
* Authors:
* Anup Patel <anup.patel@wdc.com>
*/
#ifndef __SYS_SIFIVE_TEST_H__
#define __SYS_SIFIVE_TEST_H__
#include <sbi/sbi_types.h>
int sifive_test_init(unsigned long base);
#endif
@@ -128,8 +128,6 @@ unsigned long csr_read_num(int csr_num)
 	switchcase_csr_read_8(CSR_MHPMCOUNTER8, ret)
 	switchcase_csr_read_16(CSR_MHPMCOUNTER16, ret)
 	switchcase_csr_read(CSR_MCOUNTINHIBIT, ret)
-	switchcase_csr_read(CSR_MCYCLECFG, ret)
-	switchcase_csr_read(CSR_MINSTRETCFG, ret)
 	switchcase_csr_read(CSR_MHPMEVENT3, ret)
 	switchcase_csr_read_4(CSR_MHPMEVENT4, ret)
 	switchcase_csr_read_8(CSR_MHPMEVENT8, ret)
@@ -141,12 +139,6 @@ unsigned long csr_read_num(int csr_num)
 	switchcase_csr_read_4(CSR_MHPMCOUNTER4H, ret)
 	switchcase_csr_read_8(CSR_MHPMCOUNTER8H, ret)
 	switchcase_csr_read_16(CSR_MHPMCOUNTER16H, ret)
-	/**
-	 * The CSR range M[CYCLE, INSTRET]CFGH are available only if smcntrpmf
-	 * extension is present. The caller must ensure that.
-	 */
-	switchcase_csr_read(CSR_MCYCLECFGH, ret)
-	switchcase_csr_read(CSR_MINSTRETCFGH, ret)
 	/**
 	 * The CSR range MHPMEVENT[3-16]H are available only if sscofpmf
 	 * extension is present. The caller must ensure that.
@@ -214,16 +206,12 @@ void csr_write_num(int csr_num, unsigned long val)
 	switchcase_csr_write_4(CSR_MHPMCOUNTER4H, val)
 	switchcase_csr_write_8(CSR_MHPMCOUNTER8H, val)
 	switchcase_csr_write_16(CSR_MHPMCOUNTER16H, val)
-	switchcase_csr_write(CSR_MCYCLECFGH, val)
-	switchcase_csr_write(CSR_MINSTRETCFGH, val)
 	switchcase_csr_write(CSR_MHPMEVENT3H, val)
 	switchcase_csr_write_4(CSR_MHPMEVENT4H, val)
 	switchcase_csr_write_8(CSR_MHPMEVENT8H, val)
 	switchcase_csr_write_16(CSR_MHPMEVENT16H, val)
 #endif
 	switchcase_csr_write(CSR_MCOUNTINHIBIT, val)
-	switchcase_csr_write(CSR_MCYCLECFG, val)
-	switchcase_csr_write(CSR_MINSTRETCFG, val)
 	switchcase_csr_write(CSR_MHPMEVENT3, val)
 	switchcase_csr_write_4(CSR_MHPMEVENT4, val)
 	switchcase_csr_write_8(CSR_MHPMEVENT8, val)
@@ -258,48 +246,6 @@ static unsigned long ctz(unsigned long x)
 	return ret;
 }
 
-int pmp_disable(unsigned int n)
-{
-	int pmpcfg_csr, pmpcfg_shift;
-	unsigned long cfgmask, pmpcfg;
-
-	if (n >= PMP_COUNT)
-		return SBI_EINVAL;
-
-#if __riscv_xlen == 32
-	pmpcfg_csr   = CSR_PMPCFG0 + (n >> 2);
-	pmpcfg_shift = (n & 3) << 3;
-#elif __riscv_xlen == 64
-	pmpcfg_csr   = (CSR_PMPCFG0 + (n >> 2)) & ~1;
-	pmpcfg_shift = (n & 7) << 3;
-#else
-# error "Unexpected __riscv_xlen"
-#endif
-
-	/* Clear the address matching bits to disable the pmp entry */
-	cfgmask = ~(0xffUL << pmpcfg_shift);
-	pmpcfg = (csr_read_num(pmpcfg_csr) & cfgmask);
-
-	csr_write_num(pmpcfg_csr, pmpcfg);
-
-	return SBI_OK;
-}
-
-int is_pmp_entry_mapped(unsigned long entry)
-{
-	unsigned long prot;
-	unsigned long addr;
-	unsigned long log2len;
-
-	pmp_get(entry, &prot, &addr, &log2len);
-
-	/* If address matching bits are non-zero, the entry is enable */
-	if (prot & PMP_A)
-		return true;
-
-	return false;
-}
-
 int pmp_set(unsigned int n, unsigned long prot, unsigned long addr,
 	    unsigned long log2len)
 {
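The removed pmp_disable() above encodes how PMP configuration bytes are packed: four byte-sized entries per pmpcfg CSR on RV32, and eight per even-numbered pmpcfg CSR on RV64 (odd pmpcfg CSRs do not exist there). A standalone sketch of that index/shift arithmetic, using hypothetical helper names rather than the OpenSBI API:

```c
#include <assert.h>

/* Hypothetical helpers (not OpenSBI API): locate the pmpcfg CSR slot
 * holding the config byte of PMP entry n for a given XLEN. */
static int pmpcfg_index(int n, int xlen)
{
	if (xlen == 32)
		return n >> 2;		/* pmpcfg0..pmpcfg15, 4 entries each */
	return (n >> 2) & ~1;		/* pmpcfg0, pmpcfg2, ... 8 entries each */
}

static int pmpcfg_shift(int n, int xlen)
{
	if (xlen == 32)
		return (n & 3) << 3;	/* byte lane within the 32-bit CSR */
	return (n & 7) << 3;		/* byte lane within the 64-bit CSR */
}
```

Clearing the 0xff byte at that shift, as pmp_disable() does, zeroes the entry's PMP_A address-matching field and thereby disables the entry.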


@@ -12,10 +12,6 @@
 #include <sbi/riscv_atomic.h>
 #include <sbi/riscv_barrier.h>
 
-#ifndef __riscv_atomic
-#error "opensbi strongly relies on the A extension of RISC-V"
-#endif
-
 long atomic_read(atomic_t *atom)
 {
 	long ret = atom->counter;
@@ -83,51 +79,175 @@ long atomic_sub_return(atomic_t *atom, long value)
 		(__typeof__(*(ptr))) __axchg((ptr), _x_, sizeof(*(ptr)));	\
 	})
 
+#define __xchg(ptr, new, size)						\
+	({								\
+		__typeof__(ptr) __ptr = (ptr);				\
+		__typeof__(*(ptr)) __new = (new);			\
+		__typeof__(*(ptr)) __ret;				\
+		register unsigned int __rc;				\
+		switch (size) {						\
+		case 4:							\
+			__asm__ __volatile__("0:	lr.w %0, %2\n"	\
+					     "	sc.w.rl %1, %z3, %2\n"	\
+					     "	bnez %1, 0b\n"		\
+					     "	fence rw, rw\n"		\
+					     : "=&r"(__ret), "=&r"(__rc), \
+					       "+A"(*__ptr)		\
+					     : "rJ"(__new)		\
+					     : "memory");		\
+			break;						\
+		case 8:							\
+			__asm__ __volatile__("0:	lr.d %0, %2\n"	\
+					     "	sc.d.rl %1, %z3, %2\n"	\
+					     "	bnez %1, 0b\n"		\
+					     "	fence rw, rw\n"		\
+					     : "=&r"(__ret), "=&r"(__rc), \
+					       "+A"(*__ptr)		\
+					     : "rJ"(__new)		\
+					     : "memory");		\
+			break;						\
+		default:						\
+			break;						\
+		}							\
+		__ret;							\
+	})
+
+#define xchg(ptr, n)							\
+	({								\
+		__typeof__(*(ptr)) _n_ = (n);				\
+		(__typeof__(*(ptr))) __xchg((ptr), _n_, sizeof(*(ptr))); \
+	})
+
+#define __cmpxchg(ptr, old, new, size)					\
+	({								\
+		__typeof__(ptr) __ptr = (ptr);				\
+		__typeof__(*(ptr)) __old = (old);			\
+		__typeof__(*(ptr)) __new = (new);			\
+		__typeof__(*(ptr)) __ret;				\
+		register unsigned int __rc;				\
+		switch (size) {						\
+		case 4:							\
+			__asm__ __volatile__("0:	lr.w %0, %2\n"	\
+					     "	bne %0, %z3, 1f\n"	\
+					     "	sc.w.rl %1, %z4, %2\n"	\
+					     "	bnez %1, 0b\n"		\
+					     "	fence rw, rw\n"		\
+					     "1:\n"			\
+					     : "=&r"(__ret), "=&r"(__rc), \
+					       "+A"(*__ptr)		\
+					     : "rJ"(__old), "rJ"(__new)	\
+					     : "memory");		\
+			break;						\
+		case 8:							\
+			__asm__ __volatile__("0:	lr.d %0, %2\n"	\
+					     "	bne %0, %z3, 1f\n"	\
+					     "	sc.d.rl %1, %z4, %2\n"	\
+					     "	bnez %1, 0b\n"		\
+					     "	fence rw, rw\n"		\
+					     "1:\n"			\
+					     : "=&r"(__ret), "=&r"(__rc), \
+					       "+A"(*__ptr)		\
+					     : "rJ"(__old), "rJ"(__new)	\
+					     : "memory");		\
+			break;						\
+		default:						\
+			break;						\
+		}							\
+		__ret;							\
+	})
+
+#define cmpxchg(ptr, o, n)						\
+	({								\
+		__typeof__(*(ptr)) _o_ = (o);				\
+		__typeof__(*(ptr)) _n_ = (n);				\
+		(__typeof__(*(ptr)))					\
+			__cmpxchg((ptr), _o_, _n_, sizeof(*(ptr)));	\
+	})
+
 long atomic_cmpxchg(atomic_t *atom, long oldval, long newval)
 {
+#ifdef __riscv_atomic
 	return __sync_val_compare_and_swap(&atom->counter, oldval, newval);
+#else
+	return cmpxchg(&atom->counter, oldval, newval);
+#endif
 }
 
 long atomic_xchg(atomic_t *atom, long newval)
 {
 	/* Atomically set new value and return old value. */
+#ifdef __riscv_atomic
 	return axchg(&atom->counter, newval);
+#else
+	return xchg(&atom->counter, newval);
+#endif
 }
 
 unsigned int atomic_raw_xchg_uint(volatile unsigned int *ptr,
 				  unsigned int newval)
 {
 	/* Atomically set new value and return old value. */
+#ifdef __riscv_atomic
 	return axchg(ptr, newval);
+#else
+	return xchg(ptr, newval);
+#endif
 }
 
 unsigned long atomic_raw_xchg_ulong(volatile unsigned long *ptr,
 				    unsigned long newval)
 {
 	/* Atomically set new value and return old value. */
+#ifdef __riscv_atomic
 	return axchg(ptr, newval);
+#else
+	return xchg(ptr, newval);
+#endif
 }
 
-int atomic_raw_set_bit(int nr, volatile unsigned long *addr)
-{
-	unsigned long res, mask = BIT_MASK(nr);
-
-	res = __atomic_fetch_or(&addr[BIT_WORD(nr)], mask, __ATOMIC_RELAXED);
-	return res & mask ? 1 : 0;
-}
-
-int atomic_raw_clear_bit(int nr, volatile unsigned long *addr)
-{
-	unsigned long res, mask = BIT_MASK(nr);
-
-	res = __atomic_fetch_and(&addr[BIT_WORD(nr)], ~mask, __ATOMIC_RELAXED);
-	return res & mask ? 1 : 0;
-}
-
-int atomic_set_bit(int nr, atomic_t *atom)
+#if (__SIZEOF_POINTER__ == 8)
+#define __AMO(op)	"amo" #op ".d"
+#elif (__SIZEOF_POINTER__ == 4)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#define __atomic_op_bit_ord(op, mod, nr, addr, ord)			\
+	({								\
+		unsigned long __res, __mask;				\
+		__mask = BIT_MASK(nr);					\
+		__asm__ __volatile__(__AMO(op) #ord " %0, %2, %1"	\
+				     : "=r"(__res), "+A"(addr[BIT_WORD(nr)]) \
+				     : "r"(mod(__mask))			\
+				     : "memory");			\
+		__res;							\
+	})
+
+#define __atomic_op_bit(op, mod, nr, addr)				\
+	__atomic_op_bit_ord(op, mod, nr, addr, .aqrl)
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+inline int atomic_raw_set_bit(int nr, volatile unsigned long *addr)
 {
+	return __atomic_op_bit(or, __NOP, nr, addr);
+}
+
+inline int atomic_raw_clear_bit(int nr, volatile unsigned long *addr)
+{
+	return __atomic_op_bit(and, __NOT, nr, addr);
+}
+
+inline int atomic_set_bit(int nr, atomic_t *atom)
+{
 	return atomic_raw_set_bit(nr, (unsigned long *)&atom->counter);
 }
 
-int atomic_clear_bit(int nr, atomic_t *atom)
+inline int atomic_clear_bit(int nr, atomic_t *atom)
 {
 	return atomic_raw_clear_bit(nr, (unsigned long *)&atom->counter);
 }
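One side of the hunk above implements the raw bit operations with GCC's __atomic builtins: fetch-or/fetch-and return the word's previous value, which is then masked to report the bit's prior state. A host-side sketch of that pattern (BIT_MASK/BIT_WORD reproduced locally; the demo driver is illustrative, not OpenSBI code):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG	(CHAR_BIT * sizeof(unsigned long))
#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

/* Returns the previous state (0/1) of bit nr and sets it atomically. */
static int raw_set_bit(int nr, volatile unsigned long *addr)
{
	unsigned long mask = BIT_MASK(nr);
	unsigned long old = __atomic_fetch_or(&addr[BIT_WORD(nr)], mask,
					      __ATOMIC_RELAXED);
	return (old & mask) ? 1 : 0;
}

/* Illustrative driver: first set reports 0, second reports 1. */
static int raw_set_bit_demo(void)
{
	unsigned long w[2] = { 0, 0 };
	int a = raw_set_bit(3, w);
	int b = raw_set_bit(3, w);

	return (a == 0) && (b == 1) && (w[0] == 0x8UL);
}
```

The AMO variant on the other side of the hunk computes the same result with a single `amoor`/`amoand` instruction instead of a compiler builtin.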


@@ -37,22 +37,28 @@ int sbi_getc(void)
 	return -1;
 }
 
+void sbi_putc(char ch)
+{
+	if (console_dev && console_dev->console_putc) {
+		if (ch == '\n')
+			console_dev->console_putc('\r');
+		console_dev->console_putc(ch);
+	}
+}
+
 static unsigned long nputs(const char *str, unsigned long len)
 {
-	unsigned long i;
+	unsigned long i, ret;
 
-	if (console_dev) {
-		if (console_dev->console_puts)
-			return console_dev->console_puts(str, len);
-		else if (console_dev->console_putc) {
-			for (i = 0; i < len; i++) {
-				if (str[i] == '\n')
-					console_dev->console_putc('\r');
-				console_dev->console_putc(str[i]);
-			}
-		}
+	if (console_dev && console_dev->console_puts) {
+		ret = console_dev->console_puts(str, len);
+	} else {
+		for (i = 0; i < len; i++)
+			sbi_putc(str[i]);
+		ret = len;
 	}
-	return len;
+
+	return ret;
 }
 
 static void nputs_all(const char *str, unsigned long len)
@@ -63,11 +69,6 @@ static void nputs_all(const char *str, unsigned long len)
 		p += nputs(&str[p], len - p);
 }
 
-void sbi_putc(char ch)
-{
-	nputs_all(&ch, 1);
-}
-
 void sbi_puts(const char *str)
 {
 	unsigned long len = sbi_strlen(str);
@@ -119,8 +120,6 @@ unsigned long sbi_ngets(char *str, unsigned long len)
 #define PAD_RIGHT 1
 #define PAD_ZERO 2
 #define PAD_ALTERNATE 4
-#define PAD_SIGN 8
-#define USE_TBUF 16
 #define PRINT_BUF_LEN 64
 
 #define va_start(v, l) __builtin_va_start((v), l)
@@ -128,7 +127,7 @@ unsigned long sbi_ngets(char *str, unsigned long len)
 #define va_arg __builtin_va_arg
 typedef __builtin_va_list va_list;
 
-static void printc(char **out, u32 *out_len, char ch, int flags)
+static void printc(char **out, u32 *out_len, char ch)
 {
 	if (!out) {
 		sbi_putc(ch);
@@ -142,67 +141,61 @@ static void printc(char **out, u32 *out_len, char ch, int flags)
 	if (!out_len || *out_len > 1) {
 		*(*out)++ = ch;
 		**out = '\0';
-		if (out_len) {
-			--(*out_len);
-
-			if ((flags & USE_TBUF) && *out_len == 1) {
-				nputs_all(console_tbuf,
-					  CONSOLE_TBUF_MAX - *out_len);
-				*out = console_tbuf;
-				*out_len = CONSOLE_TBUF_MAX;
-			}
-		}
 	}
+
+	if (out_len && *out_len > 0)
+		--(*out_len);
 }
 
 static int prints(char **out, u32 *out_len, const char *string, int width,
 		  int flags)
 {
 	int pc = 0;
+	char padchar = ' ';
 
-	width -= sbi_strlen(string);
+	if (width > 0) {
+		int len = 0;
+		const char *ptr;
+		for (ptr = string; *ptr; ++ptr)
+			++len;
+		if (len >= width)
+			width = 0;
+		else
+			width -= len;
+		if (flags & PAD_ZERO)
+			padchar = '0';
+	}
 	if (!(flags & PAD_RIGHT)) {
 		for (; width > 0; --width) {
-			printc(out, out_len, flags & PAD_ZERO ? '0' : ' ', flags);
+			printc(out, out_len, padchar);
 			++pc;
 		}
 	}
 	for (; *string; ++string) {
-		printc(out, out_len, *string, flags);
+		printc(out, out_len, *string);
 		++pc;
 	}
 	for (; width > 0; --width) {
-		printc(out, out_len, ' ', flags);
+		printc(out, out_len, padchar);
 		++pc;
 	}
 
 	return pc;
 }
 
-static int printi(char **out, u32 *out_len, long long i,
-		  int width, int flags, int type)
+static int printi(char **out, u32 *out_len, long long i, int b, int sg,
+		  int width, int flags, int letbase)
 {
-	int pc = 0;
-	char *s, sign = 0, letbase, print_buf[PRINT_BUF_LEN];
-	unsigned long long u, b, t;
+	char print_buf[PRINT_BUF_LEN];
+	char *s;
+	int neg = 0, pc = 0;
+	u64 t;
+	unsigned long long u = i;
 
-	b = 10;
-	letbase = 'a';
-	if (type == 'o')
-		b = 8;
-	else if (type == 'x' || type == 'X' || type == 'p' || type == 'P') {
-		b = 16;
-		letbase &= ~0x20;
-		letbase |= type & 0x20;
-	}
-
-	u = i;
-	sign = 0;
-	if (type == 'i' || type == 'd') {
-		if ((flags & PAD_SIGN) && i > 0)
-			sign = '+';
-		if (i < 0) {
-			sign = '-';
-			u = -i;
-		}
+	if (sg && b == 10 && i < 0) {
+		neg = 1;
+		u = -i;
 	}
 
 	s = print_buf + PRINT_BUF_LEN - 1;
 	*s = '\0';
@@ -219,33 +212,23 @@ static int printi(char **out, u32 *out_len, long long i,
 		}
 	}
 
-	if (flags & PAD_ZERO) {
-		if (sign) {
-			printc(out, out_len, sign, flags);
-			++pc;
-			--width;
-		}
-		if (i && (flags & PAD_ALTERNATE)) {
-			if (b == 16 || b == 8) {
-				printc(out, out_len, '0', flags);
-				++pc;
-				--width;
-			}
-			if (b == 16) {
-				printc(out, out_len, 'x' - 'a' + letbase, flags);
-				++pc;
-				--width;
-			}
-		}
-	} else {
-		if (i && (flags & PAD_ALTERNATE)) {
-			if (b == 16)
-				*--s = 'x' - 'a' + letbase;
-			if (b == 16 || b == 8)
-				*--s = '0';
-		}
-		if (sign)
-			*--s = sign;
+	if (flags & PAD_ALTERNATE) {
+		if ((b == 16) && (letbase == 'A')) {
+			*--s = 'X';
+		} else if ((b == 16) && (letbase == 'a')) {
+			*--s = 'x';
+		}
+		*--s = '0';
+	}
+
+	if (neg) {
+		if (width && (flags & PAD_ZERO)) {
+			printc(out, out_len, '-');
+			++pc;
+			--width;
+		} else {
+			*--s = '-';
+		}
 	}
 
 	return pc + prints(out, out_len, s, width, flags);
@@ -253,10 +236,10 @@ static int printi(char **out, u32 *out_len, long long i,
 static int print(char **out, u32 *out_len, const char *format, va_list args)
 {
-	bool flags_done;
 	int width, flags, pc = 0;
-	char type, scr[2], *tout;
+	char scr[2], *tout;
 	bool use_tbuf = (!out) ? true : false;
+	unsigned long long tmp;
 
 	/*
 	 * The console_tbuf is protected by console_out_lock and
@@ -270,51 +253,33 @@ static int print(char **out, u32 *out_len, const char *format, va_list args)
 		out_len = &console_tbuf_len;
 	}
 
-	/* handle special case: *out_len == 1*/
-	if (out) {
-		if(!out_len || *out_len)
-			**out = '\0';
-	}
-
 	for (; *format != 0; ++format) {
-		width = flags = 0;
-		if (use_tbuf)
-			flags |= USE_TBUF;
+		if (use_tbuf && !console_tbuf_len) {
+			nputs_all(console_tbuf, CONSOLE_TBUF_MAX);
+			console_tbuf_len = CONSOLE_TBUF_MAX;
+			tout = console_tbuf;
+		}
+
 		if (*format == '%') {
 			++format;
+			width = flags = 0;
 			if (*format == '\0')
 				break;
 			if (*format == '%')
 				goto literal;
 			/* Get flags */
-			flags_done = false;
-			while (!flags_done) {
-				switch (*format) {
-				case '-':
-					flags |= PAD_RIGHT;
-					break;
-				case '+':
-					flags |= PAD_SIGN;
-					break;
-				case '#':
-					flags |= PAD_ALTERNATE;
-					break;
-				case '0':
-					flags |= PAD_ZERO;
-					break;
-				case ' ':
-				case '\'':
-					/* Ignored flags, do nothing */
-					break;
-				default:
-					flags_done = true;
-					break;
-				}
-				if (!flags_done)
-					++format;
-			}
-			if (flags & PAD_RIGHT)
-				flags &= ~PAD_ZERO;
+			if (*format == '-') {
+				++format;
+				flags = PAD_RIGHT;
+			}
+			if (*format == '#') {
+				++format;
+				flags |= PAD_ALTERNATE;
+			}
+			while (*format == '0') {
+				++format;
+				flags |= PAD_ZERO;
+			}
 			/* Get width */
 			for (; *format >= '0' && *format <= '9'; ++format) {
 				width *= 10;
@@ -328,48 +293,84 @@ static int print(char **out, u32 *out_len, const char *format, va_list args)
 			}
 			if ((*format == 'd') || (*format == 'i')) {
 				pc += printi(out, out_len, va_arg(args, int),
-					     width, flags, *format);
+					     10, 1, width, flags, '0');
 				continue;
 			}
-			if ((*format == 'u') || (*format == 'o')
-			    || (*format == 'x') || (*format == 'X')) {
-				pc += printi(out, out_len, va_arg(args, unsigned int),
-					     width, flags, *format);
+			if (*format == 'x') {
+				pc += printi(out, out_len,
+					     va_arg(args, unsigned int), 16, 0,
+					     width, flags, 'a');
 				continue;
 			}
-			if ((*format == 'p') || (*format == 'P')) {
-				pc += printi(out, out_len, (uintptr_t)va_arg(args, void*),
-					     width, flags, *format);
+			if (*format == 'X') {
+				pc += printi(out, out_len,
+					     va_arg(args, unsigned int), 16, 0,
+					     width, flags, 'A');
 				continue;
 			}
-			if (*format == 'l') {
-				type = 'i';
-				if (format[1] == 'l') {
-					++format;
-					if ((format[1] == 'u') || (format[1] == 'o')
-					    || (format[1] == 'd') || (format[1] == 'i')
-					    || (format[1] == 'x') || (format[1] == 'X')) {
-						++format;
-						type = *format;
-					}
-					pc += printi(out, out_len, va_arg(args, long long),
-						     width, flags, type);
-					continue;
-				}
-				if ((format[1] == 'u') || (format[1] == 'o')
-				    || (format[1] == 'd') || (format[1] == 'i')
-				    || (format[1] == 'x') || (format[1] == 'X')) {
-					++format;
-					type = *format;
-				}
-				if ((type == 'd') || (type == 'i'))
-					pc += printi(out, out_len, va_arg(args, long),
-						     width, flags, type);
-				else
-					pc += printi(out, out_len, va_arg(args, unsigned long),
-						     width, flags, type);
+			if (*format == 'u') {
+				pc += printi(out, out_len,
+					     va_arg(args, unsigned int), 10, 0,
+					     width, flags, 'a');
 				continue;
 			}
+			if (*format == 'p') {
+				pc += printi(out, out_len,
+					     va_arg(args, unsigned long), 16, 0,
+					     width, flags, 'a');
+				continue;
+			}
+			if (*format == 'P') {
+				pc += printi(out, out_len,
+					     va_arg(args, unsigned long), 16, 0,
+					     width, flags, 'A');
+				continue;
+			}
+			if (*format == 'l' && *(format + 1) == 'l') {
+				tmp = va_arg(args, unsigned long long);
+				if (*(format + 2) == 'u') {
+					format += 2;
+					pc += printi(out, out_len, tmp, 10, 0,
+						     width, flags, 'a');
+				} else if (*(format + 2) == 'x') {
+					format += 2;
+					pc += printi(out, out_len, tmp, 16, 0,
+						     width, flags, 'a');
+				} else if (*(format + 2) == 'X') {
+					format += 2;
+					pc += printi(out, out_len, tmp, 16, 0,
+						     width, flags, 'A');
+				} else {
+					format += 1;
+					pc += printi(out, out_len, tmp, 10, 1,
+						     width, flags, '0');
+				}
+				continue;
+			} else if (*format == 'l') {
+				if (*(format + 1) == 'u') {
+					format += 1;
+					pc += printi(
+						out, out_len,
+						va_arg(args, unsigned long), 10,
+						0, width, flags, 'a');
+				} else if (*(format + 1) == 'x') {
+					format += 1;
+					pc += printi(
+						out, out_len,
+						va_arg(args, unsigned long), 16,
+						0, width, flags, 'a');
+				} else if (*(format + 1) == 'X') {
+					format += 1;
+					pc += printi(
+						out, out_len,
+						va_arg(args, unsigned long), 16,
+						0, width, flags, 'A');
+				} else {
+					pc += printi(out, out_len,
+						     va_arg(args, long), 10, 1,
+						     width, flags, '0');
+				}
+			}
 			if (*format == 'c') {
 				/* char are converted to int then pushed on the stack */
 				scr[0] = va_arg(args, int);
@@ -379,7 +380,7 @@ static int print(char **out, u32 *out_len, const char *format, va_list args)
 			}
 		} else {
 literal:
-			printc(out, out_len, *format, flags);
+			printc(out, out_len, *format);
 			++pc;
 		}
 	}
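The flag-scanning loop removed above folds '-', '+', '#' and '0' into a PAD_* bitmask and then drops PAD_ZERO whenever PAD_RIGHT is set, since left-adjusted output is never zero-padded. A minimal sketch of that loop; parse_flags() and its demo driver are illustrative names, not the sbi_console API:

```c
#include <assert.h>

#define PAD_RIGHT	1
#define PAD_ZERO	2
#define PAD_ALTERNATE	4
#define PAD_SIGN	8

/* Consume printf-style flag characters, advancing *fmt past them. */
static int parse_flags(const char **fmt)
{
	int flags = 0, done = 0;

	while (!done) {
		switch (**fmt) {
		case '-': flags |= PAD_RIGHT; break;
		case '+': flags |= PAD_SIGN; break;
		case '#': flags |= PAD_ALTERNATE; break;
		case '0': flags |= PAD_ZERO; break;
		case ' ':
		case '\'': break;	/* accepted but ignored */
		default: done = 1; break;
		}
		if (!done)
			++*fmt;
	}
	/* left-adjustment overrides zero padding */
	if (flags & PAD_RIGHT)
		flags &= ~PAD_ZERO;
	return flags;
}

/* Illustrative driver: the "-0#" prefix of a "%-0#8d" conversion. */
static int parse_flags_demo(void)
{
	const char *p = "-0#8d";
	int f = parse_flags(&p);

	return (*p == '8') ? f : -1;
}
```

After the loop, the caller continues at the width digits, which is why the parser must stop at the first non-flag character.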


@@ -40,22 +40,22 @@ struct sbi_domain root = {
 static unsigned long domain_hart_ptr_offset;
 
-struct sbi_domain *sbi_hartindex_to_domain(u32 hartindex)
+struct sbi_domain *sbi_hartid_to_domain(u32 hartid)
 {
 	struct sbi_scratch *scratch;
 
-	scratch = sbi_hartindex_to_scratch(hartindex);
+	scratch = sbi_hartid_to_scratch(hartid);
 	if (!scratch || !domain_hart_ptr_offset)
 		return NULL;
 
 	return sbi_scratch_read_type(scratch, void *, domain_hart_ptr_offset);
 }
 
-static void update_hartindex_to_domain(u32 hartindex, struct sbi_domain *dom)
+static void update_hartid_to_domain(u32 hartid, struct sbi_domain *dom)
 {
 	struct sbi_scratch *scratch;
 
-	scratch = sbi_hartindex_to_scratch(hartindex);
+	scratch = sbi_hartid_to_scratch(hartid);
 	if (!scratch)
 		return;
@@ -65,7 +65,7 @@ static void update_hartindex_to_domain(u32 hartindex, struct sbi_domain *dom)
 bool sbi_domain_is_assigned_hart(const struct sbi_domain *dom, u32 hartid)
 {
 	if (dom)
-		return sbi_hartmask_test_hartid(hartid, &dom->assigned_harts);
+		return sbi_hartmask_test_hart(hartid, &dom->assigned_harts);
 
 	return false;
 }
@@ -73,10 +73,18 @@ bool sbi_domain_is_assigned_hart(const struct sbi_domain *dom, u32 hartid)
 ulong sbi_domain_get_assigned_hartmask(const struct sbi_domain *dom,
 				       ulong hbase)
 {
-	ulong ret = 0;
+	ulong ret, bword, boff;
 
-	for (int i = 0; i < 8 * sizeof(ret); i++) {
-		if (sbi_domain_is_assigned_hart(dom, hbase + i))
-			ret |= 1UL << i;
+	if (!dom)
+		return 0;
+
+	bword = BIT_WORD(hbase);
+	boff = BIT_WORD_OFFSET(hbase);
+
+	ret = sbi_hartmask_bits(&dom->assigned_harts)[bword++] >> boff;
+	if (boff && bword < BIT_WORD(SBI_HARTMASK_MAX_BITS)) {
+		ret |= (sbi_hartmask_bits(&dom->assigned_harts)[bword] &
+			(BIT(boff) - 1UL)) << (BITS_PER_LONG - boff);
 	}
 
 	return ret;
@@ -193,11 +201,12 @@ static bool is_region_subset(const struct sbi_domain_memregion *regA,
 	return false;
 }
 
-/** Check if regionA can be replaced by regionB */
-static bool is_region_compatible(const struct sbi_domain_memregion *regA,
-				 const struct sbi_domain_memregion *regB)
+/** Check if regionA conflicts regionB */
+static bool is_region_conflict(const struct sbi_domain_memregion *regA,
+			       const struct sbi_domain_memregion *regB)
 {
-	if (is_region_subset(regA, regB) && regA->flags == regB->flags)
+	if ((is_region_subset(regA, regB) || is_region_subset(regB, regA)) &&
+	    regA->flags == regB->flags)
 		return true;
 
 	return false;
@@ -255,26 +264,11 @@ static const struct sbi_domain_memregion *find_next_subset_region(
 	return ret;
 }
 
-static void swap_region(struct sbi_domain_memregion* reg1,
-			struct sbi_domain_memregion* reg2)
-{
-	struct sbi_domain_memregion treg;
-
-	sbi_memcpy(&treg, reg1, sizeof(treg));
-	sbi_memcpy(reg1, reg2, sizeof(treg));
-	sbi_memcpy(reg2, &treg, sizeof(treg));
-}
-
-static void clear_region(struct sbi_domain_memregion* reg)
-{
-	sbi_memset(reg, 0x0, sizeof(*reg));
-}
-
-static int sanitize_domain(struct sbi_domain *dom)
+static int sanitize_domain(const struct sbi_platform *plat,
+			   struct sbi_domain *dom)
 {
 	u32 i, j, count;
-	bool is_covered;
-	struct sbi_domain_memregion *reg, *reg1;
+	struct sbi_domain_memregion treg, *reg, *reg1;
 
 	/* Check possible HARTs */
 	if (!dom->possible_harts) {
@@ -282,11 +276,10 @@ static int sanitize_domain(struct sbi_domain *dom)
 			   __func__, dom->name);
 		return SBI_EINVAL;
 	}
-	sbi_hartmask_for_each_hartindex(i, dom->possible_harts) {
-		if (!sbi_hartindex_valid(i)) {
+	sbi_hartmask_for_each_hart(i, dom->possible_harts) {
+		if (sbi_platform_hart_invalid(plat, i)) {
 			sbi_printf("%s: %s possible HART mask has invalid "
-				   "hart %d\n", __func__,
-				   dom->name, sbi_hartindex_to_hartid(i));
+				   "hart %d\n", __func__, dom->name, i);
 			return SBI_EINVAL;
 		}
 	}
@@ -325,38 +318,25 @@ static int sanitize_domain(struct sbi_domain *dom)
 		for (j = i + 1; j < count; j++) {
 			reg1 = &dom->regions[j];
 
+			if (is_region_conflict(reg1, reg)) {
+				sbi_printf("%s: %s conflict between regions "
+					"(base=0x%lx order=%lu flags=0x%lx) and "
+					"(base=0x%lx order=%lu flags=0x%lx)\n",
+					__func__, dom->name,
+					reg->base, reg->order, reg->flags,
+					reg1->base, reg1->order, reg1->flags);
+				return SBI_EINVAL;
+			}
+
 			if (!is_region_before(reg1, reg))
 				continue;
 
-			swap_region(reg, reg1);
+			sbi_memcpy(&treg, reg1, sizeof(treg));
+			sbi_memcpy(reg1, reg, sizeof(treg));
+			sbi_memcpy(reg, &treg, sizeof(treg));
 		}
 	}
 
-	/* Remove covered regions */
-	while(i < (count - 1)) {
-		is_covered = false;
-		reg = &dom->regions[i];
-
-		for (j = i + 1; j < count; j++) {
-			reg1 = &dom->regions[j];
-
-			if (is_region_compatible(reg, reg1)) {
-				is_covered = true;
-				break;
-			}
-		}
-
-		/* find a region is superset of reg, remove reg */
-		if (is_covered) {
-			for (j = i; j < (count - 1); j++)
-				swap_region(&dom->regions[j],
-					    &dom->regions[j + 1]);
-			clear_region(&dom->regions[count - 1]);
-			count--;
-		} else
-			i++;
-	}
-
 	/*
 	 * We don't need to check boot HART id of domain because if boot
 	 * HART id is not possible/assigned to this domain then it won't
@@ -420,7 +400,7 @@ bool sbi_domain_check_addr_range(const struct sbi_domain *dom,
 void sbi_domain_dump(const struct sbi_domain *dom, const char *suffix)
 {
-	u32 i, j, k;
+	u32 i, k;
 	unsigned long rstart, rend;
 	struct sbi_domain_memregion *reg;
@@ -432,11 +412,9 @@ void sbi_domain_dump(const struct sbi_domain *dom, const char *suffix)
 	k = 0;
 	sbi_printf("Domain%d HARTs %s: ", dom->index, suffix);
-	sbi_hartmask_for_each_hartindex(i, dom->possible_harts) {
-		j = sbi_hartindex_to_hartid(i);
+	sbi_hartmask_for_each_hart(i, dom->possible_harts)
 		sbi_printf("%s%d%s", (k++) ? "," : "",
-			   j, sbi_domain_is_assigned_hart(dom, j) ? "*" : "");
-	}
+			   i, sbi_domain_is_assigned_hart(dom, i) ? "*" : "");
 	sbi_printf("\n");
 
 	i = 0;
@@ -521,6 +499,7 @@ int sbi_domain_register(struct sbi_domain *dom,
 	int rc;
 	struct sbi_domain *tdom;
 	u32 cold_hartid = current_hartid();
+	const struct sbi_platform *plat = sbi_platform_thishart_ptr();
 
 	/* Sanity checks */
 	if (!dom || !assign_mask || domain_finalized)
@@ -543,7 +522,7 @@ int sbi_domain_register(struct sbi_domain *dom,
 	}
 
 	/* Sanitize discovered domain */
-	rc = sanitize_domain(dom);
+	rc = sanitize_domain(plat, dom);
 	if (rc) {
 		sbi_printf("%s: sanity checks failed for"
 			   " %s (error %d)\n", __func__,
@@ -559,22 +538,22 @@ int sbi_domain_register(struct sbi_domain *dom,
 	sbi_hartmask_clear_all(&dom->assigned_harts);
 
 	/* Assign domain to HART if HART is a possible HART */
-	sbi_hartmask_for_each_hartindex(i, assign_mask) {
-		if (!sbi_hartmask_test_hartindex(i, dom->possible_harts))
+	sbi_hartmask_for_each_hart(i, assign_mask) {
+		if (!sbi_hartmask_test_hart(i, dom->possible_harts))
 			continue;
 
-		tdom = sbi_hartindex_to_domain(i);
+		tdom = sbi_hartid_to_domain(i);
 		if (tdom)
-			sbi_hartmask_clear_hartindex(i,
+			sbi_hartmask_clear_hart(i,
 					&tdom->assigned_harts);
-		update_hartindex_to_domain(i, dom);
-		sbi_hartmask_set_hartindex(i, &dom->assigned_harts);
+		update_hartid_to_domain(i, dom);
+		sbi_hartmask_set_hart(i, &dom->assigned_harts);
 
 		/*
 		 * If cold boot HART is assigned to this domain then
 		 * override boot HART of this domain.
 		 */
-		if (sbi_hartindex_to_hartid(i) == cold_hartid &&
+		if (i == cold_hartid &&
 		    dom->boot_hartid != cold_hartid) {
 			sbi_printf("Domain%d Boot HARTID forced to"
 				   " %d\n", dom->index, cold_hartid);
@@ -590,16 +569,21 @@ int sbi_domain_root_add_memregion(const struct sbi_domain_memregion *reg)
 	int rc;
 	bool reg_merged;
 	struct sbi_domain_memregion *nreg, *nreg1, *nreg2;
+	const struct sbi_platform *plat = sbi_platform_thishart_ptr();
 
 	/* Sanity checks */
 	if (!reg || domain_finalized || !root.regions ||
 	    (ROOT_REGION_MAX <= root_memregs_count))
 		return SBI_EINVAL;
 
-	/* Check whether compatible region exists for the new one */
+	/* Check for conflicts */
 	sbi_domain_for_each_memregion(&root, nreg) {
-		if (is_region_compatible(reg, nreg))
-			return 0;
+		if (is_region_conflict(reg, nreg)) {
+			sbi_printf("%s: is_region_conflict check failed"
+				" 0x%lx conflicts existing 0x%lx\n", __func__,
+				reg->base, nreg->base);
+			return SBI_EALREADY;
+		}
 	}
 
 	/* Append the memregion to root memregions */
@@ -611,7 +595,7 @@ int sbi_domain_root_add_memregion(const struct sbi_domain_memregion *reg)
 	/* Sort and optimize root regions */
 	do {
 		/* Sanitize the root domain so that memregions are sorted */
-		rc = sanitize_domain(&root);
+		rc = sanitize_domain(plat, &root);
 		if (rc) {
 			sbi_printf("%s: sanity checks failed for"
 				   " %s (error %d)\n", __func__,
@@ -689,37 +673,36 @@ int sbi_domain_finalize(struct sbi_scratch *scratch, u32 cold_hartid)
 	/* Startup boot HART of domains */
 	sbi_domain_for_each(i, dom) {
-		/* Domain boot HART index */
-		dhart = sbi_hartid_to_hartindex(dom->boot_hartid);
+		/* Domain boot HART */
+		dhart = dom->boot_hartid;
 
 		/* Ignore of boot HART is off limits */
-		if (!sbi_hartindex_valid(dhart))
+		if (SBI_HARTMASK_MAX_BITS <= dhart)
 			continue;
 
 		/* Ignore if boot HART not possible for this domain */
-		if (!sbi_hartmask_test_hartindex(dhart, dom->possible_harts))
+		if (!sbi_hartmask_test_hart(dhart, dom->possible_harts))
 			continue;
 
 		/* Ignore if boot HART assigned different domain */
-		if (sbi_hartindex_to_domain(dhart) != dom ||
-		    !sbi_hartmask_test_hartindex(dhart, &dom->assigned_harts))
+		if (sbi_hartid_to_domain(dhart) != dom ||
+		    !sbi_hartmask_test_hart(dhart, &dom->assigned_harts))
 			continue;
 
 		/* Startup boot HART of domain */
-		if (dom->boot_hartid == cold_hartid) {
+		if (dhart == cold_hartid) {
 			scratch->next_addr = dom->next_addr;
 			scratch->next_mode = dom->next_mode;
 			scratch->next_arg1 = dom->next_arg1;
 		} else {
-			rc = sbi_hsm_hart_start(scratch, NULL,
-						dom->boot_hartid,
+			rc = sbi_hsm_hart_start(scratch, NULL, dhart,
 						dom->next_addr,
 						dom->next_mode,
 						dom->next_arg1);
 			if (rc) {
 				sbi_printf("%s: failed to start boot HART %d"
 					   " for %s (error %d)\n", __func__,
-					   dom->boot_hartid, dom->name, rc);
+					   dhart, dom->name, rc);
 				return rc;
 			}
 		}
@@ -789,17 +772,11 @@ int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid)
 	root.fw_region_inited = true;
 
-	/*
-	 * Allow SU RWX on rest of the memory region. Since pmp entries
-	 * have implicit priority on index, previous entries will
-	 * deny access to SU on M-mode region. Also, M-mode will not
-	 * have access to SU region while previous entries will allow
-	 * access to M-mode regions.
-	 */
+	/* Root domain allow everything memory region */
 	sbi_domain_memregion_init(0, ~0UL,
-				  (SBI_DOMAIN_MEMREGION_SU_READABLE |
-				   SBI_DOMAIN_MEMREGION_SU_WRITABLE |
-				   SBI_DOMAIN_MEMREGION_SU_EXECUTABLE),
+				  (SBI_DOMAIN_MEMREGION_READABLE |
+				   SBI_DOMAIN_MEMREGION_WRITEABLE |
+				   SBI_DOMAIN_MEMREGION_EXECUTABLE),
 				  &root_memregs[root_memregs_count++]);
 
 	/* Root domain memory region end */
@@ -814,8 +791,11 @@ int sbi_domain_init(struct sbi_scratch *scratch, u32 cold_hartid)
 	root.next_mode = scratch->next_mode;
 
 	/* Root domain possible and assigned HARTs */
-	for (i = 0; i < plat->hart_count; i++)
-		sbi_hartmask_set_hartindex(i, root_hmask);
+	for (i = 0; i < SBI_HARTMASK_MAX_BITS; i++) {
+		if (sbi_platform_hart_invalid(plat, i))
+			continue;
+		sbi_hartmask_set_hart(i, root_hmask);
+	}
 
 	/* Finally register the root domain */
 	rc = sbi_domain_register(&root, root_hmask);


@@ -101,12 +101,14 @@ int sbi_ecall_handler(struct sbi_trap_regs *regs)
struct sbi_ecall_extension *ext; struct sbi_ecall_extension *ext;
unsigned long extension_id = regs->a7; unsigned long extension_id = regs->a7;
unsigned long func_id = regs->a6; unsigned long func_id = regs->a6;
struct sbi_ecall_return out = {0}; struct sbi_trap_info trap = {0};
unsigned long out_val = 0;
bool is_0_1_spec = 0; bool is_0_1_spec = 0;
ext = sbi_ecall_find_extension(extension_id); ext = sbi_ecall_find_extension(extension_id);
if (ext && ext->handle) { if (ext && ext->handle) {
ret = ext->handle(extension_id, func_id, regs, &out); ret = ext->handle(extension_id, func_id,
regs, &out_val, &trap);
if (extension_id >= SBI_EXT_0_1_SET_TIMER && if (extension_id >= SBI_EXT_0_1_SET_TIMER &&
extension_id <= SBI_EXT_0_1_SHUTDOWN) extension_id <= SBI_EXT_0_1_SHUTDOWN)
is_0_1_spec = 1; is_0_1_spec = 1;
@@ -114,7 +116,10 @@ int sbi_ecall_handler(struct sbi_trap_regs *regs)
ret = SBI_ENOTSUPP; ret = SBI_ENOTSUPP;
} }
if (!out.skip_regs_update) { if (ret == SBI_ETRAP) {
trap.epc = regs->mepc;
sbi_trap_redirect(regs, &trap);
} else {
if (ret < SBI_LAST_ERR || if (ret < SBI_LAST_ERR ||
(extension_id != SBI_EXT_0_1_CONSOLE_GETCHAR && (extension_id != SBI_EXT_0_1_CONSOLE_GETCHAR &&
SBI_SUCCESS < ret)) { SBI_SUCCESS < ret)) {
@@ -135,7 +140,7 @@ int sbi_ecall_handler(struct sbi_trap_regs *regs)
regs->mepc += 4; regs->mepc += 4;
regs->a0 = ret; regs->a0 = ret;
if (!is_0_1_spec) if (!is_0_1_spec)
regs->a1 = out.value; regs->a1 = out_val;
} }
return 0; return 0;
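To make the dispatcher change above easier to follow, here is a minimal self-contained sketch of the return-structure style, using simplified stand-in types (`struct ecall_return`, `struct regs`, `demo_handler` are illustrative names, not the actual OpenSBI definitions): results flow through one out-structure, and register update is skipped when the handler has already redirected a trap.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the types in the hunk above. */
struct ecall_return {
	bool skip_regs_update;	/* handler already redirected the trap */
	unsigned long value;	/* returned to the caller in a1 */
};

struct regs {
	unsigned long mepc, a0, a1;
};

/* Toy handler in the new style: output goes through the out-structure
 * instead of separate out_val/out_trap pointers. */
static int demo_handler(unsigned long funcid, struct regs *regs,
			struct ecall_return *out)
{
	(void)funcid;
	out->value = regs->a0 + 1;
	return 0;
}

static void dispatch(struct regs *regs)
{
	struct ecall_return out = { 0 };
	int ret = demo_handler(regs->a1, regs, &out);

	/* Registers are only advanced when the handler did not ask us to
	 * skip the update (e.g. because it redirected a trap itself). */
	if (!out.skip_regs_update) {
		regs->mepc += 4;
		regs->a0 = (unsigned long)ret;
		regs->a1 = out.value;
	}
}
```

The point of the design is that trap redirection becomes a handler-local decision instead of a special `SBI_ETRAP` error code recognized by the dispatcher.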


@@ -33,36 +33,37 @@ static int sbi_ecall_base_probe(unsigned long extid, unsigned long *out_val)
} }
static int sbi_ecall_base_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_base_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;
switch (funcid) { switch (funcid) {
case SBI_EXT_BASE_GET_SPEC_VERSION: case SBI_EXT_BASE_GET_SPEC_VERSION:
out->value = (SBI_ECALL_VERSION_MAJOR << *out_val = (SBI_ECALL_VERSION_MAJOR <<
SBI_SPEC_VERSION_MAJOR_OFFSET) & SBI_SPEC_VERSION_MAJOR_OFFSET) &
(SBI_SPEC_VERSION_MAJOR_MASK << (SBI_SPEC_VERSION_MAJOR_MASK <<
SBI_SPEC_VERSION_MAJOR_OFFSET); SBI_SPEC_VERSION_MAJOR_OFFSET);
out->value = out->value | SBI_ECALL_VERSION_MINOR; *out_val = *out_val | SBI_ECALL_VERSION_MINOR;
break; break;
case SBI_EXT_BASE_GET_IMP_ID: case SBI_EXT_BASE_GET_IMP_ID:
out->value = sbi_ecall_get_impid(); *out_val = sbi_ecall_get_impid();
break; break;
case SBI_EXT_BASE_GET_IMP_VERSION: case SBI_EXT_BASE_GET_IMP_VERSION:
out->value = OPENSBI_VERSION; *out_val = OPENSBI_VERSION;
break; break;
case SBI_EXT_BASE_GET_MVENDORID: case SBI_EXT_BASE_GET_MVENDORID:
out->value = csr_read(CSR_MVENDORID); *out_val = csr_read(CSR_MVENDORID);
break; break;
case SBI_EXT_BASE_GET_MARCHID: case SBI_EXT_BASE_GET_MARCHID:
out->value = csr_read(CSR_MARCHID); *out_val = csr_read(CSR_MARCHID);
break; break;
case SBI_EXT_BASE_GET_MIMPID: case SBI_EXT_BASE_GET_MIMPID:
out->value = csr_read(CSR_MIMPID); *out_val = csr_read(CSR_MIMPID);
break; break;
case SBI_EXT_BASE_PROBE_EXT: case SBI_EXT_BASE_PROBE_EXT:
ret = sbi_ecall_base_probe(regs->a0, &out->value); ret = sbi_ecall_base_probe(regs->a0, out_val);
break; break;
default: default:
ret = SBI_ENOTSUPP; ret = SBI_ENOTSUPP;


@@ -12,8 +12,9 @@
#include <sbi/sbi_cppc.h> #include <sbi/sbi_cppc.h>
static int sbi_ecall_cppc_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_cppc_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;
uint64_t temp; uint64_t temp;
@@ -21,14 +22,14 @@ static int sbi_ecall_cppc_handler(unsigned long extid, unsigned long funcid,
switch (funcid) { switch (funcid) {
case SBI_EXT_CPPC_READ: case SBI_EXT_CPPC_READ:
ret = sbi_cppc_read(regs->a0, &temp); ret = sbi_cppc_read(regs->a0, &temp);
out->value = temp; *out_val = temp;
break; break;
case SBI_EXT_CPPC_READ_HI: case SBI_EXT_CPPC_READ_HI:
#if __riscv_xlen == 32 #if __riscv_xlen == 32
ret = sbi_cppc_read(regs->a0, &temp); ret = sbi_cppc_read(regs->a0, &temp);
out->value = temp >> 32; *out_val = temp >> 32;
#else #else
out->value = 0; *out_val = 0;
#endif #endif
break; break;
case SBI_EXT_CPPC_WRITE: case SBI_EXT_CPPC_WRITE:
@@ -37,7 +38,7 @@ static int sbi_ecall_cppc_handler(unsigned long extid, unsigned long funcid,
case SBI_EXT_CPPC_PROBE: case SBI_EXT_CPPC_PROBE:
ret = sbi_cppc_probe(regs->a0); ret = sbi_cppc_probe(regs->a0);
if (ret >= 0) { if (ret >= 0) {
out->value = ret; *out_val = ret;
ret = 0; ret = 0;
} }
break; break;


@@ -14,11 +14,11 @@
#include <sbi/sbi_ecall_interface.h> #include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_trap.h> #include <sbi/sbi_trap.h>
#include <sbi/riscv_asm.h> #include <sbi/riscv_asm.h>
#include <sbi/sbi_hart.h>
static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
ulong smode = (csr_read(CSR_MSTATUS) & MSTATUS_MPP) >> ulong smode = (csr_read(CSR_MSTATUS) & MSTATUS_MPP) >>
MSTATUS_MPP_SHIFT; MSTATUS_MPP_SHIFT;
@@ -46,12 +46,10 @@ static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid,
regs->a1, regs->a0, smode, regs->a1, regs->a0, smode,
SBI_DOMAIN_READ|SBI_DOMAIN_WRITE)) SBI_DOMAIN_READ|SBI_DOMAIN_WRITE))
return SBI_ERR_INVALID_PARAM; return SBI_ERR_INVALID_PARAM;
sbi_hart_map_saddr(regs->a1, regs->a0);
if (funcid == SBI_EXT_DBCN_CONSOLE_WRITE) if (funcid == SBI_EXT_DBCN_CONSOLE_WRITE)
out->value = sbi_nputs((const char *)regs->a1, regs->a0); *out_val = sbi_nputs((const char *)regs->a1, regs->a0);
else else
out->value = sbi_ngets((char *)regs->a1, regs->a0); *out_val = sbi_ngets((char *)regs->a1, regs->a0);
sbi_hart_unmap_saddr();
return 0; return 0;
case SBI_EXT_DBCN_CONSOLE_WRITE_BYTE: case SBI_EXT_DBCN_CONSOLE_WRITE_BYTE:
sbi_putc(regs->a0); sbi_putc(regs->a0);


@@ -17,8 +17,9 @@
#include <sbi/riscv_asm.h> #include <sbi/riscv_asm.h>
static int sbi_ecall_hsm_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_hsm_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr(); struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
@@ -46,7 +47,7 @@ static int sbi_ecall_hsm_handler(unsigned long extid, unsigned long funcid,
} }
if (ret >= 0) { if (ret >= 0) {
out->value = ret; *out_val = ret;
ret = 0; ret = 0;
} }


@@ -15,8 +15,9 @@
#include <sbi/sbi_ipi.h> #include <sbi/sbi_ipi.h>
static int sbi_ecall_ipi_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_ipi_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;


@@ -24,7 +24,7 @@
#include <sbi/sbi_unpriv.h> #include <sbi/sbi_unpriv.h>
#include <sbi/sbi_hart.h> #include <sbi/sbi_hart.h>
static bool sbi_load_hart_mask_unpriv(ulong *pmask, ulong *hmask, static int sbi_load_hart_mask_unpriv(ulong *pmask, ulong *hmask,
struct sbi_trap_info *uptrap) struct sbi_trap_info *uptrap)
{ {
ulong mask = 0; ulong mask = 0;
@@ -32,24 +32,24 @@ static bool sbi_load_hart_mask_unpriv(ulong *pmask, ulong *hmask,
if (pmask) { if (pmask) {
mask = sbi_load_ulong(pmask, uptrap); mask = sbi_load_ulong(pmask, uptrap);
if (uptrap->cause) if (uptrap->cause)
return false; return SBI_ETRAP;
} else { } else {
sbi_hsm_hart_interruptible_mask(sbi_domain_thishart_ptr(), sbi_hsm_hart_interruptible_mask(sbi_domain_thishart_ptr(),
0, &mask); 0, &mask);
} }
*hmask = mask; *hmask = mask;
return true; return 0;
} }
static int sbi_ecall_legacy_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_legacy_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;
struct sbi_tlb_info tlb_info; struct sbi_tlb_info tlb_info;
u32 source_hart = current_hartid(); u32 source_hart = current_hartid();
struct sbi_trap_info trap = {0};
ulong hmask = 0; ulong hmask = 0;
switch (extid) { switch (extid) {
@@ -70,51 +70,40 @@ static int sbi_ecall_legacy_handler(unsigned long extid, unsigned long funcid,
sbi_ipi_clear_smode(); sbi_ipi_clear_smode();
break; break;
case SBI_EXT_0_1_SEND_IPI: case SBI_EXT_0_1_SEND_IPI:
if (sbi_load_hart_mask_unpriv((ulong *)regs->a0, ret = sbi_load_hart_mask_unpriv((ulong *)regs->a0,
&hmask, &trap)) { &hmask, out_trap);
if (ret != SBI_ETRAP)
ret = sbi_ipi_send_smode(hmask, 0); ret = sbi_ipi_send_smode(hmask, 0);
} else {
trap.epc = regs->mepc;
sbi_trap_redirect(regs, &trap);
out->skip_regs_update = true;
}
break; break;
case SBI_EXT_0_1_REMOTE_FENCE_I: case SBI_EXT_0_1_REMOTE_FENCE_I:
if (sbi_load_hart_mask_unpriv((ulong *)regs->a0, ret = sbi_load_hart_mask_unpriv((ulong *)regs->a0,
&hmask, &trap)) { &hmask, out_trap);
if (ret != SBI_ETRAP) {
SBI_TLB_INFO_INIT(&tlb_info, 0, 0, 0, 0, SBI_TLB_INFO_INIT(&tlb_info, 0, 0, 0, 0,
SBI_TLB_FENCE_I, source_hart); sbi_tlb_local_fence_i,
source_hart);
ret = sbi_tlb_request(hmask, 0, &tlb_info); ret = sbi_tlb_request(hmask, 0, &tlb_info);
} else {
trap.epc = regs->mepc;
sbi_trap_redirect(regs, &trap);
out->skip_regs_update = true;
} }
break; break;
case SBI_EXT_0_1_REMOTE_SFENCE_VMA: case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
if (sbi_load_hart_mask_unpriv((ulong *)regs->a0, ret = sbi_load_hart_mask_unpriv((ulong *)regs->a0,
&hmask, &trap)) { &hmask, out_trap);
if (ret != SBI_ETRAP) {
SBI_TLB_INFO_INIT(&tlb_info, regs->a1, regs->a2, 0, 0, SBI_TLB_INFO_INIT(&tlb_info, regs->a1, regs->a2, 0, 0,
SBI_TLB_SFENCE_VMA, source_hart); sbi_tlb_local_sfence_vma,
source_hart);
ret = sbi_tlb_request(hmask, 0, &tlb_info); ret = sbi_tlb_request(hmask, 0, &tlb_info);
} else {
trap.epc = regs->mepc;
sbi_trap_redirect(regs, &trap);
out->skip_regs_update = true;
} }
break; break;
case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID: case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
if (sbi_load_hart_mask_unpriv((ulong *)regs->a0, ret = sbi_load_hart_mask_unpriv((ulong *)regs->a0,
&hmask, &trap)) { &hmask, out_trap);
if (ret != SBI_ETRAP) {
SBI_TLB_INFO_INIT(&tlb_info, regs->a1, SBI_TLB_INFO_INIT(&tlb_info, regs->a1,
regs->a2, regs->a3, 0, regs->a2, regs->a3, 0,
SBI_TLB_SFENCE_VMA_ASID, sbi_tlb_local_sfence_vma_asid,
source_hart); source_hart);
ret = sbi_tlb_request(hmask, 0, &tlb_info); ret = sbi_tlb_request(hmask, 0, &tlb_info);
} else {
trap.epc = regs->mepc;
sbi_trap_redirect(regs, &trap);
out->skip_regs_update = true;
} }
break; break;
case SBI_EXT_0_1_SHUTDOWN: case SBI_EXT_0_1_SHUTDOWN:
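The two variants of `sbi_load_hart_mask_unpriv()` in this hunk differ in how a faulting unprivileged load is reported (boolean result vs. an `SBI_ETRAP` sentinel). A minimal sketch of the boolean-returning pattern, with a toy memory model standing in for the real unprivileged load (`demo_load` and the cause value `5` are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for sbi_load_ulong(): pretend odd addresses fault. */
static unsigned long demo_load(const unsigned long *p, unsigned long *cause)
{
	if ((unsigned long)p & 1UL) {
		*cause = 5;	/* hypothetical load-fault cause */
		return 0;
	}
	return *p;
}

/* NULL mask pointer means "use the default mask"; otherwise attempt the
 * load and report success through the return value, leaving fault
 * details in *cause for the caller to redirect. */
static bool load_hart_mask(const unsigned long *pmask,
			   unsigned long default_mask,
			   unsigned long *hmask, unsigned long *cause)
{
	unsigned long mask;

	*cause = 0;
	if (pmask) {
		mask = demo_load(pmask, cause);
		if (*cause)
			return false;	/* caller redirects the trap */
	} else {
		mask = default_mask;
	}
	*hmask = mask;
	return true;
}
```

With the boolean form, each `case` decides locally whether to issue the IPI/fence or redirect, rather than threading a magic error code back through the return value.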


@@ -18,8 +18,9 @@
#include <sbi/riscv_asm.h> #include <sbi/riscv_asm.h>
static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;
uint64_t temp; uint64_t temp;
@@ -28,12 +29,12 @@ static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid,
case SBI_EXT_PMU_NUM_COUNTERS: case SBI_EXT_PMU_NUM_COUNTERS:
ret = sbi_pmu_num_ctr(); ret = sbi_pmu_num_ctr();
if (ret >= 0) { if (ret >= 0) {
out->value = ret; *out_val = ret;
ret = 0; ret = 0;
} }
break; break;
case SBI_EXT_PMU_COUNTER_GET_INFO: case SBI_EXT_PMU_COUNTER_GET_INFO:
ret = sbi_pmu_ctr_get_info(regs->a0, &out->value); ret = sbi_pmu_ctr_get_info(regs->a0, out_val);
break; break;
case SBI_EXT_PMU_COUNTER_CFG_MATCH: case SBI_EXT_PMU_COUNTER_CFG_MATCH:
#if __riscv_xlen == 32 #if __riscv_xlen == 32
@@ -44,21 +45,21 @@ static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid,
ret = sbi_pmu_ctr_cfg_match(regs->a0, regs->a1, regs->a2, ret = sbi_pmu_ctr_cfg_match(regs->a0, regs->a1, regs->a2,
regs->a3, temp); regs->a3, temp);
if (ret >= 0) { if (ret >= 0) {
out->value = ret; *out_val = ret;
ret = 0; ret = 0;
} }
break; break;
case SBI_EXT_PMU_COUNTER_FW_READ: case SBI_EXT_PMU_COUNTER_FW_READ:
ret = sbi_pmu_ctr_fw_read(regs->a0, &temp); ret = sbi_pmu_ctr_fw_read(regs->a0, &temp);
out->value = temp; *out_val = temp;
break; break;
case SBI_EXT_PMU_COUNTER_FW_READ_HI: case SBI_EXT_PMU_COUNTER_FW_READ_HI:
#if __riscv_xlen == 32 #if __riscv_xlen == 32
ret = sbi_pmu_ctr_fw_read(regs->a0, &temp); ret = sbi_pmu_ctr_fw_read(regs->a0, &temp);
out->value = temp >> 32; *out_val = temp >> 32;
#else #else
out->value = 0; *out_val = 0;
#endif #endif
break; break;
case SBI_EXT_PMU_COUNTER_START: case SBI_EXT_PMU_COUNTER_START:
@@ -73,8 +74,6 @@ static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid,
case SBI_EXT_PMU_COUNTER_STOP: case SBI_EXT_PMU_COUNTER_STOP:
ret = sbi_pmu_ctr_stop(regs->a0, regs->a1, regs->a2); ret = sbi_pmu_ctr_stop(regs->a0, regs->a1, regs->a2);
break; break;
case SBI_EXT_PMU_SNAPSHOT_SET_SHMEM:
/* fallthrough as OpenSBI doesn't support snapshot yet */
default: default:
ret = SBI_ENOTSUPP; ret = SBI_ENOTSUPP;
} }


@@ -16,8 +16,9 @@
#include <sbi/sbi_tlb.h> #include <sbi/sbi_tlb.h>
static int sbi_ecall_rfence_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_rfence_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;
unsigned long vmid; unsigned long vmid;
@@ -32,41 +33,43 @@ static int sbi_ecall_rfence_handler(unsigned long extid, unsigned long funcid,
switch (funcid) { switch (funcid) {
case SBI_EXT_RFENCE_REMOTE_FENCE_I: case SBI_EXT_RFENCE_REMOTE_FENCE_I:
SBI_TLB_INFO_INIT(&tlb_info, 0, 0, 0, 0, SBI_TLB_INFO_INIT(&tlb_info, 0, 0, 0, 0,
SBI_TLB_FENCE_I, source_hart); sbi_tlb_local_fence_i, source_hart);
ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info); ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info);
break; break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA: case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, 0, SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, 0,
SBI_TLB_HFENCE_GVMA, source_hart); sbi_tlb_local_hfence_gvma, source_hart);
ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info); ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info);
break; break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID: case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID:
SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, regs->a4, SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, regs->a4,
SBI_TLB_HFENCE_GVMA_VMID, source_hart); sbi_tlb_local_hfence_gvma_vmid,
source_hart);
ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info); ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info);
break; break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA: case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA:
vmid = (csr_read(CSR_HGATP) & HGATP_VMID_MASK); vmid = (csr_read(CSR_HGATP) & HGATP_VMID_MASK);
vmid = vmid >> HGATP_VMID_SHIFT; vmid = vmid >> HGATP_VMID_SHIFT;
SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, vmid, SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, vmid,
SBI_TLB_HFENCE_VVMA, source_hart); sbi_tlb_local_hfence_vvma, source_hart);
ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info); ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info);
break; break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID: case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:
vmid = (csr_read(CSR_HGATP) & HGATP_VMID_MASK); vmid = (csr_read(CSR_HGATP) & HGATP_VMID_MASK);
vmid = vmid >> HGATP_VMID_SHIFT; vmid = vmid >> HGATP_VMID_SHIFT;
SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, regs->a4, SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, regs->a4,
vmid, SBI_TLB_HFENCE_VVMA_ASID, source_hart); vmid, sbi_tlb_local_hfence_vvma_asid,
source_hart);
ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info); ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info);
break; break;
case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA: case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, 0, SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, 0, 0,
SBI_TLB_SFENCE_VMA, source_hart); sbi_tlb_local_sfence_vma, source_hart);
ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info); ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info);
break; break;
case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID: case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, regs->a4, 0, SBI_TLB_INFO_INIT(&tlb_info, regs->a2, regs->a3, regs->a4, 0,
SBI_TLB_SFENCE_VMA_ASID, source_hart); sbi_tlb_local_sfence_vma_asid, source_hart);
ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info); ret = sbi_tlb_request(regs->a0, regs->a1, &tlb_info);
break; break;
default: default:


@@ -15,8 +15,9 @@
#include <sbi/sbi_system.h> #include <sbi/sbi_system.h>
static int sbi_ecall_srst_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_srst_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
if (funcid == SBI_EXT_SRST_RESET) { if (funcid == SBI_EXT_SRST_RESET) {
if ((((u32)-1U) <= ((u64)regs->a0)) || if ((((u32)-1U) <= ((u64)regs->a0)) ||


@@ -6,8 +6,9 @@
#include <sbi/sbi_system.h> #include <sbi/sbi_system.h>
static int sbi_ecall_susp_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_susp_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = SBI_ENOTSUPP; int ret = SBI_ENOTSUPP;
@@ -15,7 +16,7 @@ static int sbi_ecall_susp_handler(unsigned long extid, unsigned long funcid,
ret = sbi_system_suspend(regs->a0, regs->a1, regs->a2); ret = sbi_system_suspend(regs->a0, regs->a1, regs->a2);
if (ret >= 0) { if (ret >= 0) {
out->value = ret; *out_val = ret;
ret = 0; ret = 0;
} }


@@ -15,8 +15,9 @@
#include <sbi/sbi_timer.h> #include <sbi/sbi_timer.h>
static int sbi_ecall_time_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_time_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
int ret = 0; int ret = 0;


@@ -23,11 +23,13 @@ static inline unsigned long sbi_ecall_vendor_id(void)
} }
static int sbi_ecall_vendor_handler(unsigned long extid, unsigned long funcid, static int sbi_ecall_vendor_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs, const struct sbi_trap_regs *regs,
struct sbi_ecall_return *out) unsigned long *out_val,
struct sbi_trap_info *out_trap)
{ {
return sbi_platform_vendor_ext_provider(sbi_platform_thishart_ptr(), return sbi_platform_vendor_ext_provider(sbi_platform_thishart_ptr(),
funcid, regs, out); funcid, regs,
out_val, out_trap);
} }
struct sbi_ecall_extension ecall_vendor; struct sbi_ecall_extension ecall_vendor;


@@ -109,7 +109,7 @@ int sbi_emulate_csr_read(int csr_num, struct sbi_trap_regs *regs,
#define switchcase_hpm(__uref, __mref, __csr) \ #define switchcase_hpm(__uref, __mref, __csr) \
case __csr: \ case __csr: \
if (sbi_hart_mhpm_mask(scratch) & (1 << (__csr - __uref)))\ if ((sbi_hart_mhpm_count(scratch) + 3) <= (__csr - __uref))\
return SBI_ENOTSUPP; \ return SBI_ENOTSUPP; \
if (!hpm_allowed(__csr - __uref, prev_mode, virt)) \ if (!hpm_allowed(__csr - __uref, prev_mode, virt)) \
return SBI_ENOTSUPP; \ return SBI_ENOTSUPP; \


@@ -33,11 +33,11 @@ static unsigned long hart_features_offset;
static void mstatus_init(struct sbi_scratch *scratch) static void mstatus_init(struct sbi_scratch *scratch)
{ {
unsigned long menvcfg_val, mstatus_val = 0;
int cidx; int cidx;
unsigned long mstatus_val = 0; unsigned int num_mhpm = sbi_hart_mhpm_count(scratch);
unsigned int mhpm_mask = sbi_hart_mhpm_mask(scratch);
uint64_t mhpmevent_init_val = 0; uint64_t mhpmevent_init_val = 0;
uint64_t menvcfg_val, mstateen_val; uint64_t mstateen_val;
/* Enable FPU */ /* Enable FPU */
if (misa_extension('D') || misa_extension('F')) if (misa_extension('D') || misa_extension('F'))
@@ -69,14 +69,13 @@ static void mstatus_init(struct sbi_scratch *scratch)
/** /**
* The mhpmeventn[h] CSR should be initialized with interrupt disabled * The mhpmeventn[h] CSR should be initialized with interrupt disabled
* and inhibited running in M-mode during init. * and inhibited running in M-mode during init.
* To keep it simple, only contiguous mhpmcounters are supported as a
* platform with discontiguous mhpmcounters may not make much sense.
*/ */
mhpmevent_init_val |= (MHPMEVENT_OF | MHPMEVENT_MINH); mhpmevent_init_val |= (MHPMEVENT_OF | MHPMEVENT_MINH);
for (cidx = 0; cidx <= 28; cidx++) { for (cidx = 0; cidx < num_mhpm; cidx++) {
if (!(mhpm_mask & 1 << (cidx + 3)))
continue;
#if __riscv_xlen == 32 #if __riscv_xlen == 32
csr_write_num(CSR_MHPMEVENT3 + cidx, csr_write_num(CSR_MHPMEVENT3 + cidx, mhpmevent_init_val & 0xFFFFFFFF);
mhpmevent_init_val & 0xFFFFFFFF);
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF)) if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
csr_write_num(CSR_MHPMEVENT3H + cidx, csr_write_num(CSR_MHPMEVENT3H + cidx,
mhpmevent_init_val >> BITS_PER_LONG); mhpmevent_init_val >> BITS_PER_LONG);
@@ -108,40 +107,58 @@ static void mstatus_init(struct sbi_scratch *scratch)
if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12) { if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12) {
menvcfg_val = csr_read(CSR_MENVCFG); menvcfg_val = csr_read(CSR_MENVCFG);
#if __riscv_xlen == 32
menvcfg_val |= ((uint64_t)csr_read(CSR_MENVCFGH)) << 32;
#endif
#define __set_menvcfg_ext(__ext, __bits) \
if (sbi_hart_has_extension(scratch, __ext)) \
menvcfg_val |= __bits;
/* /*
* Enable access to extensions if they are present in the * Set menvcfg.CBZE == 1
* hardware or in the device tree. *
* If Zicboz extension is not available then writes to
* menvcfg.CBZE will be ignored because it is a WARL field.
*/ */
menvcfg_val |= ENVCFG_CBZE;
__set_menvcfg_ext(SBI_HART_EXT_ZICBOZ, ENVCFG_CBZE) /*
__set_menvcfg_ext(SBI_HART_EXT_ZICBOM, ENVCFG_CBCFE) * Set menvcfg.CBCFE == 1
__set_menvcfg_ext(SBI_HART_EXT_ZICBOM, *
ENVCFG_CBIE_INV << ENVCFG_CBIE_SHIFT) * If Zicbom extension is not available then writes to
* menvcfg.CBCFE will be ignored because it is a WARL field.
*/
menvcfg_val |= ENVCFG_CBCFE;
/*
* Set menvcfg.CBIE == 3
*
* If Zicbom extension is not available then writes to
* menvcfg.CBIE will be ignored because it is a WARL field.
*/
menvcfg_val |= ENVCFG_CBIE_INV << ENVCFG_CBIE_SHIFT;
/*
* Set menvcfg.PBMTE == 1 for RV64 or RV128
*
* If Svpbmt extension is not available then menvcfg.PBMTE
* will be read-only zero.
*/
#if __riscv_xlen > 32 #if __riscv_xlen > 32
__set_menvcfg_ext(SBI_HART_EXT_SVPBMT, ENVCFG_PBMTE) menvcfg_val |= ENVCFG_PBMTE;
#endif #endif
__set_menvcfg_ext(SBI_HART_EXT_SSTC, ENVCFG_STCE)
#undef __set_menvcfg_ext /*
* The spec doesn't explicitly describe the reset value of menvcfg.
* Enable access to stimecmp if sstc extension is present in the
* hardware.
*/
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSTC)) {
#if __riscv_xlen == 32
unsigned long menvcfgh_val;
menvcfgh_val = csr_read(CSR_MENVCFGH);
menvcfgh_val |= ENVCFGH_STCE;
csr_write(CSR_MENVCFGH, menvcfgh_val);
#else
menvcfg_val |= ENVCFG_STCE;
#endif
}
csr_write(CSR_MENVCFG, menvcfg_val); csr_write(CSR_MENVCFG, menvcfg_val);
#if __riscv_xlen == 32
csr_write(CSR_MENVCFGH, menvcfg_val >> 32);
#endif
/* Enable S-mode access to seed CSR */
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_ZKR)) {
csr_set(CSR_MSECCFG, MSECCFG_SSEED);
csr_clear(CSR_MSECCFG, MSECCFG_USEED);
}
} }
/* Disable all interrupts */ /* Disable all interrupts */
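The `__set_menvcfg_ext` macro in the hunk above ORs a WARL field into `menvcfg` only when the matching ISA extension was detected. A self-contained sketch of that pattern (the bit positions follow the RISC-V privileged spec, but `has_ext` and the `EXT_*` enum are stand-ins for OpenSBI's feature probing):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* menvcfg fields per the RISC-V privileged spec. */
#define ENVCFG_CBZE  (1ULL << 7)
#define ENVCFG_CBCFE (1ULL << 6)
#define ENVCFG_STCE  (1ULL << 63)

enum demo_ext { EXT_ZICBOZ, EXT_ZICBOM, EXT_SSTC, EXT_MAX };

static bool has_ext[EXT_MAX];	/* stand-in for hart feature probing */

/* Mirror of the __set_menvcfg_ext pattern: set a bit only when the
 * corresponding extension is present, instead of relying on WARL
 * behavior to discard writes on harts that lack it. */
static uint64_t build_menvcfg(uint64_t menvcfg_val)
{
#define __set_menvcfg_ext(__ext, __bits) \
	if (has_ext[__ext])              \
		menvcfg_val |= (__bits);

	__set_menvcfg_ext(EXT_ZICBOZ, ENVCFG_CBZE)
	__set_menvcfg_ext(EXT_ZICBOM, ENVCFG_CBCFE)
	__set_menvcfg_ext(EXT_SSTC, ENVCFG_STCE)
#undef __set_menvcfg_ext

	return menvcfg_val;
}
```

Gating on detected extensions also covers device-tree-declared features, whereas the unconditional-write-and-rely-on-WARL approach only works for fields the hardware itself hardwires to zero.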
@@ -227,12 +244,12 @@ void sbi_hart_delegation_dump(struct sbi_scratch *scratch,
prefix, suffix, csr_read(CSR_MEDELEG)); prefix, suffix, csr_read(CSR_MEDELEG));
} }
unsigned int sbi_hart_mhpm_mask(struct sbi_scratch *scratch) unsigned int sbi_hart_mhpm_count(struct sbi_scratch *scratch)
{ {
struct sbi_hart_features *hfeatures = struct sbi_hart_features *hfeatures =
sbi_scratch_offset_ptr(scratch, hart_features_offset); sbi_scratch_offset_ptr(scratch, hart_features_offset);
return hfeatures->mhpm_mask; return hfeatures->mhpm_count;
} }
unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch) unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch)
@@ -243,12 +260,12 @@ unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch)
return hfeatures->pmp_count; return hfeatures->pmp_count;
} }
unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch) unsigned long sbi_hart_pmp_granularity(struct sbi_scratch *scratch)
{ {
struct sbi_hart_features *hfeatures = struct sbi_hart_features *hfeatures =
sbi_scratch_offset_ptr(scratch, hart_features_offset); sbi_scratch_offset_ptr(scratch, hart_features_offset);
return hfeatures->pmp_log2gran; return hfeatures->pmp_gran;
} }
unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch) unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch)
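One side of this hunk tracks a contiguous counter count (`mhpm_count`), the other a bitmask of implemented counters (`mhpm_mask`), which lets platforms describe discontiguous `mhpmcounter` implementations. A small sketch of iterating such a mask, with bit positions matching the counter index (bit 3 for `mhpmcounter3` and up):

```c
#include <assert.h>

/* Count implemented hardware performance counters given a bitmask in
 * which bit N corresponds to mhpmcounterN (N = 3..31). */
static unsigned int count_hpm_counters(unsigned int mhpm_mask)
{
	unsigned int cidx, n = 0;

	for (cidx = 3; cidx < 32; cidx++)
		if (mhpm_mask & (1U << cidx))
			n++;
	return n;
}
```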
@@ -267,174 +284,20 @@ unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch)
return hfeatures->mhpm_bits; return hfeatures->mhpm_bits;
} }
/* int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
* Returns Smepmp flags for a given domain and region based on permissions.
*/
static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
struct sbi_domain *dom,
struct sbi_domain_memregion *reg)
{
unsigned int pmp_flags = 0;
if (SBI_DOMAIN_MEMREGION_IS_SHARED(reg->flags)) {
/* Read only for both M and SU modes */
if (SBI_DOMAIN_MEMREGION_IS_SUR_MR(reg->flags))
pmp_flags = (PMP_L | PMP_R | PMP_W | PMP_X);
/* Execute for SU but Read/Execute for M mode */
else if (SBI_DOMAIN_MEMREGION_IS_SUX_MRX(reg->flags))
/* locked region */
pmp_flags = (PMP_L | PMP_W | PMP_X);
/* Execute only for both M and SU modes */
else if (SBI_DOMAIN_MEMREGION_IS_SUX_MX(reg->flags))
pmp_flags = (PMP_L | PMP_W);
/* Read/Write for both M and SU modes */
else if (SBI_DOMAIN_MEMREGION_IS_SURW_MRW(reg->flags))
pmp_flags = (PMP_W | PMP_X);
/* Read only for SU mode but Read/Write for M mode */
else if (SBI_DOMAIN_MEMREGION_IS_SUR_MRW(reg->flags))
pmp_flags = (PMP_W);
} else if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
/*
* When smepmp is supported and used, M region cannot have RWX
* permissions on any region.
*/
if ((reg->flags & SBI_DOMAIN_MEMREGION_M_ACCESS_MASK)
== SBI_DOMAIN_MEMREGION_M_RWX) {
			sbi_printf("%s: M-mode only regions cannot have"
				   " RWX permissions\n", __func__);
return 0;
}
/* M-mode only access regions are always locked */
pmp_flags |= PMP_L;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_M_EXECUTABLE)
pmp_flags |= PMP_X;
} else if (SBI_DOMAIN_MEMREGION_SU_ONLY_ACCESS(reg->flags)) {
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
pmp_flags |= PMP_R;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
pmp_flags |= PMP_W;
if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
pmp_flags |= PMP_X;
}
return pmp_flags;
}
static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
struct sbi_domain *dom,
struct sbi_domain_memregion *reg,
unsigned int pmp_idx,
unsigned int pmp_flags,
unsigned int pmp_log2gran,
unsigned long pmp_addr_max)
{
unsigned long pmp_addr = reg->base >> PMP_SHIFT;
if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
pmp_set(pmp_idx, pmp_flags, reg->base, reg->order);
} else {
sbi_printf("Can not configure pmp for domain %s because"
" memory region address 0x%lx or size 0x%lx "
"is not in range.\n", dom->name, reg->base,
reg->order);
}
}
static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
unsigned int pmp_count,
unsigned int pmp_log2gran,
unsigned long pmp_addr_max)
{ {
struct sbi_domain_memregion *reg; struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr(); struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned int pmp_idx, pmp_flags; unsigned int pmp_idx = 0, pmp_flags, pmp_bits, pmp_gran_log2;
unsigned int pmp_count = sbi_hart_pmp_count(scratch);
unsigned long pmp_addr = 0, pmp_addr_max = 0;
/* if (!pmp_count)
* Set the RLB so that, we can write to PMP entries without
* enforcement even if some entries are locked.
*/
csr_set(CSR_MSECCFG, MSECCFG_RLB);
/* Disable the reserved entry */
pmp_disable(SBI_SMEPMP_RESV_ENTRY);
/* Program M-only regions when MML is not set. */
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
if (pmp_count <= pmp_idx)
break;
/* Skip shared and SU-only regions */
if (!SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
pmp_idx++;
continue;
}
pmp_flags = sbi_hart_get_smepmp_flags(scratch, dom, reg);
if (!pmp_flags)
return 0; return 0;
sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags, pmp_gran_log2 = log2roundup(sbi_hart_pmp_granularity(scratch));
pmp_log2gran, pmp_addr_max); pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
} pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
/* Set the MML to enforce new encoding */
csr_set(CSR_MSECCFG, MSECCFG_MML);
/* Program shared and SU-only regions */
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
if (pmp_count <= pmp_idx)
break;
/* Skip M-only regions */
if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
pmp_idx++;
continue;
}
pmp_flags = sbi_hart_get_smepmp_flags(scratch, dom, reg);
if (!pmp_flags)
return 0;
sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
pmp_log2gran, pmp_addr_max);
}
/*
* All entries are programmed.
* Keep the RLB bit so that dynamic mappings can be done.
*/
return 0;
}
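The key invariant in `sbi_hart_smepmp_configure()` above is ordering: M-mode-only regions are programmed while `mseccfg.MML` is still clear, `MML` is then set to enforce the new encoding, and only afterwards are shared and SU-only regions programmed. A toy model that records the operation order (the `OP_*` names and `struct region` are illustrative, not OpenSBI types):

```c
#include <assert.h>
#include <stdbool.h>

enum { OP_M_REGION, OP_SET_MML, OP_SU_REGION };

struct region {
	bool m_only;
};

/* Two-pass Smepmp programming order: M-only regions, then MML, then
 * shared/SU regions. Operations are appended to ops[] for inspection. */
static int program_smepmp(const struct region *rgn, int n,
			  int *ops, int max_ops)
{
	int i, cnt = 0;

	for (i = 0; i < n; i++)		/* pass 1: M-only regions */
		if (rgn[i].m_only && cnt < max_ops)
			ops[cnt++] = OP_M_REGION;

	if (cnt < max_ops)		/* enforce the new encoding */
		ops[cnt++] = OP_SET_MML;

	for (i = 0; i < n; i++)		/* pass 2: shared/SU regions */
		if (!rgn[i].m_only && cnt < max_ops)
			ops[cnt++] = OP_SU_REGION;

	return cnt;
}
```

Programming M-only entries before `MML` matters because once `MML` is set, the rules for creating new locked M-mode rules change; the retained `RLB` bit is what keeps later dynamic mappings possible.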
static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
unsigned int pmp_count,
unsigned int pmp_log2gran,
unsigned long pmp_addr_max)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned int pmp_idx = 0;
unsigned int pmp_flags;
unsigned long pmp_addr;
sbi_domain_for_each_memregion(dom, reg) { sbi_domain_for_each_memregion(dom, reg) {
if (pmp_count <= pmp_idx) if (pmp_count <= pmp_idx)
@@ -443,8 +306,8 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
pmp_flags = 0; pmp_flags = 0;
/* /*
* If permissions are to be enforced for all modes on * If permissions are to be enforced for all modes on this
* this region, the lock bit should be set. * region, the lock bit should be set.
*/ */
if (reg->flags & SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS) if (reg->flags & SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS)
pmp_flags |= PMP_L; pmp_flags |= PMP_L;
@@ -457,83 +320,15 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
pmp_flags |= PMP_X; pmp_flags |= PMP_X;
pmp_addr = reg->base >> PMP_SHIFT; pmp_addr = reg->base >> PMP_SHIFT;
if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) { if (pmp_gran_log2 <= reg->order && pmp_addr < pmp_addr_max)
pmp_set(pmp_idx++, pmp_flags, reg->base, reg->order); pmp_set(pmp_idx++, pmp_flags, reg->base, reg->order);
} else { else {
sbi_printf("Can not configure pmp for domain %s because" sbi_printf("Can not configure pmp for domain %s", dom->name);
" memory region address 0x%lx or size 0x%lx " sbi_printf(" because memory region address %lx or size %lx is not in range\n",
"is not in range.\n", dom->name, reg->base, reg->base, reg->order);
reg->order);
} }
} }
return 0;
}
int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
{
/* shared R/W access for M and S/U mode */
unsigned int pmp_flags = (PMP_W | PMP_X);
unsigned long order, base = 0;
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
/* If Smepmp is not supported no special mapping is required */
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
return SBI_ENOSPC;
for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
order <= __riscv_xlen; order++) {
if (order < __riscv_xlen) {
base = addr & ~((1UL << order) - 1UL);
if ((base <= addr) &&
(addr < (base + (1UL << order))) &&
(base <= (addr + size - 1UL)) &&
((addr + size - 1UL) < (base + (1UL << order))))
break;
} else {
return SBI_EFAIL;
}
}
pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
return SBI_OK;
}
int sbi_hart_unmap_saddr(void)
{
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
}
int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
{
int rc;
unsigned int pmp_bits, pmp_log2gran;
unsigned int pmp_count = sbi_hart_pmp_count(scratch);
unsigned long pmp_addr_max;
if (!pmp_count)
return 0;
pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
rc = sbi_hart_smepmp_configure(scratch, pmp_count,
pmp_log2gran, pmp_addr_max);
else
rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
pmp_log2gran, pmp_addr_max);
/* /*
* As per section 3.7.2 of privileged specification v1.12, * As per section 3.7.2 of privileged specification v1.12,
* virtual address translations can be speculatively performed * virtual address translations can be speculatively performed
@@ -555,7 +350,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
__sbi_hfence_gvma_all(); __sbi_hfence_gvma_all();
} }
return rc; return 0;
} }
int sbi_hart_priv_version(struct sbi_scratch *scratch) int sbi_hart_priv_version(struct sbi_scratch *scratch)
@@ -597,9 +392,9 @@ static inline void __sbi_hart_update_extension(
bool enable) bool enable)
{ {
if (enable) if (enable)
__set_bit(ext, hfeatures->extensions); hfeatures->extensions |= BIT(ext);
else else
__clear_bit(ext, hfeatures->extensions); hfeatures->extensions &= ~BIT(ext);
} }
/** /**
@@ -632,32 +427,38 @@ bool sbi_hart_has_extension(struct sbi_scratch *scratch,
struct sbi_hart_features *hfeatures = struct sbi_hart_features *hfeatures =
sbi_scratch_offset_ptr(scratch, hart_features_offset); sbi_scratch_offset_ptr(scratch, hart_features_offset);
if (__test_bit(ext, hfeatures->extensions)) if (hfeatures->extensions & BIT(ext))
return true; return true;
else else
return false; return false;
} }
#define __SBI_HART_EXT_DATA(_name, _id) { \ static inline char *sbi_hart_extension_id2string(int ext)
.name = #_name, \ {
.id = _id, \ char *estr = NULL;
}
const struct sbi_hart_ext_data sbi_hart_ext[] = { switch (ext) {
__SBI_HART_EXT_DATA(smaia, SBI_HART_EXT_SMAIA), case SBI_HART_EXT_SSCOFPMF:
__SBI_HART_EXT_DATA(smepmp, SBI_HART_EXT_SMEPMP), estr = "sscofpmf";
__SBI_HART_EXT_DATA(smstateen, SBI_HART_EXT_SMSTATEEN), break;
__SBI_HART_EXT_DATA(sscofpmf, SBI_HART_EXT_SSCOFPMF), case SBI_HART_EXT_TIME:
__SBI_HART_EXT_DATA(sstc, SBI_HART_EXT_SSTC), estr = "time";
__SBI_HART_EXT_DATA(zicntr, SBI_HART_EXT_ZICNTR), break;
__SBI_HART_EXT_DATA(zihpm, SBI_HART_EXT_ZIHPM), case SBI_HART_EXT_SMAIA:
__SBI_HART_EXT_DATA(zkr, SBI_HART_EXT_ZKR), estr = "smaia";
__SBI_HART_EXT_DATA(smcntrpmf, SBI_HART_EXT_SMCNTRPMF), break;
__SBI_HART_EXT_DATA(xandespmu, SBI_HART_EXT_XANDESPMU), case SBI_HART_EXT_SSTC:
__SBI_HART_EXT_DATA(zicboz, SBI_HART_EXT_ZICBOZ), estr = "sstc";
__SBI_HART_EXT_DATA(zicbom, SBI_HART_EXT_ZICBOM), break;
__SBI_HART_EXT_DATA(svpbmt, SBI_HART_EXT_SVPBMT), case SBI_HART_EXT_SMSTATEEN:
}; estr = "smstateen";
break;
default:
break;
}
return estr;
}
/** /**
* Get the hart extensions in string format * Get the hart extensions in string format
@@ -674,18 +475,30 @@ void sbi_hart_get_extensions_str(struct sbi_scratch *scratch,
struct sbi_hart_features *hfeatures = struct sbi_hart_features *hfeatures =
sbi_scratch_offset_ptr(scratch, hart_features_offset); sbi_scratch_offset_ptr(scratch, hart_features_offset);
int offset = 0, ext = 0; int offset = 0, ext = 0;
char *temp;
if (!extensions_str || nestr <= 0) if (!extensions_str || nestr <= 0)
return; return;
sbi_memset(extensions_str, 0, nestr); sbi_memset(extensions_str, 0, nestr);
for_each_set_bit(ext, hfeatures->extensions, SBI_HART_EXT_MAX) { if (!hfeatures->extensions)
goto done;
do {
if (hfeatures->extensions & BIT(ext)) {
temp = sbi_hart_extension_id2string(ext);
if (temp) {
sbi_snprintf(extensions_str + offset, sbi_snprintf(extensions_str + offset,
nestr - offset, nestr - offset,
"%s,", sbi_hart_ext[ext].name); "%s,", temp);
offset = offset + sbi_strlen(sbi_hart_ext[ext].name) + 1; offset = offset + sbi_strlen(temp) + 1;
}
} }
ext++;
} while (ext < SBI_HART_EXT_MAX);
done:
if (offset) if (offset)
extensions_str[offset - 1] = '\0'; extensions_str[offset - 1] = '\0';
else else
@@ -711,7 +524,7 @@ static unsigned long hart_pmp_get_allowed_addr(void)
return val; return val;
} }
static int hart_mhpm_get_allowed_bits(void) static int hart_pmu_get_allowed_bits(void)
{ {
unsigned long val = ~(0UL); unsigned long val = ~(0UL);
struct sbi_trap_info trap = {0}; struct sbi_trap_info trap = {0};
@@ -748,7 +561,6 @@ static int hart_detect_features(struct sbi_scratch *scratch)
struct sbi_hart_features *hfeatures = struct sbi_hart_features *hfeatures =
sbi_scratch_offset_ptr(scratch, hart_features_offset); sbi_scratch_offset_ptr(scratch, hart_features_offset);
unsigned long val, oldval; unsigned long val, oldval;
bool has_zicntr = false;
int rc; int rc;
/* If hart features already detected then do nothing */ /* If hart features already detected then do nothing */
@@ -756,32 +568,9 @@ static int hart_detect_features(struct sbi_scratch *scratch)
return 0; return 0;
/* Clear hart features */ /* Clear hart features */
sbi_memset(hfeatures->extensions, 0, sizeof(hfeatures->extensions)); hfeatures->extensions = 0;
hfeatures->pmp_count = 0; hfeatures->pmp_count = 0;
hfeatures->mhpm_mask = 0; hfeatures->mhpm_count = 0;
hfeatures->priv_version = SBI_HART_PRIV_VER_UNKNOWN;
#define __check_hpm_csr(__csr, __mask) \
oldval = csr_read_allowed(__csr, (ulong)&trap); \
if (!trap.cause) { \
csr_write_allowed(__csr, (ulong)&trap, 1UL); \
if (!trap.cause && csr_swap(__csr, oldval) == 1UL) { \
(hfeatures->__mask) |= 1 << (__csr - CSR_MCYCLE); \
} \
}
#define __check_hpm_csr_2(__csr, __mask) \
__check_hpm_csr(__csr + 0, __mask) \
__check_hpm_csr(__csr + 1, __mask)
#define __check_hpm_csr_4(__csr, __mask) \
__check_hpm_csr_2(__csr + 0, __mask) \
__check_hpm_csr_2(__csr + 2, __mask)
#define __check_hpm_csr_8(__csr, __mask) \
__check_hpm_csr_4(__csr + 0, __mask) \
__check_hpm_csr_4(__csr + 4, __mask)
#define __check_hpm_csr_16(__csr, __mask) \
__check_hpm_csr_8(__csr + 0, __mask) \
__check_hpm_csr_8(__csr + 8, __mask)
#define __check_csr(__csr, __rdonly, __wrval, __field, __skip) \ #define __check_csr(__csr, __rdonly, __wrval, __field, __skip) \
oldval = csr_read_allowed(__csr, (ulong)&trap); \ oldval = csr_read_allowed(__csr, (ulong)&trap); \
@@ -827,23 +616,28 @@ static int hart_detect_features(struct sbi_scratch *scratch)
*/ */
val = hart_pmp_get_allowed_addr(); val = hart_pmp_get_allowed_addr();
if (val) { if (val) {
hfeatures->pmp_log2gran = sbi_ffs(val) + 2; hfeatures->pmp_gran = 1 << (sbi_ffs(val) + 2);
hfeatures->pmp_addr_bits = sbi_fls(val) + 1; hfeatures->pmp_addr_bits = sbi_fls(val) + 1;
/* Detect number of PMP regions. At least PMPADDR0 should be implemented*/ /* Detect number of PMP regions. At least PMPADDR0 should be implemented*/
__check_csr_64(CSR_PMPADDR0, 0, val, pmp_count, __pmp_skip); __check_csr_64(CSR_PMPADDR0, 0, val, pmp_count, __pmp_skip);
} }
__pmp_skip: __pmp_skip:
/* Detect number of MHPM counters */ /* Detect number of MHPM counters */
__check_hpm_csr(CSR_MHPMCOUNTER3, mhpm_mask); __check_csr(CSR_MHPMCOUNTER3, 0, 1UL, mhpm_count, __mhpm_skip);
hfeatures->mhpm_bits = hart_mhpm_get_allowed_bits(); hfeatures->mhpm_bits = hart_pmu_get_allowed_bits();
__check_hpm_csr_4(CSR_MHPMCOUNTER4, mhpm_mask);
__check_hpm_csr_8(CSR_MHPMCOUNTER8, mhpm_mask); __check_csr_4(CSR_MHPMCOUNTER4, 0, 1UL, mhpm_count, __mhpm_skip);
__check_hpm_csr_16(CSR_MHPMCOUNTER16, mhpm_mask); __check_csr_8(CSR_MHPMCOUNTER8, 0, 1UL, mhpm_count, __mhpm_skip);
__check_csr_16(CSR_MHPMCOUNTER16, 0, 1UL, mhpm_count, __mhpm_skip);
/** /**
* No need to check for MHPMCOUNTERH for RV32 as they are expected to be * No need to check for MHPMCOUNTERH for RV32 as they are expected to be
* implemented if MHPMCOUNTER is implemented. * implemented if MHPMCOUNTER is implemented.
*/ */
__mhpm_skip:
#undef __check_csr_64 #undef __check_csr_64
#undef __check_csr_32 #undef __check_csr_32
#undef __check_csr_16 #undef __check_csr_16
@@ -852,57 +646,59 @@ __pmp_skip:
#undef __check_csr_2 #undef __check_csr_2
#undef __check_csr #undef __check_csr
#define __check_priv(__csr, __base_priv, __priv) \
val = csr_read_allowed(__csr, (ulong)&trap); \
if (!trap.cause && (hfeatures->priv_version >= __base_priv)) { \
hfeatures->priv_version = __priv; \
}
/* Detect if hart supports Priv v1.10 */ /* Detect if hart supports Priv v1.10 */
__check_priv(CSR_MCOUNTEREN, val = csr_read_allowed(CSR_MCOUNTEREN, (unsigned long)&trap);
SBI_HART_PRIV_VER_UNKNOWN, SBI_HART_PRIV_VER_1_10); if (!trap.cause)
hfeatures->priv_version = SBI_HART_PRIV_VER_1_10;
/* Detect if hart supports Priv v1.11 */ /* Detect if hart supports Priv v1.11 */
__check_priv(CSR_MCOUNTINHIBIT, val = csr_read_allowed(CSR_MCOUNTINHIBIT, (unsigned long)&trap);
SBI_HART_PRIV_VER_1_10, SBI_HART_PRIV_VER_1_11); if (!trap.cause &&
(hfeatures->priv_version >= SBI_HART_PRIV_VER_1_10))
hfeatures->priv_version = SBI_HART_PRIV_VER_1_11;
/* Detect if hart supports Priv v1.12 */ /* Detect if hart supports Priv v1.12 */
__check_priv(CSR_MENVCFG, csr_read_allowed(CSR_MENVCFG, (unsigned long)&trap);
SBI_HART_PRIV_VER_1_11, SBI_HART_PRIV_VER_1_12); if (!trap.cause &&
(hfeatures->priv_version >= SBI_HART_PRIV_VER_1_11))
#undef __check_priv_csr hfeatures->priv_version = SBI_HART_PRIV_VER_1_12;
#define __check_ext_csr(__base_priv, __csr, __ext) \
if (hfeatures->priv_version >= __base_priv) { \
csr_read_allowed(__csr, (ulong)&trap); \
if (!trap.cause) \
__sbi_hart_update_extension(hfeatures, \
__ext, true); \
}
/* Counter overflow/filtering is not useful without mcounter/inhibit */ /* Counter overflow/filtering is not useful without mcounter/inhibit */
if (hfeatures->priv_version >= SBI_HART_PRIV_VER_1_12) {
/* Detect if hart supports sscofpmf */ /* Detect if hart supports sscofpmf */
__check_ext_csr(SBI_HART_PRIV_VER_1_11, csr_read_allowed(CSR_SCOUNTOVF, (unsigned long)&trap);
CSR_SCOUNTOVF, SBI_HART_EXT_SSCOFPMF); if (!trap.cause)
__sbi_hart_update_extension(hfeatures,
SBI_HART_EXT_SSCOFPMF, true);
}
/* Detect if hart supports time CSR */ /* Detect if hart supports time CSR */
__check_ext_csr(SBI_HART_PRIV_VER_UNKNOWN, csr_read_allowed(CSR_TIME, (unsigned long)&trap);
CSR_TIME, SBI_HART_EXT_ZICNTR); if (!trap.cause)
__sbi_hart_update_extension(hfeatures,
SBI_HART_EXT_TIME, true);
/* Detect if hart has AIA local interrupt CSRs */ /* Detect if hart has AIA local interrupt CSRs */
__check_ext_csr(SBI_HART_PRIV_VER_UNKNOWN, csr_read_allowed(CSR_MTOPI, (unsigned long)&trap);
CSR_MTOPI, SBI_HART_EXT_SMAIA); if (!trap.cause)
__sbi_hart_update_extension(hfeatures,
SBI_HART_EXT_SMAIA, true);
/* Detect if hart supports stimecmp CSR(Sstc extension) */ /* Detect if hart supports stimecmp CSR(Sstc extension) */
__check_ext_csr(SBI_HART_PRIV_VER_1_12, if (hfeatures->priv_version >= SBI_HART_PRIV_VER_1_12) {
CSR_STIMECMP, SBI_HART_EXT_SSTC); csr_read_allowed(CSR_STIMECMP, (unsigned long)&trap);
if (!trap.cause)
__sbi_hart_update_extension(hfeatures,
SBI_HART_EXT_SSTC, true);
}
/* Detect if hart supports mstateen CSRs */ /* Detect if hart supports mstateen CSRs */
__check_ext_csr(SBI_HART_PRIV_VER_1_12, if (hfeatures->priv_version >= SBI_HART_PRIV_VER_1_12) {
CSR_MSTATEEN0, SBI_HART_EXT_SMSTATEEN); val = csr_read_allowed(CSR_MSTATEEN0, (unsigned long)&trap);
/* Detect if hart supports smcntrpmf */ if (!trap.cause)
__check_ext_csr(SBI_HART_PRIV_VER_1_12, __sbi_hart_update_extension(hfeatures,
CSR_MCYCLECFG, SBI_HART_EXT_SMCNTRPMF); SBI_HART_EXT_SMSTATEEN, true);
}
#undef __check_ext_csr
/* Save trap based detection of Zicntr */
has_zicntr = sbi_hart_has_extension(scratch, SBI_HART_EXT_ZICNTR);
/* Let platform populate extensions */ /* Let platform populate extensions */
rc = sbi_platform_extensions_init(sbi_platform_thishart_ptr(), rc = sbi_platform_extensions_init(sbi_platform_thishart_ptr(),
@@ -910,28 +706,9 @@ __pmp_skip:
if (rc) if (rc)
return rc; return rc;
/* Zicntr should only be detected using traps */
__sbi_hart_update_extension(hfeatures, SBI_HART_EXT_ZICNTR,
has_zicntr);
/* Extensions implied by other extensions and features */
if (hfeatures->mhpm_mask)
__sbi_hart_update_extension(hfeatures,
SBI_HART_EXT_ZIHPM, true);
/* Mark hart feature detection done */ /* Mark hart feature detection done */
hfeatures->detected = true; hfeatures->detected = true;
/*
* On platforms with Smepmp, the previous booting stage must
* enter OpenSBI with mseccfg.MML == 0. This allows OpenSBI
* to configure it's own M-mode only regions without depending
* on the previous booting stage.
*/
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP) &&
(csr_read(CSR_MSECCFG) & MSECCFG_MML))
return SBI_EILL;
return 0; return 0;
} }


@@ -14,6 +14,8 @@
 #include <sbi/sbi_scratch.h>
 #include <sbi/sbi_string.h>
+/* Alignment of heap base address and size */
+#define HEAP_BASE_ALIGN		1024
 /* Minimum size and alignment of heap allocations */
 #define HEAP_ALLOC_ALIGN	64
 #define HEAP_HOUSEKEEPING_FACTOR	16


@@ -115,23 +115,24 @@ int sbi_hsm_hart_interruptible_mask(const struct sbi_domain *dom,
 {
 	int hstate;
 	ulong i, hmask, dmask;
+	ulong hend = sbi_scratch_last_hartid() + 1;
 	*out_hmask = 0;
-	if (!sbi_hartid_valid(hbase))
+	if (hend <= hbase)
 		return SBI_EINVAL;
+	if (BITS_PER_LONG < (hend - hbase))
+		hend = hbase + BITS_PER_LONG;
 	dmask = sbi_domain_get_assigned_hartmask(dom, hbase);
-	for (i = 0; i < BITS_PER_LONG; i++) {
-		hmask = 1UL << i;
-		if (!(dmask & hmask))
-			continue;
-		hstate = __sbi_hsm_hart_get_state(hbase + i);
+	for (i = hbase; i < hend; i++) {
+		hmask = 1UL << (i - hbase);
+		if (dmask & hmask) {
+			hstate = __sbi_hsm_hart_get_state(i);
 			if (hstate == SBI_HSM_STATE_STARTED ||
-			    hstate == SBI_HSM_STATE_SUSPENDED ||
-			    hstate == SBI_HSM_STATE_RESUME_PENDING)
+			    hstate == SBI_HSM_STATE_SUSPENDED)
 				*out_hmask |= hmask;
 		}
+	}
 	return 0;
 }
@@ -248,15 +249,15 @@ int sbi_hsm_init(struct sbi_scratch *scratch, u32 hartid, bool cold_boot)
 		return SBI_ENOMEM;
 	/* Initialize hart state data for every hart */
-	for (i = 0; i <= sbi_scratch_last_hartindex(); i++) {
-		rscratch = sbi_hartindex_to_scratch(i);
+	for (i = 0; i <= sbi_scratch_last_hartid(); i++) {
+		rscratch = sbi_hartid_to_scratch(i);
 		if (!rscratch)
 			continue;
 		hdata = sbi_scratch_offset_ptr(rscratch,
 					       hart_data_offset);
 		ATOMIC_INIT(&hdata->state,
-			    (sbi_hartindex_to_hartid(i) == hartid) ?
+			    (i == hartid) ?
 			    SBI_HSM_STATE_START_PENDING :
 			    SBI_HSM_STATE_STOPPED);
 		ATOMIC_INIT(&hdata->start_ticket, 0);
@@ -355,7 +356,7 @@ int sbi_hsm_hart_start(struct sbi_scratch *scratch,
 	    (hsm_device_has_hart_secondary_boot() && !init_count)) {
 		rc = hsm_device_hart_start(hartid, scratch->warmboot_addr);
 	} else {
-		rc = sbi_ipi_raw_send(sbi_hartid_to_hartindex(hartid));
+		rc = sbi_ipi_raw_send(hartid);
 	}
 	if (!rc)


@@ -176,13 +176,12 @@ static void sbi_boot_print_hart(struct sbi_scratch *scratch, u32 hartid)
 	sbi_printf("Boot HART ISA Extensions  : %s\n", str);
 	sbi_printf("Boot HART PMP Count       : %d\n",
 		   sbi_hart_pmp_count(scratch));
-	sbi_printf("Boot HART PMP Granularity : %u bits\n",
-		   sbi_hart_pmp_log2gran(scratch));
+	sbi_printf("Boot HART PMP Granularity : %lu\n",
+		   sbi_hart_pmp_granularity(scratch));
 	sbi_printf("Boot HART PMP Address Bits: %d\n",
 		   sbi_hart_pmp_addrbits(scratch));
-	sbi_printf("Boot HART MHPM Info       : %lu (0x%08x)\n",
-		   sbi_popcount(sbi_hart_mhpm_mask(scratch)),
-		   sbi_hart_mhpm_mask(scratch));
+	sbi_printf("Boot HART MHPM Count      : %d\n",
+		   sbi_hart_mhpm_count(scratch));
 	sbi_hart_delegation_dump(scratch, "Boot HART ", "         ");
 }
@@ -195,9 +194,6 @@ static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
 {
 	unsigned long saved_mie, cmip;
-	if (__smp_load_acquire(&coldboot_done))
-		return;
 	/* Save MIE CSR */
 	saved_mie = csr_read(CSR_MIE);
@@ -208,7 +204,7 @@ static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
 	spin_lock(&coldboot_lock);
 	/* Mark current HART as waiting */
-	sbi_hartmask_set_hartid(hartid, &coldboot_wait_hmask);
+	sbi_hartmask_set_hart(hartid, &coldboot_wait_hmask);
 	/* Release coldboot lock */
 	spin_unlock(&coldboot_lock);
@@ -225,7 +221,7 @@ static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
 	spin_lock(&coldboot_lock);
 	/* Unmark current HART as waiting */
-	sbi_hartmask_clear_hartid(hartid, &coldboot_wait_hmask);
+	sbi_hartmask_clear_hart(hartid, &coldboot_wait_hmask);
 	/* Release coldboot lock */
 	spin_unlock(&coldboot_lock);
@@ -245,8 +241,6 @@ static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
 static void wake_coldboot_harts(struct sbi_scratch *scratch, u32 hartid)
 {
-	u32 i, hartindex = sbi_hartid_to_hartindex(hartid);
 	/* Mark coldboot done */
 	__smp_store_release(&coldboot_done, 1);
@@ -254,9 +248,9 @@ static void wake_coldboot_harts(struct sbi_scratch *scratch, u32 hartid)
 	spin_lock(&coldboot_lock);
 	/* Send an IPI to all HARTs waiting for coldboot */
-	sbi_hartmask_for_each_hartindex(i, &coldboot_wait_hmask) {
-		if (i == hartindex)
-			continue;
+	for (u32 i = 0; i <= sbi_scratch_last_hartid(); i++) {
+		if ((i != hartid) &&
+		    sbi_hartmask_test_hart(i, &coldboot_wait_hmask))
 			sbi_ipi_raw_send(i);
 	}
@@ -362,6 +356,13 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 		sbi_hart_hang();
 	}
+	rc = sbi_hart_pmp_configure(scratch);
+	if (rc) {
+		sbi_printf("%s: PMP configure failed (error %d)\n",
+			   __func__, rc);
+		sbi_hart_hang();
+	}
 	/*
	 * Note: Platform final initialization should be after finalizing
	 * domains so that it sees correct domain assignment and PMP
@@ -391,17 +392,6 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 	sbi_boot_print_hart(scratch, hartid);
-	/*
-	 * Configure PMP at last because if SMEPMP is detected,
-	 * M-mode access to the S/U space will be rescinded.
-	 */
-	rc = sbi_hart_pmp_configure(scratch);
-	if (rc) {
-		sbi_printf("%s: PMP configure failed (error %d)\n",
-			   __func__, rc);
-		sbi_hart_hang();
-	}
 	wake_coldboot_harts(scratch, hartid);
 	count = sbi_scratch_offset_ptr(scratch, init_count_offset);
@@ -455,15 +445,11 @@ static void __noreturn init_warm_startup(struct sbi_scratch *scratch,
 	if (rc)
 		sbi_hart_hang();
-	rc = sbi_platform_final_init(plat, false);
+	rc = sbi_hart_pmp_configure(scratch);
 	if (rc)
 		sbi_hart_hang();
-	/*
-	 * Configure PMP at last because if SMEPMP is detected,
-	 * M-mode access to the S/U space will be rescinded.
-	 */
-	rc = sbi_hart_pmp_configure(scratch);
+	rc = sbi_platform_final_init(plat, false);
 	if (rc)
 		sbi_hart_hang();
@@ -504,7 +490,7 @@ static void __noreturn init_warmboot(struct sbi_scratch *scratch, u32 hartid)
 	if (hstate == SBI_HSM_STATE_SUSPENDED) {
 		init_warm_resume(scratch, hartid);
 	} else {
-		sbi_ipi_raw_clear(sbi_hartid_to_hartindex(hartid));
+		sbi_ipi_raw_clear(hartid);
 		init_warm_startup(scratch, hartid);
 	}
 }
@@ -525,19 +511,13 @@ static atomic_t coldboot_lottery = ATOMIC_INITIALIZER(0);
  */
 void __noreturn sbi_init(struct sbi_scratch *scratch)
 {
-	u32 i, h;
-	bool hartid_valid = false;
 	bool next_mode_supported = false;
 	bool coldboot = false;
 	u32 hartid = current_hartid();
 	const struct sbi_platform *plat = sbi_platform_ptr(scratch);
-	for (i = 0; i < plat->hart_count; i++) {
-		h = (plat->hart_index2id) ? plat->hart_index2id[i] : i;
-		if (h == hartid)
-			hartid_valid = true;
-	}
-	if (!hartid_valid)
+	if ((SBI_HARTMASK_MAX_BITS <= hartid) ||
+	    sbi_platform_hart_invalid(plat, hartid))
 		sbi_hart_hang();
 	switch (scratch->next_mode) {
@@ -634,7 +614,7 @@ void __noreturn sbi_exit(struct sbi_scratch *scratch)
 	u32 hartid = current_hartid();
 	const struct sbi_platform *plat = sbi_platform_ptr(scratch);
-	if (!sbi_hartid_valid(hartid))
+	if (sbi_platform_hart_invalid(plat, hartid))
 		sbi_hart_hang();
 	sbi_platform_early_exit(plat);


@@ -27,19 +27,14 @@ struct sbi_ipi_data {
 	unsigned long ipi_type;
 };
-_Static_assert(
-	8 * sizeof(((struct sbi_ipi_data*)0)->ipi_type) == SBI_IPI_EVENT_MAX,
-	"type of sbi_ipi_data.ipi_type has changed, please redefine SBI_IPI_EVENT_MAX"
-);
 static unsigned long ipi_data_off;
 static const struct sbi_ipi_device *ipi_dev = NULL;
 static const struct sbi_ipi_event_ops *ipi_ops_array[SBI_IPI_EVENT_MAX];
-static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartindex,
+static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartid,
 			u32 event, void *data)
 {
-	int ret = 0;
+	int ret;
 	struct sbi_scratch *remote_scratch = NULL;
 	struct sbi_ipi_data *ipi_data;
 	const struct sbi_ipi_event_ops *ipi_ops;
@@ -49,7 +44,7 @@ static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartindex,
 		return SBI_EINVAL;
 	ipi_ops = ipi_ops_array[event];
-	remote_scratch = sbi_hartindex_to_scratch(remote_hartindex);
+	remote_scratch = sbi_hartid_to_scratch(remote_hartid);
 	if (!remote_scratch)
 		return SBI_EINVAL;
@@ -57,34 +52,24 @@ static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartindex,
 	if (ipi_ops->update) {
 		ret = ipi_ops->update(scratch, remote_scratch,
-				      remote_hartindex, data);
+				      remote_hartid, data);
 		if (ret != SBI_IPI_UPDATE_SUCCESS)
 			return ret;
-	} else if (scratch == remote_scratch) {
-		/*
-		 * IPI events with an update() callback are expected to return
-		 * SBI_IPI_UPDATE_BREAK for self-IPIs. For other events, check
-		 * for self-IPI and execute the callback directly here.
-		 */
-		ipi_ops->process(scratch);
-		return 0;
 	}
 	/*
 	 * Set IPI type on remote hart's scratch area and
-	 * trigger the interrupt.
-	 *
-	 * Multiple harts may be trying to send IPI to the
-	 * remote hart so call sbi_ipi_raw_send() only when
-	 * the ipi_type was previously zero.
+	 * trigger the interrupt
 	 */
-	if (!__atomic_fetch_or(&ipi_data->ipi_type,
-			       BIT(event), __ATOMIC_RELAXED))
-		ret = sbi_ipi_raw_send(remote_hartindex);
+	atomic_raw_set_bit(event, &ipi_data->ipi_type);
+	smp_wmb();
+	if (ipi_dev && ipi_dev->ipi_send)
+		ipi_dev->ipi_send(remote_hartid);
 	sbi_pmu_ctr_incr_fw(SBI_PMU_FW_IPI_SENT);
-	return ret;
+	return 0;
 }
 static int sbi_ipi_sync(struct sbi_scratch *scratch, u32 event)
@@ -109,7 +94,7 @@ static int sbi_ipi_sync(struct sbi_scratch *scratch, u32 event)
  */
 int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data)
 {
-	int rc = 0;
+	int rc;
 	bool retry_needed;
 	ulong i, m;
 	struct sbi_hartmask target_mask = {0};
@@ -125,14 +110,14 @@ int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data)
 		for (i = hbase; m; i++, m >>= 1) {
 			if (m & 1UL)
-				sbi_hartmask_set_hartid(i, &target_mask);
+				sbi_hartmask_set_hart(i, &target_mask);
 		}
 	} else {
 		hbase = 0;
 		while (!sbi_hsm_hart_interruptible_mask(dom, hbase, &m)) {
 			for (i = hbase; m; i++, m >>= 1) {
 				if (m & 1UL)
-					sbi_hartmask_set_hartid(i, &target_mask);
+					sbi_hartmask_set_hart(i, &target_mask);
 			}
 			hbase += BITS_PER_LONG;
 		}
@@ -141,23 +126,19 @@ int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data)
 	/* Send IPIs */
 	do {
 		retry_needed = false;
-		sbi_hartmask_for_each_hartindex(i, &target_mask) {
+		sbi_hartmask_for_each_hart(i, &target_mask) {
 			rc = sbi_ipi_send(scratch, i, event, data);
-			if (rc < 0)
-				goto done;
 			if (rc == SBI_IPI_UPDATE_RETRY)
 				retry_needed = true;
 			else
-				sbi_hartmask_clear_hartindex(i, &target_mask);
-			rc = 0;
+				sbi_hartmask_clear_hart(i, &target_mask);
 		}
 	} while (retry_needed);
-done:
 	/* Sync IPIs */
 	sbi_ipi_sync(scratch, event);
-	return rc;
+	return 0;
 }
 int sbi_ipi_event_create(const struct sbi_ipi_event_ops *ops)
@@ -233,17 +214,18 @@ void sbi_ipi_process(void)
 	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
 	struct sbi_ipi_data *ipi_data =
 			sbi_scratch_offset_ptr(scratch, ipi_data_off);
-	u32 hartindex = sbi_hartid_to_hartindex(current_hartid());
+	u32 hartid = current_hartid();
 	sbi_pmu_ctr_incr_fw(SBI_PMU_FW_IPI_RECVD);
-	sbi_ipi_raw_clear(hartindex);
+	if (ipi_dev && ipi_dev->ipi_clear)
+		ipi_dev->ipi_clear(hartid);
 	ipi_type = atomic_raw_xchg_ulong(&ipi_data->ipi_type, 0);
 	ipi_event = 0;
 	while (ipi_type) {
 		if (ipi_type & 1UL) {
 			ipi_ops = ipi_ops_array[ipi_event];
-			if (ipi_ops)
+			if (ipi_ops && ipi_ops->process)
 				ipi_ops->process(scratch);
 		}
 		ipi_type = ipi_type >> 1;
@@ -251,41 +233,19 @@ void sbi_ipi_process(void)
 	}
 }
-int sbi_ipi_raw_send(u32 hartindex)
+int sbi_ipi_raw_send(u32 target_hart)
 {
 	if (!ipi_dev || !ipi_dev->ipi_send)
 		return SBI_EINVAL;
-	/*
-	 * Ensure that memory or MMIO writes done before
-	 * this function are not observed after the memory
-	 * or MMIO writes done by the ipi_send() device
-	 * callback. This also allows the ipi_send() device
-	 * callback to use relaxed MMIO writes.
-	 *
-	 * This pairs with the wmb() in sbi_ipi_raw_clear().
-	 */
-	wmb();
-	ipi_dev->ipi_send(hartindex);
+	ipi_dev->ipi_send(target_hart);
 	return 0;
 }
-void sbi_ipi_raw_clear(u32 hartindex)
+void sbi_ipi_raw_clear(u32 target_hart)
 {
 	if (ipi_dev && ipi_dev->ipi_clear)
-		ipi_dev->ipi_clear(hartindex);
-	/*
-	 * Ensure that memory or MMIO writes after this
-	 * function returns are not observed before the
-	 * memory or MMIO writes done by the ipi_clear()
-	 * device callback. This also allows ipi_clear()
-	 * device callback to use relaxed MMIO writes.
-	 *
-	 * This pairs with the wmb() in sbi_ipi_raw_send().
-	 */
-	wmb();
+		ipi_dev->ipi_clear(target_hart);
 }
 const struct sbi_ipi_device *sbi_ipi_get_device(void)


@@ -211,14 +211,16 @@ int sbi_misaligned_store_handler(ulong addr, ulong tval2, ulong tinst,
 	} else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) {
 		len = 8;
 		val.data_ulong = GET_RS2S(insn, regs);
-	} else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP) {
+	} else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP &&
+		   ((insn >> SH_RD) & 0x1f)) {
 		len = 8;
 		val.data_ulong = GET_RS2C(insn, regs);
 #endif
 	} else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) {
 		len = 4;
 		val.data_ulong = GET_RS2S(insn, regs);
-	} else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP) {
+	} else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP &&
+		   ((insn >> SH_RD) & 0x1f)) {
 		len = 4;
 		val.data_ulong = GET_RS2C(insn, regs);
 #ifdef __riscv_flen


@@ -71,3 +71,20 @@ done:
 	else
 		sbi_strncpy(features_str, "none", nfstr);
 }
+u32 sbi_platform_hart_index(const struct sbi_platform *plat, u32 hartid)
+{
+	u32 i;
+	if (!plat)
+		return -1U;
+	if (plat->hart_index2id) {
+		for (i = 0; i < plat->hart_count; i++) {
+			if (plat->hart_index2id[i] == hartid)
+				return i;
+		}
+		return -1U;
+	}
+	return hartid;
+}


@@ -236,7 +236,8 @@ static int pmu_add_hw_event_map(u32 eidx_start, u32 eidx_end, u32 cmap,
 	bool is_overlap;
 	struct sbi_pmu_hw_event *event = &hw_event_map[num_hw_events];
 	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
-	uint32_t ctr_avail_mask = sbi_hart_mhpm_mask(scratch) | 0x7;
+	int hw_ctr_avail = sbi_hart_mhpm_count(scratch);
+	uint32_t ctr_avail_mask = ((uint32_t)(~0) >> (32 - (hw_ctr_avail + 3)));

 	/* The first two counters are reserved by priv spec */
 	if (eidx_start > SBI_PMU_HW_INSTRUCTIONS && (cmap & SBI_PMU_FIXED_CTR_MASK))
@@ -353,11 +354,8 @@ static int pmu_ctr_start_hw(uint32_t cidx, uint64_t ival, bool ival_update)
 	if (cidx >= num_hw_ctrs || cidx == 1)
 		return SBI_EINVAL;

-	if (sbi_hart_priv_version(scratch) < SBI_HART_PRIV_VER_1_11) {
-		if (ival_update)
-			pmu_ctr_write_hw(cidx, ival);
-		return 0;
-	}
+	if (sbi_hart_priv_version(scratch) < SBI_HART_PRIV_VER_1_11)
+		goto skip_inhibit_update;

 	/*
 	 * Some of the hardware may not support mcountinhibit but perf stat
@@ -371,13 +369,14 @@ static int pmu_ctr_start_hw(uint32_t cidx, uint64_t ival, bool ival_update)
 	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
 		pmu_ctr_enable_irq_hw(cidx);

-	if (ival_update)
-		pmu_ctr_write_hw(cidx, ival);
-
 	if (pmu_dev && pmu_dev->hw_counter_enable_irq)
 		pmu_dev->hw_counter_enable_irq(cidx);

 	csr_write(CSR_MCOUNTINHIBIT, mctr_inhbt);

+skip_inhibit_update:
+	if (ival_update)
+		pmu_ctr_write_hw(cidx, ival);
+
 	return 0;
 }
@@ -442,13 +441,10 @@ int sbi_pmu_ctr_start(unsigned long cbase, unsigned long cmask,
 	if ((cbase + sbi_fls(cmask)) >= total_ctrs)
 		return ret;

-	if (flags & SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT)
-		return SBI_ENO_SHMEM;
-
 	if (flags & SBI_PMU_START_FLAG_SET_INIT_VALUE)
 		bUpdate = true;

-	for_each_set_bit(i, &cmask, BITS_PER_LONG) {
+	for_each_set_bit(i, &cmask, total_ctrs) {
 		cidx = i + cbase;
 		event_idx_type = pmu_ctr_validate(phs, cidx, &event_code);
 		if (event_idx_type < 0)
@@ -485,9 +481,6 @@ static int pmu_ctr_stop_hw(uint32_t cidx)
 	if (!__test_bit(cidx, &mctr_inhbt)) {
 		__set_bit(cidx, &mctr_inhbt);
 		csr_write(CSR_MCOUNTINHIBIT, mctr_inhbt);
-		if (pmu_dev && pmu_dev->hw_counter_disable_irq) {
-			pmu_dev->hw_counter_disable_irq(cidx);
-		}
 		return 0;
 	} else
 		return SBI_EALREADY_STOPPED;
@@ -543,10 +536,7 @@ int sbi_pmu_ctr_stop(unsigned long cbase, unsigned long cmask,
 	if ((cbase + sbi_fls(cmask)) >= total_ctrs)
 		return SBI_EINVAL;

-	if (flag & SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT)
-		return SBI_ENO_SHMEM;
-
-	for_each_set_bit(i, &cmask, BITS_PER_LONG) {
+	for_each_set_bit(i, &cmask, total_ctrs) {
 		cidx = i + cbase;
 		event_idx_type = pmu_ctr_validate(phs, cidx, &event_code);
 		if (event_idx_type < 0)
@@ -605,10 +595,7 @@ static int pmu_update_hw_mhpmevent(struct sbi_pmu_hw_event *hw_evt, int ctr_idx,
 		pmu_dev->hw_counter_disable_irq(ctr_idx);

 	/* Update the inhibit flags based on inhibit flags received from supervisor */
-	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
-		pmu_update_inhibit_flags(flags, &mhpmevent_val);
-	if (pmu_dev && pmu_dev->hw_counter_filter_mode)
-		pmu_dev->hw_counter_filter_mode(flags, ctr_idx);
+	pmu_update_inhibit_flags(flags, &mhpmevent_val);

 #if __riscv_xlen == 32
 	csr_write_num(CSR_MHPMEVENT3 + ctr_idx - 3, mhpmevent_val & 0xFFFFFFFF);
@@ -622,50 +609,7 @@ static int pmu_update_hw_mhpmevent(struct sbi_pmu_hw_event *hw_evt, int ctr_idx,
 	return 0;
 }

-static int pmu_fixed_ctr_update_inhibit_bits(int fixed_ctr, unsigned long flags)
-{
-	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
-	uint64_t cfg_val = 0, cfg_csr_no;
-#if __riscv_xlen == 32
-	uint64_t cfgh_csr_no;
-#endif
-	if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMCNTRPMF) &&
-	    !(pmu_dev && pmu_dev->hw_counter_filter_mode))
-		return fixed_ctr;
-
-	switch (fixed_ctr) {
-	case 0:
-		cfg_csr_no = CSR_MCYCLECFG;
-#if __riscv_xlen == 32
-		cfgh_csr_no = CSR_MCYCLECFGH;
-#endif
-		break;
-	case 2:
-		cfg_csr_no = CSR_MINSTRETCFG;
-#if __riscv_xlen == 32
-		cfgh_csr_no = CSR_MINSTRETCFGH;
-#endif
-		break;
-	default:
-		return SBI_EFAIL;
-	}
-
-	cfg_val |= MHPMEVENT_MINH;
-	if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMCNTRPMF)) {
-		pmu_update_inhibit_flags(flags, &cfg_val);
-#if __riscv_xlen == 32
-		csr_write_num(cfg_csr_no, cfg_val & 0xFFFFFFFF);
-		csr_write_num(cfgh_csr_no, cfg_val >> BITS_PER_LONG);
-#else
-		csr_write_num(cfg_csr_no, cfg_val);
-#endif
-	}
-	if (pmu_dev && pmu_dev->hw_counter_filter_mode)
-		pmu_dev->hw_counter_filter_mode(flags, fixed_ctr);
-
-	return fixed_ctr;
-}
-
-static int pmu_ctr_find_fixed_hw(unsigned long evt_idx_code)
+static int pmu_ctr_find_fixed_fw(unsigned long evt_idx_code)
 {
 	/* Non-programmables counters are enabled always. No need to do lookup */
 	if (evt_idx_code == SBI_PMU_HW_CPU_CYCLES)
@@ -694,10 +638,10 @@ static int pmu_ctr_find_hw(struct sbi_pmu_hart_state *phs,
 	 * If Sscof is present try to find the programmable counter for
 	 * cycle/instret as well.
 	 */
-	fixed_ctr = pmu_ctr_find_fixed_hw(event_idx);
+	fixed_ctr = pmu_ctr_find_fixed_fw(event_idx);
 	if (fixed_ctr >= 0 &&
 	    !sbi_hart_has_extension(scratch, SBI_HART_EXT_SSCOFPMF))
-		return pmu_fixed_ctr_update_inhibit_bits(fixed_ctr, flags);
+		return fixed_ctr;

 	if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_11)
 		mctr_inhbt = csr_read(CSR_MCOUNTINHIBIT);
@@ -740,7 +684,7 @@ static int pmu_ctr_find_hw(struct sbi_pmu_hart_state *phs,
 	 * Return the fixed counter as they are mandatory anyways.
 	 */
 	if (fixed_ctr >= 0)
-		return pmu_fixed_ctr_update_inhibit_bits(fixed_ctr, flags);
+		return fixed_ctr;
 	else
 		return SBI_EFAIL;
 }
@@ -899,17 +843,13 @@ int sbi_pmu_ctr_get_info(uint32_t cidx, unsigned long *ctr_info)
 	int width;
 	union sbi_pmu_ctr_info cinfo = {0};
 	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
-	unsigned long counter_mask = (unsigned long)sbi_hart_mhpm_mask(scratch) |
-				     SBI_PMU_CY_IR_MASK;

-	/* Sanity check */
-	if (cidx >= total_ctrs)
+	/* Sanity check. Counter1 is not mapped at all */
+	if (cidx >= total_ctrs || cidx == 1)
 		return SBI_EINVAL;

 	/* We have 31 HW counters with 31 being the last index(MHPMCOUNTER31) */
 	if (cidx < num_hw_ctrs) {
-		if (!(__test_bit(cidx, &counter_mask)))
-			return SBI_EINVAL;
 		cinfo.type = SBI_PMU_CTR_TYPE_HW;
 		cinfo.csr = CSR_CYCLE + cidx;
 		/* mcycle & minstret are always 64 bit */
@@ -972,10 +912,8 @@ void sbi_pmu_exit(struct sbi_scratch *scratch)

 int sbi_pmu_init(struct sbi_scratch *scratch, bool cold_boot)
 {
-	int hpm_count = sbi_fls(sbi_hart_mhpm_mask(scratch));
 	struct sbi_pmu_hart_state *phs;
 	const struct sbi_platform *plat;
-	int rc;

 	if (cold_boot) {
 		hw_event_map = sbi_calloc(sizeof(*hw_event_map),
@@ -991,21 +929,12 @@ int sbi_pmu_init(struct sbi_scratch *scratch, bool cold_boot)
 		plat = sbi_platform_ptr(scratch);

 		/* Initialize hw pmu events */
-		rc = sbi_platform_pmu_init(plat);
-		if (rc)
-			sbi_dprintf("%s: platform pmu init failed "
-				    "(error %d)\n", __func__, rc);
+		sbi_platform_pmu_init(plat);

 		/* mcycle & minstret is available always */
-		if (!hpm_count)
-			/* Only CY, TM & IR are implemented in the hw */
-			num_hw_ctrs = 3;
-		else
-			num_hw_ctrs = hpm_count + 1;
-
+		num_hw_ctrs = sbi_hart_mhpm_count(scratch) + 3;
 		if (num_hw_ctrs > SBI_PMU_HW_CTR_MAX)
 			return SBI_EINVAL;

 		total_ctrs = num_hw_ctrs + SBI_PMU_FW_CTR_MAX;
 	}


@@ -14,40 +14,29 @@
 #include <sbi/sbi_scratch.h>
 #include <sbi/sbi_string.h>

-u32 last_hartindex_having_scratch = 0;
-u32 hartindex_to_hartid_table[SBI_HARTMASK_MAX_BITS + 1] = { -1U };
-struct sbi_scratch *hartindex_to_scratch_table[SBI_HARTMASK_MAX_BITS + 1] = { 0 };
+u32 last_hartid_having_scratch = SBI_HARTMASK_MAX_BITS - 1;
+struct sbi_scratch *hartid_to_scratch_table[SBI_HARTMASK_MAX_BITS] = { 0 };

 static spinlock_t extra_lock = SPIN_LOCK_INITIALIZER;
 static unsigned long extra_offset = SBI_SCRATCH_EXTRA_SPACE_OFFSET;

-u32 sbi_hartid_to_hartindex(u32 hartid)
-{
-	u32 i;
-
-	for (i = 0; i <= last_hartindex_having_scratch; i++)
-		if (hartindex_to_hartid_table[i] == hartid)
-			return i;
-
-	return -1U;
-}
-
 typedef struct sbi_scratch *(*hartid2scratch)(ulong hartid, ulong hartindex);

 int sbi_scratch_init(struct sbi_scratch *scratch)
 {
-	u32 i, h;
+	u32 i;
 	const struct sbi_platform *plat = sbi_platform_ptr(scratch);

-	for (i = 0; i < plat->hart_count; i++) {
-		h = (plat->hart_index2id) ? plat->hart_index2id[i] : i;
-		hartindex_to_hartid_table[i] = h;
-		hartindex_to_scratch_table[i] =
-			((hartid2scratch)scratch->hartid_to_scratch)(h, i);
+	for (i = 0; i < SBI_HARTMASK_MAX_BITS; i++) {
+		if (sbi_platform_hart_invalid(plat, i))
+			continue;
+		hartid_to_scratch_table[i] =
+			((hartid2scratch)scratch->hartid_to_scratch)(i,
+					sbi_platform_hart_index(plat, i));
+		if (hartid_to_scratch_table[i])
+			last_hartid_having_scratch = i;
 	}
-	last_hartindex_having_scratch = plat->hart_count - 1;

 	return 0;
 }
@@ -85,8 +74,8 @@ done:
 	spin_unlock(&extra_lock);

 	if (ret) {
-		for (i = 0; i <= sbi_scratch_last_hartindex(); i++) {
-			rscratch = sbi_hartindex_to_scratch(i);
+		for (i = 0; i <= sbi_scratch_last_hartid(); i++) {
+			rscratch = sbi_hartid_to_scratch(i);
 			if (!rscratch)
 				continue;
 			ptr = sbi_scratch_offset_ptr(rscratch, ret);


@@ -72,8 +72,7 @@ void __noreturn sbi_system_reset(u32 reset_type, u32 reset_reason)
 	/* Send HALT IPI to every hart other than the current hart */
 	while (!sbi_hsm_hart_interruptible_mask(dom, hbase, &hmask)) {
-		if ((hbase <= cur_hartid)
-		    && (cur_hartid < hbase + BITS_PER_LONG))
+		if (hbase <= cur_hartid)
 			hmask &= ~(1UL << (cur_hartid - hbase));
 		if (hmask)
 			sbi_ipi_send_halt(hmask, hbase);
@@ -153,7 +152,7 @@ int sbi_system_suspend(u32 sleep_type, ulong resume_addr, ulong opaque)
 	void (*jump_warmboot)(void) = (void (*)(void))scratch->warmboot_addr;
 	unsigned int hartid = current_hartid();
 	unsigned long prev_mode;
-	unsigned long i, j;
+	unsigned long i;
 	int ret;

 	if (!dom || !dom->system_suspend_allowed)
@@ -171,12 +170,11 @@ int sbi_system_suspend(u32 sleep_type, ulong resume_addr, ulong opaque)
 	if (prev_mode != PRV_S && prev_mode != PRV_U)
 		return SBI_EFAIL;

-	sbi_hartmask_for_each_hartindex(j, &dom->assigned_harts) {
-		i = sbi_hartindex_to_hartid(j);
+	sbi_hartmask_for_each_hart(i, &dom->assigned_harts) {
 		if (i == hartid)
 			continue;
 		if (__sbi_hsm_hart_get_state(i) != SBI_HSM_STATE_STOPPED)
-			return SBI_ERR_DENIED;
+			return SBI_EFAIL;
 	}

 	if (!sbi_domain_check_addr(dom, resume_addr, prev_mode,


@@ -188,7 +188,7 @@ int sbi_timer_init(struct sbi_scratch *scratch, bool cold_boot)
 		if (!time_delta_off)
 			return SBI_ENOMEM;

-		if (sbi_hart_has_extension(scratch, SBI_HART_EXT_ZICNTR))
+		if (sbi_hart_has_extension(scratch, SBI_HART_EXT_TIME))
 			get_time_val = get_ticks;
 	} else {
 		if (!time_delta_off)


@@ -14,7 +14,6 @@
 #include <sbi/sbi_error.h>
 #include <sbi/sbi_fifo.h>
 #include <sbi/sbi_hart.h>
-#include <sbi/sbi_heap.h>
 #include <sbi/sbi_ipi.h>
 #include <sbi/sbi_scratch.h>
 #include <sbi/sbi_tlb.h>
@@ -34,7 +33,7 @@ static void tlb_flush_all(void)
 	__asm__ __volatile("sfence.vma");
 }

-static void sbi_tlb_local_hfence_vvma(struct sbi_tlb_info *tinfo)
+void sbi_tlb_local_hfence_vvma(struct sbi_tlb_info *tinfo)
 {
 	unsigned long start = tinfo->start;
 	unsigned long size  = tinfo->size;
@@ -59,7 +58,7 @@ done:
 	csr_write(CSR_HGATP, hgatp);
 }

-static void sbi_tlb_local_hfence_gvma(struct sbi_tlb_info *tinfo)
+void sbi_tlb_local_hfence_gvma(struct sbi_tlb_info *tinfo)
 {
 	unsigned long start = tinfo->start;
 	unsigned long size  = tinfo->size;
@@ -77,7 +76,7 @@ static void sbi_tlb_local_hfence_gvma(struct sbi_tlb_info *tinfo)
 	}
 }

-static void sbi_tlb_local_sfence_vma(struct sbi_tlb_info *tinfo)
+void sbi_tlb_local_sfence_vma(struct sbi_tlb_info *tinfo)
 {
 	unsigned long start = tinfo->start;
 	unsigned long size  = tinfo->size;
@@ -98,7 +97,7 @@ static void sbi_tlb_local_sfence_vma(struct sbi_tlb_info *tinfo)
 	}
 }

-static void sbi_tlb_local_hfence_vvma_asid(struct sbi_tlb_info *tinfo)
+void sbi_tlb_local_hfence_vvma_asid(struct sbi_tlb_info *tinfo)
 {
 	unsigned long start = tinfo->start;
 	unsigned long size  = tinfo->size;
@@ -111,7 +110,12 @@ static void sbi_tlb_local_hfence_vvma_asid(struct sbi_tlb_info *tinfo)
 	hgatp = csr_swap(CSR_HGATP,
 			 (vmid << HGATP_VMID_SHIFT) & HGATP_VMID_MASK);

-	if ((start == 0 && size == 0) || (size == SBI_TLB_FLUSH_ALL)) {
+	if (start == 0 && size == 0) {
+		__sbi_hfence_vvma_all();
+		goto done;
+	}
+
+	if (size == SBI_TLB_FLUSH_ALL) {
 		__sbi_hfence_vvma_asid(asid);
 		goto done;
 	}
@@ -124,7 +128,7 @@ done:
 	csr_write(CSR_HGATP, hgatp);
 }

-static void sbi_tlb_local_hfence_gvma_vmid(struct sbi_tlb_info *tinfo)
+void sbi_tlb_local_hfence_gvma_vmid(struct sbi_tlb_info *tinfo)
 {
 	unsigned long start = tinfo->start;
 	unsigned long size  = tinfo->size;
@@ -133,7 +137,12 @@ static void sbi_tlb_local_hfence_gvma_vmid(struct sbi_tlb_info *tinfo)
 	sbi_pmu_ctr_incr_fw(SBI_PMU_FW_HFENCE_GVMA_VMID_RCVD);

-	if ((start == 0 && size == 0) || (size == SBI_TLB_FLUSH_ALL)) {
+	if (start == 0 && size == 0) {
+		__sbi_hfence_gvma_all();
+		return;
+	}
+
+	if (size == SBI_TLB_FLUSH_ALL) {
 		__sbi_hfence_gvma_vmid(vmid);
 		return;
 	}
@@ -143,7 +152,7 @@ static void sbi_tlb_local_hfence_gvma_vmid(struct sbi_tlb_info *tinfo)
 	}
 }

-static void sbi_tlb_local_sfence_vma_asid(struct sbi_tlb_info *tinfo)
+void sbi_tlb_local_sfence_vma_asid(struct sbi_tlb_info *tinfo)
 {
 	unsigned long start = tinfo->start;
 	unsigned long size  = tinfo->size;
@@ -152,8 +161,13 @@ static void sbi_tlb_local_sfence_vma_asid(struct sbi_tlb_info *tinfo)
 	sbi_pmu_ctr_incr_fw(SBI_PMU_FW_SFENCE_VMA_ASID_RCVD);

+	if (start == 0 && size == 0) {
+		tlb_flush_all();
+		return;
+	}
+
 	/* Flush entire MM context for a given ASID */
-	if ((start == 0 && size == 0) || (size == SBI_TLB_FLUSH_ALL)) {
+	if (size == SBI_TLB_FLUSH_ALL) {
 		__asm__ __volatile__("sfence.vma x0, %0"
 				     :
 				     : "r"(asid)
@@ -169,55 +183,44 @@ static void sbi_tlb_local_sfence_vma_asid(struct sbi_tlb_info *tinfo)
 	}
 }

-static void sbi_tlb_local_fence_i(struct sbi_tlb_info *tinfo)
+void sbi_tlb_local_fence_i(struct sbi_tlb_info *tinfo)
 {
 	sbi_pmu_ctr_incr_fw(SBI_PMU_FW_FENCE_I_RECVD);

 	__asm__ __volatile("fence.i");
 }

-static void tlb_entry_local_process(struct sbi_tlb_info *data)
+static void tlb_pmu_incr_fw_ctr(struct sbi_tlb_info *data)
 {
 	if (unlikely(!data))
 		return;

-	switch (data->type) {
-	case SBI_TLB_FENCE_I:
-		sbi_tlb_local_fence_i(data);
-		break;
-	case SBI_TLB_SFENCE_VMA:
-		sbi_tlb_local_sfence_vma(data);
-		break;
-	case SBI_TLB_SFENCE_VMA_ASID:
-		sbi_tlb_local_sfence_vma_asid(data);
-		break;
-	case SBI_TLB_HFENCE_GVMA_VMID:
-		sbi_tlb_local_hfence_gvma_vmid(data);
-		break;
-	case SBI_TLB_HFENCE_GVMA:
-		sbi_tlb_local_hfence_gvma(data);
-		break;
-	case SBI_TLB_HFENCE_VVMA_ASID:
-		sbi_tlb_local_hfence_vvma_asid(data);
-		break;
-	case SBI_TLB_HFENCE_VVMA:
-		sbi_tlb_local_hfence_vvma(data);
-		break;
-	default:
-		break;
-	};
+	if (data->local_fn == sbi_tlb_local_fence_i)
+		sbi_pmu_ctr_incr_fw(SBI_PMU_FW_FENCE_I_SENT);
+	else if (data->local_fn == sbi_tlb_local_sfence_vma)
+		sbi_pmu_ctr_incr_fw(SBI_PMU_FW_SFENCE_VMA_SENT);
+	else if (data->local_fn == sbi_tlb_local_sfence_vma_asid)
+		sbi_pmu_ctr_incr_fw(SBI_PMU_FW_SFENCE_VMA_ASID_SENT);
+	else if (data->local_fn == sbi_tlb_local_hfence_gvma)
+		sbi_pmu_ctr_incr_fw(SBI_PMU_FW_HFENCE_GVMA_SENT);
+	else if (data->local_fn == sbi_tlb_local_hfence_gvma_vmid)
+		sbi_pmu_ctr_incr_fw(SBI_PMU_FW_HFENCE_GVMA_VMID_SENT);
+	else if (data->local_fn == sbi_tlb_local_hfence_vvma)
+		sbi_pmu_ctr_incr_fw(SBI_PMU_FW_HFENCE_VVMA_SENT);
+	else if (data->local_fn == sbi_tlb_local_hfence_vvma_asid)
+		sbi_pmu_ctr_incr_fw(SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
 }

 static void tlb_entry_process(struct sbi_tlb_info *tinfo)
 {
-	u32 rindex;
+	u32 rhartid;
 	struct sbi_scratch *rscratch = NULL;
 	atomic_t *rtlb_sync = NULL;

-	tlb_entry_local_process(tinfo);
-	sbi_hartmask_for_each_hartindex(rindex, &tinfo->smask) {
-		rscratch = sbi_hartindex_to_scratch(rindex);
+	tinfo->local_fn(tinfo);
+
+	sbi_hartmask_for_each_hart(rhartid, &tinfo->smask) {
+		rscratch = sbi_hartid_to_scratch(rhartid);
 		if (!rscratch)
 			continue;
@@ -316,12 +319,12 @@ static int tlb_update_cb(void *in, void *data)
 	curr = (struct sbi_tlb_info *)data;
 	next = (struct sbi_tlb_info *)in;

-	if (next->type == SBI_TLB_SFENCE_VMA_ASID &&
-	    curr->type == SBI_TLB_SFENCE_VMA_ASID) {
+	if (next->local_fn == sbi_tlb_local_sfence_vma_asid &&
+	    curr->local_fn == sbi_tlb_local_sfence_vma_asid) {
 		if (next->asid == curr->asid)
 			ret = tlb_range_check(curr, next);
-	} else if (next->type == SBI_TLB_SFENCE_VMA &&
-		   curr->type == SBI_TLB_SFENCE_VMA) {
+	} else if (next->local_fn == sbi_tlb_local_sfence_vma &&
+		   curr->local_fn == sbi_tlb_local_sfence_vma) {
 		ret = tlb_range_check(curr, next);
 	}
@@ -330,7 +333,7 @@ static int tlb_update_cb(void *in, void *data)

 static int tlb_update(struct sbi_scratch *scratch,
 		      struct sbi_scratch *remote_scratch,
-		      u32 remote_hartindex, void *data)
+		      u32 remote_hartid, void *data)
 {
 	int ret;
 	atomic_t *tlb_sync;
@@ -338,12 +341,22 @@ static int tlb_update(struct sbi_scratch *scratch,
 	struct sbi_tlb_info *tinfo = data;
 	u32 curr_hartid = current_hartid();

+	/*
+	 * If address range to flush is too big then simply
+	 * upgrade it to flush all because we can only flush
+	 * 4KB at a time.
+	 */
+	if (tinfo->size > tlb_range_flush_limit) {
+		tinfo->start = 0;
+		tinfo->size = SBI_TLB_FLUSH_ALL;
+	}
+
 	/*
 	 * If the request is to queue a tlb flush entry for itself
 	 * then just do a local flush and return;
 	 */
-	if (sbi_hartindex_to_hartid(remote_hartindex) == curr_hartid) {
-		tlb_entry_local_process(tinfo);
+	if (remote_hartid == curr_hartid) {
+		tinfo->local_fn(tinfo);
 		return SBI_IPI_UPDATE_BREAK;
 	}
@@ -361,8 +374,8 @@ static int tlb_update(struct sbi_scratch *scratch,
 	 * this properly.
 	 */
 	tlb_process_once(scratch);
-	sbi_dprintf("hart%d: hart%d tlb fifo full\n", curr_hartid,
-		    sbi_hartindex_to_hartid(remote_hartindex));
+	sbi_dprintf("hart%d: hart%d tlb fifo full\n",
+		    curr_hartid, remote_hartid);
 	return SBI_IPI_UPDATE_RETRY;
 }
@@ -381,32 +394,12 @@ static struct sbi_ipi_event_ops tlb_ops = {

 static u32 tlb_event = SBI_IPI_EVENT_MAX;

-static const u32 tlb_type_to_pmu_fw_event[SBI_TLB_TYPE_MAX] = {
-	[SBI_TLB_FENCE_I]		= SBI_PMU_FW_FENCE_I_SENT,
-	[SBI_TLB_SFENCE_VMA]		= SBI_PMU_FW_SFENCE_VMA_SENT,
-	[SBI_TLB_SFENCE_VMA_ASID]	= SBI_PMU_FW_SFENCE_VMA_ASID_SENT,
-	[SBI_TLB_HFENCE_GVMA_VMID]	= SBI_PMU_FW_HFENCE_GVMA_VMID_SENT,
-	[SBI_TLB_HFENCE_GVMA]		= SBI_PMU_FW_HFENCE_GVMA_SENT,
-	[SBI_TLB_HFENCE_VVMA_ASID]	= SBI_PMU_FW_HFENCE_VVMA_ASID_SENT,
-	[SBI_TLB_HFENCE_VVMA]		= SBI_PMU_FW_HFENCE_VVMA_SENT,
-};
-
 int sbi_tlb_request(ulong hmask, ulong hbase, struct sbi_tlb_info *tinfo)
 {
-	if (tinfo->type < 0 || tinfo->type >= SBI_TLB_TYPE_MAX)
+	if (!tinfo->local_fn)
 		return SBI_EINVAL;

-	/*
-	 * If address range to flush is too big then simply
-	 * upgrade it to flush all because we can only flush
-	 * 4KB at a time.
-	 */
-	if (tinfo->size > tlb_range_flush_limit) {
-		tinfo->start = 0;
-		tinfo->size = SBI_TLB_FLUSH_ALL;
-	}
-
-	sbi_pmu_ctr_incr_fw(tlb_type_to_pmu_fw_event[tinfo->type]);
+	tlb_pmu_incr_fw_ctr(tinfo);

 	return sbi_ipi_send_many(hmask, hbase, tlb_event, tinfo);
 }
@@ -428,7 +421,8 @@ int sbi_tlb_init(struct sbi_scratch *scratch, bool cold_boot)
 		sbi_scratch_free_offset(tlb_sync_off);
 		return SBI_ENOMEM;
 	}
-	tlb_fifo_mem_off = sbi_scratch_alloc_offset(sizeof(tlb_mem));
+	tlb_fifo_mem_off = sbi_scratch_alloc_offset(
+				SBI_TLB_FIFO_NUM_ENTRIES * SBI_TLB_INFO_SIZE);
 	if (!tlb_fifo_mem_off) {
 		sbi_scratch_free_offset(tlb_fifo_off);
 		sbi_scratch_free_offset(tlb_sync_off);
@@ -454,19 +448,12 @@ int sbi_tlb_init(struct sbi_scratch *scratch, bool cold_boot)

 	tlb_sync = sbi_scratch_offset_ptr(scratch, tlb_sync_off);
 	tlb_q = sbi_scratch_offset_ptr(scratch, tlb_fifo_off);
-	tlb_mem = sbi_scratch_read_type(scratch, void *, tlb_fifo_mem_off);
-	if (!tlb_mem) {
-		tlb_mem = sbi_malloc(
-			sbi_platform_tlb_fifo_num_entries(plat) * SBI_TLB_INFO_SIZE);
-		if (!tlb_mem)
-			return SBI_ENOMEM;
-		sbi_scratch_write_type(scratch, void *, tlb_fifo_mem_off, tlb_mem);
-	}
+	tlb_mem = sbi_scratch_offset_ptr(scratch, tlb_fifo_mem_off);

 	ATOMIC_INIT(tlb_sync, 0);
-	sbi_fifo_init(tlb_q, tlb_mem,
-		      sbi_platform_tlb_fifo_num_entries(plat), SBI_TLB_INFO_SIZE);
+	sbi_fifo_init(tlb_q, tlb_mem,
+		      SBI_TLB_FIFO_NUM_ENTRIES, SBI_TLB_INFO_SIZE);

 	return 0;
 }


@@ -14,8 +14,6 @@ source "$(OPENSBI_SRC_DIR)/lib/utils/irqchip/Kconfig"
 source "$(OPENSBI_SRC_DIR)/lib/utils/libfdt/Kconfig"

-source "$(OPENSBI_SRC_DIR)/lib/utils/regmap/Kconfig"
-
 source "$(OPENSBI_SRC_DIR)/lib/utils/reset/Kconfig"

 source "$(OPENSBI_SRC_DIR)/lib/utils/serial/Kconfig"


@@ -15,11 +15,4 @@ config FDT_PMU
 	bool "FDT performance monitoring unit (PMU) support"
 	default n

-config FDT_FIXUPS_PRESERVE_PMU_NODE
-	bool "Preserve PMU node in device-tree"
-	depends on FDT_PMU
-	default n
-	help
-	  Preserve PMU node properties for debugging purposes.
-
 endif


@@ -342,7 +342,7 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
 			if (!fdt_node_is_enabled(fdt, cpu_offset))
 				continue;

-			sbi_hartmask_set_hartid(val32, mask);
+			sbi_hartmask_set_hart(val32, mask);
 		}
 	}
@@ -472,7 +472,7 @@ static int __fdt_parse_domain(void *fdt, int domain_offset, void *opaque)
 		}

 		if (doffset == domain_offset)
-			sbi_hartmask_set_hartid(val32, &assign_mask);
+			sbi_hartmask_set_hart(val32, &assign_mask);
 	}

 	/* Register the domain */


@@ -394,8 +394,5 @@ void fdt_fixups(void *fdt)
 	fdt_plic_fixup(fdt);

 	fdt_reserved_memory_fixup(fdt);
-
-#ifndef CONFIG_FDT_FIXUPS_PRESERVE_PMU_NODE
 	fdt_pmu_fixup(fdt);
-#endif
 }


@@ -12,7 +12,6 @@
#include <sbi/sbi_hartmask.h> #include <sbi/sbi_hartmask.h>
#include <sbi/sbi_platform.h> #include <sbi/sbi_platform.h>
#include <sbi/sbi_scratch.h> #include <sbi/sbi_scratch.h>
#include <sbi/sbi_hart.h>
#include <sbi_utils/fdt/fdt_helper.h> #include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/irqchip/aplic.h> #include <sbi_utils/irqchip/aplic.h>
#include <sbi_utils/irqchip/imsic.h> #include <sbi_utils/irqchip/imsic.h>
@@ -216,32 +215,6 @@ int fdt_get_node_addr_size(void *fdt, int node, int index,
return 0; return 0;
} }
int fdt_get_node_addr_size_by_name(void *fdt, int node, const char *name,
uint64_t *addr, uint64_t *size)
{
int i, j, count;
const char *val;
const char *regname;
if (!fdt || node < 0 || !name)
return SBI_EINVAL;
val = fdt_getprop(fdt, node, "reg-names", &count);
if (!val)
return SBI_ENODEV;
for (i = 0, j = 0; i < count; i++, j++) {
regname = val + i;
if (strcmp(name, regname) == 0)
return fdt_get_node_addr_size(fdt, node, j, addr, size);
i += strlen(regname);
}
return SBI_ENODEV;
}
bool fdt_node_is_enabled(void *fdt, int nodeoff) bool fdt_node_is_enabled(void *fdt, int nodeoff)
{ {
int len; int len;
@@ -340,149 +313,6 @@ int fdt_parse_timebase_frequency(void *fdt, unsigned long *freq)
return 0; return 0;
} }
#define RISCV_ISA_EXT_NAME_LEN_MAX 32
static unsigned long fdt_isa_bitmap_offset;
static int fdt_parse_isa_one_hart(const char *isa, unsigned long *extensions)
{
size_t i, j, isa_len;
char mstr[RISCV_ISA_EXT_NAME_LEN_MAX];
i = 0;
isa_len = strlen(isa);
if (isa[i] == 'r' || isa[i] == 'R')
i++;
else
return SBI_EINVAL;
if (isa[i] == 'v' || isa[i] == 'V')
i++;
else
return SBI_EINVAL;
if (isa[i] == '3' || isa[i+1] == '2')
i += 2;
else if (isa[i] == '6' || isa[i+1] == '4')
i += 2;
else
return SBI_EINVAL;
/* Skip base ISA extensions */
for (; i < isa_len; i++) {
if (isa[i] == '_')
break;
}
while (i < isa_len) {
if (isa[i] != '_') {
i++;
continue;
}
/* Skip the '_' character */
i++;
/* Extract the multi-letter extension name */
j = 0;
while ((i < isa_len) && (isa[i] != '_') &&
(j < (sizeof(mstr) - 1)))
mstr[j++] = isa[i++];
mstr[j] = '\0';
/* Skip empty multi-letter extension name */
if (!j)
continue;
#define set_multi_letter_ext(name, bit) \
if (!strcmp(mstr, name)) { \
__set_bit(bit, extensions); \
continue; \
}
for (j = 0; j < SBI_HART_EXT_MAX; j++) {
set_multi_letter_ext(sbi_hart_ext[j].name,
sbi_hart_ext[j].id);
}
#undef set_multi_letter_ext
}
return 0;
}
static int fdt_parse_isa_all_harts(void *fdt)
{
u32 hartid;
const fdt32_t *val;
unsigned long *hart_exts;
struct sbi_scratch *scratch;
int err, cpu_offset, cpus_offset, len;
if (!fdt || !fdt_isa_bitmap_offset)
return SBI_EINVAL;
cpus_offset = fdt_path_offset(fdt, "/cpus");
if (cpus_offset < 0)
return cpus_offset;
fdt_for_each_subnode(cpu_offset, fdt, cpus_offset) {
err = fdt_parse_hart_id(fdt, cpu_offset, &hartid);
if (err)
continue;
if (!fdt_node_is_enabled(fdt, cpu_offset))
continue;
val = fdt_getprop(fdt, cpu_offset, "riscv,isa", &len);
if (!val || len <= 0)
return SBI_ENOENT;
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
return SBI_ENOENT;
hart_exts = sbi_scratch_offset_ptr(scratch,
fdt_isa_bitmap_offset);
err = fdt_parse_isa_one_hart((const char *)val, hart_exts);
if (err)
return err;
}
return 0;
}
int fdt_parse_isa_extensions(void *fdt, unsigned int hartid,
unsigned long *extensions)
{
int rc, i;
unsigned long *hart_exts;
struct sbi_scratch *scratch;
if (!fdt_isa_bitmap_offset) {
fdt_isa_bitmap_offset = sbi_scratch_alloc_offset(
sizeof(*hart_exts) *
BITS_TO_LONGS(SBI_HART_EXT_MAX));
if (!fdt_isa_bitmap_offset)
return SBI_ENOMEM;
rc = fdt_parse_isa_all_harts(fdt);
if (rc)
return rc;
}
scratch = sbi_hartid_to_scratch(hartid);
if (!scratch)
return SBI_ENOENT;
hart_exts = sbi_scratch_offset_ptr(scratch, fdt_isa_bitmap_offset);
for (i = 0; i < BITS_TO_LONGS(SBI_HART_EXT_MAX); i++)
extensions[i] |= hart_exts[i];
return 0;
}
static int fdt_parse_uart_node_common(void *fdt, int nodeoffset, static int fdt_parse_uart_node_common(void *fdt, int nodeoffset,
struct platform_uart_data *uart, struct platform_uart_data *uart,
unsigned long default_freq, unsigned long default_freq,
@@ -880,7 +710,6 @@ int fdt_parse_plic_node(void *fdt, int nodeoffset, struct plic_data *plic)
 	if (rc < 0 || !reg_addr || !reg_size)
 		return SBI_ENODEV;
 	plic->addr = reg_addr;
-	plic->size = reg_size;
 
 	val = fdt_getprop(fdt, nodeoffset, "riscv,ndev", &len);
 	if (len > 0)
@@ -903,40 +732,21 @@ int fdt_parse_plic(void *fdt, struct plic_data *plic, const char *compat)
 	return fdt_parse_plic_node(fdt, nodeoffset, plic);
 }
 
-static int fdt_get_aclint_addr_size_by_name(void *fdt, int nodeoffset,
-					    unsigned long *out_addr1,
-					    unsigned long *out_size1,
-					    unsigned long *out_addr2,
-					    unsigned long *out_size2)
+int fdt_parse_aclint_node(void *fdt, int nodeoffset, bool for_timer,
+			  unsigned long *out_addr1, unsigned long *out_size1,
+			  unsigned long *out_addr2, unsigned long *out_size2,
+			  u32 *out_first_hartid, u32 *out_hart_count)
 {
-	int rc;
+	const fdt32_t *val;
 	uint64_t reg_addr, reg_size;
+	int i, rc, count, cpu_offset, cpu_intc_offset;
+	u32 phandle, hwirq, hartid, first_hartid, last_hartid, hart_count;
+	u32 match_hwirq = (for_timer) ? IRQ_M_TIMER : IRQ_M_SOFT;
 
-	rc = fdt_get_node_addr_size_by_name(fdt, nodeoffset, "mtime",
-					    &reg_addr, &reg_size);
-	if (rc < 0 || !reg_size)
-		reg_addr = reg_size = 0;
-	*out_addr1 = reg_addr;
-	*out_size1 = reg_size;
+	if (nodeoffset < 0 || !fdt ||
+	    !out_addr1 || !out_size1 ||
+	    !out_first_hartid || !out_hart_count)
+		return SBI_EINVAL;
 
-	rc = fdt_get_node_addr_size_by_name(fdt, nodeoffset, "mtimecmp",
-					    &reg_addr, &reg_size);
-	if (rc < 0 || !reg_size)
-		return SBI_ENODEV;
-	*out_addr2 = reg_addr;
-	*out_size2 = reg_size;
-
-	return 0;
-}
-
-static int fdt_get_aclint_addr_size(void *fdt, int nodeoffset,
-				    unsigned long *out_addr1,
-				    unsigned long *out_size1,
-				    unsigned long *out_addr2,
-				    unsigned long *out_size2)
-{
-	int rc;
-	uint64_t reg_addr, reg_size;
-
 	rc = fdt_get_node_addr_size(fdt, nodeoffset, 0,
 				    &reg_addr, &reg_size);
@@ -954,37 +764,6 @@ static int fdt_get_aclint_addr_size(void *fdt, int nodeoffset,
 	if (out_size2)
 		*out_size2 = reg_size;
 
-	return 0;
-}
-
-int fdt_parse_aclint_node(void *fdt, int nodeoffset,
-			  bool for_timer, bool allow_regname,
-			  unsigned long *out_addr1, unsigned long *out_size1,
-			  unsigned long *out_addr2, unsigned long *out_size2,
-			  u32 *out_first_hartid, u32 *out_hart_count)
-{
-	const fdt32_t *val;
-	int i, rc, count, cpu_offset, cpu_intc_offset;
-	u32 phandle, hwirq, hartid, first_hartid, last_hartid, hart_count;
-	u32 match_hwirq = (for_timer) ? IRQ_M_TIMER : IRQ_M_SOFT;
-
-	if (nodeoffset < 0 || !fdt ||
-	    !out_addr1 || !out_size1 ||
-	    !out_first_hartid || !out_hart_count)
-		return SBI_EINVAL;
-
-	if (for_timer && allow_regname && out_addr2 && out_size2 &&
-	    fdt_getprop(fdt, nodeoffset, "reg-names", NULL))
-		rc = fdt_get_aclint_addr_size_by_name(fdt, nodeoffset,
-						      out_addr1, out_size1,
-						      out_addr2, out_size2);
-	else
-		rc = fdt_get_aclint_addr_size(fdt, nodeoffset,
-					      out_addr1, out_size1,
-					      out_addr2, out_size2);
-	if (rc)
-		return rc;
-
 	*out_first_hartid = 0;
 	*out_hart_count = 0;
 
@@ -14,19 +14,23 @@
 #include <sbi/sbi_pmu.h>
 #include <sbi/sbi_scratch.h>
 #include <sbi_utils/fdt/fdt_helper.h>
-#include <sbi_utils/fdt/fdt_pmu.h>
 
 #define FDT_PMU_HW_EVENT_MAX (SBI_PMU_HW_EVENT_MAX * 2)
 
-struct fdt_pmu_hw_event_select_map fdt_pmu_evt_select[FDT_PMU_HW_EVENT_MAX] = {0};
-uint32_t hw_event_count;
+struct fdt_pmu_hw_event_select {
+	uint32_t eidx;
+	uint64_t select;
+};
+
+static struct fdt_pmu_hw_event_select fdt_pmu_evt_select[FDT_PMU_HW_EVENT_MAX] = {0};
+static uint32_t hw_event_count;
 
 uint64_t fdt_pmu_get_select_value(uint32_t event_idx)
 {
 	int i;
-	struct fdt_pmu_hw_event_select_map *event;
+	struct fdt_pmu_hw_event_select *event;
 
-	for (i = 0; i < hw_event_count; i++) {
+	for (i = 0; i < SBI_PMU_HW_EVENT_MAX; i++) {
 		event = &fdt_pmu_evt_select[i];
 		if (event->eidx == event_idx)
 			return event->select;
@@ -61,7 +65,7 @@ int fdt_pmu_setup(void *fdt)
 	int i, pmu_offset, len, result;
 	const u32 *event_val;
 	const u32 *event_ctr_map;
-	struct fdt_pmu_hw_event_select_map *event;
+	struct fdt_pmu_hw_event_select *event;
 	uint64_t raw_selector, select_mask;
 	u32 event_idx_start, event_idx_end, ctr_map;
 
@@ -70,7 +74,7 @@ int fdt_pmu_setup(void *fdt)
 	pmu_offset = fdt_node_offset_by_compatible(fdt, -1, "riscv,pmu");
 	if (pmu_offset < 0)
-		return SBI_ENOENT;
+		return SBI_EFAIL;
 
 	event_ctr_map = fdt_getprop(fdt, pmu_offset,
 				    "riscv,event-to-mhpmcounters", &len);
 
@@ -37,6 +37,7 @@ static int starfive_gpio_direction_output(struct gpio_pin *gp, int value)
 	reg_addr = chip->addr + gp->offset;
 	reg_addr &= ~(STARFIVE_GPIO_REG_SHIFT_MASK);
 
+	val = readl((void *)(reg_addr));
 	shift_bits = (gp->offset & STARFIVE_GPIO_REG_SHIFT_MASK)
 			<< STARFIVE_GPIO_SHIFT_BITS;
 	bit_mask = STARFIVE_GPIO_MASK << shift_bits;
 
@@ -15,6 +15,8 @@
 #include <sbi_utils/i2c/dw_i2c.h>
 #include <sbi_utils/i2c/fdt_i2c.h>
 
+extern struct fdt_i2c_adapter fdt_i2c_adapter_dw;
+
 static int fdt_dw_i2c_init(void *fdt, int nodeoff,
 			   const struct fdt_match *match)
 {
@@ -33,6 +35,7 @@ static int fdt_dw_i2c_init(void *fdt, int nodeoff,
 	}
 
 	adapter->addr = addr;
+	adapter->adapter.driver = &fdt_i2c_adapter_dw;
 
 	rc = dw_i2c_init(&adapter->adapter, nodeoff);
 	if (rc) {
 
@@ -46,6 +46,8 @@ struct sifive_i2c_adapter {
 	struct i2c_adapter adapter;
 };
 
+extern struct fdt_i2c_adapter fdt_i2c_adapter_sifive;
+
 static inline void sifive_i2c_setreg(struct sifive_i2c_adapter *adap,
 				     uint8_t reg, uint8_t value)
 {
@@ -248,6 +250,7 @@ static int sifive_i2c_init(void *fdt, int nodeoff,
 	}
 
 	adapter->addr = addr;
+	adapter->adapter.driver = &fdt_i2c_adapter_sifive;
 	adapter->adapter.id = nodeoff;
 	adapter->adapter.write = sifive_i2c_adapter_write;
 	adapter->adapter.read = sifive_i2c_adapter_read;
 
@@ -25,13 +25,13 @@ static unsigned long mswi_ptr_offset;
 #define mswi_set_hart_data_ptr(__scratch, __mswi)			\
 	sbi_scratch_write_type((__scratch), void *, mswi_ptr_offset, (__mswi))
 
-static void mswi_ipi_send(u32 hart_index)
+static void mswi_ipi_send(u32 target_hart)
 {
 	u32 *msip;
 	struct sbi_scratch *scratch;
 	struct aclint_mswi_data *mswi;
 
-	scratch = sbi_hartindex_to_scratch(hart_index);
+	scratch = sbi_hartid_to_scratch(target_hart);
 	if (!scratch)
 		return;
 
@@ -41,17 +41,16 @@ static void mswi_ipi_send(u32 hart_index)
 
 	/* Set ACLINT IPI */
 	msip = (void *)mswi->addr;
-	writel_relaxed(1, &msip[sbi_hartindex_to_hartid(hart_index) -
-				mswi->first_hartid]);
+	writel(1, &msip[target_hart - mswi->first_hartid]);
 }
 
-static void mswi_ipi_clear(u32 hart_index)
+static void mswi_ipi_clear(u32 target_hart)
 {
 	u32 *msip;
 	struct sbi_scratch *scratch;
 	struct aclint_mswi_data *mswi;
 
-	scratch = sbi_hartindex_to_scratch(hart_index);
+	scratch = sbi_hartid_to_scratch(target_hart);
 	if (!scratch)
 		return;
 
@@ -61,8 +60,7 @@ static void mswi_ipi_clear(u32 hart_index)
 
 	/* Clear ACLINT IPI */
 	msip = (void *)mswi->addr;
-	writel_relaxed(0, &msip[sbi_hartindex_to_hartid(hart_index) -
-				mswi->first_hartid]);
+	writel(0, &msip[target_hart - mswi->first_hartid]);
 }
 
 static struct sbi_ipi_device aclint_mswi = {
 
@@ -18,45 +18,73 @@
 struct plicsw_data plicsw;
 
-static void plicsw_ipi_send(u32 hart_index)
-{
-	ulong pending_reg;
-	u32 interrupt_id, word_index, pending_bit;
-	u32 target_hart = sbi_hartindex_to_hartid(hart_index);
-
-	if (plicsw.hart_count <= target_hart)
-		ebreak();
-
-	/*
-	 * We assign a single bit for each hart.
-	 * Bit 0 is hardwired to 0, thus unavailable.
-	 * Bit(X+1) indicates that IPI is sent to hartX.
-	 */
-	interrupt_id = target_hart + 1;
-	word_index = interrupt_id / 32;
-	pending_bit = interrupt_id % 32;
-	pending_reg = plicsw.addr + PLICSW_PENDING_BASE + word_index * 4;
-
-	/* Set target hart's mip.MSIP */
-	writel_relaxed(BIT(pending_bit), (void *)pending_reg);
-}
-
-static void plicsw_ipi_clear(u32 hart_index)
-{
-	u32 target_hart = sbi_hartindex_to_hartid(hart_index);
-	ulong reg = plicsw.addr + PLICSW_CONTEXT_BASE + PLICSW_CONTEXT_CLAIM +
-		    PLICSW_CONTEXT_STRIDE * target_hart;
-
-	if (plicsw.hart_count <= target_hart)
-		ebreak();
-
-	/* Claim */
-	u32 source = readl((void *)reg);
-
-	/* A successful claim will clear mip.MSIP */
-
-	/* Complete */
-	writel(source, (void *)reg);
-}
+static inline void plicsw_claim(void)
+{
+	u32 hartid = current_hartid();
+
+	if (plicsw.hart_count <= hartid)
+		ebreak();
+
+	plicsw.source_id[hartid] =
+		readl((void *)plicsw.addr + PLICSW_CONTEXT_BASE +
+		      PLICSW_CONTEXT_CLAIM + PLICSW_CONTEXT_STRIDE * hartid);
+}
+
+static inline void plicsw_complete(void)
+{
+	u32 hartid = current_hartid();
+	u32 source = plicsw.source_id[hartid];
+
+	writel(source, (void *)plicsw.addr + PLICSW_CONTEXT_BASE +
+	       PLICSW_CONTEXT_CLAIM +
+	       PLICSW_CONTEXT_STRIDE * hartid);
+}
+
+static inline void plic_sw_pending(u32 target_hart)
+{
+	/*
+	 * The pending array registers are w1s type.
+	 * IPI pending array mapping as following:
+	 *
+	 * Pending array start address: base + 0x1000
+	 * ---------------------------------
+	 * | hart3 | hart2 | hart1 | hart0 |
+	 * ---------------------------------
+	 * Each hartX can send IPI to another hart by setting the
+	 * bitY to its own region (see the below).
+	 *
+	 * In each hartX region:
+	 * <---------- PICSW_PENDING_STRIDE -------->
+	 * | bit7 | ... | bit3 | bit2 | bit1 | bit0 |
+	 * ------------------------------------------
+	 * The bitY of hartX region indicates that hartX sends an
+	 * IPI to hartY.
+	 */
+	u32 hartid = current_hartid();
+	u32 word_index = hartid / 4;
+	u32 per_hart_offset = PLICSW_PENDING_STRIDE * hartid;
+	u32 val = 1 << target_hart << per_hart_offset;
+
+	writel(val, (void *)plicsw.addr + PLICSW_PENDING_BASE + word_index * 4);
+}
+
+static void plicsw_ipi_send(u32 target_hart)
+{
+	if (plicsw.hart_count <= target_hart)
+		ebreak();
+
+	/* Set PLICSW IPI */
+	plic_sw_pending(target_hart);
+}
+
+static void plicsw_ipi_clear(u32 target_hart)
+{
+	if (plicsw.hart_count <= target_hart)
+		ebreak();
+
+	/* Clear PLICSW IPI */
+	plicsw_claim();
+	plicsw_complete();
+}
 
 static struct sbi_ipi_device plicsw_ipi = {
@@ -78,34 +106,28 @@ int plicsw_warm_ipi_init(void)
 
 int plicsw_cold_ipi_init(struct plicsw_data *plicsw)
 {
 	int rc;
-	u32 interrupt_id, word_index, enable_bit;
-	ulong enable_reg, priority_reg;
 
 	/* Setup source priority */
-	for (int i = 0; i < plicsw->hart_count; i++) {
-		priority_reg = plicsw->addr + PLICSW_PRIORITY_BASE + i * 4;
-		writel(1, (void *)priority_reg);
-	}
+	uint32_t *priority = (void *)plicsw->addr + PLICSW_PRIORITY_BASE;
+
+	for (int i = 0; i < plicsw->hart_count; i++)
+		writel(1, &priority[i]);
 
-	/*
-	 * Setup enable for each hart, skip non-existent interrupt ID 0
-	 * which is hardwired to 0.
-	 */
+	/* Setup target enable */
+	uint32_t enable_mask = PLICSW_HART_MASK;
+
 	for (int i = 0; i < plicsw->hart_count; i++) {
-		interrupt_id = i + 1;
-		word_index = interrupt_id / 32;
-		enable_bit = interrupt_id % 32;
-		enable_reg = plicsw->addr + PLICSW_ENABLE_BASE +
-			     PLICSW_ENABLE_STRIDE * i + 4 * word_index;
-		writel(BIT(enable_bit), (void *)enable_reg);
+		uint32_t *enable = (void *)plicsw->addr + PLICSW_ENABLE_BASE +
+				   PLICSW_ENABLE_STRIDE * i;
+
+		writel(enable_mask, enable);
+		writel(enable_mask, enable + 1);
+		enable_mask <<= 1;
 	}
 
 	/* Add PLICSW region to the root domain */
 	rc = sbi_domain_root_add_memrange(plicsw->addr, plicsw->size,
 					  PLICSW_REGION_ALIGN,
-					  SBI_DOMAIN_MEMREGION_MMIO |
-					  SBI_DOMAIN_MEMREGION_M_READABLE |
-					  SBI_DOMAIN_MEMREGION_M_WRITABLE);
+					  SBI_DOMAIN_MEMREGION_MMIO);
 	if (rc)
 		return rc;
 
@@ -24,7 +24,7 @@ static int ipi_mswi_cold_init(void *fdt, int nodeoff,
 	if (!ms)
 		return SBI_ENOMEM;
 
-	rc = fdt_parse_aclint_node(fdt, nodeoff, false, false,
+	rc = fdt_parse_aclint_node(fdt, nodeoff, false,
 				   &ms->addr, &ms->size, NULL, NULL,
 				   &ms->first_hartid, &ms->hart_count);
 	if (rc) {
@@ -56,7 +56,6 @@ static const struct fdt_match ipi_mswi_match[] = {
 	{ .compatible = "riscv,clint0", .data = &clint_offset },
 	{ .compatible = "sifive,clint0", .data = &clint_offset },
 	{ .compatible = "thead,c900-clint", .data = &clint_offset },
-	{ .compatible = "thead,c900-aclint-mswi" },
 	{ .compatible = "riscv,aclint-mswi" },
 	{ },
 };
 
@@ -193,7 +193,7 @@ int aplic_cold_irqchip_init(struct aplic_data *aplic)
 	writel(0, (void *)(aplic->addr + APLIC_DOMAINCFG));
 
 	/* Disable all interrupts */
-	for (i = 0; i <= aplic->num_source; i += 32)
+	for (i = 0; i <= aplic->num_source; i++)
 		writel(-1U, (void *)(aplic->addr + APLIC_CLRIE_BASE +
 			     (i / 32) * sizeof(u32)));
 
@@ -161,7 +161,7 @@ static int imsic_external_irqfn(struct sbi_trap_regs *regs)
 	return 0;
 }
 
-static void imsic_ipi_send(u32 hart_index)
+static void imsic_ipi_send(u32 target_hart)
 {
 	unsigned long reloff;
 	struct imsic_regs *regs;
@@ -169,7 +169,7 @@ static void imsic_ipi_send(u32 hart_index)
 	struct sbi_scratch *scratch;
 	int file;
 
-	scratch = sbi_hartindex_to_scratch(hart_index);
+	scratch = sbi_hartid_to_scratch(target_hart);
 	if (!scratch)
 		return;
 
@@ -186,7 +186,7 @@ static void imsic_ipi_send(u32 hart_index)
 	}
 
 	if (regs->size && (reloff < regs->size))
-		writel_relaxed(IMSIC_IPI_ID,
+		writel(IMSIC_IPI_ID,
 		       (void *)(regs->addr + reloff + IMSIC_MMIO_PAGE_LE));
 }
 
@@ -10,9 +10,7 @@
 
 #include <sbi/riscv_io.h>
 #include <sbi/riscv_encoding.h>
-#include <sbi/sbi_bitops.h>
 #include <sbi/sbi_console.h>
-#include <sbi/sbi_domain.h>
 #include <sbi/sbi_error.h>
 #include <sbi/sbi_string.h>
 #include <sbi_utils/irqchip/plic.h>
@@ -173,7 +171,5 @@ int plic_cold_irqchip_init(const struct plic_data *plic)
 	for (i = 1; i <= plic->num_src; i++)
 		plic_set_priority(plic, i, 0);
 
-	return sbi_domain_root_add_memrange(plic->addr, plic->size, BIT(20),
-					    (SBI_DOMAIN_MEMREGION_MMIO |
-					     SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW));
+	return 0;
 }
 
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+# SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 # Makefile.libfdt
 #
 # This is not a complete Makefile of itself. Instead, it is designed to

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2006 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause */
+/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
 #ifndef FDT_H
 #define FDT_H
 /*

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2014 David Gibson <david@gibson.dropbear.id.au>

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2006 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2012 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2016 Free Electrons

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2006 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2006 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2006 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2006 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause
+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
 /*
  * libfdt - Flat Device Tree manipulation
  * Copyright (C) 2006 David Gibson, IBM Corporation.

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause */
+/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
 #ifndef LIBFDT_H
 #define LIBFDT_H
 /*

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause */
+/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
 #ifndef LIBFDT_ENV_H
 #define LIBFDT_ENV_H
 /*

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause */
+/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
 #ifndef LIBFDT_INTERNAL_H
 #define LIBFDT_INTERNAL_H
 /*

@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause */
+/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
 LIBFDT_1.2 {
 	global:
 		fdt_next_node;