Initially, ESP-IDF used the do_global_ctors() function to run global
constructors. This was done to accommodate Xtensa targets, which emit
.ctors.* sections that are processed in descending order.
For RISC-V, compilation used .init_array.* sections, which are designed
to be processed in ascending order. Priority constructors in
.init_array.* sections were correctly processed in ascending order.
However, the non-priority .init_array section was processed in
descending order, as was done for the Xtensa .ctors section.
Starting with ESP-IDF v6.0, the implementation switched to the standard
libc behavior (__libc_init_array()), which processes both priority and
non-priority constructors in ascending order.
To achieve this, the following breaking changes were introduced:
- Xtensa .ctors.* priority entries are converted to the .init_array.*
  format (ascending) so they can be passed to __libc_init_array()
  (see the sketch below).
- The processing order of the non-priority .init_array and .ctors
  sections was changed from descending to ascending.
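A minimal linker-script sketch of how this ordering can be expressed
with GNU ld. The output section and memory region names below are
placeholders, not the actual ESP-IDF scripts; __init_array_start and
__init_array_end are the symbols consumed by newlib's
__libc_init_array():

.flash.init_array :
{
    __init_array_start = ABSOLUTE(.);
    /* SORT_BY_INIT_PRIORITY orders .init_array.NNNNN ascending and
     * maps .ctors.NNNNN to 65535 - NNNNN, so legacy Xtensa priority
     * ctors join the same ascending sequence */
    KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*)
            SORT_BY_INIT_PRIORITY(.ctors.*)))
    /* non-priority entries follow and now also run in ascending order */
    KEEP (*(.init_array .ctors))
    __init_array_end = ABSOLUTE(.);
} > default_rodata_seg

__libc_init_array() then walks [__init_array_start, __init_array_end)
from the lowest address to the highest.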
This change also introduces .preinit_array support in the linker
scripts, which may be needed by some C++ or sanitizer features.
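A similarly minimal sketch of a .preinit_array placement, again with
placeholder section and region names; __libc_init_array() runs these
entries before _init and the .init_array constructors:

.flash.preinit_array :
{
    __preinit_array_start = ABSOLUTE(.);
    KEEP (*(.preinit_array))
    __preinit_array_end = ABSOLUTE(.);
} > default_rodata_seg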
Related to https://github.com/espressif/esp-idf/issues/15529
Before this change, _invalid_pc_placeholder pointed to the address of
the _init function from crti.o.
This made the GDB output a bit confusing:
0x40080400 in _init ()
(gdb) bt
#0 0x40080400 in _init ()
#1 0x400e519a in test_instr_fetch_prohibited () at /home/alex/git/esp-idf/tools/test_apps/system/panic/main/test_panic.c:271
#2 0x400d89a7 in app_main () at /home/alex/git/esp-idf/tools/test_apps/system/panic/main/test_app_main.c:116
#3 0x400e5f22 in main_task (args=0x0) at /home/alex/git/esp-idf/components/freertos/app_startup.c:208
#4 0x400895a8 in vPortTaskWrapper (pxCode=0x400e5eb0 <main_task>, pvParameters=0x0) at /home/alex/git/esp-idf/components/freertos/FreeRTOS-Kernel/portable/xtensa/port.c:139
After the change, GDB prints output that contains a hint:
_invalid_pc_placeholder () at /home/alex/git/esp-idf/components/xtensa/xtensa_vectors.S:2235
2235 UNREACHABLE_INSTRUCTION_CHECK_PREVIOUS_FRAMES
(gdb) bt
#0 _invalid_pc_placeholder () at /home/alex/git/esp-idf/components/xtensa/xtensa_vectors.S:2235
#1 0x400e519e in test_instr_fetch_prohibited () at /home/alex/git/esp-idf/tools/test_apps/system/panic/main/test_panic.c:271
#2 0x400d89ab in app_main () at /home/alex/git/esp-idf/tools/test_apps/system/panic/main/test_app_main.c:116
#3 0x400e5f26 in main_task (args=0x0) at /home/alex/git/esp-idf/components/freertos/app_startup.c:208
#4 0x400895a8 in vPortTaskWrapper (pxCode=0x400e5eb4 <main_task>, pvParameters=0x0) at /home/alex/git/esp-idf/components/freertos/FreeRTOS-Kernel/portable/xtensa/port.c:139
ERROR:
A fatal error occurred: Segment loaded at 0x3c01d150 lands in same 64KB flash
mapping as segment loaded at 0x3c018020. Can't generate binary. Suggest
changing linker script or ELF to merge sections.
It seems the binary generator does not handle well empty sections that
contain only alignment padding. I did not investigate this deeply, but
the change helped.
- move .tbss to a NOLOAD section
- remove Xtensa-specific entities from the RISC-V scripts
- explicit .eh_frame terminator instead of "align magic" (see the
  sketch after this list)
- 80-character line length limit
- refactor comments
- discard .rela sections (the relocation data goes into the sections it
  relates to)
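For the .eh_frame terminator item, a minimal sketch of the idea; the
section layout and region name are placeholders and the actual ESP-IDF
scripts differ:

.eh_frame :
{
    KEEP (*(.eh_frame))
    /* an explicit 4-byte zero ends the list of frame description
     * entries, instead of relying on alignment padding to supply the
     * terminating zeros */
    LONG(0)
} > default_rodata_seg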
There is no separate permission control peripheral in C6/H2/P4.
Memory protection is achieved using the built-in PMA/PMP, hence the
permission-control-specific files are removed.
This commit introduces the SOC_MEM_NON_CONTIGUOUS_SRAM flag (enabled
for esp32p4). If SOC_MEM_NON_CONTIGUOUS_SRAM is enabled:
- LDFLAGS+=--enable-non-contiguous-regions (lets input sections that do
  not fit the first memory region spill into a later output section
  that also lists them)
- ldgen.py replaces "arrays[*]" entries from sections.ld.in with the
  objects placed under the SURROUND keyword
  (e.g. from linker.lf: data -> dram0_data SURROUND(foo))
- "mapping[*]" refers to all other data
If SOC_MEM_NON_CONTIGUOUS_SRAM is enabled, the sections.ld.in file
should contain at least one block of code like this (otherwise the
option does not make sense):
.dram0.bss (NOLOAD) :
{
    arrays[dram0_bss]
    mapping[dram0_bss]
} > sram_low

.dram1.bss (NOLOAD) :
{
    /* do not place arrays[dram0_bss] here because it may be split
     * between segments */
    mapping[dram0_bss]
} > sram_high