bootloader: account for load address when mapping cache pages

The bootloader used to calculate the number of cache pages assuming
that the load address was aligned, while in reality the load address
for DROM and IROM was offset by 0x20 bytes from the start of a 64kB
page. This caused the bootloader to map one page too few if the size
of the image was 0x4..0x1c less than a multiple of 64kB.

Reported in https://esp32.com/viewtopic.php?f=13&t=6952.
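To make the off-by-one concrete, here is a minimal standalone sketch
comparing the old alignment-blind page count with the fixed one. The
DROM address 0x3f400020 and the 0x10 shortfall are illustrative values
chosen to fall in the affected 0x4..0x1c range, not taken from the
report:

    #include <stdint.h>
    #include <stdio.h>

    #define MMU_BLOCK_SIZE  0x00010000
    #define MMU_FLASH_MASK  (~(MMU_BLOCK_SIZE - 1))

    int main(void)
    {
        /* Illustrative DROM mapping: vaddr sits 0x20 past a 64kB
         * boundary, and the image size is 0x10 short of a multiple
         * of 64kB. */
        uint32_t vaddr = 0x3f400020;
        uint32_t size  = 2 * MMU_BLOCK_SIZE - 0x10;

        /* Old calculation: assumes vaddr is page-aligned. */
        uint32_t old_pages = (size + MMU_BLOCK_SIZE - 1) / MMU_BLOCK_SIZE;

        /* Fixed calculation: also counts the offset of vaddr within
         * its 64kB page. */
        uint32_t new_pages = (size + (vaddr - (vaddr & MMU_FLASH_MASK))
                              + MMU_BLOCK_SIZE - 1) / MMU_BLOCK_SIZE;

        /* Prints "old: 2, fixed: 3": the tail of the image spills 0x10
         * bytes into a third page that the old code never mapped. */
        printf("old: %u, fixed: %u\n",
               (unsigned) old_pages, (unsigned) new_pages);
        return 0;
    }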
Author: Ivan Grokhotkov
Date:   2018-09-03 18:15:20 +08:00
Parent: 182e917d78
Commit: 96d0f7f5e2
3 changed files with 49 additions and 18 deletions

@@ -111,4 +111,21 @@ esp_err_t bootloader_flash_erase_sector(size_t sector);
 */
esp_err_t bootloader_flash_erase_range(uint32_t start_addr, uint32_t size);

/* Cache MMU block size */
#define MMU_BLOCK_SIZE    0x00010000

/* Cache MMU address mask (MMU tables ignore bits which are zero) */
#define MMU_FLASH_MASK    (~(MMU_BLOCK_SIZE - 1))

/**
 * @brief Calculate the number of cache pages to map
 * @param size  size of data to map
 * @param vaddr  virtual address where data will be mapped
 * @return  number of cache MMU pages required to do the mapping
 */
static inline uint32_t bootloader_cache_pages_to_map(uint32_t size, uint32_t vaddr)
{
    return (size + (vaddr - (vaddr & MMU_FLASH_MASK)) + MMU_BLOCK_SIZE - 1) / MMU_BLOCK_SIZE;
}

#endif
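
For context, a hedged sketch of how a caller might combine the new
helper with the ESP32 ROM MMU routine cache_flash_mmu_set(). The real
call sites live in the other changed files, which this hunk does not
show; map_region() is a made-up wrapper used only for illustration:

    #include <stdbool.h>
    #include <stdint.h>
    #include "bootloader_flash.h"  /* MMU_BLOCK_SIZE, MMU_FLASH_MASK, helper */
    #include "rom/cache.h"         /* cache_flash_mmu_set() ROM routine */

    /* Hypothetical wrapper: map `size` bytes at flash offset `paddr`
     * to virtual address `vaddr` for CPU 0, process 0. The MMU works
     * in 64kB pages, so both addresses are rounded down to a page
     * boundary, and bootloader_cache_pages_to_map() accounts for the
     * bytes that rounding puts in front of the data. */
    static bool map_region(uint32_t paddr, uint32_t vaddr, uint32_t size)
    {
        uint32_t count = bootloader_cache_pages_to_map(size, vaddr);
        return cache_flash_mmu_set(0, 0, vaddr & MMU_FLASH_MASK,
                                   paddr & MMU_FLASH_MASK, 64, count) == 0;
    }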