author     Aaron Durbin <adurbin@chromium.org>  2017-05-12 09:55:16 -0500
committer  Aaron Durbin <adurbin@chromium.org>  2017-05-15 19:45:49 +0200
commit     7ad44eed08b9811d97462b3c990129187f77e2ca (patch)
tree       67effd79628fabf547a49b05cc95fee6e900d1f3 /util/cbmem
parent     a7092750b2c7856f72016dd0b468f1a1d0585b5a (diff)
download   coreboot-7ad44eed08b9811d97462b3c990129187f77e2ca.tar.xz
util/cbmem: mmap underflow on low addresses
There is code to adjust the mapping down if an mmap fails at a physical
address. However, if the address is less than the page size of the system,
the physical offset will underflow. This can actually cause a kernel panic
when operating on /dev/mem. The failing condition happens when the requested
mapping at 0 fails in the kernel. The fallback path is then taken and the
page size is subtracted from 0, producing a very large offset. The PAT code
in the kernel fails with a BUG_ON in reserve_memtype() checking
start >= end. The kernel needs to be fixed as well, but this fallback path
is wrong regardless.

BUG=b:38211793

Change-Id: I32b0c15b2f1aa43fc57656d5d2d5f0e4e90e94ef
Signed-off-by: Aaron Durbin <adurbin@chromium.org>
Reviewed-on: https://review.coreboot.org/19679
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Patrick Georgi <pgeorgi@google.com>
Reviewed-by: Philippe Mathieu-Daudé <philippe.mathieu.daude@gmail.com>
Reviewed-by: Julius Werner <jwerner@chromium.org>
Reviewed-by: Furquan Shaikh <furquan@google.com>
Reviewed-by: Paul Menzel <paulepanter@users.sourceforge.net>
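A minimal standalone sketch (not part of the commit) of the underflow
described above, assuming the offset is held in an unsigned 64-bit variable,
as the function's u64 physical parameter suggests: subtracting the page size
from a physical address of 0 wraps around to an enormous offset, which is
what the kernel's reserve_memtype() then rejects with a BUG_ON.

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t physical = 0;                    /* requested mapping at physical address 0 */
	uint64_t page = sysconf(_SC_PAGESIZE);    /* typically 4096 */

	/* Old fallback behaviour: grow the mapping down unconditionally. */
	uint64_t fallback = physical - page;      /* wraps to 0xfffffffffffff000 */

	printf("fallback offset: 0x%llx\n", (unsigned long long)fallback);
	return 0;
}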
Diffstat (limited to 'util/cbmem')
-rw-r--r--  util/cbmem/cbmem.c  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/util/cbmem/cbmem.c b/util/cbmem/cbmem.c
index c1b1caa572..48a1bc963d 100644
--- a/util/cbmem/cbmem.c
+++ b/util/cbmem/cbmem.c
@@ -165,7 +165,9 @@ static void *map_memory_size(u64 physical, size_t size, uint8_t abort_on_failure
 	v = mmap(NULL, size, PROT_READ, MAP_SHARED, mem_fd, p);
-	if (v == MAP_FAILED) {
+	/* Only try growing down when address exceeds page size so that
+	 * one doesn't underflow the offset request. */
+	if (v == MAP_FAILED && p >= page) {
 		/* The mapped area may have overrun the upper cbmem boundary when trying to
 		 * align to the page size. Try growing down instead of up...
 		 */