path: root/src/mem/request.hh
author    Matt Evans <matt.evans@arm.com>  2012-06-29 11:19:05 -0400
committer Matt Evans <matt.evans@arm.com>  2012-06-29 11:19:05 -0400
commit    579047c76d9c9baa0cb80f7e7603a2e9d3c50376 (patch)
tree      37e7de48951adfd5831d037fd097106fac723bd2 /src/mem/request.hh
parent    3965ecc36b3d928cf8f6a66e50eed3c6de1a54c0 (diff)
download  gem5-579047c76d9c9baa0cb80f7e7603a2e9d3c50376.tar.xz
Mem: Fix a livelock in the LLSC/locked memory access implementation.
Currently, when multiple CPUs perform a load-linked/store-conditional sequence, each load-linked adds an entry to a list of reservations, which is then scanned when stores occur. A reservation matching the context and address of the store is sought, BUT all reservations matching the address are also erased at that point. The upshot is that a store-conditional removes every reservation on the address even if the store itself does not succeed.

A livelock was observed with 7-8 CPUs: a thread would erase the other threads' reservations, fail its own store-conditional, loop and re-establish its reservation, only to have it erased again by another thread whose store-conditional also fails. No forward progress was made, hanging the system.

The correct behaviour is to erase a reservation only when a store (conditional or not) actually /occurs/ to its address. One thread always wins: the one whose store-conditional completes first.
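A minimal C++ sketch of the corrected behaviour described above. This is not the commit's actual diff; the names Reservation, reservations, and checkStoreConditional are hypothetical stand-ins for gem5's locked-address machinery, shown only to illustrate that reservations are erased solely when a store really takes place.

// Sketch (assumed names, not gem5 code) of reservation handling where a
// failed store-conditional leaves other threads' reservations intact.
#include <cstdint>
#include <list>

using Addr = uint64_t;
using ContextID = int;

struct Reservation {
    ContextID ctx;   // hardware context that performed the load-linked
    Addr addr;       // address covered by the reservation
};

std::list<Reservation> reservations;

// Called for every store. Returns true if the store may proceed
// (always true for a plain store). Matching reservations are erased
// only when the store actually occurs, so a failing store-conditional
// cannot blow away another thread's reservation.
bool checkStoreConditional(ContextID ctx, Addr addr, bool isConditional)
{
    bool success = !isConditional;  // plain stores always proceed

    // First decide whether this store will happen at all.
    if (isConditional) {
        for (const auto &r : reservations) {
            if (r.ctx == ctx && r.addr == addr) {
                success = true;
                break;
            }
        }
    }

    // Only a store that occurs invalidates reservations on its address;
    // the first thread to store-conditional successfully wins.
    if (success) {
        reservations.remove_if(
            [addr](const Reservation &r) { return r.addr == addr; });
    }

    return success;
}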
Diffstat (limited to 'src/mem/request.hh')
0 files changed, 0 insertions, 0 deletions