author | Nikos Nikoleris <nikos.nikoleris@arm.com> | 2017-09-04 16:38:36 +0100
---|---|---
committer | Nikos Nikoleris <nikos.nikoleris@arm.com> | 2018-01-09 17:04:32 +0000
commit | e8236503ce70ea83f4f61716f54421b32ce009ce | (patch)
tree | 8f6f575d12036afd7c4f4d39ad25ddf1d415e619 | /src/mem/cache/cache.cc
parent | 50f9ef0def163969520006d53f40597123fb8ca5 | (diff)
download | gem5-e8236503ce70ea83f4f61716f54421b32ce009ce.tar.xz |
mem-cache: Prune unnecessary writebacks in exclusive caches
Exclusive caches use the tempBlock to handle fills for responses from a
downstream cache, since they only pass the block on to the cache above
without keeping a copy. Once all outstanding requests have been
serviced, the block is immediately invalidated unless it is dirty, in
which case it first has to be written back to the memory below.
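The behaviour described above can be sketched as a tiny standalone model (the class and member names here are illustrative, not gem5's actual `Cache`/`TempCacheBlk` API): a fill lands in a temporary block, the data is forwarded upstream without keeping a copy, and the block is then invalidated, triggering a writeback only if the response arrived dirty.

```cpp
#include <cassert>

// Hypothetical sketch of an exclusive cache's fill path (not gem5's
// real classes): data from below is staged in a temporary block,
// forwarded to the cache above, and then discarded. If the response
// was dirty, invalidating would lose data, so a writeback is needed.
struct TempBlock {
    bool valid = false;
    bool dirty = false;
};

struct ExclusiveCacheModel {
    int writebacks = 0;

    // Returns true if servicing this fill forced a writeback.
    bool handleFill(bool responseDirty) {
        TempBlock tmp;
        tmp.valid = true;
        tmp.dirty = responseDirty;
        // ...data is forwarded to the cache above here; no copy kept...
        if (tmp.dirty) {
            // Dirty data cannot simply be dropped: write it back below.
            ++writebacks;
        }
        tmp.valid = false;  // immediate invalidation: exclusive, no fill
        return tmp.dirty;
    }
};
```

This makes the cost visible: every dirty response forces a writeback, which is exactly what the patch below avoids by requesting clean data in the first place.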
To avoid these unnecessary writebacks, this changeset forces mostly
exclusive caches to issue requests that can only fetch clean data
whenever possible.
Reported-by: Quereshi Muhammad Avais <avais@kaist.ac.kr>
Change-Id: I01b377563f5aa3e12d22f425a04db7c023071849
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5061
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Diffstat (limited to 'src/mem/cache/cache.cc')
-rw-r--r-- | src/mem/cache/cache.cc | 12 |
1 file changed, 11 insertions(+), 1 deletion(-)
```diff
diff --git a/src/mem/cache/cache.cc b/src/mem/cache/cache.cc
index a83f8ab12..421fa5b72 100644
--- a/src/mem/cache/cache.cc
+++ b/src/mem/cache/cache.cc
@@ -1021,8 +1021,18 @@ Cache::createMissPacket(PacketPtr cpu_pkt, CacheBlk *blk,
         cmd = MemCmd::SCUpgradeFailReq;
     } else {
         // block is invalid
+
+        // If the request does not need a writable there are two cases
+        // where we need to ensure the response will not fetch the
+        // block in dirty state:
+        // * this cache is read only and it does not perform
+        //   writebacks,
+        // * this cache is mostly exclusive and will not fill (since
+        //   it does not fill it will have to writeback the dirty data
+        //   immediately which generates unnecessary writebacks).
+        bool force_clean_rsp = isReadOnly || clusivity == Enums::mostly_excl;
         cmd = needsWritable ? MemCmd::ReadExReq :
-            (isReadOnly ? MemCmd::ReadCleanReq : MemCmd::ReadSharedReq);
+            (force_clean_rsp ? MemCmd::ReadCleanReq : MemCmd::ReadSharedReq);
     }
     PacketPtr pkt = new Packet(cpu_pkt->req, cmd, blkSize);
```
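The command-selection logic in the hunk can be exercised in isolation with the following simplified mirror (the enum and the free function are assumptions for illustration; gem5's real code uses `MemCmd` values and a member of `Cache`): a clean response is forced when the cache either cannot write back (read-only) or will not fill (mostly exclusive).

```cpp
#include <string>

// Assumed simplified mirror of the createMissPacket() decision from
// the patch; names are illustrative, not gem5's real API.
enum class Clusivity { mostly_incl, mostly_excl };

std::string selectMissCmd(bool needsWritable, bool isReadOnly,
                          Clusivity clusivity)
{
    // Force a clean response if this cache cannot write back
    // (read-only) or will not fill the block (mostly exclusive).
    bool force_clean_rsp =
        isReadOnly || clusivity == Clusivity::mostly_excl;
    return needsWritable ? "ReadExReq"
                         : (force_clean_rsp ? "ReadCleanReq"
                                            : "ReadSharedReq");
}
```

Note that requests needing a writable copy still use ReadExReq unchanged; only the read-only path gains the new ReadCleanReq case for mostly exclusive caches.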