author | Andreas Hansson <andreas.hansson@arm.com> | 2015-11-06 03:26:41 -0500 |
---|---|---|
committer | Andreas Hansson <andreas.hansson@arm.com> | 2015-11-06 03:26:41 -0500 |
commit | 654266f39cd67055d6176d22a46c7d678f6340c4 (patch) | |
tree | 250cf876eca7a4370ecc3a3e3fa6d9ba695f2830 /src/mem/cache/base.hh | |
parent | f02a9338c1efaf7680f598a57ff6607e9b11120e (diff) | |
download | gem5-654266f39cd67055d6176d22a46c7d678f6340c4.tar.xz |
mem: Add cache clusivity
This patch adds a parameter to control the cache clusivity, that is,
whether the cache is mostly inclusive or mostly exclusive. At the
moment there is no intention to support strict policies, so the
options are: 1) mostly inclusive, or 2) mostly exclusive.
The choice of policy guides the behaviour on a cache fill, and a new
helper function, allocOnFill, is created to encapsulate the
decision-making process. In timing mode, the decision is annotated on
the MSHR when sending out the downstream packet; in atomic mode, we
pass the decision directly to handleFill. We (ab)use the tempBlock in
cases where we are not allocating on fill, leaving the rest of the
cache unaffected. Simple and effective.
This patch also makes it more explicit that multiple caches are
allowed to consider a block writable (this was the case before this
patch as well). That is, for a mostly inclusive cache, multiple caches
upstream may also consider the block exclusive. The caches considering
the block writable/exclusive all appear along the same path to memory,
and from a coherency-protocol point of view it works because we always
snoop upwards in zero time before querying any downstream cache.
Note that this patch does not introduce clean writebacks. Thus, for
clean lines we are essentially removing a cache level if it is made
mostly exclusive. For example, lines from the read-only L1 instruction
cache or table-walker cache are always clean, and simply get dropped
rather than being passed to the L2. If the L2 is mostly exclusive and
does not allocate on fill, it will thus never hold the line. A
follow-on patch adds the clean writebacks.
The patch changes the L2 of the O3_ARM_v7a CPU configuration to be
mostly exclusive (and stats are affected accordingly).
Diffstat (limited to 'src/mem/cache/base.hh')
-rw-r--r-- | src/mem/cache/base.hh | 12 |
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/src/mem/cache/base.hh b/src/mem/cache/base.hh
index a992583fe..cb1baa3f4 100644
--- a/src/mem/cache/base.hh
+++ b/src/mem/cache/base.hh
@@ -210,7 +210,8 @@ class BaseCache : public MemObject
         // overlap
         assert(addr == blockAlign(addr));

-        MSHR *mshr = mq->allocate(addr, size, pkt, time, order++);
+        MSHR *mshr = mq->allocate(addr, size, pkt, time, order++,
+                                  allocOnFill(pkt->cmd));

         if (mq->isFull()) {
             setBlocked((BlockedCause)mq->index);
@@ -234,6 +235,15 @@ class BaseCache : public MemObject
     }

     /**
+     * Determine if we should allocate on a fill or not.
+     *
+     * @param cmd Packet command being added as an MSHR target
+     *
+     * @return Whether we should allocate on a fill or not
+     */
+    virtual bool allocOnFill(MemCmd cmd) const = 0;
+
+    /**
      * Write back dirty blocks in the cache using functional accesses.
      */
     virtual void memWriteback() = 0;