path: root/src/mem/cache/base.cc
author	Andreas Hansson <andreas.hansson@arm.com>	2016-03-17 09:51:22 -0400
committer	Andreas Hansson <andreas.hansson@arm.com>	2016-03-17 09:51:22 -0400
commit	abcbc4e51e21c95fa241d19ed13978ea25b26982 (patch)
tree	40dc2f9b3fa227212c2dde335451122fbf4e8411 /src/mem/cache/base.cc
parent	7a40e7864a99140f18049a6f97163eebca2c891e (diff)
download	gem5-abcbc4e51e21c95fa241d19ed13978ea25b26982.tar.xz
mem: Adjust cache queue reserve to more conservative values
The cache queue reserve acts as overflow headroom, sized by when we block the cache and how many transactions we may already have accepted before actually blocking. The previous values were probably chosen to be "big enough", when in fact we know that the MSHR queue is checked after every single allocation, and that the write buffer may implicitly need one reserve entry for every outstanding MSHR.
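
As a rough illustration of this headroom mechanism (a minimal sketch, not gem5's actual Queue implementation; the names QueueSketch, nominalEntries and reserve are invented for the example), the reserve is extra storage beyond the capacity at which the cache blocks:

#include <cassert>

// Minimal sketch of a queue with reserve headroom: the cache blocks
// new requests once the nominal capacity is exhausted, but requests
// that were already accepted can still spill into the reserve.
struct QueueSketch
{
    const int nominalEntries;  // capacity at which the cache blocks
    const int reserve;         // extra headroom for in-flight allocations
    int allocated = 0;

    QueueSketch(int entries, int headroom)
        : nominalEntries(entries), reserve(headroom) {}

    // The cache consults this to decide when to block.
    bool isFull() const { return allocated >= nominalEntries; }

    // Allocations committed before blocking may still land in the
    // reserve without exceeding the real storage.
    void allocate()
    {
        assert(allocated < nominalEntries + reserve);
        ++allocated;
    }
};

Under this model, a reserve of 0 is safe for the MSHR queue because fullness is re-checked after every single allocation, while the write buffer keeps one reserve entry per MSHR because each outstanding miss may eventually produce a writeback.
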
Diffstat (limited to 'src/mem/cache/base.cc')
-rw-r--r--	src/mem/cache/base.cc	10
1 file changed, 8 insertions, 2 deletions
diff --git a/src/mem/cache/base.cc b/src/mem/cache/base.cc
index 1cbfe713b..ecbd3526e 100644
--- a/src/mem/cache/base.cc
+++ b/src/mem/cache/base.cc
@@ -68,8 +68,8 @@ BaseCache::CacheSlavePort::CacheSlavePort(const std::string &_name,
BaseCache::BaseCache(const BaseCacheParams *p, unsigned blk_size)
: MemObject(p),
cpuSidePort(nullptr), memSidePort(nullptr),
- mshrQueue("MSHRs", p->mshrs, 4, p->demand_mshr_reserve),
- writeBuffer("write buffer", p->write_buffers, p->mshrs+1000),
+ mshrQueue("MSHRs", p->mshrs, 0, p->demand_mshr_reserve), // see below
+ writeBuffer("write buffer", p->write_buffers, p->mshrs), // see below
blkSize(blk_size),
lookupLatency(p->hit_latency),
forwardLatency(p->hit_latency),
@@ -85,6 +85,12 @@ BaseCache::BaseCache(const BaseCacheParams *p, unsigned blk_size)
addrRanges(p->addr_ranges.begin(), p->addr_ranges.end()),
system(p->system)
{
+ // the MSHR queue has no reserve entries as we check the MSHR
+ // queue on every single allocation, whereas the write queue has
+ // as many reserve entries as we have MSHRs, since every MSHR may
+ // eventually require a writeback, and we do not check the write
+ // buffer before committing to an MSHR
+
// forward snoops is overridden in init() once we can query
// whether the connected master is actually snooping or not
}
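
The comment added above captures the sizing policy; the hypothetical check below exercises it with made-up numbers (mshrs and writeBuffers stand in for p->mshrs and p->write_buffers), using the same headroom idea as the sketch earlier on this page:

#include <cassert>

// Hypothetical check of the reserve sizing: with N MSHRs the write
// buffer carries N reserve entries, so every outstanding miss can
// still deposit a writeback after the buffer has nominally blocked.
int main()
{
    const int mshrs = 4;          // stands in for p->mshrs
    const int writeBuffers = 8;   // stands in for p->write_buffers
    const int reserve = mshrs;    // one reserve entry per MSHR

    int allocated = 0;

    // Fill the nominal write-buffer capacity; the cache would block here.
    while (allocated < writeBuffers)
        ++allocated;

    // Each outstanding MSHR may still need one writeback entry; the
    // reserve keeps the total within writeBuffers + reserve entries.
    for (int i = 0; i < mshrs; ++i) {
        ++allocated;
        assert(allocated <= writeBuffers + reserve);
    }

    return 0;
}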