path: root/src/mem/cache/mshr.cc
2015-11-06  mem: Add cache clusivity  (Andreas Hansson)
This patch adds a parameter to control the cache clusivity, that is whether the cache is mostly inclusive or exclusive. At the moment there is no intention to support strict policies, and thus the options are: 1) mostly inclusive, or 2) mostly exclusive. The choice of policy guides the behaviour on a cache fill, and a new helper function, allocOnFill, is created to encapsulate the decision-making process. For the timing mode, the decision is annotated on the MSHR on sending out the downstream packet, and in atomic we directly pass the decision to handleFill. We (ab)use the tempBlock in cases where we are not allocating on fill, leaving the rest of the cache unaffected. Simple and effective.

This patch also makes it more explicit that multiple caches are allowed to consider a block writable (this was the case also before this patch). That is, for a mostly inclusive cache, multiple caches upstream may also consider the block exclusive. The caches considering the block writable/exclusive all appear along the same path to memory, and from a coherency protocol point of view it works because we always snoop upwards in zero time before querying any downstream cache.

Note that this patch does not introduce clean writebacks. Thus, for clean lines we are essentially removing a cache level if it is made mostly exclusive. For example, lines from the read-only L1 instruction cache or table-walker cache are always clean, and simply get dropped rather than being passed to the L2. If the L2 is mostly exclusive and does not allocate on fill it will thus never hold the line. A follow-on patch adds the clean writebacks.

The patch changes the L2 of the O3_ARM_v7a CPU configuration to be mostly exclusive (and stats are affected accordingly).
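A minimal standalone sketch of the fill-allocation decision described above; the Clusivity enum, the PacketSketch fields, and the writeback-only rule are illustrative assumptions, not gem5's actual allocOnFill interface:

```cpp
// Hypothetical sketch of the allocate-on-fill decision; the types and
// field names are stand-ins, not gem5's actual classes.
enum class Clusivity { MostlyInclusive, MostlyExclusive };

struct PacketSketch {
    bool isWriteback;   // assumed: the fill is caused by a writeback
};

// Decide whether a response fills a real tag entry or is only serviced
// through the temporary block mentioned in the commit message.
bool allocOnFill(Clusivity clusivity, const PacketSketch &pkt)
{
    // A mostly inclusive cache allocates on every fill; a mostly
    // exclusive cache restricts allocation (here, to writebacks only;
    // the real policy covers a few more cases).
    return clusivity == Clusivity::MostlyInclusive || pkt.isWriteback;
}
```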
2015-10-29  mem: Clarify cache MSHR handling on fill  (Andreas Hansson)
This patch addresses the upgrading of deferred targets in the MSHR, and makes it clearer by explicitly calling out what is happening (deferred targets are promoted if we get exclusivity without asking for it).
2015-09-04  mem: Avoid setting markPending if not needed  (Andreas Hansson)
In cases where a newly added target does not have any upstream MSHR to mark as downstreamPending, remember that nothing is marked. This allows us to avoid attempting to find the MSHR as part of the clearing of downstreamPending.
2015-07-13  mem: Fix (ab)use of emplace to avoid temporary object creation  (Andreas Hansson)
2015-03-17  mem: Create a request copy for deferred snoops  (Stephan Diestelhorst)
Sometimes, we need to defer an express snoop in an MSHR, but the original request might complete and deallocate the original pkt->req. In those cases, create a copy of the request so that whoever later inspects the delayed snoop can still inspect the request as well. All of this is rather hacky, but the allocation / linking and general life-time management of Packet and Request is rather tricky. Deleting the copy is another tricky area; testing so far has shown that the right copy is deleted at the right time.
2015-05-05  mem: Snoop into caches on uncacheable accesses  (Andreas Hansson)
This patch takes a last step in fixing issues related to uncacheable accesses. We do not separate uncacheable memory from uncacheable devices, and in cases where it is really memory, there are valid scenarios where we need to snoop since we do not support cache maintenance instructions (yet). On snooping an uncacheable access we thus provide data if possible. In essence this makes uncacheable accesses IO coherent. The snoop filter is also queried to steer the snoops, but not updated since the uncacheable accesses do not allocate a block.
2015-03-27  mem: Ignore uncacheable MSHRs when finding matches  (Andreas Hansson)
This patch changes how we search for matching MSHRs, ignoring any MSHR that is allocated for an uncacheable access. By doing so, this patch fixes a corner case in the MSHRs where incorrect data ended up being copied into a (cacheable) read packet due to a first uncacheable MSHR target of size 4, followed by a cacheable target to the same MSHR of size 64. The latter target was filled with nonsense data.
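An illustrative sketch of the matching rule described above; MshrSketch and findMatch are invented names for illustration, not the gem5 MSHR queue interface:

```cpp
// Skip MSHRs allocated for uncacheable accesses when a new access looks
// for a matching entry.
#include <cstdint>
#include <list>

struct MshrSketch {
    uint64_t blkAddr;
    bool isUncacheable;
};

MshrSketch *findMatch(std::list<MshrSketch> &mshrs, uint64_t blkAddr)
{
    for (auto &m : mshrs) {
        // Ignore uncacheable entries so a later cacheable access to the
        // same block is never merged into (and filled from) them.
        if (m.isUncacheable)
            continue;
        if (m.blkAddr == blkAddr)
            return &m;
    }
    return nullptr;
}
```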
2015-03-27  mem: Modernise MSHR iterators to C++11  (Andreas Hansson)
This patch updates the iterators in the MSHR and MSHR queues to use C++11 range-based for loops. It also does a bit of additional house keeping.
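A purely illustrative before/after of this kind of clean-up; Target and TargetList are placeholder types, not the exact MSHR classes:

```cpp
#include <list>

struct Target { bool markedPending = false; };
using TargetList = std::list<Target>;

void markAllPending(TargetList &targets)
{
    // Before: explicit iterators
    //   for (TargetList::iterator i = targets.begin(); i != targets.end(); ++i)
    //       i->markedPending = true;

    // After: C++11 range-based for loop
    for (auto &t : targets)
        t.markedPending = true;
}
```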
2015-03-27  mem: Align all MSHR entries to block boundaries  (Andreas Hansson)
This patch aligns all MSHR queue entries to block boundaries to simplify checks for matches. Previously there were corner cases that could lead to existing entries not being identified as matches. There are, rather alarmingly, a few regressions that change with this patch.
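A minimal sketch of keying MSHR entries by block address; blockAlign is a hypothetical helper and assumes a power-of-two block size:

```cpp
#include <cstdint>

inline uint64_t blockAlign(uint64_t addr, uint64_t blkSize)
{
    // Drop the offset bits so that any two accesses to the same cache
    // block produce the same MSHR key.
    return addr & ~(blkSize - 1);
}

// e.g. blockAlign(0x1234, 64) == 0x1200, the start of the 64-byte block
```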
2015-03-02  mem: Unify all cache DPRINTF address formatting  (Andreas Hansson)
This patch changes all the DPRINTF messages in the cache to use '%#llx' every time a packet address is printed. The inclusion of '#' ensures '0x' is prepended, and since the address type is a uint64_t, '%x' really should be '%llx'.
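A small standalone illustration of what '%#llx' produces for a 64-bit address; the DPRINTF line in the comment only indicates how such a message might look in the cache code:

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    uint64_t addr = 0x80001040;
    // In the cache: DPRINTF(Cache, "Handling packet for addr %#llx\n", addr);
    std::printf("Handling packet for addr %#llx\n",
                static_cast<unsigned long long>(addr));
    // Prints: Handling packet for addr 0x80001040
    return 0;
}
```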
2015-02-03  mem: Clarify cache behaviour for pending dirty responses  (Andreas Hansson)
This patch adds a bit of clarification around the assumptions made in the cache when packets are sent out, and dirty responses are pending. As part of the change, the marking of an MSHR as in service is simplified slightly, and comments are added to explain what assumptions are made.
2014-12-02  mem: Remove WriteInvalidate support  (Curtis Dunham)
Prepare for a different implementation following in the next patch
2014-12-02  mem: Clean up packet data allocation  (Andreas Hansson)
This patch attempts to make the rules for data allocation in the packet explicit, understandable, and easy to verify. The constructor that copies a packet is extended with an additional flag "alloc_data" to enable the call site to explicitly say whether the newly created packet is short-lived (a zero-time snoop), or has an unknown life-time and therefore should allocate its own data (or copy a static pointer in the case of static data).

The tricky case is the static data. In essence this is a copy-avoidance scheme where the original source of the request (DMA, CPU etc) does not ask the memory system to return data as part of the packet, but instead provides a pointer, and then the memory system carries this pointer around, and copies the appropriate data to the location itself. Thus any derived packet actually never copies any data. As the original source does not copy any data from the response packet when arriving back at the source, we must maintain the copy of the original pointer to not break the system. We might want to revisit this one day and pay the price for a few extra memcpy invocations.

All in all this patch should make it easier to grok what is going on in the memory system and how data is actually copied (or not).
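A rough standalone sketch of the copy rules described above; PacketSketch is a simplified stand-in for gem5's Packet, and the alloc_data handling follows the description rather than the exact implementation:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

struct PacketSketch {
    uint8_t *data = nullptr;    // either caller-owned (static) or our own
    bool staticData = false;    // true if the requester provided the pointer
    std::size_t size = 0;

    // Fresh packet: use the caller's static pointer if given, otherwise
    // allocate dynamic data of our own.
    PacketSketch(std::size_t sz, uint8_t *static_ptr = nullptr)
        : data(static_ptr), staticData(static_ptr != nullptr), size(sz)
    {
        if (!staticData)
            data = new uint8_t[size]();
    }

    // alloc_data == false: short-lived copy (zero-time snoop), no data.
    // alloc_data == true:  carry the static pointer, or allocate and copy.
    PacketSketch(const PacketSketch &other, bool alloc_data)
        : staticData(other.staticData), size(other.size)
    {
        if (!alloc_data)
            return;
        if (other.staticData) {
            data = other.data;              // copy-avoidance: same pointer
        } else {
            data = new uint8_t[size];
            std::memcpy(data, other.data, size);
        }
    }

    ~PacketSketch() { if (!staticData) delete[] data; }
};
```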
2014-10-21  mem: don't inhibit WriteInv's or defer snoops on their MSHRs  (Curtis Dunham)
WriteInvalidate semantics depend on the unconditional writeback or they won't complete. Also, there's no point in deferring snoops on their MSHRs, as they don't get new data at the end of their life cycle the way other transactions do. Add comment in the cache about a minor inefficiency re: WriteInvalidate.
2014-10-29  mem: have WriteInvalidate obsolete MSHRs  (Curtis Dunham)
Since WriteInvalidate directly writes into the cache, it can create tricky timing interleavings with reads and writes to the same cache line that haven't yet completed. This patch ensures that these requests, when completed, don't overwrite the newer data from the WriteInvalidate.
2014-10-09  mem: Add packet sanity checks to cache and MSHRs  (Andreas Hansson)
This patch adds a number of asserts to the cache, checking basic assumptions about packets being requests or responses.
2014-01-24  mem: Add support for a security bit in the memory system  (Giacomo Gabrielli)
This patch adds the basic building blocks required to support e.g. ARM TrustZone by discerning secure and non-secure memory accesses.
2013-05-30  mem: Spring cleaning of MSHR and MSHRQueue  (Andreas Hansson)
This patch does some minor tidying up of the MSHR and MSHRQueue. The clean up started as part of some ad-hoc tracing and debugging, but seems worthwhile enough to go in as a separate patch. The highlights of the changes are reduced scoping (private) members where possible, avoiding redundant new/delete, and constructor initialisation to please static code analyzers.
2013-05-30  mem: Fix MSHR print format  (Andreas Hansson)
This patch fixes an incorrect print format string by adding an additional string element.
2013-04-22  mem: Adding verbose debug output in the memory system  (Uri Wiener)
This patch provides useful printouts throughout the memory system. This includes pretty-printed cache tags and function call messages (call-stack like).
2013-02-19  mem: Fix SenderState related cache deadlock  (Sascha Bischoff)
This patch fixes a potential deadlock in the caches. This deadlock could occur when more than one cache is used in a system, and pkt->senderState is modified in between the two caches. This happened as the caches relied on the senderState remaining unchanged, and used it for instantaneous upstream communication with other caches.

This issue has been addressed by iterating over the linked list of senderStates until we are either able to cast to a MSHR* or senderState is NULL. If the cast is successful, we know that the packet has previously passed through another cache, and therefore update the downstreamPending flag accordingly. Otherwise, we do nothing.
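A standalone sketch of the fix just described: walk the chain of sender states until one casts to an MSHR, or the chain runs out. The class names and the 'predecessor' link are illustrative, not the gem5 interface:

```cpp
struct SenderStateSketch {
    SenderStateSketch *predecessor = nullptr;
    virtual ~SenderStateSketch() = default;
};

struct MshrSketch : SenderStateSketch {
    bool downstreamPending = false;
};

void markDownstreamPending(SenderStateSketch *state)
{
    while (state != nullptr) {
        if (auto *mshr = dynamic_cast<MshrSketch *>(state)) {
            // The packet passed through another cache upstream: record
            // that a downstream request is now outstanding for it.
            mshr->downstreamPending = true;
            return;
        }
        // Some other object pushed its own sender state on top; keep
        // looking further down the chain instead of giving up.
        state = state->predecessor;
    }
    // No MSHR anywhere in the chain: nothing to mark.
}
```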
2012-07-09  Fix: Address a few benign memory leaks  (Andreas Hansson)
This patch is the result of static analysis identifying a number of memory leaks. The leaks are all benign as they are a result of not deallocating memory in the destructor. The fix still has value as it removes false positives in the static analysis.
2012-05-10  gem5: Fix a number of incorrect case statements  (Ali Saidi)
2012-04-06  MEM: Enable multiple distributed generalized memories  (Andreas Hansson)
This patch removes the assumption of having one single instance of PhysicalMemory, and enables a distributed memory where the individual memories in the system are each responsible for a single contiguous address range.

All memories inherit from an AbstractMemory that encompasses the basic behaviour of a random access memory, and provides untimed access methods. What was previously called PhysicalMemory is now SimpleMemory, and a subclass of AbstractMemory. All future types of memory controllers should inherit from AbstractMemory.

To enable e.g. the atomic CPU and RubyPort to access the now distributed memory, the system has a wrapper class, called PhysicalMemory, that is aware of all the memories in the system and their associated address ranges. This class thus acts as an infinitely-fast bus and performs address decoding for these "shortcut" accesses. Each memory can specify that it should not be part of the global address map (used e.g. by the functional memories by some testers). Moreover, each memory can be configured to be reported to the OS configuration table, useful for populating ATAG structures, and any potential ACPI tables.

Checkpointing support currently assumes that all memories have the same size and organisation when creating and resuming from the checkpoint. A future patch will enable a more flexible re-organisation.

--HG--
rename : src/mem/PhysicalMemory.py => src/mem/AbstractMemory.py
rename : src/mem/PhysicalMemory.py => src/mem/SimpleMemory.py
rename : src/mem/physical.cc => src/mem/abstract_mem.cc
rename : src/mem/physical.hh => src/mem/abstract_mem.hh
rename : src/mem/physical.cc => src/mem/simple_mem.cc
rename : src/mem/physical.hh => src/mem/simple_mem.hh
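A minimal sketch of the "infinitely-fast bus" address decoding the wrapper performs: pick the memory whose range contains the address. RangeSketch and MemSketch are invented types, not gem5's AddrRange or AbstractMemory:

```cpp
#include <cstdint>
#include <vector>

struct RangeSketch { uint64_t start; uint64_t size; };
struct MemSketch   { RangeSketch range; /* backing store, flags, ... */ };

MemSketch *decode(std::vector<MemSketch> &mems, uint64_t addr)
{
    for (auto &m : mems) {
        if (addr >= m.range.start && addr < m.range.start + m.range.size)
            return &m;   // each memory owns one contiguous range
    }
    return nullptr;      // address not backed by any memory in the map
}
```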
2011-04-15  trace: reimplement the DTRACE function so it doesn't use a vector  (Nathan Binkert)
At the same time, rename the trace flags to debug flags since they have broader usage than simply tracing. This means that --trace-flags is now --debug-flags and --trace-help is now --debug-help.
2011-01-07  Replace curTick global variable with accessor functions.  (Steve Reinhardt)
This step makes it easy to replace the accessor functions (which still access a global variable) with ones that access per-thread curTick values.
2010-09-09  cache: fail SC when invalidated while waiting for bus  (Steve Reinhardt)
Corrects an oversight in cset f97b62be544f. The fix there only failed queued SCUpgradeReq packets that encountered an invalidation, which meant that the upgrade had to reach the L2 cache. To handle pending requests in the L1 we must similarly fail StoreCondReq packets too.
2010-09-09  cache: coherence protocol enhancements & bug fixes  (Steve Reinhardt)
Allow lower-level caches (e.g., L2 or L3) to pass exclusive copies to higher levels (e.g., L1). This eliminates a lot of unnecessary upgrade transactions on read-write sequences to non-shared data. Also some cleanup of MSHR coherence handling and multiple bug fixes.
2010-08-25  mem: fix dumb typo in copyrights  (Steve Reinhardt)
2010-06-16  cache: fail store conditionals when upgrade loses race  (Steve Reinhardt)
Requires new "SCUpgradeReq" message that marks upgrades for store conditionals, so downstream caches can fail these when they run into invalidations. See http://www.m5sim.org/flyspray/task/197
2009-05-26  types: add a type for thread IDs and try to use it everywhere  (Nathan Binkert)
2009-05-17  includes: sort includes again  (Nathan Binkert)
2009-05-17  types: Move stuff for global types into src/base/types.hh  (Nathan Binkert)
--HG-- rename : src/sim/host.hh => src/base/types.hh
2009-02-16  Fixes to get prefetching working again.  (Steve Reinhardt)
Apparently we broke it with the cache rewrite and never noticed. Thanks to Bao Yungang <baoyungang@gmail.com> for a significant part of these changes (and for inspiring me to work on the rest). Some other overdue cleanup on the prefetch code too.
2008-11-10  Cache: Refactor packet forwarding a bit.  (Steve Reinhardt)
Makes adding write-through operations easier.
2008-02-10  Fix #include lines for renamed cache files.  (Steve Reinhardt)
--HG-- extra : convert_revision : b5008115dc5b34958246608757e69a3fa43b85c5
2008-02-10  Rename cache files for brevity and consistency with rest of tree.  (Steve Reinhardt)
--HG--
rename : src/mem/cache/base_cache.cc => src/mem/cache/base.cc
rename : src/mem/cache/base_cache.hh => src/mem/cache/base.hh
rename : src/mem/cache/cache_blk.cc => src/mem/cache/blk.cc
rename : src/mem/cache/cache_blk.hh => src/mem/cache/blk.hh
rename : src/mem/cache/cache_builder.cc => src/mem/cache/builder.cc
rename : src/mem/cache/miss/mshr.cc => src/mem/cache/mshr.cc
rename : src/mem/cache/miss/mshr.hh => src/mem/cache/mshr.hh
rename : src/mem/cache/miss/mshr_queue.cc => src/mem/cache/mshr_queue.cc
rename : src/mem/cache/miss/mshr_queue.hh => src/mem/cache/mshr_queue.hh
rename : src/mem/cache/prefetch/base_prefetcher.cc => src/mem/cache/prefetch/base.cc
rename : src/mem/cache/prefetch/base_prefetcher.hh => src/mem/cache/prefetch/base.hh
rename : src/mem/cache/prefetch/ghb_prefetcher.cc => src/mem/cache/prefetch/ghb.cc
rename : src/mem/cache/prefetch/ghb_prefetcher.hh => src/mem/cache/prefetch/ghb.hh
rename : src/mem/cache/prefetch/stride_prefetcher.cc => src/mem/cache/prefetch/stride.cc
rename : src/mem/cache/prefetch/stride_prefetcher.hh => src/mem/cache/prefetch/stride.hh
rename : src/mem/cache/prefetch/tagged_prefetcher.cc => src/mem/cache/prefetch/tagged.cc
rename : src/mem/cache/prefetch/tagged_prefetcher.hh => src/mem/cache/prefetch/tagged.hh
rename : src/mem/cache/tags/base_tags.cc => src/mem/cache/tags/base.cc
rename : src/mem/cache/tags/base_tags.hh => src/mem/cache/tags/base.hh
rename : src/mem/cache/tags/Repl.py => src/mem/cache/tags/iic_repl/Repl.py
rename : src/mem/cache/tags/repl/gen.cc => src/mem/cache/tags/iic_repl/gen.cc
rename : src/mem/cache/tags/repl/gen.hh => src/mem/cache/tags/iic_repl/gen.hh
rename : src/mem/cache/tags/repl/repl.hh => src/mem/cache/tags/iic_repl/repl.hh
extra : convert_revision : ff7a35cc155a8d80317563c45cebe405984eac62