path: root/src/mem/cache
Age  Commit message  Author
2016-08-15  mem: Print an MSHR without triggering any assertions  (Nikos Nikoleris)
Previously, printing an MSHR would trigger an assertion if the MSHR was not in service or if the targets list was empty. This patch changes the print function to bypass the accessor functions for postInvalidate and postDowngrade, avoiding the relevant assertions. It also checks whether the targets list is empty before calling print on it. Change-Id: Ic18bee6cb088f63976112eba40e89501237cfe62 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
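A minimal standalone sketch of the idea, using made-up stand-in types rather than gem5's MSHR class: the debug print reads the raw flag directly instead of going through an assert-guarded accessor, and skips the target list when it is empty.

    #include <cassert>
    #include <cstdio>
    #include <list>
    #include <string>

    // Illustrative stand-in for an MSHR-like entry; names are hypothetical.
    struct Entry {
        bool inService = false;
        bool postInvalidate = false;   // raw status flag
        std::list<std::string> targets;

        // Accessor that is only legal once the entry is in service.
        bool hasPostInvalidate() const {
            assert(inService);
            return postInvalidate;
        }

        // Debug print must not trip the assertion above, so it reads the
        // raw flag and only walks the target list if it is non-empty.
        void print() const {
            std::printf("in service: %d, post-invalidate: %d\n",
                        inService, postInvalidate);
            if (!targets.empty()) {
                for (const auto &t : targets)
                    std::printf("  target: %s\n", t.c_str());
            }
        }
    };

    int main() {
        Entry e;          // not in service, no targets
        e.print();        // safe: no assertion fires
        return 0;
    }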
2016-08-12  mem: Update mostly exclusive policy even further  (Andreas Hansson)
This patch takes yet another step in maintaining the clusivity, in that it allows a mostly-inclusive cache to hold on to blocks even when responding to a ReadExReq or UpgradeReq. Previously the cache simply invalidated these blocks, but there is no strict need to do so. The most important part of this patch is that we simply mark the block clean when satisfying the upstream request where the cache is allowed to keep the block. The only tricky part of the patch is in the memory management of deferred snoops, where we need to distinguish the cases where only the packet was copied (we expected to respond), and the cases where we created an entirely new packet and request (we kept it only to replay later). The code in satisfyRequest is definitely ready for some refactoring after this. Change-Id: I201ddc7b2582eaa46fb8cff0c7ad09e02d64b0fc Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
2016-08-12  mem: Update mostly exclusive cache policy to cover more cases  (Andreas Hansson)
This patch changes how the mostly exclusive policy is enforced to ensure that we drop blocks when we should. As part of this change, the actual invalidation due to the clusivity enforcement is moved outside the hit handling, to a separate method maintainClusivity. For the timing mode that means we can deal with all MSHR targets before taking any action and possibly dropping the block. The method satisfyCpuSideRequest is also renamed satisfyRequest as part of this change (since we only ever see requests from the cpu-side port). Change-Id: If6f3d1e0c3e7be9a67b72a55e4fc2ec4a90fd3d2 Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
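A rough illustration of the mechanism, with simplified types that are not gem5's actual interfaces: clusivity maintenance is a separate step, run after all targets have been handled, that drops a clean block when the cache is mostly exclusive and the request came from an upstream cache.

    // Hypothetical, simplified sketch of a maintainClusivity-style helper.
    enum class Clusivity { MostlyIncl, MostlyExcl };

    struct Block {
        bool valid = false;
        bool dirty = false;
        void invalidate() { valid = false; dirty = false; }
    };

    struct Cache {
        Clusivity clusivity = Clusivity::MostlyExcl;

        // Called once per access, after every MSHR target has been handled,
        // so the decision to drop the block is taken exactly once.
        void maintainClusivity(bool fromCache, Block *blk) {
            if (fromCache && blk && blk->valid && !blk->dirty &&
                clusivity == Clusivity::MostlyExcl) {
                // An upstream cache now holds the line; drop our clean copy.
                blk->invalidate();
            }
        }
    };

    int main() {
        Cache c;
        Block b;
        b.valid = true;
        c.maintainClusivity(true, &b);   // request came from a cache: dropped
        return b.valid ? 1 : 0;
    }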
2016-08-12  mem: Add a FromCache packet attribute  (Andreas Hansson)
This patch adds a FromCache attribute to the packet, and updates a number of the existing request commands to reflect that the request originates from a cache. The attribute simplifies checking whether a request came from a cache or not, and this is used by both the cache and the snoop filter in follow-on patches. Change-Id: Ib0a7a080bbe4d6036ddd84b46fd45bc7eb41cd8f Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com> Reviewed-by: Steve Reinhardt <stever@gmail.com>
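One way to picture the attribute, as a hedged sketch with invented command names rather than gem5's MemCmd table: each command carries attribute bits, and a FromCache bit lets the cache and the snoop filter test the origin with a single accessor.

    #include <cstdint>

    // Hypothetical command attribute bits.
    enum Attr : uint32_t {
        IsRead    = 1u << 0,
        IsWrite   = 1u << 1,
        NeedsResp = 1u << 2,
        FromCache = 1u << 3,   // the request was issued by a cache
    };

    struct Command {
        uint32_t attrs;
        bool fromCache() const { return attrs & FromCache; }
    };

    // E.g. a ReadExReq-like command originates from a cache, while a plain
    // CPU read does not; one accessor replaces ad-hoc per-command checks.
    constexpr Command ReadExLike  {IsRead | NeedsResp | FromCache};
    constexpr Command CpuReadLike {IsRead | NeedsResp};

    int main() {
        return (ReadExLike.fromCache() && !CpuReadLike.fromCache()) ? 0 : 1;
    }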
2016-07-11  mem: Remove stale argument from a DPRINTF in the cache code  (Nikos Nikoleris)
Change-Id: I70dd11c23b45dfc606ef08233d2e50fcc0817505 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-06-06  sim: Call regStats of base-class as well  (Stephan Diestelhorst)
We want to extend the stats of objects hierarchically and thus it is necessary to register the statistics of the base-class(es), as well. For now, these are empty, but generic stats will be added there. Patch originally provided by Akash Bagdia at ARM Ltd.
2016-05-26  mem: Fix memory leak in handling of deferred snoops  (Andreas Hansson)
This patch fixes a memory leak where deferred snoop packets never got deallocated. On the call to MSHR::handleSnoop these snoops were treated as if a response would be sent, as the MSHR was pendingModified. Consequently, a copy of the packet was created and added to the MSHR targets. However, a preceding target to the same MSHR, originally from a CPU, was serviced before the snoop and caused the block to be invalidated. This happens for ReadExReq and UpgradeReq. Note that the original snoop will still receive a response, just not from the cache in question, but instead from the upstream cache that issued the ReadExReq or UpgradeReq. Change-Id: I4ac012fbc8a46cf693ca390fe9476105d444e6f4 Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
2016-05-26  mem: Do not set cacheResponding on MSHR snoop if not responding  (Andreas Hansson)
This patch changes the flow control in MSHR::handleSnoop to ensure that we only set cacheResponding on the snoop packet if we are actually responding. This avoids situations where a responder stalls indefinitely on a response that never arrives. Change-Id: I691dd01755b614b30203581aa74fc743b350eacc Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
2016-05-26  mem: fix headers include order in the cache related classes  (Nikos Nikoleris)
Change-Id: Ia57cc104978861ab342720654e408dbbfcbe4b69 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-05-26  mem: remove redundant check whether the cache forwards snoops  (Nikos Nikoleris)
Change-Id: I57b56771086e1e2f512977fb7248d93c171ab925 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-05-26  mem: change NULL to nullptr in the cache related classes  (Nikos Nikoleris)
Change-Id: I5042410be54935650b7d05c84d8d9efbfcc06e70 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-05-26  mem: fix the line length in the cache related classes  (Nikos Nikoleris)
Change-Id: I6d1feb164a958dde0da87a1cd2698096112c4a82 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-04-21  mem: Include WriteLineReq in cache demand stats  (Andreas Hansson)
Somehow the WriteLineReq command was never added to the list of commands considered demand.
2016-04-21  mem: Remove unused cache stats  (Andreas Hansson)
Prune cache stats that are never actually used.
2016-04-21  mem: Deallocate all write-queue entries when sent  (Andreas Hansson)
This patch removes the write-queue entry tracking previously used for uncacheable writes. The write-queue entry is now deallocated as soon as the packet is sent. As a result we also forego the stats for uncacheable writes. Additionally, there is no longer a need to attach the write-queue entry to the packet.
2016-04-21  mem: Align downstream cache packet creation in atomic and timing  (Andreas Hansson)
This patch makes the control flow more uniform in atomic and timing, ultimately making the code easier to understand.
2016-04-07  mem: Add priority to QueuedPrefetcher  (Rekai Gonzalez Alberquilla)
Queued prefetcher entries now include a priority field. The idea is to insert packets ordered first by priority and then by age. For the existing algorithms where priority does not make sense, it is set to 0 for all deferred packets in the queue.
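A small standalone sketch of the intended ordering (DeferredPacket and insertOrdered are illustrative names, not gem5's actual queue code): packets are inserted by descending priority, and by age within the same priority.

    #include <cstdint>
    #include <list>

    // Hypothetical deferred-prefetch entry: higher priority first,
    // and older entries first within the same priority.
    struct DeferredPacket {
        int32_t priority;
        uint64_t tick;      // creation time, used as the age tie-breaker
        uint64_t addr;
    };

    void insertOrdered(std::list<DeferredPacket> &queue,
                       const DeferredPacket &dp)
    {
        auto it = queue.begin();
        // Skip entries with strictly higher priority, then entries of the
        // same priority that are older (smaller tick) than the new one.
        while (it != queue.end() &&
               (it->priority > dp.priority ||
                (it->priority == dp.priority && it->tick <= dp.tick)))
            ++it;
        queue.insert(it, dp);
    }

    int main() {
        std::list<DeferredPacket> q;
        insertOrdered(q, {0, 100, 0x1000});
        insertOrdered(q, {2, 101, 0x2000});   // higher priority: goes first
        insertOrdered(q, {2, 102, 0x3000});   // same priority, younger: after
        return q.front().addr == 0x2000 ? 0 : 1;
    }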
2016-04-07  mem: Handful of extra features for BasePrefetcher  (Rekai Gonzalez Alberquilla)
Some common functionality is added to the base prefetcher, mainly for extracting the block address, page address, block index inside the page, and other information that can be inferred from the block address. This is used by several prefetching algorithms, and the base class, which already knows the block size and related information, is the sensible place for these methods.
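The kind of address arithmetic being centralised might look like the following standalone sketch (names and parameters are illustrative, not gem5's actual helpers):

    #include <cassert>
    #include <cstdint>

    struct AddrHelpers {
        uint64_t blkSize;     // cache block size in bytes, power of two
        uint64_t pageBytes;   // page size in bytes, power of two

        uint64_t blockAddress(uint64_t a) const { return a & ~(blkSize - 1); }
        uint64_t pageAddress(uint64_t a) const { return a & ~(pageBytes - 1); }
        uint64_t pageOffset(uint64_t a) const { return a & (pageBytes - 1); }
        // Index of the block within its page.
        uint64_t blockIndex(uint64_t a) const { return pageOffset(a) / blkSize; }
    };

    int main() {
        AddrHelpers h{64, 4096};
        assert(h.blockAddress(0x1234) == 0x1200);
        assert(h.pageAddress(0x1234) == 0x1000);
        assert(h.blockIndex(0x1234) == 8);   // offset 0x234 / 64 = 8
        return 0;
    }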
2015-05-27  mem: Add unused prefetch counter in caches  (Rekai Gonzalez Alberquilla)
Added a stat to the cache to account for HardPF'ed blocks that are evicted before being referenced (over-prefetching).
2016-04-07  mem: Remove threadId from memory request class  (Mitch Hayenga)
In general, the ThreadID parameter is unnecessary in the memory system as the ContextID is what is used for the purposes of locks/wakeups. Since we allocate sequential ContextIDs for each thread on MT-enabled CPUs, ThreadID is unnecessary as the CPUs can identify the requesting thread through sideband info (SenderState / LSQ entries) or ContextID offset from the base ContextID for a cpu. This is a re-spin of 20264eb after the revert (bd1c6789) and includes some fixes of that commit.
2016-04-06  Revert power patch sets with unexpected interactions  (Andreas Sandberg)
The following patches had unexpected interactions with the current upstream code and have been reverted for now:
  e07fd01651f3: power: Add support for power models
  831c7f2f9e39: power: Low-power idle power state for idle CPUs
  4f749e00b667: power: Add power states to ClockedObject
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-04-05  mem: Remove threadId from memory request class  (Mitch Hayenga)
In general, the ThreadID parameter is unnecessary in the memory system as the ContextID is what is used for the purposes of locks/wakeups. Since we allocate sequential ContextIDs for each thread on MT-enabled CPUs, ThreadID is unnecessary as the CPUs can identify the requesting thread through sideband info (SenderState / LSQ entries) or ContextID offset from the base ContextID for a cpu.
2016-03-17  mem: Adjust cache queue reserve to more conservative values  (Andreas Hansson)
The cache queue reserve is there as an overflow to give us enough headroom based on when we block the cache, and how many transactions we may already have accepted before actually blocking. The previous values were probably chosen to be "big enough", when in fact we check the MSHRs after every single allocation, and for the write buffers we implicitly may need one entry for every outstanding MSHR.
2016-03-17  mem: Create a separate class for the cache write buffer  (Andreas Hansson)
This patch breaks out the cache write buffer into a separate class, without affecting any stats. The goal of the patch is to avoid encumbering the much-simpler write queue with the complex MSHR handling. In a follow on patch this simplification allows us to implement write combining. The WriteQueue gets its own class, but shares a common ancestor, the generic Queue, with the MSHRQueue.
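A hedged sketch of the shared-ancestor layout (names and members are illustrative, not gem5's actual classes): a generic Queue owns the common bookkeeping, while MSHRQueue and WriteQueue add only their own policy.

    #include <cstdint>
    #include <list>

    // Generic queue handling allocation; derived classes stay simple.
    template <class EntryT>
    class Queue {
      protected:
        std::list<EntryT> allocated;
      public:
        EntryT *allocate(uint64_t addr) {
            allocated.push_back(EntryT{addr});
            return &allocated.back();
        }
        bool empty() const { return allocated.empty(); }
    };

    struct MSHR    { uint64_t addr; /* targets, deferred snoops, ... */ };
    struct WQEntry { uint64_t addr; /* just the pending write */ };

    class MSHRQueue  : public Queue<MSHR>    { /* complex target handling */ };
    class WriteQueue : public Queue<WQEntry> { /* simple queue of writes  */ };

    int main() {
        MSHRQueue mshrs;
        WriteQueue writes;
        mshrs.allocate(0x100);
        writes.allocate(0x200);
        return (mshrs.empty() || writes.empty()) ? 1 : 0;
    }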
2015-08-10  mem, cpu: Add assertions to snoop invalidation logic  (Stephan Diestelhorst)
This patch adds assertions that enforce that only invalidating snoops will ever reach into the logic that tracks in-order load completion and also invalidation of LL/SC (and MONITOR / MWAIT) monitors. Also adds some comments to MSHR::replaceUpgrades().
2016-02-24  mem: Ensure that InvalidateReq is not forwarded as ReadExReq  (Andreas Hansson)
This patch fixes an issue where an InvalidateReq only traversed one level of the cache hierarchy, and was subsequently turned into a ReadExReq because it needs a writable copy and the command was not checked for explicitly.
2016-02-15  mem: Avoid using invalid iterator in cache lock list traversal  (Andreas Hansson)
Fix up an issue highlighted by Valgrind and the clang Address Sanitizer.
2016-02-10  mem: Be less conservative in clearing load locks in the cache  (Andreas Hansson)
Avoid being overly conservative in clearing load locks in the cache, and allow writes to the line if they are from the same context. This is in line with ALPHA and ARM.
2016-02-10  mem: Move the point of coherency to the coherent crossbar  (Andreas Hansson)
This patch introduces the ability of making the coherent crossbar the point of coherency. If so, the crossbar does not forward packets where a cache with ownership has already committed to responding, and also does not forward any coherency-related packets that are not intended for a downstream memory controller. Thus, invalidations and upgrades are turned around in the crossbar, and the memory controller only sees normal reads and writes. In addition this patch moves the express snoop promotion of a packet to the crossbar, thus allowing the downstream cache to check the express snoop flag (as it should) for bypassing any blocking, rather than relying on whether a cache is responding or not.
2016-02-10  mem: Align cache behaviour in atomic when upstream is responding  (Andreas Hansson)
Adopt the same flow as in timing mode, where the caches on the path to memory get to keep the line (if present), and we use the responderHadWritable flag to determine if we need to forward the (invalidating) packet or not.
2016-02-10  mem: Align how snoops are handled when hitting writebacks  (Andreas Hansson)
This patch unifies the snoop handling in case of hitting writebacks with how we handle snoops hitting in the tags. As a result, we end up using the same optimisation as the normal snoops, where we inform the downstream cache if we encounter a line in Modified (writable and dirty) state, which enables us to avoid sending out express snoops to invalidate any Shared copies of the line. A few regressions consequently change, as some transactions are sunk higher up in the cache hierarchy.
2016-02-10  mem: Deduce if cache should forward snoops  (Andreas Hansson)
This patch changes how the cache determines whether snoops should be forwarded from the memory side to the CPU side. Instead of having a parameter, the cache now looks at the port connected on the CPU side, and if it is a snooping port, then snoops are forwarded. This is less error prone, and leaves fewer parameters to worry about. The patch also tidies up the CPU classes to ensure that their I-side port is not snooping, by removing overrides of the snoop request handler so that snoop requests will panic via the default MasterPort implementation.
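A minimal standalone sketch of the deduction (Port, Cache, and forwardSnoops here are simplified stand-ins, not gem5's actual classes):

    // Rather than a forward_snoops parameter, ask the port that is
    // actually connected on the CPU side whether it snoops.
    struct Port {
        bool snooping;
        bool isSnooping() const { return snooping; }
    };

    struct Cache {
        Port &cpuSidePort;
        bool forwardSnoops() const {
            // Forward memory-side snoops only if someone above us listens.
            return cpuSidePort.isSnooping();
        }
    };

    int main() {
        Port l1Port{true};        // a cache or CPU above us that snoops
        Cache cache{l1Port};
        return cache.forwardSnoops() ? 0 : 1;
    }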
2016-02-06  style: fix missing spaces in control statements  (Steve Reinhardt)
Result of running 'hg m5style --skip-all --fix-control -a'.
2015-12-31  mem: add CacheVerbose debug flag, filter noisy DPRINTFs  (Steve Reinhardt)
Some of the DPRINTFs added to the classic cache in cset 45df88079f04, while useful to those unfamiliar with the cache code, end up being noise when you're familiar with the code but are trying to debug tricky protocol issues. (Particularly getting two messages from each cache as it receives a snoop request then declares that there was no match.) This patch introduces a CacheVerbose debug flag, and moves a subset of the added DPRINTFs into that category, so that Cache by itself returns to being a more succinct summary of cache activity. Also added a CacheAll compound flag to turn on all the cache-related debug flags (other than CacheTags, which you *really* have to want badly to turn it on, IMO).
2015-12-31  mem: Do not allocate space for packet data if not needed  (Andreas Hansson)
This patch looks at the request and response command to determine if either actually has any data payload, and if not, we do not allocate any space for packet data. The only tricky case is where the command type is changed as part of the MSHR functionality. In these cases where the original packet had no data, but the new packet does, we need to explicitly call allocate().
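A hedged sketch of the idea, with an illustrative Packet type rather than gem5's real one: storage is only allocated when the command actually carries a payload, so commands without data stay allocation-free.

    #include <cstdint>
    #include <memory>

    struct Packet {
        bool hasData;                       // does this command carry a payload?
        uint32_t size;
        std::unique_ptr<uint8_t[]> data;    // only allocated when needed

        void allocate() {
            if (hasData && !data)
                data = std::make_unique<uint8_t[]>(size);  // zero-initialised
        }
    };

    int main() {
        Packet upgrade{false, 64};   // e.g. an upgrade-like command: no payload
        upgrade.allocate();          // no buffer is allocated

        Packet readResp{true, 64};   // a read response does carry data
        readResp.allocate();         // 64-byte buffer allocated here
        return (!upgrade.data && readResp.data) ? 0 : 1;
    }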
2015-12-31  mem: Do not alter cache block state on uncacheable snoops  (Andreas Hansson)
This patch ensures we do not respond with a Modified (dirty and writable) line if the request is uncacheable, and that the responding cache retains the line without modifying its state (even if responding).
2015-12-31  mem: Make cache terminology easier to understand  (Andreas Hansson)
This patch changes the name of a bunch of packet flags and MSHR member functions and variables to make the coherency protocol easier to understand. In addition the patch adds and updates lots of descriptions, explicitly spelling out assumptions. The following name changes are made:
  * the packet memInhibit flag is renamed to cacheResponding
  * the packet sharedAsserted flag is renamed to hasSharers
  * the packet NeedsExclusive attribute is renamed to NeedsWritable
  * the packet isSupplyExclusive is renamed to responderHadWritable
  * the MSHR pendingDirty is renamed to pendingModified
The cache states Modified, Owned, Exclusive, and Shared are also called out in the cache and MSHR code to make them easier to understand.
2015-12-28  mem: Explicitly check MSHR snoops for cases not dealt with  (Andreas Hansson)
Add a sanity check to make it explicit that we currently do not allow an I/O coherent agent to directly issue writes into the coherent part of the memory system (it has to go via a cache, and get transformed into a read ex, upgrade or invalidation).
2015-12-28  mem: Remove unused cache squash functionality  (Andreas Hansson)
This patch removes the unused squash function from the MSHR queue, and the associated (and also unused) threadNum member from the MSHR.
2015-12-28  mem: Avoid unnecessary checks when creating HardPFReq in cache  (Andreas Hansson)
The checks made before sending out a HardPFReq were unnecessarily complex, and checked for cases that never occur. This patch tidies it up.
2015-12-28  mem: Do not use sender state to track forwarded snoops in cache  (Andreas Hansson)
This patch changes how the cache tracks which snoops are forwarded, and which ones are created locally. Previously the identification was based on an empty sender state of a specific class, but this method fails to distinguish which cache actually attached the sender state. Instead we use the same mechanism as the crossbar, and keep track of the requests that have outstanding snoops.
2015-12-28  mem: Fix cache sender state handling and add clarification  (Andreas Hansson)
This patch addresses a bug in how the cache attached the MSHR as a sender state. Rather than overwriting any existing sender state it now pushes a new one. The handling of upward snoops is also clarified.
2015-12-17  mem: Fix memory allocation bug in deferred snoop handling  (Andreas Hansson)
This patch fixes a corner case in the deferred snoop handling, where requests ended up being used by multiple packets with different lifetimes, and inadvertently got deleted while they were still in use.
2015-11-15  arm: Add missing explicit overrides for classic caches  (Andreas Sandberg)
Make clang happy when compiling on OSX.
2015-11-06  mem: Add an option to perform clean writebacks from caches  (Andreas Hansson)
This patch adds the necessary commands and cache functionality to allow clean writebacks. This functionality is crucial, especially when having exclusive (victim) caches. For example, if read-only L1 instruction caches are not sending clean writebacks, there will never be any spills from the L1 to the L2. At the moment the cache model defaults to not sending clean writebacks, and this should possibly be re-evaluated. The implementation of clean writebacks relies on a new packet command WritebackClean, which acts much like a Writeback (renamed WritebackDirty), and also much like a CleanEvict. On eviction of a clean block the cache either sends a clean evict, or a clean writeback, and if any copies are still cached upstream the clean evict/writeback is dropped. Similarly, if a clean evict/writeback reaches a cache where there are outstanding MSHRs for the block, the packet is dropped. In the typical case though, the clean writeback allocates a block in the downstream cache, and marks it writable if the evicted block was writable. The patch changes the O3_ARM_v7a L1 cache configuration and the default L1 caches in config/common/Caches.py
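A rough, illustrative sketch of the eviction decision described above (the enum and function names are made up for this example, not gem5's actual commands API):

    // A dirty block always becomes a WritebackDirty; a clean block becomes
    // either a WritebackClean (if the cache is configured to spill clean
    // data downstream) or a CleanEvict notification that can be dropped.
    enum class EvictCmd { WritebackDirty, WritebackClean, CleanEvict };

    struct Block { bool dirty; };

    EvictCmd evictionCommand(const Block &blk, bool writebackClean)
    {
        if (blk.dirty)
            return EvictCmd::WritebackDirty;
        return writebackClean ? EvictCmd::WritebackClean
                              : EvictCmd::CleanEvict;
    }

    int main() {
        Block clean{false}, dirty{true};
        return (evictionCommand(dirty, true) == EvictCmd::WritebackDirty &&
                evictionCommand(clean, false) == EvictCmd::CleanEvict) ? 0 : 1;
    }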
2015-11-06  mem: Add cache clusivity  (Andreas Hansson)
This patch adds a parameter to control the cache clusivity, that is, whether the cache is mostly inclusive or exclusive. At the moment there is no intention to support strict policies, and thus the options are: 1) mostly inclusive, or 2) mostly exclusive. The choice of policy guides the behaviour on a cache fill, and a new helper function, allocOnFill, is created to encapsulate the decision-making process. For the timing mode, the decision is annotated on the MSHR on sending out the downstream packet, and in atomic we directly pass the decision to handleFill. We (ab)use the tempBlock in cases where we are not allocating on fill, leaving the rest of the cache unaffected. Simple and effective. This patch also makes it more explicit that multiple caches are allowed to consider a block writable (this was the case also before this patch). That is, for a mostly inclusive cache, multiple upstream caches may also consider the block exclusive. The caches considering the block writable/exclusive all appear along the same path to memory, and from a coherency protocol point of view it works due to the fact that we always snoop upwards in zero time before querying any downstream cache. Note that this patch does not introduce clean writebacks. Thus, for clean lines we are essentially removing a cache level if it is made mostly exclusive. For example, lines from the read-only L1 instruction cache or table-walker cache are always clean, and simply get dropped rather than being passed to the L2. If the L2 is mostly exclusive and does not allocate on fill it will thus never hold the line. A follow-on patch adds the clean writebacks. The patch changes the L2 of the O3_ARM_v7a CPU configuration to be mostly exclusive (and stats are affected accordingly).
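A hedged sketch of an allocOnFill-style predicate, with simplified parameters standing in for the real per-command checks:

    // A mostly-inclusive cache allocates on every fill; a mostly-exclusive
    // cache only keeps blocks it has a specific reason to hold (summarised
    // here as a single boolean for the sake of illustration).
    enum class Clusivity { MostlyIncl, MostlyExcl };

    bool allocOnFill(Clusivity clusivity, bool cmdPrefersAlloc)
    {
        return clusivity == Clusivity::MostlyIncl || cmdPrefersAlloc;
    }

    int main() {
        return (allocOnFill(Clusivity::MostlyIncl, false) &&
                !allocOnFill(Clusivity::MostlyExcl, false)) ? 0 : 1;
    }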
2015-11-06  mem: Enforce insertion order on the cache response path  (Ali Jafri)
This patch enforces insertion order transmission of packets on the response path in the cache. Note that the logic to enforce order is already present in the packet queue, this patch simply turns it on for queues in the response path. Without this patch, there are corner cases where a request-response is faster than a response-response forwarded through the cache. This violation of queuing order causes problems in the snoop filter leaving it with inaccurate information. This causes assert failures in the snoop filter later on. A follow on patch relaxes the order enforcement in the packet queue to limit the performance impact.
2015-11-06  mem: Do not treat CleanEvict as a write operation  (Andreas Hansson)
This patch changes the CleanEvict command type to not be considered a write. Initially it was made a zero-sized write to match the writeback command, but as things developed it became clear that it causes more problems than it solves. For example, the memory modules (and bridge) should not consider the CleanEvict as a write, but instead discard it. With this patch it will be neither a read, nor write, and as it does not need a response the slave will simply sink it.
2015-11-06  mem: Unify delayed packet deletion  (Andreas Hansson)
This patch unifies how we deal with delayed packet deletion, where the receiving slave is responsible for deleting the packet, but the sending agent (e.g. a cache) is still relying on the pointer until the call to sendTimingReq completes. Previously we used a mix of a deletion vector and a construct using unique_ptr. With this patch we ensure all slaves use the latter approach.
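An illustrative standalone sketch of the unique_ptr construct (types simplified; not gem5's actual Packet or port classes): the receiving slave takes ownership but defers deletion, since the sender may still touch the packet until its sendTimingReq call returns.

    #include <memory>

    struct Packet { int id; };

    struct Slave {
        std::unique_ptr<Packet> pendingDelete;

        void recvTimingReq(Packet *pkt) {
            // ... inspect pkt, decide that no response is needed ...
            pendingDelete.reset(pkt);   // previous pending packet freed here,
                                        // this one freed on the next reset()
        }
    };

    int main() {
        Slave s;
        s.recvTimingReq(new Packet{1});
        s.recvTimingReq(new Packet{2});  // Packet 1 is deleted here
        return 0;                        // Packet 2 deleted when s is destroyed
    }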
2015-11-06  misc: Appease clang static analyzer  (Andreas Hansson)
A few minor fixes to issues identified by the clang static analyzer.