path: root/src/mem/cache/cache.hh
2018-12-05  mem-cache: Remove writebacks parameter from serviceMSHRTargets  (Daniel R. Carvalho)
Change 8ba77ae8fc98a355082da2bd9fdc6ecf4928f725 introduced the writebacks parameter, but it was never used. Change-Id: I225e5b399de42d77c72fc0012d3dc93ef39b8853 Signed-off-by: Daniel R. Carvalho <odanrc@yahoo.com.br> Reviewed-on: https://gem5-review.googlesource.com/c/14896 Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-10-22  mem-cache: Move evictBlock(CacheBlk*, PacketList&) to base  (Daniel R. Carvalho)
Move evictBlock(CacheBlk*, PacketList&) to the base cache, as both sub-classes' implementations are equal. Change-Id: I80fbd16813bfcc4938fb01ed76abe29b3f8b3018 Signed-off-by: Daniel R. Carvalho <odanrc@yahoo.com.br> Reviewed-on: https://gem5-review.googlesource.com/c/13656 Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-10-18  mem: Restructure whole-line writes to simplify write merging  (Nikos Nikoleris)
This patch changes how we deal with whole-line writes and their responses. With these changes, we use the MSHR tracking to determine if a whole line is written, and on a fill we simply handle the invalidation response, with the actual writes taking place as part of satisfying the CPU-side hit. Change-Id: I9a18e41a95db3c20b97f8bca7d95ff33d35a578b Reviewed-on: https://gem5-review.googlesource.com/c/12905 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com> Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-09-13  mem-cache: Fix bug in handleAtomicReqMiss  (Nikos Nikoleris)
"4976ff5 mem-cache: Refactor the recvAtomic function" introduced a bug where if an atomic request that fills in using the tempBlock it will not evict it when it finishes handling the request as it should. This triggers an assertion. This change fixes this bug. Change-Id: I73c808a7e15237eddb36b5448ef6728f7bcf7fd9 Reviewed-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Reviewed-on: https://gem5-review.googlesource.com/12644 Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-05-31  mem-cache: Adopt a more sensible cache class hierarchy  (Nikos Nikoleris)
This patch changes what goes into the BaseCache and what goes into the Cache, to make it easier to add a NoncoherentCache with as much re-use as possible. A number of redundant members and definitions are also removed in the process. This is a modified version of a changeset put together by Andreas Hansson <andreas.hansson@arm.com> Change-Id: Ie9dd73c4ec07732e778e7416b712dad8b4bd5d4b Reviewed-on: https://gem5-review.googlesource.com/10431 Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-05-31  mem-cache: Add helper function to perform evictions  (Nikos Nikoleris)
Change-Id: I2df24eb1a8516220bec9b685c8c09bf55be18681 Reviewed-on: https://gem5-review.googlesource.com/10430 Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
2018-05-31  mem-cache: Refactor the recvAtomic function  (Nikos Nikoleris)
The recvAtomic function in the cache handles atomic requests. Over time, recvAtomic has grown in complexity and code size. This change factors out some of its functionality into a separate function. The new function handles atomic requests that miss. Change-Id: If77d2de1e3e802e1da37f889f68910e700c59209 Reviewed-on: https://gem5-review.googlesource.com/10425 Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-05-31  mem-cache: Refactor the cache recvTimingReq function  (Nikos Nikoleris)
The recvTimingReq function in the cache handles timing requests. Over time, recvTimingReq has grown in complexity and code size. This change factors out some of its functionality into two separate functions. The new functions handle timing requests that hit and timing requests that miss separately. Change-Id: I09902d648d7272f0f9ec2851fa6376f7305ba418 Reviewed-on: https://gem5-review.googlesource.com/10424 Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-05-31  mem-cache: Refactor the cache recvTimingResp function  (Nikos Nikoleris)
The recvTimingResp function in the cache handles timing responses. Over time, recvTimingResp has grown in complexity and code size. This change factors out some of its functionality to a separate function. The new function iterates through the in-service targets and handles them accordingly. Change-Id: I0ef28288640f6be1b30452b0664d32432e692ea6 Reviewed-on: https://gem5-review.googlesource.com/10423 Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2018-03-30  mem-cache: Remove unused return value from the recvTimingReq func  (Nikos Nikoleris)
The recvTimingReq function in the cache always returns true. This changeset removes the return value. Change-Id: I00dddca65ee7224ecfa579ea5195c841dac02972 Reviewed-on: https://gem5-review.googlesource.com/8289 Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
2017-12-05  mem: Co-ordination of CMOs in the xbar  (Nikos Nikoleris)
A clean packet request serving a cache maintenance operation (CMO) visits all memories down to the specified xbar. The visited caches invalidate their copy (if the CMO is invalidating), and if a dirty copy is found a write packet writes the dirty data to the memory level below the specified xbar. A response is sent back when all the caches are clean and/or invalidated and the specified xbar has seen the write packet. This patch adds the following functionality in the xbar:
1) Accounts for the cache clean requests that go through the xbar
2) Generates the cache clean response when both the cache clean request and the corresponding writeclean packet have crossed the destination xbar.
Previously, transactions in the xbar were identified using the pointer of the original request. Cache clean transactions comprise two different packets, the clean request and the writeclean, and therefore have different request pointers. This patch adds support for custom transaction IDs that by default take the value of the request pointer but can be overridden by the constructor. This allows the clean request and the writeclean to share the same id, which the coherent xbar uses to co-ordinate them and send the response in a timely manner.
Change-Id: I80db76386a1caded38dc66e6e18f930c3bb800ff Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com> Reviewed-on: https://gem5-review.googlesource.com/5051 Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
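As a rough illustration of the custom transaction ID scheme described above, here is a small standalone C++ sketch (the types and the two-packet bookkeeping are simplified assumptions, not the gem5 xbar code): the ID defaults to the request pointer but can be overridden, so a clean request and its writeclean share one ID and the response is only generated once both have been seen.

    #include <cassert>
    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    // Hypothetical stand-ins for the real gem5 types.
    struct Request {};

    struct Transaction
    {
        const Request *req;
        std::uintptr_t id;

        // By default the transaction ID is the request pointer; a caller
        // (e.g. the cache issuing a clean request plus a writeclean) can
        // override it so both packets carry the same ID.
        explicit Transaction(const Request *r, std::uintptr_t custom_id = 0)
            : req(r),
              id(custom_id ? custom_id
                           : reinterpret_cast<std::uintptr_t>(r))
        {}
    };

    // Toy "xbar" bookkeeping: respond once both packets of a cache-clean
    // transaction (same ID) have crossed the destination xbar.
    struct CleanTracker
    {
        std::unordered_map<std::uintptr_t, int> seen;

        bool arrived(const Transaction &t)
        {
            return ++seen[t.id] == 2;   // two packets per transaction
        }
    };

    int main()
    {
        Request clean_req, writeclean_req;
        auto shared = reinterpret_cast<std::uintptr_t>(&clean_req);

        Transaction clean(&clean_req);                // default ID
        Transaction wclean(&writeclean_req, shared);  // overridden to match

        CleanTracker xbar;
        assert(!xbar.arrived(clean));    // only one of the two seen so far
        assert(xbar.arrived(wclean));    // both seen -> send the response
        std::cout << "clean response can be sent\n";
    }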
2017-12-05  mem: Support for specifying the destination of a WriteClean  (Nikos Nikoleris)
Previously, WriteClean packets would always write to the first memory below unless the memory was unable to allocate, in which case they would be forwarded further below. This change adds support for specifying the destination of a WriteClean packet. The cache annotates the request with the specified destination and marks the packet as write-through upon its creation. The coherent xbar checks packets for their destination and resets the write-through flag when necessary, e.g., the coherent xbar that is set as the PoC will reset the write-through flag for packets to the PoC. Change-Id: I84b653f5cb6e46e97e09508649a3725d72d94606 Reviewed-by: Curtis Dunham <curtis.dunham@arm.com> Reviewed-by: Anouk Van Laer <anouk.vanlaer@arm.com> Reviewed-on: https://gem5-review.googlesource.com/5046 Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
2017-12-05  mem: Add support for WriteClean packets in the memory system  (Nikos Nikoleris)
This change adds support for creating and handling WriteClean packets. The WriteClean operation is almost identical to a WritebackDirty with the exception that the cache generating a WriteClean retains a copy of the block. Change-Id: I63c8de62919fad0f9547d412f8266aa4292ebecd Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com> Reviewed-by: Curtis Dunham <curtis.dunham@arm.com> Reviewed-by: Anouk Van Laer <anouk.vanlaer@arm.com> Reviewed-on: https://gem5-review.googlesource.com/5045 Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2017-12-05  mem-cache: Add support for checking whether a cache is busy  (Nikos Nikoleris)
This changeset adds support for checking whether the cache is currently busy and a timing request would be rejected. Change-Id: I5e37b011b2387b1fa1c9e687b9be545f06ffb5f5 Reviewed-on: https://gem5-review.googlesource.com/5042 Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
2017-12-04  misc: Rename misc.(hh|cc) to logging.(hh|cc)  (Gabe Black)
These files aren't a collection of miscellaneous stuff, they're the definition of the Logger interface, and a few utility macros for calling into that interface (panic, warn, etc.). Change-Id: I84267ac3f45896a83c0ef027f8f19c5e9a5667d1 Reviewed-on: https://gem5-review.googlesource.com/6226 Reviewed-by: Brandon Potter <Brandon.Potter@amd.com> Maintainer: Gabe Black <gabeblack@google.com>
2017-06-20  mem: Replace EventWrapper use with EventFunctionWrapper  (Sean Wilson)
NOTE: With this change there is a possibility for `DRAMCtrl::Rank`'s event names to not properly match the rank they were generated by. This could occur if the public rank member is modified after the Rank's construction. A patch would mean refactoring Rank and `DRAMCtrl` to privatize many of the members of Rank behind getters. Change-Id: I7b8bd15086f4ffdfd3f40be4aeddac5e786fd78e Signed-off-by: Sean Wilson <spwilson2@wisc.edu> Reviewed-on: https://gem5-review.googlesource.com/3745 Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com> Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
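For illustration, a self-contained sketch of the function-wrapper pattern this commit adopts (simplified stand-ins, not gem5's Event/EventFunctionWrapper classes); it also shows the naming caveat from the note above, since the event name is fixed at construction time:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>

    // Simplified stand-in for an event interface.
    struct Event
    {
        virtual ~Event() = default;
        virtual void process() = 0;
        virtual std::string name() const = 0;
    };

    // Function-based wrapper: any callable can be scheduled, so a class
    // no longer needs one EventWrapper<T, &T::method> per handler.
    class EventFunctionWrapperSketch : public Event
    {
        std::function<void()> callback;
        std::string _name;
      public:
        EventFunctionWrapperSketch(std::function<void()> cb, std::string n)
            : callback(std::move(cb)), _name(std::move(n)) {}
        void process() override { callback(); }
        std::string name() const override { return _name; }
    };

    struct Rank
    {
        int rankId = 0;
        // The name is captured at construction time, so changing rankId
        // afterwards is not reflected in the event name (the caveat above).
        EventFunctionWrapperSketch activateEvent{
            [this]{ std::cout << "activate rank " << rankId << "\n"; },
            "rank" + std::to_string(rankId) + ".activate"};
    };

    int main()
    {
        Rank r;
        r.activateEvent.process();
        std::cout << r.activateEvent.name() << "\n";
    }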
2017-02-21  mem: Remove unused type BlkList from the cache and the tags  (Nikos Nikoleris)
Change-Id: If9ebb8488e8db587482ecfa99d2c12cfe5734fb9 Reviewed-by: Andreas Hansson <andreas.hansson@arm.com> Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
2017-02-19  sim: Ensure draining is deterministic  (Andreas Hansson)
The traversal of drainable objects could potentially be non-deterministic when using an unordered set containing object pointers. To ensure that the iteration is deterministic, we switch to a vector. Note that the lookup and traversal of the drainable objects is not performance critical, so the change has no negative consequences.
2016-08-12  mem: Update mostly exclusive cache policy to cover more cases  (Andreas Hansson)
This patch changes how the mostly exclusive policy is enforced to ensure that we drop blocks when we should. As part of this change, the actual invalidation due to the clusivity enforcement is moved outside the hit handling, to a separate method maintainClusivity. For the timing mode that means we can deal with all MSHR targets before taking any action and possibly dropping the block. The method satisfyCpuSideRequest is also renamed satisfyRequest as part of this change (since we only ever see requests from the cpu-side port). Change-Id: If6f3d1e0c3e7be9a67b72a55e4fc2ec4a90fd3d2 Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
2016-05-26  mem: change NULL to nullptr in the cache related classes  (Nikos Nikoleris)
Change-Id: I5042410be54935650b7d05c84d8d9efbfcc06e70 Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-04-21  mem: Align downstream cache packet creation in atomic and timing  (Andreas Hansson)
This patch makes the control flow more uniform in atomic and timing, ultimately making the code easier to understand.
2016-03-17  mem: Create a separate class for the cache write buffer  (Andreas Hansson)
This patch breaks out the cache write buffer into a separate class, without affecting any stats. The goal of the patch is to avoid encumbering the much-simpler write queue with the complex MSHR handling. In a follow on patch this simplification allows us to implement write combining. The WriteQueue gets its own class, but shares a common ancestor, the generic Queue, with the MSHRQueue.
2015-12-31  mem: Make cache terminology easier to understand  (Andreas Hansson)
This patch changes the name of a bunch of packet flags and MSHR member functions and variables to make the coherency protocol easier to understand. In addition the patch adds and updates lots of descriptions, explicitly spelling out assumptions. The following name changes are made:
* the packet memInhibit flag is renamed to cacheResponding
* the packet sharedAsserted flag is renamed to hasSharers
* the packet NeedsExclusive attribute is renamed to NeedsWritable
* the packet isSupplyExclusive is renamed responderHadWritable
* the MSHR pendingDirty is renamed to pendingModified
The cache states, Modified, Owned, Exclusive, Shared are also called out in the cache and MSHR code to make it easier to understand.
2015-12-28  mem: Do not use sender state to track forwarded snoops in cache  (Andreas Hansson)
This patch changes how the cache tracks which snoops are forwarded, and which ones are created locally. Previously the identification was based on an empty sender state of a specific class, but this method fails to distinguish which cache actually attached the sender state. Instead we use the same mechanism as the crossbar, and keep track of the requests that have outstanding snoops.
2015-11-15  arm: Add missing explicit overrides for classic caches  (Andreas Sandberg)
Make clang happy when compiling on OSX.
2015-11-06  mem: Add an option to perform clean writebacks from caches  (Andreas Hansson)
This patch adds the necessary commands and cache functionality to allow clean writebacks. This functionality is crucial, especially when having exclusive (victim) caches. For example, if read-only L1 instruction caches are not sending clean writebacks, there will never be any spills from the L1 to the L2. At the moment the cache model defaults to not sending clean writebacks, and this should possibly be re-evaluated. The implementation of clean writebacks relies on a new packet command WritebackClean, which acts much like a Writeback (renamed WritebackDirty), and also much like a CleanEvict. On eviction of a clean block the cache either sends a clean evict, or a clean writeback, and if any copies are still cached upstream the clean evict/writeback is dropped. Similarly, if a clean evict/writeback reaches a cache where there are outstanding MSHRs for the block, the packet is dropped. In the typical case though, the clean writeback allocates a block in the downstream cache, and marks it writable if the evicted block was writable. The patch changes the O3_ARM_v7a L1 cache configuration and the default L1 caches in config/common/Caches.py
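A minimal sketch of the eviction decision described above, assuming a boolean writeback-clean option on the cache (the names are illustrative assumptions, not the actual gem5 parameter or command types):

    #include <iostream>

    // Illustrative only: the command chosen when a block is evicted,
    // following the commit message above. A dirty block always becomes a
    // WritebackDirty; for a clean block the cache either just notifies
    // (CleanEvict) or, if configured, writes the clean data back so a
    // victim cache below can allocate it.
    enum class MemCmd { WritebackDirty, WritebackClean, CleanEvict };

    MemCmd evictionCommand(bool blockDirty, bool writebackClean)
    {
        if (blockDirty)
            return MemCmd::WritebackDirty;
        return writebackClean ? MemCmd::WritebackClean : MemCmd::CleanEvict;
    }

    int main()
    {
        std::cout << (evictionCommand(false, true) == MemCmd::WritebackClean)
                  << (evictionCommand(false, false) == MemCmd::CleanEvict)
                  << (evictionCommand(true, true) == MemCmd::WritebackDirty)
                  << "\n";   // prints 111
    }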
2015-11-06  mem: Add cache clusivity  (Andreas Hansson)
This patch adds a parameter to control the cache clusivity, that is, whether the cache is mostly inclusive or exclusive. At the moment there is no intention to support strict policies, and thus the options are: 1) mostly inclusive, or 2) mostly exclusive. The choice of policy guides the behaviour on a cache fill, and a new helper function, allocOnFill, is created to encapsulate the decision-making process. For the timing mode, the decision is annotated on the MSHR on sending out the downstream packet, and in atomic we directly pass the decision to handleFill. We (ab)use the tempBlock in cases where we are not allocating on fill, leaving the rest of the cache unaffected. Simple and effective.
This patch also makes it more explicit that multiple caches are allowed to consider a block writable (this was the case also before this patch). That is, for a mostly inclusive cache, multiple caches upstream may also consider the block exclusive. The caches considering the block writable/exclusive all appear along the same path to memory, and from a coherency protocol point of view it works due to the fact that we always snoop upwards in zero time before querying any downstream cache.
Note that this patch does not introduce clean writebacks. Thus, for clean lines we are essentially removing a cache level if it is made mostly exclusive. For example, lines from the read-only L1 instruction cache or table-walker cache are always clean, and simply get dropped rather than being passed to the L2. If the L2 is mostly exclusive and does not allocate on fill it will thus never hold the line. A follow-on patch adds the clean writebacks.
The patch changes the L2 of the O3_ARM_v7a CPU configuration to be mostly exclusive (and stats are affected accordingly).
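A simplified, standalone sketch of what an allocOnFill-style helper could look like; the exact set of commands that force allocation is an assumption for illustration, not the gem5 implementation:

    #include <cassert>

    // Illustrative stand-ins; names follow the commit message but this is
    // not the gem5 code.
    enum class Clusivity { MostlyIncl, MostlyExcl };
    enum class MemCmd { ReadReq, WriteReq, WritebackDirty, WritebackClean };

    struct CacheSketch
    {
        Clusivity clusivity;

        // Encapsulate the fill decision: a mostly-inclusive cache always
        // allocates, while a mostly-exclusive cache only allocates for
        // commands assumed to leave a block behind (writebacks here);
        // everything else is served via the tempBlock.
        bool allocOnFill(MemCmd cmd) const
        {
            return clusivity == Clusivity::MostlyIncl ||
                   cmd == MemCmd::WritebackDirty ||
                   cmd == MemCmd::WritebackClean;
        }
    };

    int main()
    {
        CacheSketch l2{Clusivity::MostlyExcl};
        assert(!l2.allocOnFill(MemCmd::ReadReq));        // fill via tempBlock
        assert(l2.allocOnFill(MemCmd::WritebackDirty));  // allocate the block

        CacheSketch l1{Clusivity::MostlyIncl};
        assert(l1.allocOnFill(MemCmd::ReadReq));         // always allocate
    }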
2015-11-06  mem: Unify delayed packet deletion  (Andreas Hansson)
This patch unifies how we deal with delayed packet deletion, where the receiving slave is responsible for deleting the packet, but the sending agent (e.g. a cache) is still relying on the pointer until the call to sendTimingReq completes. Previously we used a mix of a deletion vector and a construct using unique_ptr. With this patch we ensure all slaves use the latter approach.
2015-10-12  misc: Add explicit overrides and fix other clang >= 3.5 issues  (Andreas Hansson)
This patch adds explicit overrides as this is now required when using "-Wall" with clang >= 3.5, the latter now part of the most recent XCode. The patch consequently removes "virtual" for those methods where "override" is added. The latter should be enough of an indication. As part of this patch, a few minor issues that clang >= 3.5 complains about are also resolved (unused methods and variables).
2015-10-12  misc: Remove redundant compiler-specific defines  (Andreas Hansson)
This patch moves away from using M5_ATTR_OVERRIDE and the m5::hashmap (and similar) abstractions, as these are no longer needed with gcc 4.7 and clang 3.1 as minimum compiler versions.
2015-09-25  mem: Add snoops for CleanEvicts and Writebacks in atomic mode  (Ali Jafri)
This patch mirrors the logic in timing mode which sends up snoops to check for cached copies before sending CleanEvicts and Writebacks down the memory hierarchy. In case there is a copy in a cache above, discard CleanEvicts and set the BLOCK_CACHED flag in Writebacks so that writebacks do not reset the cache residency bit in the snoop filter below.
2015-09-25  mem: Make the coherent crossbar account for timing snoops  (Andreas Hansson)
This patch introduces the concept of a snoop latency. Given the requirement to snoop and forward packets in zero time (due to the coherency mechanism), the latency is accounted for later. On a snoop, we establish the latency, and later add it to the header delay of the packet. To allow multiple caches to contribute to the snoop latency, we use a separate variable in the packet, and then take the maximum before adding it to the header delay.
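A standalone sketch of the accounting described above, assuming a packet with separate snoopDelay and headerDelay fields (simplified stand-ins, not the gem5 Packet class):

    #include <algorithm>
    #include <cstdint>
    #include <iostream>

    using Tick = std::uint64_t;

    // Toy packet: only the fields relevant to the commit message above.
    struct PacketSketch
    {
        Tick headerDelay = 0;   // paid later, when the packet is handled
        Tick snoopDelay = 0;    // accumulated while snooping in zero time
    };

    // Each snooping cache contributes its lookup latency; the maximum is
    // kept because the upward snoops happen in parallel.
    void addSnoopLatency(PacketSketch &pkt, Tick cacheLookupLatency)
    {
        pkt.snoopDelay = std::max(pkt.snoopDelay, cacheLookupLatency);
    }

    // Once the snoop phase is done, fold the snoop latency into the header
    // delay so it is accounted for downstream.
    void finalizeSnoop(PacketSketch &pkt)
    {
        pkt.headerDelay += pkt.snoopDelay;
        pkt.snoopDelay = 0;
    }

    int main()
    {
        PacketSketch pkt;
        addSnoopLatency(pkt, 4);   // e.g. one upstream cache
        addSnoopLatency(pkt, 6);   // e.g. another upstream cache
        finalizeSnoop(pkt);
        std::cout << "headerDelay = " << pkt.headerDelay << "\n";  // 6
    }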
2015-08-21  mem: Remove unused cache squash functionality  (Andreas Hansson)
Tidying up.
2015-08-21  mem: Add explicit Cache subclass and make BaseCache abstract  (Andreas Hansson)
Open up for other subclasses to BaseCache and transition to using the explicit Cache subclass. --HG-- rename : src/mem/cache/BaseCache.py => src/mem/cache/Cache.py
2015-08-21  mem: Move cache_impl.hh to cache.cc  (Andreas Hansson)
There is no longer any need to keep the implementation in a header.
2015-07-07  sim: Refactor the serialization base class  (Andreas Sandberg)
Objects that can be serialized are supposed to inherit from the Serializable class. This class is meant to provide a unified API for such objects. However, so far it has mainly been used by SimObjects due to some fundamental design limitations. This changeset redesigns the serialization interface to make it more generic and hide the underlying checkpoint storage. Specifically:
* Add a set of APIs to serialize into a subsection of the current object. Previously, objects that needed this functionality would use ad-hoc solutions using nameOut() and section name generation. In the new world, an object that implements the interface has the methods serializeSection() and unserializeSection() that serialize into a named /subsection/ of the current object. Calling serialize() serializes an object into the current section.
* Move the name() method from Serializable to SimObject as it is no longer needed for serialization. The fully qualified section name is generated by the main serialization code on the fly as objects serialize sub-objects.
* Add a scoped ScopedCheckpointSection helper class. Some objects need to serialize data structures that are not deriving from Serializable into subsections. Previously, this was done using nameOut() and manual section name generation. To simplify this, this changeset introduces a ScopedCheckpointSection() helper class. When this class is instantiated, it adds a new /subsection/ and subsequent serialization calls during the lifetime of this helper class happen inside this section (or a subsection in case of nested sections).
* The serialize() call is now const, which prevents accidental state manipulation during serialization. Objects that rely on modifying state can use the serializeOld() call instead. The default implementation simply calls serialize(). Note: The old-style calls need to be explicitly called using the serializeOld()/serializeSectionOld() style APIs. These are used by default when serializing SimObjects.
* Both the input and output checkpoints now use their own named types. This hides the underlying checkpoint implementation from objects that need checkpointing and makes it easier to change the underlying checkpoint storage code.
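As an illustration of the ScopedCheckpointSection idea (a toy RAII helper over a made-up checkpoint type, not the gem5 serialization code):

    #include <iostream>
    #include <string>
    #include <vector>

    // Toy checkpoint that only tracks the current section path; the real
    // CheckpointIn/CheckpointOut types are far richer.
    struct CheckpointSketch
    {
        std::vector<std::string> sections;

        std::string path() const
        {
            std::string p;
            for (const auto &s : sections)
                p += (p.empty() ? "" : ".") + s;
            return p;
        }
    };

    // RAII helper in the spirit of ScopedCheckpointSection: entering the
    // scope pushes a subsection, leaving it pops the subsection again, so
    // nested sections come for free.
    class ScopedSectionSketch
    {
        CheckpointSketch &cp;
      public:
        ScopedSectionSketch(CheckpointSketch &c, const std::string &name)
            : cp(c) { cp.sections.push_back(name); }
        ~ScopedSectionSketch() { cp.sections.pop_back(); }
    };

    int main()
    {
        CheckpointSketch cp;
        ScopedSectionSketch top(cp, "system");
        {
            ScopedSectionSketch sub(cp, "cpu0");
            std::cout << cp.path() << "\n";   // system.cpu0
        }
        std::cout << cp.path() << "\n";       // system
    }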
2015-07-03  mem: Add clean evicts to improve snoop filter tracking  (Ali Jafri)
This patch adds eviction notices to the caches, to provide accurate tracking of cache blocks in snoop filters. We add the CleanEvict message to the memory hierarchy and use both CleanEvicts and Writebacks with BLOCK_CACHED flags to propagate notice of clean and dirty evictions respectively, down the memory hierarchy. Note that the BLOCK_CACHED flag indicates whether there exist any copies of the evicted block in the caches above the evicting cache.
The purpose of the CleanEvict message is to notify snoop filters of silent evictions in the relevant caches. The CleanEvict message behaves much like a Writeback. CleanEvict is a write and a request but unlike a Writeback, CleanEvict does not have data and does not need exclusive access to the block. The cache generates the CleanEvict message on a fill resulting in eviction of a clean block. Before travelling downwards, CleanEvict requests generate zero-time snoop requests to check if the same block is cached in upper levels of the memory hierarchy. If the block exists, the cache discards the CleanEvict message. The snoops check the tags, writeback queue and the MSHRs of upper level caches in a manner similar to snoops generated from HardPFReqs. Currently CleanEvicts keep travelling towards main memory unless they encounter the block corresponding to their address or reach main memory (since we have no well defined point of serialisation). Main memory simply discards CleanEvict messages.
We have modified the behavior of Writebacks, such that they generate snoops to check for the presence of blocks in upper level caches. It is possible in our current implementation for a lower level cache to be writing back a block while a shared copy of the same block exists in the upper level cache. If the snoops find the same block in upper level caches, we set the BLOCK_CACHED flag in the Writeback message.
We have also added logic to account for interaction of other message types with CleanEvicts waiting in the writeback queue. A simple example is of a response arriving at a cache removing any CleanEvicts to the same address from the cache's writeback queue.
2015-05-05  mem: Remove templates in cache model  (David Guillen)
This patch changes the cache implementation to rely on virtual methods rather than using the replacement policy as a template argument. There is no impact on the simulation performance, and overall the changes make it easier to modify (and subclass) the cache and/or replacement policy.
2015-03-27  mem: Allocate cache writebacks before new MSHRs  (Andreas Hansson)
This patch changes the order of writeback allocation such that any writebacks resulting from a tag lookup (e.g. for an uncacheable access) are added to the writebuffer before any new MSHR entries are allocated. This ensures that the writebacks logically precede the new allocations. The patch also changes the uncacheable flush to use proper timed (or atomic) writebacks, as opposed to functional writes.
2015-03-02  mem: Split port retry for all different packet classes  (Andreas Hansson)
This patch fixes a long-standing issue with the port flow control. Before this patch the retry mechanism was shared between all different packet classes. As a result, a snoop response could get stuck behind a request waiting for a retry, even if the send/recv functions were split. This caused message-dependent deadlocks in stress-test scenarios.
The patch splits the retry into one per packet (message) class. Thus, sendTimingReq has a corresponding recvReqRetry, sendTimingResp has recvRespRetry etc. Most of the changes to the code involve simply clarifying what type of request a specific object was accepting.
The biggest change in functionality is in the cache downstream packet queue, facing the memory. This queue was shared by requests and snoop responses, and it is now split into two queues, each with their own flow control, but the same physical MasterPort. These changes fix the previously seen deadlocks.
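A toy model of the split flow control, with one retry channel per packet class so a snoop response is never stuck behind a blocked request (the names and flags below are illustrative assumptions, not the gem5 port interface):

    #include <cassert>
    #include <iostream>

    struct DownstreamPortSketch
    {
        bool reqBlocked = true;        // pretend the request path is busy
        bool snoopRespBlocked = false; // the snoop-response path is free

        bool sendTimingReq() { return !reqBlocked; }
        bool sendTimingSnoopResp() { return !snoopRespBlocked; }
    };

    int main()
    {
        DownstreamPortSketch port;

        // The request is refused and waits for its own retry callback
        // (recvReqRetry in the commit message above)...
        assert(!port.sendTimingReq());

        // ...but the snoop response uses a separate retry channel, so it
        // still goes through and cannot cause a message-dependent deadlock.
        assert(port.sendTimingSnoopResp());
        std::cout << "snoop response sent despite blocked request\n";
    }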
2015-02-03  mem: Clarify cache behaviour for pending dirty responses  (Andreas Hansson)
This patch adds a bit of clarification around the assumptions made in the cache when packets are sent out, and dirty responses are pending. As part of the change, the marking of an MSHR as in service is simplified slightly, and comments are added to explain what assumptions are made.
2014-12-02  mem: Add const getters for write packet data  (Andreas Hansson)
This patch takes a first step in tightening up how we use the data pointer in write packets. A const getter is added for the pointer itself (getConstPtr), and a number of member functions are also made const accordingly. In a range of places throughout the memory system the new member is used. The patch also removes the unused isReadWrite function.
2014-10-09  mem: Add packet sanity checks to cache and MSHRs  (Andreas Hansson)
This patch adds a number of asserts to the cache, checking basic assumptions about packets being requests or responses.
2014-06-27  mem: write streaming support via WriteInvalidate promotion  (Curtis Dunham)
Support full-block writes directly rather than requiring RMW:
* a cache line is allocated in the cache upon receipt of a WriteInvalidateReq, not the WriteInvalidateResp.
* only top-level caches allocate the line; the others just pass the request along and invalidate as necessary.
* to close a timing window between the *Req and the *Resp, a new metadata bit tracks whether another cache has read a copy of the new line before the writeback to memory.
2014-05-13  cpu, mem: Make software prefetches non-blocking  (Curtis Dunham)
Previously, they were treated so much like loads that they could stall at the head of the ROB. Now they are always treated like L1 hits. If they actually miss, a new request is created at the L1 and tracked from the MSHRs there if necessary (i.e. if it didn't coalesce with an existing outstanding load).
2014-01-28  mem: Remove redundant findVictim() input argument  (Amin Farmahini)
The patch (1) removes the redundant writeback argument from findVictim() and (2) fixes the description of the access() function. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2014-01-24  mem: Add support for a security bit in the memory system  (Giacomo Gabrielli)
This patch adds the basic building blocks required to support e.g. ARM TrustZone by discerning secure and non-secure memory accesses.
2013-07-18  mem: Set the cache line size on a system level  (Andreas Hansson)
This patch removes the notion of a peer block size and instead sets the cache line size on the system level. Previously the size was set per cache, and communicated through the interconnect. There were plenty of checks to ensure that everyone had the same size specified, and these checks are now removed. Another benefit that is not yet harnessed is that the cache line size is now known at construction time, rather than after the port binding. Hence, the block size can be locally stored and does not have to be queried every time it is used. A follow-on patch updates the configuration scripts accordingly.
2013-07-18  mem: Add cache class destructor to avoid memory leaks  (Xiangyu Dong)
Make valgrind a little bit happier
2013-06-27  mem: Reorganize cache tags and make them a SimObject  (Prakash Ramrakhyani)
This patch reorganizes the cache tags to allow more flexibility to implement new replacement policies. The base tags class is now a clocked object so that derived classes can use a clock if they need one. Also, deriving from SimObject allows specialized Tag classes to be swapped in/out in .py files. The cache set is now templatized to allow it to contain customized cache blocks with additional information. This involved moving code to the .hh file and removing cacheset.cc. The statistics belonging to the cache tags now include ".tags" in their name. Hence, the stats need an update to reflect the change in naming.