path: root/src/mem
2015-12-31  mem: Make cache terminology easier to understand  (Andreas Hansson)
This patch changes the name of a number of packet flags and MSHR member functions and variables to make the coherency protocol easier to understand. In addition, the patch adds and updates lots of descriptions, explicitly spelling out assumptions.

The following name changes are made:

* the packet memInhibit flag is renamed to cacheResponding
* the packet sharedAsserted flag is renamed to hasSharers
* the packet NeedsExclusive attribute is renamed to NeedsWritable
* the packet isSupplyExclusive is renamed to responderHadWritable
* the MSHR pendingDirty is renamed to pendingModified

The cache states Modified, Owned, Exclusive and Shared are also called out in the cache and MSHR code to make the protocol easier to follow.
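A minimal, self-contained sketch of how the renamed flags read at a use site; the flag and accessor names follow the commit message above, while the ToyPacket class and everything else here is illustrative rather than the actual gem5 Packet interface.

    #include <cstdint>
    #include <iostream>

    class ToyPacket
    {
        enum Flag : uint32_t {
            CACHE_RESPONDING       = 1 << 0,  // was memInhibit
            HAS_SHARERS            = 1 << 1,  // was sharedAsserted
            NEEDS_WRITABLE         = 1 << 2,  // was NeedsExclusive
            RESPONDER_HAD_WRITABLE = 1 << 3,  // was isSupplyExclusive
        };
        uint32_t flags = 0;

      public:
        void setCacheResponding() { flags |= CACHE_RESPONDING; }
        bool cacheResponding() const { return flags & CACHE_RESPONDING; }
        void setHasSharers() { flags |= HAS_SHARERS; }
        bool hasSharers() const { return flags & HAS_SHARERS; }
        bool needsWritable() const { return flags & NEEDS_WRITABLE; }
        bool responderHadWritable() const { return flags & RESPONDER_HAD_WRITABLE; }
    };

    int main()
    {
        ToyPacket snoop;
        snoop.setCacheResponding();  // a cache commits to answering the request
        snoop.setHasSharers();       // other copies remain, so no writable grant
        std::cout << snoop.cacheResponding() << " " << snoop.hasSharers() << "\n";
        return 0;
    }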
2015-07-20  ruby: slicc: have a static MachineType  (Tony Gutierrez)
This patch is imported from reviewboard patch 2551 by Nilay. It moves from a dynamically defined MachineType to a statically defined one. The need for this patch was felt since a dynamically defined type prevents us from having types for which no machine definition exists. The following changes have been made:

i. Each machine definition now uses a type from the MachineType enumeration instead of an arbitrary identifier. This required changing the grammar and the *.sm files.
ii. The MachineType enumeration is now defined statically in RubySlicc_Exports.sm.

This changeset also folds in normal protocol fixes for Nilay's parser machine-type change.
2015-07-20  ruby: slicc: remove support for single machine, multiple types  (Tony Gutierrez)
This patch is imported from reviewboard patch 2550 by Nilay. It was possible to specify multiple machine types with a single state machine. This seems unnecessary and is being removed.
2015-12-28  mem: Explicitly check MSHR snoops for cases not dealt with  (Andreas Hansson)
Add a sanity check to make it explicit that we currently do not allow an I/O coherent agent to directly issue writes into the coherent part of the memory system (it has to go via a cache, and get transformed into a read ex, upgrade or invalidation).
2015-12-28  mem: Remove unused cache squash functionality  (Andreas Hansson)
This patch removes the unused squash function from the MSHR queue, and the associated (and also unused) threadNum member from the MSHR.
2015-12-28  mem: Avoid unnecessary checks when creating HardPFReq in cache  (Andreas Hansson)
The checks made before sending out a HardPFReq were unnecessarily complex and covered cases that never occur. This patch tidies them up.
2015-12-28  mem: Do not use sender state to track forwarded snoops in cache  (Andreas Hansson)
This patch changes how the cache tracks which snoops are forwarded, and which ones are created locally. Previously the identification was based on an empty sender state of a specific class, but this method fails to distinguish which cache actually attached the sender state. Instead we use the same mechanism as the crossbar, and keep track of the requests that have outstanding snoops.
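A hypothetical sketch of this bookkeeping (names and types are illustrative, not the gem5 ones): the cache remembers the requests for which it currently has forwarded snoops outstanding, the same way the crossbar does, instead of tagging packets with a sender state.

    #include <unordered_set>

    struct Request;  // opaque transaction identity, only used by pointer here

    class SnoopTracker
    {
        std::unordered_set<const Request *> outstandingSnoops;

      public:
        // called when this cache forwards a snoop upstream
        void markForwarded(const Request *req) { outstandingSnoops.insert(req); }

        // distinguishes forwarded snoops from locally generated ones
        bool wasForwarded(const Request *req) const
        { return outstandingSnoops.count(req) != 0; }

        // called once the snoop has been fully handled
        void markDone(const Request *req) { outstandingSnoops.erase(req); }
    };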
2015-12-28  mem: Fix cache sender state handling and add clarification  (Andreas Hansson)
This patch addresses a bug in how the cache attached the MSHR as a sender state. Rather than overwriting any existing sender state it now pushes a new one. The handling of upward snoops is also clarified.
2015-12-17  mem: Fix memory allocation bug in deferred snoop handling  (Andreas Hansson)
This patch fixes a corner case in the deferred snoop handling, where requests ended up being used by multiple packets with different lifetimes, and inadvertently got deleted while they were still in use.
2015-07-20  mem: add request types for acquire and release  (David Hashe)
Add support for acquire and release requests. These synchronization operations are commonly supported by several modern instruction sets.
2015-07-20  ruby: more flexible ruby tester support  (Brad Beckmann)
This patch allows the ruby random tester to use ruby ports that may only support instruction or data requests. It is similar to a previous changeset (8932:1b2c17565ac8) that was unfortunately broken by subsequent changesets; the current patch implements the support in a more straightforward way.

Since retries are now exercised when running the ruby random tester, this patch splits up the retry and drain check behavior so that RubyPort children, such as the GPUCoalescer, can perform those operations correctly without having to duplicate code. Finally, the patch also includes better DPRINTFs for debugging the tester.
2015-12-09  mem: remove acq/rel cmds from packet and add mem fence req  (Tony Gutierrez)
2015-12-07  cpu: Support virtual addr in elastic traces  (Radhika Jagtap)
This patch adds support to optionally capture the virtual address and asid for load/store instructions in the elastic traces. If they are present in the traces, Trace CPU will set those fields of the request during replay.
2015-12-07  mem: Add instruction sequence number to request  (Radhika Jagtap)
This patch adds the instruction sequence number to the request and provides a request constructor that accepts a sequence number for initialization.
2015-11-25  mem: Fix search-replace issues in DRAMPower wrapper license  (Andreas Hansson)
Fix a number of unintentional insertions of 'const'.
2015-11-15  arm: Add missing explicit overrides for classic caches  (Andreas Sandberg)
Make clang happy when compiling on OSX.
2015-07-20  ruby: added stl vector of ints to be used by SLICC  (Brad Beckmann)
2015-11-13  slicc: fixes for the Address to Addr changeset (11025)  (Tony Gutierrez)
Miscellaneous changes now that Address has become Addr, including the int-to-address utility function.
2015-11-13  ruby: add BoolVec  (Joe Gross)
The BoolVec typedef and the insertion operator overload simplify the use of vectors of type bool.
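A small sketch of what such a typedef and insertion operator might look like; the exact output format below is illustrative, not necessarily the one used in Ruby.

    #include <iostream>
    #include <vector>

    typedef std::vector<bool> BoolVec;

    // print a BoolVec as a compact bit string, e.g. "101"
    std::ostream &
    operator<<(std::ostream &os, const BoolVec &v)
    {
        for (bool b : v)
            os << (b ? '1' : '0');
        return os;
    }

    int main()
    {
        BoolVec valid = {true, false, true};
        std::cout << valid << std::endl;  // prints 101
        return 0;
    }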
2015-07-20  mem: add boolean to disable PacketQueue's size sanity check  (Brad Beckmann)
The sanity check, while generally useful for exposing memory system bugs, may be spurious with respect to GPU workloads, which can generate many more requests than typical CPU workloads. The large number of requests generated by the GPU may cause the req/resp queues to back up, queueing more than 100 packets.
2015-11-06  mem: Add an option to perform clean writebacks from caches  (Andreas Hansson)
This patch adds the necessary commands and cache functionality to allow clean writebacks. This functionality is crucial, especially when having exclusive (victim) caches. For example, if read-only L1 instruction caches are not sending clean writebacks, there will never be any spills from the L1 to the L2. At the moment the cache model defaults to not sending clean writebacks, and this should possibly be re-evaluated.

The implementation of clean writebacks relies on a new packet command WritebackClean, which acts much like a Writeback (renamed WritebackDirty), and also much like a CleanEvict. On eviction of a clean block the cache either sends a clean evict, or a clean writeback, and if any copies are still cached upstream the clean evict/writeback is dropped. Similarly, if a clean evict/writeback reaches a cache where there are outstanding MSHRs for the block, the packet is dropped. In the typical case though, the clean writeback allocates a block in the downstream cache, and marks it writable if the evicted block was writable.

The patch changes the O3_ARM_v7a L1 cache configuration and the default L1 caches in config/common/Caches.py.
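A hypothetical sketch of the eviction-time decision described above (the command names follow the commit message; the function and its parameters are illustrative):

    enum class EvictCmd { WritebackDirty, WritebackClean, CleanEvict };

    // Dirty data must always be written back. Clean data is either written
    // back (useful to fill a downstream victim cache) or merely signalled
    // with a CleanEvict. In either case the packet is dropped if upstream
    // copies or outstanding MSHRs for the block exist.
    EvictCmd
    evictCommand(bool blockDirty, bool writebackClean)
    {
        if (blockDirty)
            return EvictCmd::WritebackDirty;
        return writebackClean ? EvictCmd::WritebackClean : EvictCmd::CleanEvict;
    }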
2015-11-06  mem: Add cache clusivity  (Andreas Hansson)
This patch adds a parameter to control the cache clusivity, that is, whether the cache is mostly inclusive or exclusive. At the moment there is no intention to support strict policies, and thus the options are: 1) mostly inclusive, or 2) mostly exclusive. The choice of policy guides the behaviour on a cache fill, and a new helper function, allocOnFill, is created to encapsulate the decision making process. For the timing mode, the decision is annotated on the MSHR on sending out the downstream packet, and in atomic mode we directly pass the decision to handleFill. We (ab)use the tempBlock in cases where we are not allocating on fill, leaving the rest of the cache unaffected. Simple and effective.

This patch also makes it more explicit that multiple caches are allowed to consider a block writable (this was the case also before this patch). That is, for a mostly inclusive cache, multiple caches upstream may also consider the block exclusive. The caches considering the block writable/exclusive all appear along the same path to memory, and from a coherency protocol point of view it works due to the fact that we always snoop upwards in zero time before querying any downstream cache.

Note that this patch does not introduce clean writebacks. Thus, for clean lines we are essentially removing a cache level if it is made mostly exclusive. For example, lines from the read-only L1 instruction cache or table-walker cache are always clean, and simply get dropped rather than being passed to the L2. If the L2 is mostly exclusive and does not allocate on fill it will thus never hold the line. A follow-on patch adds the clean writebacks.

The patch changes the L2 of the O3_ARM_v7a CPU configuration to be mostly exclusive (and stats are affected accordingly).
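A hypothetical sketch of the allocOnFill decision (not the actual gem5 helper, whose exact inputs differ): a mostly-inclusive cache allocates on every fill, while a mostly-exclusive cache only allocates when the fill is caused by a writeback from an upstream cache, otherwise the data just passes through a temporary block to the requester.

    enum class Clusivity { MostlyIncl, MostlyExcl };

    bool
    allocOnFill(Clusivity clusivity, bool fillFromWriteback)
    {
        // mostly inclusive: keep a copy of everything that passes through
        // mostly exclusive: only keep blocks evicted from above
        return clusivity == Clusivity::MostlyIncl || fillFromWriteback;
    }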
2015-11-06  mem: Avoid unnecessary snoops on writebacks and clean evictions  (Ali Jafri)
This patch optimises the handling of writebacks and clean evictions when using a snoop filter. Instead of snooping into the caches to determine if the block is cached or not, simply set the status based on the snoop-filter result.
2015-11-06  mem: Order packet queue only on matching addresses  (Andreas Hansson)
Instead of conservatively enforcing order for all packets, which may negatively impact the simulated-system performance, this patch updates the packet queue such that it only applies the restriction if there are already packets with the same address in the queue. The basic need for the order enforcement is due to coherency interactions where requests/responses to the same cache line must not overtake each other. We rely on the fact that any packet that needs order enforcement will have a block-aligned address. Thus, there is no need for the queue to know about the cacheline size.
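A hypothetical sketch of the relaxed rule (types and names are illustrative): a new packet only has to stay behind already-queued packets whose block-aligned address matches its own; unrelated packets may still be reordered.

    #include <cstdint>
    #include <deque>

    using Tick = uint64_t;
    using Addr = uint64_t;

    struct QueuedPacket { Addr blkAddr; Tick readyTime; };

    // earliest time the new packet may be sent without overtaking a
    // same-address packet that is already queued
    Tick
    earliestSendTime(const std::deque<QueuedPacket> &queue, Addr blkAddr, Tick when)
    {
        Tick t = when;
        for (const auto &p : queue)
            if (p.blkAddr == blkAddr && p.readyTime > t)
                t = p.readyTime;  // order enforced only on matching addresses
        return t;
    }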
2015-11-06  mem: Enforce insertion order on the cache response path  (Ali Jafri)
This patch enforces insertion order transmission of packets on the response path in the cache. Note that the logic to enforce order is already present in the packet queue, this patch simply turns it on for queues in the response path. Without this patch, there are corner cases where a request-response is faster than a response-response forwarded through the cache. This violation of queuing order causes problems in the snoop filter leaving it with inaccurate information. This causes assert failures in the snoop filter later on. A follow on patch relaxes the order enforcement in the packet queue to limit the performance impact.
2015-11-06  mem: Use the packet delays and do not just zero them out  (Andreas Hansson)
This patch updates the I/O devices, bridge and simple memory to take the packet header and payload delay into account in their latency calculations. In all cases we add the header delay, i.e. the accumulated pipeline delay of any crossbars, and the payload delay needed for deserialisation of any payload. Due to the additional unknown latency contribution, the packet queue of the simple memory is changed to use insertion sorting based on the time stamp. Moreover, since the memory hands out exclusive (non-shared) responses, we also need to ensure ordering for reads to the same address.
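A minimal sketch of the accounting this implies (names are illustrative, not the SimpleMemory code): the slave consumes the accumulated crossbar pipeline delay (headerDelay) and the deserialisation delay (payloadDelay) and adds them to its own access latency instead of discarding them.

    #include <cstdint>

    using Tick = uint64_t;

    struct PacketDelays { Tick headerDelay; Tick payloadDelay; };

    Tick
    responseReadyTime(Tick now, Tick accessLatency, PacketDelays &pkt)
    {
        Tick receiveDelay = pkt.headerDelay + pkt.payloadDelay;
        pkt.headerDelay = pkt.payloadDelay = 0;  // delays are now accounted for
        return now + receiveDelay + accessLatency;
    }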
2015-11-06  mem: Align rules for sinking inhibited packets at the slave  (Andreas Hansson)
This patch aligns how the memory-system slaves, i.e. the various memory controllers and the bridge, identify and deal with sinking of inhibited packets that are only useful within the coherent part of the memory system. In the future we could shift the onus to the crossbar, and add a parameter "is_point_of_coherence" that would allow it to sink the aforementioned packets.
2015-11-06  mem: Do not treat CleanEvict as a write operation  (Andreas Hansson)
This patch changes the CleanEvict command type to not be considered a write. Initially it was made a zero-sized write to match the writeback command, but as things developed it became clear that it causes more problems than it solves. For example, the memory modules (and bridge) should not consider the CleanEvict as a write, but instead discard it. With this patch it will be neither a read, nor write, and as it does not need a response the slave will simply sink it.
2015-11-06  mem: Unify delayed packet deletion  (Andreas Hansson)
This patch unifies how we deal with delayed packet deletion, where the receiving slave is responsible for deleting the packet, but the sending agent (e.g. a cache) is still relying on the pointer until the call to sendTimingReq completes. Previously we used a mix of a deletion vector and a construct using unique_ptr. With this patch we ensure all slaves use the latter approach.
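A sketch of the unique_ptr construct, under the assumption (as described above) that the packet may not be deleted until the sender's call to sendTimingReq has returned; the class names here are illustrative.

    #include <memory>

    struct Packet { /* payload omitted */ };

    class SlavePortSketch
    {
        // holds the packet from the previous request; resetting it deletes
        // that packet while keeping the current one alive until the sender's
        // sendTimingReq call has completed
        std::unique_ptr<Packet> pendingDelete;

      public:
        bool
        recvTimingReq(Packet *pkt)
        {
            // ... handle the request, possibly queue a response ...
            pendingDelete.reset(pkt);  // deferred deletion, no dangling pointer
            return true;
        }
    };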
2015-11-06  misc: Appease clang static analyzer  (Andreas Hansson)
A few minor fixes to issues identified by the clang static analyzer.
2015-11-06  mem: Check the XBar's port queues on functional snoops  (Andreas Sandberg)
The CoherentXBar currently doesn't check its queued slave ports when receiving a functional snoop. This caused data corruption in cases where a modified cache line is forwarded between two caches. Add the required functional calls into the queued slave ports.
2015-11-03  mem: hmc: minor fixes  (Erfan Azarkhish)
This patch performs two minor fixes to DRAMCtrl.py and xbar.hh in support of the HMC patch series. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-11-03  mem: hmc: serial link model  (Erfan Azarkhish)
This changeset adds a serial link model for the Hybrid Memory Cube (HMC). SerialLink is a simple variation of the Bridge class, with the ability to account for the latency of packet serialization. Also, trySendTiming has been modified to model bandwidth correctly. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
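A back-of-the-envelope sketch of the serialization latency such a link adds; the function and the 16 GB/s figure below are illustrative assumptions, not the SerialLink parameters.

    #include <cstdint>

    using Tick = uint64_t;  // gem5 ticks, 1 tick == 1 ps

    // time to push a payload across the link at a given bandwidth,
    // added on top of the link's fixed latency
    Tick
    serializationDelay(unsigned payloadBytes, double bytesPerTick)
    {
        return static_cast<Tick>(payloadBytes / bytesPerTick + 0.5);
    }

    // e.g. a 64-byte packet over a 16 GB/s lane (0.016 bytes per ps)
    // takes roughly 4000 ticks (4 ns) to serialize:
    // serializationDelay(64, 0.016) == 4000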
2015-11-03  mem: hmc: adds controller  (Erfan Azarkhish)
This patch models a simple HMC Controller. It simply schedules the incoming packets to HMC Serial Links using a round robin mechanism. This patch should be applied in series with other patches modeling a complete HMC device. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-10-29  mem: Clarify cache MSHR handling on fill  (Andreas Hansson)
This patch addresses the upgrading of deferred targets in the MSHR, and makes it clearer by explicitly calling out what is happening (deferred targets are promoted if we get exclusivity without asking for it).
2015-10-23  x86: Add missing explicit overrides for X86 devices  (Andreas Hansson)
Make clang >= 3.5 happy when compiling build/X86/gem5.opt on OSX.
2015-10-14  mem: Pass snoop retries through the CommMonitor  (Andreas Hansson)
Allow the monitor to be placed after a snooping port, and do not fail on snoop retries, but instead pass them on to the slave port.
2015-10-14  ruby: profiler: provide the number of vnets through ruby system  (Nilay Vaish)
The aim is to ultimately do away with the static function Network::getNumberOfVirtualNetworks().
2015-10-14  ruby: remove unused functionalRead() function.  (Nilay Vaish)
Not required since functional reads cannot rely on messages that are in flight.
2015-10-14  ruby: garnet: flexible: refactor flit  (Nilay Vaish)
2015-10-12  misc: Add explicit overrides and fix other clang >= 3.5 issues  (Andreas Hansson)
This patch adds explicit overrides as this is now required when using "-Wall" with clang >= 3.5, the latter now part of the most recent XCode. The patch consequently removes "virtual" for those methods where "override" is added. The latter should be enough of an indication. As part of this patch, a few minor issues that clang >= 3.5 complains about are also resolved (unused methods and variables).
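A tiny sketch of the pattern the patch applies throughout the tree (the class and method names below are made up):

    struct Base
    {
        virtual void regStats() {}
        virtual ~Base() = default;
    };

    struct Derived : Base
    {
        // before: virtual void regStats();
        // 'override' alone is enough of an indication, and lets
        // clang >= 3.5 verify that a matching base-class virtual exists
        void regStats() override {}
    };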
2015-10-12  misc: Remove redundant compiler-specific defines  (Andreas Hansson)
This patch moves away from using M5_ATTR_OVERRIDE and the m5::hashmap (and similar) abstractions, as these are no longer needed with gcc 4.7 and clang 3.1 as minimum compiler versions.
2015-09-30  cpu,isa,mem: Add per-thread wakeup logic  (Mitch Hayenga)
Changes wakeup functionality so that only specific threads on SMT-capable CPUs are woken.
2015-09-29  ruby: Fix CacheMemory allocate leak  (Joel Hestness)
If a cache entry permission was previously set to NotPresent, but the entry was not deleted, a following cache allocation can cause the entry to be leaked by setting the entry pointer to a newly allocated entry. To eliminate this possibility, check if the new entry is different from the old one, and if so, delete the old one.
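A hypothetical sketch of the fix; the map-based structure below stands in for CacheMemory's internal lookup, and all names are illustrative.

    #include <cstdint>
    #include <unordered_map>

    using Addr = uint64_t;

    struct CacheEntry { /* tag, permission, data ... */ };

    void
    allocateEntry(std::unordered_map<Addr, CacheEntry *> &lookup, Addr addr)
    {
        CacheEntry *entry = new CacheEntry;
        auto it = lookup.find(addr);
        // a stale entry (e.g. one left as NotPresent) must be freed before
        // its pointer is overwritten, otherwise it leaks
        if (it != lookup.end() && it->second != entry)
            delete it->second;
        lookup[addr] = entry;
    }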
2015-09-29  ruby: RubyPort delete snoop requests  (Joel Hestness)
In RubyPort::ruby_eviction_callback, prior changes fixed a memory leak caused by instantiating separate packets for each port that the eviction was forwarded to. That change, however, also left the instantiated request leaking. Allocate it on the stack to avoid the leak.
2015-09-29  ruby: Fix memory leak in AbstractController  (Joel Hestness)
Recent changes to memory access queuing allocate requests for packets sent to memory controllers, but did not free the requests. Delete them to avoid leaks.
2015-09-29  ruby: RubyMemoryControl delete requests  (Joel Hestness)
Changes to the RubyMemoryControl removed the dequeue function, which deleted MemoryNode instances. This results in leaked MemoryNode instances. Correctly delete these instances.
2015-09-25  mem: Add PacketInfo to be used for packet probe points  (Andreas Hansson)
This patch fixes a use-after-delete issue in the packet probe points by adding a PacketInfo struct to retain the key fields before passing the packet onwards. We want to probe the packet after it is successfully sent, but by that time the fields may be modified, and the packet may even be deleted. Amazingly enough the issue has gone undetected for months, and only recently popped up in our regressions.
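A sketch of the pattern, with illustrative field and function names: the fields of interest are copied into a plain struct before the packet is handed over, so the probe point never touches the packet after the receiver may have modified or deleted it.

    #include <cstdint>

    using Addr = uint64_t;

    struct PacketInfo
    {
        Addr addr;      // request address
        unsigned size;  // request size in bytes
        bool isRead;    // read vs write
    };

    // usage pattern (sendTimingReq and notify stand in for the real calls):
    //
    //   PacketInfo info{pkt->getAddr(), pkt->getSize(), pkt->isRead()};
    //   if (sendTimingReq(pkt))        // pkt may be modified or deleted here
    //       probePoint.notify(info);   // safe: only the saved copy is read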
2015-09-25  mem: Add check for block status on WriteLineReq fill  (Andreas Hansson)
More checks to help with understanding of functionality.
2015-09-25  mem: Fix WriteLineReq fill behaviour  (Andreas Hansson)
This patch fixes issues in the interactions between deferred snoops and WriteLineReq. More specifically, the patch addresses an issue where deferred snoops caused assertion failures when being serviced on the arrival of an InvalidateResp. The response packet was perceived to be invalidating, when actually it is not for the cache that sent out the original invalidation request.