path: root/src/mem
Age  Commit message  Author
2015-11-15  arm: Add missing explicit overrides for classic caches  (Andreas Sandberg)
Make clang happy when compiling on OSX.
2015-07-20  ruby: added stl vector of ints to be used by SLICC  (Brad Beckmann)
2015-11-13  slicc: fixes for the Address to Addr changeset (11025)  (Tony Gutierrez)
Miscellaneous changes now that Address has become Addr, including the int-to-address utility function.
2015-11-13  ruby: add BoolVec  (Joe Gross)
The BoolVec typedef and its insertion-operator overload simplify the use of vectors of bool.
2015-07-20  mem: add boolean to disable PacketQueue's size sanity check  (Brad Beckmann)
The sanity check, while generally useful for exposing memory-system bugs, may be spurious for GPU workloads, which can generate many more requests than typical CPU workloads. The large number of requests generated by the GPU may cause the req/resp queues to back up, queueing more than 100 packets.
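As a rough illustration only (not the gem5 source), a queue-depth check gated by such a boolean might look like the sketch below; the names PacketQueue, schedSendTiming and _disableSanityCheck are assumptions, and the threshold of 100 comes from the description above.

    #include <cassert>
    #include <list>

    struct DeferredPacket { /* tick, packet pointer, ... */ };

    class PacketQueue
    {
      public:
        explicit PacketQueue(bool disable_sanity_check = false)
            : _disableSanityCheck(disable_sanity_check) {}

        void schedSendTiming(const DeferredPacket &dp)
        {
            // The check flags runaway queues; GPU-style workloads can
            // legitimately exceed the threshold, so allow it to be disabled.
            if (!_disableSanityCheck)
                assert(transmitList.size() <= 100);
            transmitList.push_back(dp);
        }

      private:
        bool _disableSanityCheck;
        std::list<DeferredPacket> transmitList;
    };

    int main()
    {
        PacketQueue queue(/* disable_sanity_check = */ true);
        queue.schedSendTiming(DeferredPacket{});
        return 0;
    }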
2015-11-06  mem: Add an option to perform clean writebacks from caches  (Andreas Hansson)
This patch adds the necessary commands and cache functionality to allow clean writebacks. This functionality is crucial, especially when having exclusive (victim) caches. For example, if read-only L1 instruction caches are not sending clean writebacks, there will never be any spills from the L1 to the L2. At the moment the cache model defaults to not sending clean writebacks, and this should possibly be re-evaluated.

The implementation of clean writebacks relies on a new packet command WritebackClean, which acts much like a Writeback (renamed WritebackDirty), and also much like a CleanEvict. On eviction of a clean block the cache either sends a clean evict, or a clean writeback, and if any copies are still cached upstream the clean evict/writeback is dropped. Similarly, if a clean evict/writeback reaches a cache where there are outstanding MSHRs for the block, the packet is dropped. In the typical case though, the clean writeback allocates a block in the downstream cache, and marks it writable if the evicted block was writable.

The patch changes the O3_ARM_v7a L1 cache configuration and the default L1 caches in config/common/Caches.py
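A minimal sketch of the eviction decision described above (illustrative, not the gem5 code; evictBlock and writeback_clean are names assumed here): dirty blocks always become WritebackDirty, while clean blocks become either WritebackClean or CleanEvict depending on the per-cache option.

    #include <cstdio>

    enum class MemCmd { WritebackDirty, WritebackClean, CleanEvict };

    struct CacheBlk { bool dirty; bool writable; };

    // 'writeback_clean' stands in for the per-cache option added by the patch.
    MemCmd evictBlock(const CacheBlk &blk, bool writeback_clean)
    {
        if (blk.dirty)
            return MemCmd::WritebackDirty;               // dirty data must go down
        return writeback_clean ? MemCmd::WritebackClean  // spill clean data below
                               : MemCmd::CleanEvict;     // only notify snoop filter
    }

    int main()
    {
        CacheBlk clean_blk{false, true};
        std::printf("command = %d\n",
                    static_cast<int>(evictBlock(clean_blk, true)));
        return 0;
    }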
2015-11-06  mem: Add cache clusivity  (Andreas Hansson)
This patch adds a parameter to control the cache clusivity, that is whether the cache is mostly inclusive or exclusive. At the moment there is no intention to support strict policies, and thus the options are: 1) mostly inclusive, or 2) mostly exclusive. The choice of policy guides the behaviour on a cache fill, and a new helper function, allocOnFill, is created to encapsulate the decision-making process. For the timing mode, the decision is annotated on the MSHR on sending out the downstream packet, and in atomic we directly pass the decision to handleFill. We (ab)use the tempBlock in cases where we are not allocating on fill, leaving the rest of the cache unaffected. Simple and effective.

This patch also makes it more explicit that multiple caches are allowed to consider a block writable (this is the case also before this patch). That is, for a mostly inclusive cache, multiple caches upstream may also consider the block exclusive. The caches considering the block writable/exclusive all appear along the same path to memory, and from a coherency protocol point of view it works due to the fact that we always snoop upwards in zero time before querying any downstream cache.

Note that this patch does not introduce clean writebacks. Thus, for clean lines we are essentially removing a cache level if it is made mostly exclusive. For example, lines from the read-only L1 instruction cache or table-walker cache are always clean, and simply get dropped rather than being passed to the L2. If the L2 is mostly exclusive and does not allocate on fill it will thus never hold the line. A follow-on patch adds the clean writebacks.

The patch changes the L2 of the O3_ARM_v7a CPU configuration to be mostly exclusive (and stats are affected accordingly).
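The allocOnFill idea can be sketched as follows (a loose illustration under assumed names, not the actual helper): a mostly-inclusive cache allocates on every fill, while a mostly-exclusive one allocates only when the filling access itself requires a block.

    #include <cstdio>

    enum class Clusivity { MostlyIncl, MostlyExcl };

    // Mostly-inclusive: allocate on any fill. Mostly-exclusive: only when the
    // command demands an allocated block (e.g. a writeback from above).
    bool allocOnFill(Clusivity clusivity, bool cmd_requires_allocation)
    {
        return clusivity == Clusivity::MostlyIncl || cmd_requires_allocation;
    }

    int main()
    {
        std::printf("%d %d\n",
                    allocOnFill(Clusivity::MostlyIncl, false),
                    allocOnFill(Clusivity::MostlyExcl, false));
        return 0;
    }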
2015-11-06  mem: Avoid unnecessary snoops on writebacks and clean evictions  (Ali Jafri)
This patch optimises the handling of writebacks and clean evictions when using a snoop filter. Instead of snooping into the caches to determine if the block is cached or not, simply set the status based on the snoop-filter result.
2015-11-06  mem: Order packet queue only on matching addresses  (Andreas Hansson)
Instead of conservatively enforcing order for all packets, which may negatively impact the simulated-system performance, this patch updates the packet queue such that it only applies the restriction if there are already packets with the same address in the queue. The basic need for the order enforcement is due to coherency interactions where requests/responses to the same cache line must not over-take each other. We rely on the fact that any packet that needs order enforcement will have a block-aligned address. Thus, there is no need for the queue to know about the cacheline size.
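A minimal sketch of the relaxed ordering rule (illustrative types and names, not the gem5 classes): a new packet only needs to preserve queue order if an earlier queued packet targets the same block-aligned address.

    #include <cstdint>
    #include <cstdio>
    #include <list>

    using Addr = std::uint64_t;

    struct DeferredPacket { Addr addr; /* tick, packet, ... */ };

    bool mustPreserveOrder(const std::list<DeferredPacket> &queue, Addr addr)
    {
        for (const auto &dp : queue)
            if (dp.addr == addr)    // same cache line already queued
                return true;
        return false;               // otherwise the packet may be reordered
    }

    int main()
    {
        std::list<DeferredPacket> queue{{0x1000}, {0x2000}};
        std::printf("%d\n", mustPreserveOrder(queue, 0x1000));
        return 0;
    }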
2015-11-06  mem: Enforce insertion order on the cache response path  (Ali Jafri)
This patch enforces insertion-order transmission of packets on the response path in the cache. Note that the logic to enforce order is already present in the packet queue; this patch simply turns it on for queues in the response path. Without this patch, there are corner cases where a request-response is faster than a response-response forwarded through the cache. This violation of queuing order causes problems in the snoop filter, leaving it with inaccurate information and triggering assert failures later on. A follow-on patch relaxes the order enforcement in the packet queue to limit the performance impact.
2015-11-06  mem: Use the packet delays and do not just zero them out  (Andreas Hansson)
This patch updates the I/O devices, bridge and simple memory to take the packet header and payload delay into account in their latency calculations. In all cases we add the header delay, i.e. the accumulated pipeline delay of any crossbars, and the payload delay needed for deserialisation of any payload. Due to the additional unknown latency contribution, the packet queue of the simple memory is changed to use insertion sorting based on the time stamp. Moreover, since the memory hands out exclusive (non shared) responses, we also need to ensure ordering for reads to the same address.
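As a hedged illustration of the latency accounting described above (not the actual device code; responseLatency and static_latency are assumed names), the accumulated header and payload delays are folded into the device latency and then cleared so they are not counted twice downstream.

    #include <cstdint>
    #include <cstdio>

    using Tick = std::uint64_t;

    struct Packet { Tick headerDelay; Tick payloadDelay; };

    Tick responseLatency(Packet &pkt, Tick static_latency)
    {
        // Crossbar pipeline delay plus payload deserialisation delay.
        Tick latency = static_latency + pkt.headerDelay + pkt.payloadDelay;
        pkt.headerDelay = 0;
        pkt.payloadDelay = 0;
        return latency;
    }

    int main()
    {
        Packet pkt{1000, 5000};
        std::printf("%llu\n",
                    static_cast<unsigned long long>(responseLatency(pkt, 30000)));
        return 0;
    }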
2015-11-06  mem: Align rules for sinking inhibited packets at the slave  (Andreas Hansson)
This patch aligns how the memory-system slaves, i.e. the various memory controllers and the bridge, identify and deal with sinking of inhibited packets that are only useful within the coherent part of the memory system. In the future we could shift the onus to the crossbar, and add a parameter "is_point_of_coherence" that would allow it to sink the aforementioned packets.
2015-11-06  mem: Do not treat CleanEvict as a write operation  (Andreas Hansson)
This patch changes the CleanEvict command type to not be considered a write. Initially it was made a zero-sized write to match the writeback command, but as things developed it became clear that it causes more problems than it solves. For example, the memory modules (and bridge) should not consider the CleanEvict as a write, but instead discard it. With this patch it will be neither a read, nor write, and as it does not need a response the slave will simply sink it.
2015-11-06  mem: Unify delayed packet deletion  (Andreas Hansson)
This patch unifies how we deal with delayed packet deletion, where the receiving slave is responsible for deleting the packet, but the sending agent (e.g. a cache) is still relying on the pointer until the call to sendTimingReq completes. Previously we used a mix of a deletion vector and a construct using unique_ptr. With this patch we ensure all slaves use the latter approach.
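A minimal sketch of the unique_ptr construct, assuming an illustrative slave class (ExampleSlave and the member name pendingDelete are assumptions): taking ownership in the receiver keeps the packet alive until well after the sender's sendTimingReq call has returned, while still guaranteeing it is eventually freed.

    #include <memory>

    struct Packet { /* ... */ };

    class ExampleSlave
    {
      public:
        bool recvTimingReq(Packet *pkt)
        {
            // Frees the previously pending packet and keeps the new one alive
            // beyond the sender's call chain.
            pendingDelete.reset(pkt);
            // ... handle the request ...
            return true;
        }

      private:
        std::unique_ptr<Packet> pendingDelete;
    };

    int main()
    {
        ExampleSlave slave;
        slave.recvTimingReq(new Packet);
        return 0;
    }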
2015-11-06  misc: Appease clang static analyzer  (Andreas Hansson)
A few minor fixes to issues identified by the clang static analyzer.
2015-11-06  mem: Check the XBar's port queues on functional snoops  (Andreas Sandberg)
The CoherentXBar currently doesn't check its queued slave ports when receiving a functional snoop. This caused data corruption in cases where a modified cache line is forwarded between two caches. Add the required functional calls into the queued slave ports.
2015-11-03  mem: hmc: minor fixes  (Erfan Azarkhish)
This patch performs two minor fixes to DRAMCtrl.py and xbar.hh in support of the HMC patch series. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-11-03  mem: hmc: serial link model  (Erfan Azarkhish)
This changeset adds a serial link model for the Hybrid Memory Cube (HMC). SerialLink is a simple variation of the Bridge class, with the ability to account for the latency of packet serialization. Also, trySendTiming has been modified to correctly model bandwidth. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-11-03  mem: hmc: adds controller  (Erfan Azarkhish)
This patch models a simple HMC controller. It schedules incoming packets to the HMC serial links using a round-robin mechanism. This patch should be applied in series with the other patches modeling a complete HMC device. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
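A round-robin arbiter of the kind described can be sketched as follows (illustrative only; HMCController, SerialLink and sendPacket are names assumed here rather than the model's API): each incoming packet goes to the next serial link in turn.

    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Packet { /* ... */ };
    struct SerialLink { void sendPacket(Packet *) { /* serialise and forward */ } };

    class HMCController
    {
      public:
        explicit HMCController(std::vector<SerialLink *> links)
            : links(std::move(links)), next(0) {}

        void schedule(Packet *pkt)
        {
            links[next]->sendPacket(pkt);       // hand the packet to one link
            next = (next + 1) % links.size();   // rotate for the next packet
        }

      private:
        std::vector<SerialLink *> links;
        std::size_t next;
    };

    int main()
    {
        SerialLink a, b;
        HMCController ctrl({&a, &b});
        Packet p;
        ctrl.schedule(&p);
        ctrl.schedule(&p);
        return 0;
    }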
2015-10-29  mem: Clarify cache MSHR handling on fill  (Andreas Hansson)
This patch addresses the upgrading of deferred targets in the MSHR, and makes it clearer by explicitly calling out what is happening (deferred targets are promoted if we get exclusivity without asking for it).
2015-10-23  x86: Add missing explicit overrides for X86 devices  (Andreas Hansson)
Make clang >= 3.5 happy when compiling build/X86/gem5.opt on OSX.
2015-10-14  mem: Pass snoop retries through the CommMonitor  (Andreas Hansson)
Allow the monitor to be placed after a snooping port, and do not fail on snoop retries, but instead pass them on to the slave port.
2015-10-14  ruby: profiler: provide the number of vnets through ruby system  (Nilay Vaish)
The aim is to ultimately do away with the static function Network::getNumberOfVirtualNetworks().
2015-10-14  ruby: remove unused functionalRead() function.  (Nilay Vaish)
Not required since functional reads cannot rely on messages that are inflight.
2015-10-14  ruby: garnet: flexible: refactor flit  (Nilay Vaish)
2015-10-12  misc: Add explicit overrides and fix other clang >= 3.5 issues  (Andreas Hansson)
This patch adds explicit overrides as this is now required when using "-Wall" with clang >= 3.5, the latter now part of the most recent XCode. The patch consequently removes "virtual" for those methods where "override" is added. The latter should be enough of an indication. As part of this patch, a few minor issues that clang >= 3.5 complains about are also resolved (unused methods and variables).
2015-10-12  misc: Remove redundant compiler-specific defines  (Andreas Hansson)
This patch moves away from using M5_ATTR_OVERRIDE and the m5::hashmap (and similar) abstractions, as these are no longer needed with gcc 4.7 and clang 3.1 as minimum compiler versions.
2015-09-30  cpu,isa,mem: Add per-thread wakeup logic  (Mitch Hayenga)
Changes the wakeup functionality so that only specific threads on SMT-capable CPUs are woken.
2015-09-29  ruby: Fix CacheMemory allocate leak  (Joel Hestness)
If a cache entry permission was previously set to NotPresent, but the entry was not deleted, a following cache allocation can cause the entry to be leaked by setting the entry pointer to a newly allocated entry. To eliminate this possibility, check if the new entry is different from the old one, and if so, delete the old one.
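The fix pattern described above can be sketched roughly as follows (illustrative; setEntry and the slot/entry names are assumptions, not the CacheMemory interface): before overwriting a slot that may still hold a stale entry, free the old one unless it is the same object.

    struct AbstractCacheEntry { virtual ~AbstractCacheEntry() {} };

    void setEntry(AbstractCacheEntry *&slot, AbstractCacheEntry *new_entry)
    {
        if (slot != new_entry)
            delete slot;        // reclaim the stale NotPresent entry (null is fine)
        slot = new_entry;
    }

    int main()
    {
        AbstractCacheEntry *slot = new AbstractCacheEntry;  // stale entry
        setEntry(slot, new AbstractCacheEntry);             // old one is freed
        delete slot;
        return 0;
    }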
2015-09-29  ruby: RubyPort delete snoop requests  (Joel Hestness)
In RubyPort::ruby_eviction_callback, prior changes fixed a memory leak caused by instantiating separate packets for each port that the eviction was forwarded to. That change, however, left the instantiated request to also leak. Allocate it on the stack to avoid the leak.
2015-09-29  ruby: Fix memory leak in AbstractController  (Joel Hestness)
Recent changes to memory access queuing allocate requests for packets sent to memory controllers, but did not free the requests. Delete them to avoid leaks.
2015-09-29  ruby: RubyMemoryControl delete requests  (Joel Hestness)
Changes to the RubyMemoryControl removed the dequeue function, which deleted MemoryNode instances. This results in leaked MemoryNode instances. Correctly delete these instances.
2015-09-25  mem: Add PacketInfo to be used for packet probe points  (Andreas Hansson)
This patch fixes a use-after-delete issue in the packet probe points by adding a PacketInfo struct to retain the key fields before passing the packet onwards. We want to probe the packet after it is successfully sent, but by that time the fields may be modified, and the packet may even be deleted. Amazingly enough the issue has gone undetected for months, and only recently popped up in our regressions.
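The general shape of such a snapshot struct might look like the sketch below (the field set here is an assumption, not necessarily what gem5's PacketInfo retains): the fields of interest are copied before the packet is handed on, so the probe never touches a packet that may have been modified or deleted.

    #include <cstdint>
    #include <cstdio>

    using Addr = std::uint64_t;

    enum class MemCmd { ReadReq, WriteReq };

    struct Packet { MemCmd cmd; Addr addr; unsigned size; };

    struct PacketInfo { MemCmd cmd; Addr addr; unsigned size; };

    PacketInfo snapshot(const Packet &pkt)
    {
        return PacketInfo{pkt.cmd, pkt.addr, pkt.size};   // taken before sending
    }

    int main()
    {
        Packet pkt{MemCmd::ReadReq, 0x1000, 64};
        PacketInfo info = snapshot(pkt);      // safe to probe after pkt is gone
        std::printf("addr = %llx\n", static_cast<unsigned long long>(info.addr));
        return 0;
    }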
2015-09-25  mem: Add check for block status on WriteLineReq fill  (Andreas Hansson)
More checks to help with understanding of functionality.
2015-09-25  mem: Fix WriteLineReq fill behaviour  (Andreas Hansson)
This patch fixes issues in the interactions between deferred snoops and WriteLineReq. More specifically, the patch addresses an issue where deferred snoops caused assertion failures when being serviced on the arrival of an InvalidateResp. The response packet was perceived to be invalidating, when actually it is not for the cache that sent out the original invalidation request.
2015-09-25  mem: Comment clean-up for the snoop filter  (Andreas Hansson)
Merely fixing up some style issues and adding more comments.
2015-09-25  mem: Avoid adding and then removing empty snoop-filter items  (Andreas Hansson)
This patch tidies up how we access the snoop filter for snoops, and avoids adding items only to later remove them.
2015-09-25  mem: Only track snooping ports in the snoop filter  (Andreas Hansson)
This patch changes the tracking of ports in the snoop filter to use local dense port IDs so that we can have 64 snooping ports (rather than crossbar slave ports). This is achieved by adding a simple remapping vector that translates the actual port IDs into the local slave IDs used in the SnoopMask. Ultimately this patch allows us to scale to much larger systems without introducing a hierarchy of crossbars.
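A loose sketch of the dense-ID remapping idea (the class name, the use of an unordered_map instead of a vector, and portToMask are all assumptions made for brevity): sparse crossbar port IDs are translated into small local IDs so that a 64-bit mask can cover 64 snooping ports.

    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    using PortID = int;
    using SnoopMask = std::uint64_t;

    class SnoopFilterIds
    {
      public:
        SnoopMask portToMask(PortID port)
        {
            auto it = localIds.find(port);
            if (it == localIds.end())   // first time we see this snooping port:
                it = localIds.emplace(port, localIds.size()).first;
            return SnoopMask(1) << it->second;   // one bit per snooping port
        }

      private:
        std::unordered_map<PortID, unsigned> localIds;
    };

    int main()
    {
        SnoopFilterIds ids;
        std::printf("%llx %llx\n",
                    static_cast<unsigned long long>(ids.portToMask(17)),
                    static_cast<unsigned long long>(ids.portToMask(42)));
        return 0;
    }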
2015-09-25  mem: Add snoop filters to L2 crossbars, and check size  (Ali Jafri)
This patch adds a snoop filter to the L2XBar. For now we refrain from globally adding a snoop filter to the SystemXBar, since the latter is also used in systems without caches. In scenarios without caches the snoop filter will not see any writeback/clean evicts from the CPU ports, despite the fact that they are snooping. To avoid inadvertent use of the snoop filter in these cases we leave it out for now. A size check is added to the snoop filter, merely to ensure it does not grow beyond the total capacity of the caches above it. The size has to be set manually, and a value of 8 MByte is chosen as a suitably high default.
2015-09-25  mem: Store snoop filter lookup result to avoid second lookup  (Andreas Hansson)
This patch introduces a private member storing the iterator from the lookupRequest call, such that it can be re-used when the request eventually finishes. The method previously called updateRequest is renamed finishRequest to make it more clear that the two functions must be called together.
2015-09-25  mem: Add snoops for CleanEvicts and Writebacks in atomic mode  (Ali Jafri)
This patch mirrors the logic in timing mode which sends up snoops to check for cached copies before sending CleanEvicts and Writebacks down the memory hierarchy. In case there is a copy in a cache above, discard CleanEvicts and set the BLOCK_CACHED flag in Writebacks so that writebacks do not reset the cache residency bit in the snoop filter below.
2015-09-25  mem: Add CleanEvict and Writeback support to snoop filters  (Ali Jafri)
This patch adds the functionality to properly track CleanEvicts and Writebacks in the snoop filter. Previously there were no CleanEvicts, and Writebacks did not send up snoops to ensure there were no copies in caches above. Hence a writeback could never erase an entry from the snoop filter.

When a CleanEvict message reaches a snoop filter, it confirms that the BLOCK_CACHED flag is not set and resets the bits corresponding to the CleanEvict address and the port it arrived on. If none of the other peer caches have (or have requested) the block, the snoop filter forwards the CleanEvict to lower levels of memory. In the case of a Writeback message, the snoop filter checks that the BLOCK_CACHED flag is not set and only then resets the bits corresponding to the Writeback address. If any of the other peer caches have (or have requested) the same block, the snoop filter sets the BLOCK_CACHED flag in the Writeback before forwarding it to lower levels of the memory hierarchy.
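The rules above can be sketched roughly as follows (illustrative only; the holder bitmask, handleCleanEvict and writebackBlockCached are names assumed here, not the snoop-filter API): a CleanEvict clears the evicting port's bit when no upstream copy survives and is forwarded only if no other peer holds the line, while a Writeback is always forwarded but gets BLOCK_CACHED set if a peer still holds the line.

    #include <cstdint>
    #include <cstdio>

    using SnoopMask = std::uint64_t;

    // Bitmask of ports whose caches hold (or have requested) the line.
    struct SnoopItem { SnoopMask holder; };

    // Returns true if the CleanEvict should be forwarded downstream.
    bool handleCleanEvict(SnoopItem &item, SnoopMask req_port, bool block_cached)
    {
        if (!block_cached)
            item.holder &= ~req_port;            // the evicting cache lets go
        return (item.holder & ~req_port) == 0;   // forward only if no peer has it
    }

    // Returns true if BLOCK_CACHED should be set in the forwarded Writeback.
    bool writebackBlockCached(const SnoopItem &item, SnoopMask req_port)
    {
        return (item.holder & ~req_port) != 0;   // some other peer still caches it
    }

    int main()
    {
        SnoopItem item{0x3};   // two peer caches hold the line
        std::printf("forward CleanEvict: %d\n",
                    handleCleanEvict(item, 0x1, false));
        std::printf("set BLOCK_CACHED: %d\n",
                    writebackBlockCached(item, 0x1));
        return 0;
    }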
2015-09-25  mem: Add check for snooping ports in the snoop filter  (Ali Jafri)
This patch prevents the snoop filter from creating items for requests originating from non-snooping ports. The allocation decision is thus based both on the cacheability of the line, and the snooping status of the source port. Ultimately we should check if the source of the packet is caching, since also the CPU ports are snooping (but not allocating). Thus, at the moment we rely on the snoop filter being used together with caches. The patch also transitions to use the Packet::getBlockAddr in determining the line address.
2015-09-25  mem: Make the coherent crossbar account for timing snoops  (Andreas Hansson)
This patch introduces the concept of a snoop latency. Given the requirement to snoop and forward packets in zero time (due to the coherency mechanism), the latency is accounted for later. On a snoop, we establish the latency, and later add it to the header delay of the packet. To allow multiple caches to contribute to the snoop latency, we use a separate variable in the packet, and then take the maximum before adding it to the header delay.
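A short sketch of that accounting, under assumed names (snoopDelay, recordSnoopLatency, finalizeSnoop): each snooped cache contributes its latency to a separate field, and only the maximum is folded into the header delay afterwards.

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    using Tick = std::uint64_t;

    struct Packet { Tick headerDelay = 0; Tick snoopDelay = 0; };

    void recordSnoopLatency(Packet &pkt, Tick cache_snoop_latency)
    {
        // Keep only the slowest snooped cache's contribution.
        pkt.snoopDelay = std::max(pkt.snoopDelay, cache_snoop_latency);
    }

    void finalizeSnoop(Packet &pkt)
    {
        // Account for the snoop after the fact, outside the zero-time snoop.
        pkt.headerDelay += pkt.snoopDelay;
        pkt.snoopDelay = 0;
    }

    int main()
    {
        Packet pkt;
        recordSnoopLatency(pkt, 1000);
        recordSnoopLatency(pkt, 4000);
        finalizeSnoop(pkt);
        std::printf("headerDelay = %llu\n",
                    static_cast<unsigned long long>(pkt.headerDelay));
        return 0;
    }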
2015-09-25  mem: Do not include snoop-filter latency in crossbar occupancy  (Andreas Hansson)
This patch ensures that the snoop-filter latency only contributes to the packet latency, and not to the crossbar throughput/occupancy. In essence we treat the snoop-filter lookup as pipelined.
2015-09-24  ruby: simple network: refactor code  (Nilay Vaish)
Drops an unused variable and marks three variables as const.
2015-09-23  ruby: garnet: refactor code in network links  (Nilay Vaish)
2015-09-23  ruby: bloom filters: refactor code  (Nilay Vaish)
2015-09-23  ruby: abstract controller: mark some variables as const  (Nilay Vaish)
2015-09-22  mem: Add initial HBM configurations  (Wendy Elsasser)
Created the following HBM configurations:
1) HBM gen1 (x128/CH), 2Gb die, 4H stack, 1Gbps, 8 channels
2) HBM gen2 (x64/PC), 8Gb die, 4H stack, 1Gbps, 16 pseudo-channels
The configuration values are based on:
- The HBM gen1 public JEDEC spec
- Publicly released data from MemCon presentations
- Timing extrapolated from existing LPDDR configurations
Will adjust once specs become available.