path: root/src/mem
Age  Commit message  Author
2013-06-24  ruby: MessageBuffer: Remove unused m_size variable  (Joel Hestness, Nilay Vaish <nilay@cs.wisc.edu>)
The m_size variable attempted to track m_prio_heap.size(), but it did so incorrectly due to the functions reanalyzeMessages() and reanalyzeAllMessages(). Since this variable is intended to track m_prio_heap.size(), we can simply replace instances where m_size is referenced with m_prio_heap.size(), which has the added bonus of removing the need for m_size. Note: this patch also removes an extraneous DPRINTF format string designator from reanalyzeAllMessages().
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
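A minimal sketch of the idea, with simplified stand-in types rather than the real MessageBuffer internals: instead of maintaining a separate counter that can drift out of sync, derive the size from the container itself.

    // Sketch only; the element type and class layout are placeholders.
    #include <cstddef>
    #include <queue>

    class MessageBufferSketch
    {
      public:
        // Before: a separate m_size counter had to be updated on every
        // enqueue/dequeue/reanalyze path and could become inconsistent.
        // After: the size is always read straight from the heap.
        std::size_t getSize() const { return m_prio_heap.size(); }
        bool isEmpty() const { return m_prio_heap.empty(); }

      private:
        std::priority_queue<int> m_prio_heap; // element type simplified
    };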
2013-06-20  ruby: fix typo in MOESI_CMP_token protocol  (Lena Olson)
2013-06-18  ruby: Fix prefetching for MESI_CMP_Directory  (Lena Olson)
Transitions from present on PF_Ifetch were missing, causing a crash when prefetching is enabled. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2013-06-18  ruby: fix slicc compiler to complain about duplicate symbols  (Lena Olson)
Previously, .sm files were allowed to use the same name for a type and a variable. This is unnecessarily confusing and has some bad side effects, like not being able to declare later variables in the same scope with the same type. This causes the compiler to complain and die on things like Address Address. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2013-06-18  ruby: restrict Address to being a type and not a variable name  (Lena Olson)
Change all occurrences of Address as a variable name to instead use Addr. Address is an allowed name in slicc even when Address is also being used as a type, leading to declarations of "Address Address". While this works, it prevents adding another field of type Address because the compiler then thinks Address is a variable name, not a type.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2013-06-18  kvm: Use the address finalization code in the TLB  (Andreas Sandberg)
Reuse the address finalization code in the TLB instead of replicating it when handling MMIO. This patch also adds support for injecting memory mapped IPR requests into the memory system.
2013-06-09  ruby: remove several unused variables in Profiler  (Nilay Vaish)
This patch removes the per-processor cycle count, the histogram for filter stats, the histogram for multicasts, the histogram for prefetch wait, and some function prototypes that do not have definitions.
2013-06-09  ruby: remove periodic event from Profiler  (Nilay Vaish)
The Profiler class does not need an event for dumping statistics periodically. This is because there is a method for dumping statistics for all the sim objects periodically. Since Ruby is a sim object, its statistics are also included.
2013-06-09  ruby: stats: use gem5's stats for cache and memory controllers  (Nilay Vaish)
This moves event and transition count statistics for cache controllers to gem5's statistics. It does the same for the statistics associated with the memory controller in ruby. All the cache/directory/dma controllers individually collect the event and transition counts. A callback function, collateStats(), has been added that is invoked on controller version 0 of each controller class. This function adds all the individual controller statistics to vector variables. All the code for registering the statistical variables and collating them is generated by SLICC. The patch removes the files *_Profiler.{cc,hh} and *_ProfileDumper.{cc,hh}, which were earlier used for collecting and dumping statistics respectively.
2013-06-09  ruby: remove undefined functions in Address class  (Nilay Vaish)
2013-05-30  mem: More descriptive DRAM config names  (Andreas Hansson)
This patch changes the class names of the various DRAM configurations to better reflect what memory they are based on. The speed and interface width are now part of the name, and also of the alias that is used to select them on the command line. Some minor changes are made to the actual parameters, to better reflect the named configurations. As a result of these changes the regressions change slightly, and the stats will be bumped in a separate patch.
2013-05-30  mem: Add bytes per activate DRAM controller stat  (Andreas Hansson)
This patch adds a histogram to track how many bytes are accessed in an open row before it is closed. This metric is useful in characterising a workload and the efficiency of the DRAM scheduler. For example, a DDR3-1600 device requires 44 cycles (tRC) before it can activate another row in the same bank. For an x32 interface (8 bytes per cycle), that means 8 x 44 = 352 bytes must be transferred to hide the preparation time.
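A back-of-the-envelope check of the numbers quoted above (standalone helper, not gem5 code):

    #include <cstdio>

    int main()
    {
        // Figures from the commit message: tRC = 44 controller cycles,
        // and a x32 DDR interface moving 8 bytes per cycle.
        const unsigned tRC_cycles = 44;
        const unsigned bytes_per_cycle = 8;
        std::printf("%u bytes needed to hide tRC\n",
                    tRC_cycles * bytes_per_cycle); // prints 352
        return 0;
    }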
2013-05-30  mem: Add static latency to the DRAM controller  (Andreas Hansson)
This patch adds a frontend and backend static latency to the DRAM controller by delaying the responses. Two parameters expressing the frontend and backend contributions in absolute time are added to the controller, and the appropriate latency is added to the responses when adding them to the (infinite) queued port for sending. For writes and reads that hit in the write buffer, only the frontend latency is added. For reads that are serviced by the DRAM, the static latency is the sum of the pipeline latencies of the entire frontend, backend and PHY. The default values are chosen based on having roughly 10 pipeline stages in total at 500 MHz. In the future, it would be sensible to make the controller use its clock and convert these latencies (and a few of the DRAM timings) to cycles.
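A minimal sketch of the scheme, using illustrative names rather than the actual controller code: the response ready time includes the static frontend latency, and additionally the backend latency only when the read is serviced by the DRAM itself.

    #include <cstdint>

    using Tick = std::uint64_t;

    struct StaticDramLatency
    {
        Tick frontendLatency; // static frontend pipeline latency
        Tick backendLatency;  // static backend + PHY pipeline latency

        // Writes and reads that hit in the write buffer pay only the
        // frontend part; DRAM-serviced reads pay frontend + backend.
        Tick readyTime(Tick now, bool servicedByDram) const
        {
            Tick delay = frontendLatency;
            if (servicedByDram)
                delay += backendLatency;
            return now + delay; // when the response enters the queued port
        }
    };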
2013-05-30  mem: Spring cleaning of MSHR and MSHRQueue  (Andreas Hansson)
This patch does some minor tidying up of the MSHR and MSHRQueue. The clean up started as part of some ad-hoc tracing and debugging, but seems worthwhile enough to go in as a separate patch. The highlights of the changes are reduced scoping (making members private where possible), avoiding redundant new/delete, and constructor initialisation to please static code analyzers.
2013-05-30  mem: Fix MSHR print format  (Andreas Hansson)
This patch fixes an incorrect print format string by adding an additional string element.
2013-05-30  mem: Make returning snoop responses occupy response layer  (Andreas Hansson)
This patch introduces a mirrored internal snoop port to facilitate easy addition of flow control for the snoop responses that are turned into normal responses on their return. To perform this, the slave ports of the coherent bus are wrapped in internal master ports that are passed as the source ports to the response layer in question. As a result of this patch there is more contention for the response resources, and as such system performance will decrease slightly. A consequence of the mirrored internal port is that the port the bus tells to retry (the internal one) and the port actually retrying (the mirrored one) are not the same. Thus, the existing check in tryTiming is no longer correct. In fact, the test is redundant, as the layer is only in the retry state while calling sendRetry on the waiting port, and if the latter does not immediately call the bus then the retry state is left. Consequently the check is removed.
2013-05-30  mem: Make the buses multi layered  (Andreas Hansson)
This patch makes the buses multi layered, and effectively creates a crossbar structure with distributed contention points at the destination ports. Before this patch, a bus could have a single request, response and snoop response in flight at any time; with these changes there can be as many requests as connected slaves (bus master ports), and as many responses as connected masters (bus slave ports). Together with address interleaving, this patch enables us to create high-throughput memory interconnects, e.g. 50+ GByte/s.
2013-05-30  mem: Separate the two snoop response cases in the bus  (Andreas Hansson)
This patch makes the flow control and state updates of the coherent bus more clear by separating the two cases, i.e. forward as a snoop response, or turn it into a normal response. With this change it is also more clear what resources are being occupied, and that we effectively bypass the busy check for the second case. As a result of the change in resource usage some stats change.
2013-05-30  mem: Tidy up a few variables in the bus  (Andreas Hansson)
This patch does some minor housekeeping on the bus code, removing redundant code, and moving the extraction of the destination id to the top of the functions using it.
2013-05-30  mem: Add basic stats to the buses  (Uri Wiener)
This patch adds a basic set of stats which are hard to impossible to implement using only communication monitors, and are needed for insight such as bus utilization, transactions through the bus etc. Stats added include throughput and transaction distribution, and also a two-dimensional vector capturing how many packets and how much data is exchanged between the masters and slaves connected to the bus.
2013-05-30  mem: Use unordered set in bus request tracking  (Andreas Hansson)
This patch changes the set used to track outstanding requests to an unordered set (part of C++11 STL). There is no need to maintain the order, and hopefully there might even be a small performance benefit.
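An isolated illustration of the change (placeholder types, not the bus code itself): outstanding request IDs go into an std::unordered_set, since their ordering is never used.

    #include <cstdint>
    #include <unordered_set>

    using PacketId = std::uint64_t;

    class OutstandingRequests
    {
      public:
        void add(PacketId id)    { m_outstanding.insert(id); }
        void remove(PacketId id) { m_outstanding.erase(id); }
        bool contains(PacketId id) const
        {
            // Average O(1) hash lookup instead of the O(log n) lookup an
            // ordered std::set would give, with no ordering to maintain.
            return m_outstanding.count(id) != 0;
        }

      private:
        std::unordered_set<PacketId> m_outstanding;
    };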
2013-05-30  mem: Check for waiting state in bus draining  (Andreas Hansson)
This patch fixes a bug in the bus where the bus transitions from busy to idle and still has a port that is waiting for a retry from a peer.
2013-05-30  mem: Add a LPDDR3-1600 configuration  (Andreas Hansson)
This patch adds a typical (leaning towards fast) LPDDR3 configuration based on publicly available data. As expected, it looks very similar to the LPDDR2-S4 configuration, only with a slightly lower burst time.
2013-05-30  mem: Adapt the LPDDR2 to match a single x32 channel  (Andreas Hansson)
This patch adapts the existing LPDDR2 configuration to make use of the multi-channel functionality. Thus, to get a x64 interface, two controllers should be instantiated using the makeMultiChannel method. The page size and ranks are also adapted to better suit a typical LPDDR2 part.
2013-05-30  mem: Avoid explicitly zeroing the memory backing store  (Andreas Hansson)
This patch removes the explicit memset as it is redundant and causes the simulator to touch the entire space, forcing the host system to allocate the pages. Anonymous pages are mapped on the first access, and the page-fault handler is responsible for zeroing them. Thus, the pages are still zeroed, but we avoid touching the entire allocated space which enables us to use much larger memory sizes as long as not all the memory is actually used.
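A small standalone illustration of the underlying behaviour on a Linux host (not the gem5 backing-store code): anonymous mappings are zero-filled lazily by the page-fault handler, so an explicit memset would only force every page to be allocated up front.

    #include <cassert>
    #include <cstddef>
    #include <sys/mman.h>

    int main()
    {
        const std::size_t size = std::size_t(1) << 30; // 1 GiB backing store
        void *store = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(store != MAP_FAILED);

        // No memset: pages are materialised (and zeroed by the kernel)
        // only when they are first touched.
        assert(static_cast<unsigned char *>(store)[0] == 0);

        munmap(store, size);
        return 0;
    }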
2013-05-21  ruby: slicc: fix error msg in TypeFieldMemberAST.py  (Malek Musleh)
2013-05-21  ruby: moesi hammer: cosmetic changes  (Nilay Vaish)
Updates copyright years, removes space at the end of lines, shortens variable names.
2013-05-21  ruby: mesi cmp directory: cosmetic changes  (Nilay Vaish)
Updates copyright years, removes space at the end of lines, shortens variable names.
2013-05-21  ruby: moesi cmp token: cosmetic changes  (Nilay Vaish)
Updates copyright years, removes space at the end of lines, shortens variable names.
2013-05-21  ruby: moesi cmp directory: cosmetic changes  (Nilay Vaish)
Updates copyright years, removes space at the end of lines, shortens variable names.
2013-05-21  ruby: add stats to .sm files, remove cache profiler  (Nilay Vaish, Malek Musleh <malek.musleh@gmail.com>)
This patch changes the way cache statistics are collected in ruby. As of now, there is a separate entity called CacheProfiler which holds statistical variables for caches. The CacheMemory class defines different functions for accessing the CacheProfiler, and these functions are then invoked in the .sm files. I find this approach opaque and prone to error, and we probably should not be paying the cost of a function call just for recording statistics. Instead, this patch allows statistical variables to be accessed directly in the .sm files. Collection becomes transparent and happens in place, with no function calls. The patch also removes the CacheProfiler class.
--HG--
rename : src/mem/slicc/ast/InfixOperatorExprAST.py => src/mem/slicc/ast/OperatorExprAST.py
2013-04-23  sim: Fix two bugs relating to software caching of PageTable entries.  (Mitch Hayenga)
The existing implementation can read uninitialized data or stale information from the cached PageTable entries.
1) Add a valid bit for the cache entries. Simply using zero for the virtual address to signify invalid entries is not sufficient: speculative, wrong-path accesses frequently access page zero. The current implementation would return an uninitialized TLB entry when address zero was accessed and the PageTable cache entry was invalid.
2) When unmapping/mapping/remapping a page, invalidate the corresponding PageTable cache entry if one already exists.
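A minimal sketch of the first fix, with illustrative names rather than the actual PageTable code: each cache slot carries an explicit valid flag instead of treating a virtual address of zero as "empty".

    #include <cstdint>

    using Addr = std::uint64_t;

    struct TlbEntryStub { Addr paddr = 0; };   // stand-in for the real entry

    struct PageTableCacheSlot
    {
        bool valid = false;   // explicit validity; vaddr 0 is a legal key
        Addr vaddr = 0;
        TlbEntryStub entry;
    };

    // A lookup only trusts a slot whose valid bit is set, so a speculative
    // wrong-path access to page zero can no longer hit a stale/empty slot.
    inline bool lookup(const PageTableCacheSlot &slot, Addr vaddr,
                       TlbEntryStub &out)
    {
        if (slot.valid && slot.vaddr == vaddr) {
            out = slot.entry;
            return true;
        }
        return false;
    }

    // On unmap/map/remap the corresponding slot must be invalidated.
    inline void invalidate(PageTableCacheSlot &slot) { slot.valid = false; }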
2013-04-23  ruby: mesi coherence protocol: remove unused state M_MB  (Nilay Vaish)
2013-04-23  ruby: patch checkpoint restore with garnet  (Nilay Vaish)
Due to recent changes to the clocking system in Ruby and the way Ruby restores state from a checkpoint, garnet was failing to run from a checkpointed state. The problem is that Ruby resets the time to zero while warming up the caches. If any component records a local copy of the time (read: calls curCycle()) before the simulation has started, then that component will not operate until that time is reached. In the context of this particular patch, the Garnet Network class calls curCycle() at multiple places. Any non-operational component can block requests in the memory system, which the system interprets as a deadlock. This patch makes changes so that Garnet can successfully run from a checkpointed state. It adds a globally visible time at which the actual execution started. This time is initialized in the RubySystem::startup() function and is only meant for components within Ruby. It replaces the private variable that was maintained within Garnet, since it is not possible for Garnet to figure out the correct time at which to set that variable. The patch also does away with all cases where curCycle() is called within some Ruby component before the system has actually started executing. This is required due to the quirky manner in which Ruby restores from a checkpoint.
2013-04-22  mem: Address mapping with fine-grained channel interleaving  (Andreas Hansson)
This patch adds an address mapping scheme where the channel interleaving takes place on a cache line granularity. It is similar to the existing RaBaChCo that interleaves on a DRAM page, but should give higher performance when there is less locality in the address stream.
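A toy illustration of the two granularities, with assumed field widths that are not gem5's actual address decoding (64 B cache lines, 4 KiB DRAM pages, 4 channels): with cache-line interleaving the channel bits sit just above the line offset instead of above the page offset.

    #include <cstdint>

    using Addr = std::uint64_t;

    constexpr unsigned lineBits    = 6;  // 64 B cache line (assumed)
    constexpr unsigned pageBits    = 12; // 4 KiB DRAM page (assumed)
    constexpr unsigned channelBits = 2;  // 4 channels (assumed)

    // Channel interleaved on cache-line granularity (the new scheme).
    inline unsigned channelLineInterleaved(Addr a)
    {
        return (a >> lineBits) & ((1u << channelBits) - 1);
    }

    // Channel interleaved on DRAM-page granularity (as in RaBaChCo).
    inline unsigned channelPageInterleaved(Addr a)
    {
        return (a >> pageBits) & ((1u << channelBits) - 1);
    }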
2013-04-22  mem: More descriptive enum names for address mapping  (Andreas Hansson)
This patch changes the slightly ambiguous names used for the address mapping scheme to be more descriptive, and actually spell out what they do. With this patch we also open up for adding more flavours of open- and close-type mappings, i.e. interleaving across channels with the open map.
2013-04-22  mem: Add a WideIO DRAM configuration  (Andreas Hansson)
This patch adds a WideIO 200 MHz configuration that can be used as a baseline to compare with DDRx and LPDDRx. Note that it is a single channel and that it should be replicated 4 times. It is based on publicly available information and attempts to capture an envisioned 8 Gbit single-die part (i.e. without TSVs).
2013-04-22  mem: Adding verbose debug output in the memory system  (Uri Wiener)
This patch provides useful printouts throughout the memory system. This includes pretty-printed cache tags and function call messages (call-stack like).
2013-04-22  mem: Replace check with panic where inhibited should not happen  (Andreas Hansson)
This patch changes the SimpleTimingPort and RubyPort to panic on inhibited requests, as this should never happen in either case. The SimpleTimingPort is only used for the I/O devices' PIO ports and the DMA devices' config ports and should thus never see an inhibited request. The SimpleTimingPort is also used for the MessagePort in x86, and there should not be any cases where that port sees an inhibited request either.
2013-04-22  sim: separate nextCycle() and clockEdge() in clockedObjects  (Dam Sunwoo)
Previously, nextCycle() could return the *current* cycle if the current tick was already aligned with the clock edge. This behavior is not only confusing (not quite what the function name implies), but also caused problems in the drainResume() function. When exiting/re-entering the sim loop (e.g., to take checkpoints), the CPUs will drain and resume. Due to the previous behavior of nextCycle(), the CPU tick events were being rescheduled in the same ticks that were already processed before draining. This caused divergence from runs that did not exit/re-enter the sim loop. (Initially a cycle difference, but a significant impact later on.) This patch separates out the two behaviors (nextCycle() and clockEdge()), uses nextCycle() in drainResume, and uses clockEdge() everywhere else. Nothing (other than the name) should change except for the drainResume timing.
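A minimal sketch of the distinction, assuming a fixed clock period and free-standing helpers rather than the ClockedObject API: clockEdge() may return the current tick when it is already aligned, while nextCycle() always moves strictly forward.

    #include <cstdint>

    using Tick = std::uint64_t;

    struct ClockedSketch
    {
        Tick period; // clock period in ticks

        // Nearest clock edge at or after 'now' (may equal 'now').
        Tick clockEdge(Tick now) const
        {
            return ((now + period - 1) / period) * period;
        }

        // Strictly later clock edge; never returns 'now' even when 'now'
        // is aligned, which is what drainResume() needs to avoid
        // rescheduling tick events into already-processed ticks.
        Tick nextCycle(Tick now) const
        {
            return (now / period + 1) * period;
        }
    };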
2013-04-17  ruby: moesi cmp directory: add copyright notice  (Nilay Vaish)
2013-04-09  Ruby: Fix RubyPort evict packet memory leak  (Joel Hestness)
When using the o3 or inorder CPUs with many Ruby protocols, the caches may need to forward invalidations to the CPUs. The RubyPort was instantiating a packet to be sent to the CPUs to signal the eviction, but the packets were not being freed by the CPUs. Consistent with the classic memory model, stack allocate the packet and heap allocate the request, so that on ruby_eviction_callback() completion the packet destructor is called and deletes the request. (Note: stack allocating the request causes double deletion, since it would also be deleted in the packet destructor.) This results in the fewest memory allocations without memory errors.
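A minimal sketch of the allocation pattern described, with simplified stand-in types rather than the real Packet/Request classes: the packet lives on the stack and owns a heap-allocated request, so leaving scope frees the request exactly once.

    struct RequestStub { /* address, size, flags, ... */ };

    struct PacketStub
    {
        RequestStub *req;
        explicit PacketStub(RequestStub *r) : req(r) {}
        ~PacketStub() { delete req; }            // packet owns the request
        PacketStub(const PacketStub &) = delete;
        PacketStub &operator=(const PacketStub &) = delete;
    };

    void evictionCallbackSketch()
    {
        RequestStub *req = new RequestStub;  // heap: freed by ~PacketStub
        PacketStub pkt(req);                 // stack: destroyed on return
        // ... hand pkt to the CPU's eviction handling here ...
    }   // pkt's destructor runs and deletes req: no leak, no double delete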
2013-04-09  Ruby: Delete packet requests during warmup  (Joel Hestness)
When warming up caches in Ruby, the CacheRecorder sends fetch requests into Ruby Sequencers with packet types that require responses. Since responses are never generated for these CacheRecorder requests, the requests are not deleted in the packet destructor called from the Ruby hit callback. Free the request.
2013-04-09  Ruby: Add field to slicc machine for generic type  (Joel Hestness)
This allows you to have (e.g.) an L2 cache that is not named "L2Cache" but is still a GenericMachineType_L2Cache. This is particularly helpful if the protocol has multiple L2 controllers.
2013-04-09  Ruby: Order profilers based on version  (Joel Hestness)
When Ruby stats are printed for events and transitions, they include stats for all of the controllers of the same type, but they are not necessarily printed in order of the controller ID "version", because of the way the profilers were added to the profiler vector. This patch fixes the push order so that the stats are printed in ascending order 0->(# controllers), so that statistics parsers can correctly determine the controller to which the stats belong.
2013-04-09  Ruby: More descriptive message buffer connection fatal  (Jason Power)
When connecting message buffers between Ruby controllers, it is easy to mistakenly connect multiple controllers to the same message buffer. This patch prints a more descriptive fatal message than the previous assert statement in order to facilitate easier debugging.
2013-04-09  Ruby: Fix typo in Slicc if-statement AST error  (Jason Power)
Before this patch, the error in the SLICC code was hidden by a Python error in the SLICC parser.
2013-04-07  Ruby System, Cache Recorder: Use delete [] for trace vars  (Joel Hestness)
The cache trace variables are array-allocated uint8_t* in the RubySystem and the Ruby CacheRecorder, but the code used delete to free the memory, resulting in Valgrind memory errors. Change these deletes to delete [] to get rid of the errors.
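The rule being applied, shown in isolation (generic example, not the CacheRecorder code): memory obtained with new[] must be released with delete [].

    #include <cstdint>

    int main()
    {
        std::uint8_t *trace = new std::uint8_t[4096]; // array allocation

        // 'delete trace;' would pair a scalar delete with an array new,
        // which is undefined behaviour and what Valgrind flagged.
        delete [] trace; // array delete matches array new
        return 0;
    }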
2013-03-27  mem: Fix cache latency bug  (Mitch Hayenga)
Fixes a latency calculation bug for accesses during a cache line fill. Under a cache miss, before the line is filled, accesses to the cache are associated with an MSHR and marked as targets. Once the line fill completes, MSHR target packets pay an additional latency of "responseLatency + busSerializationLatency". However, the "whenReady" field of the cache line is only set to an additional delay of "busSerializationLatency"; this lacks the responseLatency component of the fill. It is possible for accesses that occur on the cycle of (or briefly after) the line fill to respond without properly paying the responseLatency. This also creates the situation where two accesses to the same address may be serviced in an order opposite of how they were received by the cache. For stores to the same address, this means that although the cache performs the stores in the order they were received, acknowledgements may be sent in a different order. Adding the responseLatency component to the whenReady field preserves the penalty that should be paid and prevents these ordering issues.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
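A minimal sketch of the fix, with illustrative names rather than the actual Cache code: the block's whenReady time now includes both latency components, matching what MSHR targets already pay.

    #include <cstdint>

    using Tick = std::uint64_t;

    struct FillTiming
    {
        Tick responseLatency;         // the cache's response latency
        Tick busSerializationLatency; // time to stream the fill in

        // Previously only the serialization delay was added, letting a hit
        // just after the fill respond before the MSHR targets; now the
        // block also pays the response latency.
        Tick whenReady(Tick fillStart) const
        {
            return fillStart + responseLatency + busSerializationLatency;
        }
    };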
2013-03-26  mem: Cancel cache retry event when blocking port  (Rene de Jong)
This patch solves the corner-case scenario where the sendRetryEvent could be scheduled twice, when an I/O device stresses the IOcache in the system. This should not be possible in the cache system.
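A minimal sketch of the usual guard against this class of bug (illustrative event type, not gem5's event API): only schedule the retry event if it is not already pending.

    struct RetryEventSketch
    {
        bool pending = false;
        void schedule()        { pending = true; }
        bool scheduled() const { return pending; }
    };

    // Called whenever the port unblocks; without the check, two unblock
    // paths could try to schedule the same event twice.
    inline void requestRetry(RetryEventSketch &sendRetryEvent)
    {
        if (!sendRetryEvent.scheduled())
            sendRetryEvent.schedule();
    }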