path: root/src/mem
Age | Commit message | Author
2014-09-20 | mem: Simple Snoop Filter | Stephan Diestelhorst
This is a first cut at a simple snoop filter that tracks the presence of lines in the caches "above" it. The snoop filter can be applied at any given level of the cache hierarchy and will then handle the caches above it appropriately; there is no need to use it only in the last-level bus. This design currently has some limitations: missing stats, no notion of clean evictions (these will not update the underlying snoop filter, because they are not sent from the evicting cache down), and no notion of capacity for the snoop filter, and thus no invalidations caused by capacity pressure in the snoop filter. These are planned to be added in future change sets.
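To make the tracking idea concrete, here is a minimal sketch in the spirit of the description above. It is not the gem5 implementation; the class and member names (SimpleSnoopFilter, markPresent, lookupSnoop, markAbsent) and the fixed 64-byte line size are illustrative only.

    #include <cstdint>
    #include <unordered_map>
    #include <unordered_set>

    using Addr = std::uint64_t;
    using PortID = int;

    // Illustrative snoop filter: remembers which upstream ports may hold a line.
    class SimpleSnoopFilter {
      public:
        // Record that 'port' fetched the line (e.g. on a fill/response).
        void markPresent(Addr lineAddr, PortID port) {
            holders[blockAlign(lineAddr)].insert(port);
        }

        // Which ports need to be snooped for this address, excluding the requester?
        std::unordered_set<PortID> lookupSnoop(Addr lineAddr, PortID requester) const {
            auto it = holders.find(blockAlign(lineAddr));
            if (it == holders.end())
                return {};
            std::unordered_set<PortID> targets = it->second;
            targets.erase(requester);
            return targets;
        }

        // Drop a holder, e.g. after an invalidating snoop is acknowledged.
        void markAbsent(Addr lineAddr, PortID port) {
            auto it = holders.find(blockAlign(lineAddr));
            if (it != holders.end())
                it->second.erase(port);
        }

      private:
        static Addr blockAlign(Addr a) { return a & ~Addr(63); } // assume 64-byte lines
        std::unordered_map<Addr, std::unordered_set<PortID>> holders;
    };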
2014-09-20 | mem: Add DDR4 bank group timing | Wendy Elsasser
Added the following parameter to the DRAMCtrl class:
 - bank_groups_per_rank : defaults to 1. For the DDR4 case, the default is overridden to indicate a bank group architecture, with multiple bank groups per rank.
Added the following delays to the DRAMCtrl class:
 - tCCD_L : CAS-to-CAS, same bank group delay
 - tRRD_L : RAS-to-RAS, same bank group delay
These parameters are only applied when bank group timing is enabled. Bank group timing is currently enabled only for DDR4 memories; for all other memories, these delays default to '0 ns'. In the DRAM controller model, the bank group timing is applied to the per-bank parameters actAllowedAt and colAllowedAt. The actAllowedAt will be updated based on bank group when an ACT is issued, and the colAllowedAt will be updated based on bank group when a RD/WR burst is issued. At the moment no modifications are made to the scheduling.
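A rough sketch of how bank-group-aware column timing can be applied; the helper and field names below are illustrative, not the model's actual members. The idea is simply that the longer tCCD_L delay is only paid when consecutive bursts target the same bank group.

    #include <algorithm>
    #include <cstdint>

    using Tick = std::uint64_t;

    struct BurstTiming {
        Tick tBURST;   // base burst-to-burst (column-to-column) delay
        Tick tCCD_L;   // column-to-column delay, same bank group (DDR4 only)
    };

    // Earliest time the next column command may issue, given the previous
    // burst went to 'prevGroup' at time 'prevColAt' (illustrative helper).
    Tick nextColAllowedAt(const BurstTiming &t, Tick prevColAt,
                          int prevGroup, int nextGroup, bool bankGroupsEnabled)
    {
        Tick delay = t.tBURST;
        if (bankGroupsEnabled && prevGroup == nextGroup)
            delay = std::max(delay, t.tCCD_L);   // same bank group pays the longer delay
        return prevColAt + delay;
    }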
2014-09-20 | mem: Add memory rank-to-rank delay | Wendy Elsasser
Add the following delay to the DRAM controller:
 - tCS : different-rank bus turnaround delay
This will be applied for 1) read-to-read, 2) write-to-write, 3) write-to-read, and 4) read-to-write command sequences, where the new command accesses a different rank than the previous burst. The delay defaults to 2*tCK for each defined memory class. Note that this does not correspond to one particular timing constraint, but is a way of modelling all the associated constraints. The DRAM controller has some minor changes to prioritize commands to the same rank. This prioritization will only occur when the command stream is not switching from a read to a write or vice versa (in the case of switching we have a gap in any case). To prioritize commands to the same rank, the model will determine if there are any commands queued (of the same type) to the same rank as the previous command. This check will ensure that the 'same rank' command will be able to execute without adding bubbles to the command flow, e.g. any ACT delay requirements can be handled under the hood, allowing the burst to issue seamlessly.
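The rank turnaround rule amounts to the following kind of check; this is an illustrative helper, not the controller's code. tCS is added only when the new burst goes to a different rank than the previous one.

    #include <cstdint>

    using Tick = std::uint64_t;

    // Earliest issue time for the next burst: add the rank-to-rank turnaround
    // tCS only when the rank changes (illustrative helper).
    Tick applyRankTurnaround(Tick busFreeAt, int prevRank, int nextRank, Tick tCS)
    {
        return (prevRank == nextRank) ? busFreeAt : busFreeAt + tCS;
    }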
2014-09-20 | mem: Remove the GHB prefetcher from the source tree | Mitch Hayenga
There are two primary issues with this code which make it deserving of deletion:
 1) GHB is a way to structure a prefetcher, not a definitive type of prefetcher.
 2) This prefetcher isn't even structured like a GHB prefetcher. It's basically a worse version of the stride prefetcher.
It primarily serves to confuse new gem5 users and most functionality is already present in the stride prefetcher.
2014-09-19 | misc: Use safe_cast when assumptions are made about return value | Andreas Hansson
This patch changes two uses of dynamic_cast to safe_cast, as we assume the return value is not NULL (without checking it).
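For reference, safe_cast behaves roughly like the sketch below, assuming an asserted dynamic_cast in debug builds and a plain static_cast otherwise; the name safe_cast_sketch marks this as an illustration rather than the actual gem5 header.

    #include <cassert>

    // Rough sketch of the idea behind safe_cast: assert in debug builds that
    // the downcast is valid, avoid the dynamic_cast cost in optimized builds.
    template <class T, class U>
    T safe_cast_sketch(U ptr)
    {
    #ifndef NDEBUG
        T ret = dynamic_cast<T>(ptr);
        assert(ret);              // the caller assumes the cast cannot fail
        return ret;
    #else
        return static_cast<T>(ptr);
    #endif
    }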
2014-09-19 | misc: Remove assertions ensuring unsigned values >= 0 | Andreas Hansson
2014-09-19 | mem: Check return value of checkFunctional in SimpleMemory | Andreas Hansson
Simple fix to ensure we only iterate until we are done.
2014-09-19 | mem: Add checks to sendTimingReq in cache | Andreas Hansson
A small fix to ensure the return value is not ignored.
2014-09-15 | ruby: network: revert some of the changes from ad9c042dce54 | Nilay Vaish
The changeset ad9c042dce54 changed the structures under the network directory to use a map of buffers instead of a vector of buffers. The reasoning was that not all vnets that are created are used, so we needlessly allocate more buffers than required and then iterate over them while processing network messages. But the move to a map resulted in a slowdown, which was pointed out by Andreas Hansson. This patch moves things back to using a vector of message buffers.
2014-09-09 | mem: Add accessor function for vaddr | Mitch Hayenga
Determine if a request has an associated virtual address.
2014-09-09 | misc: Fix a number of uninitialised variables and members | Andreas Hansson
Static analysis unearthed a bunch of uninitialised variables and members, and this patch addresses the problem. In all cases these omissions seem benign in the end, but at least fixing them means fewer false positives next time round.
2014-09-03 | base: Use the global Mersenne twister throughout | Andreas Hansson
This patch tidies up random number generation to ensure that it is done consistently throughout the code base. In essence this involves a clean-up of Ruby, and some code simplifications in the traffic generator. As part of this patch a bunch of skewed distributions (off-by-one etc.) have been fixed. Note that a single global random number generator is used, and that the object instantiation order will impact the behaviour (the sequence of numbers will be unaffected, but if module A calls random before module B then they would obviously see a different outcome). The dependency on the instantiation order is true in any case due to the execution model of gem5, so we leave it as is. Also note that the global random generator is not thread safe at this point. Regressions using the memtest, TrafficGen or any Ruby tester are affected and will be updated accordingly.
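A minimal sketch of the pattern; the helper name and seed are illustrative, not gem5's actual API. One process-wide engine that all modules draw from, with an inclusive-range distribution to avoid the off-by-one skew mentioned above.

    #include <cstdint>
    #include <random>

    // One global engine shared by every module (illustrative; not thread safe).
    static std::mt19937_64 global_rng(5489u);   // fixed seed for reproducibility

    // Uniform integer in [lo, hi], inclusive at both ends, so there is no
    // modulo-style skew.
    std::uint64_t randomUniform(std::uint64_t lo, std::uint64_t hi)
    {
        std::uniform_int_distribution<std::uint64_t> dist(lo, hi);
        return dist(global_rng);
    }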
2014-09-03 | mem: Avoid unnecessary retries when bus peer is not ready | Andreas Hansson
This patch removes unnecessary retries that happened when the bus layer itself was no longer busy, but the peer was not yet ready. Instead of sending a retry that will inevitably not succeed, the bus now silently waits until the peer sends a retry.
2014-06-27 | mem: write streaming support via WriteInvalidate promotion | Curtis Dunham
Support full-block writes directly rather than requiring RMW:
 * a cache line is allocated in the cache upon receipt of a WriteInvalidateReq, not the WriteInvalidateResp.
 * only top-level caches allocate the line; the others just pass the request along and invalidate as necessary.
 * to close a timing window between the *Req and the *Resp, a new metadata bit tracks whether another cache has read a copy of the new line before the writeback to memory.
2014-09-03 | mem: Fix a bug in the cache port flow control | Andreas Hansson
This patch fixes a bug in the cache port where the retry flag was reset too early, allowing new requests to arrive before the retry was actually sent, but with the event already scheduled. This caused a deadlock in the interactions with the O3 LSQ. The patch fixes the underlying issue by shifting the resetting of the flag to the event that also calls sendRetry(). The patch also tidies up the flow control in recvTimingReq and ensures that we also check if we already have a retry outstanding.
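The essence of the fix, sketched generically; class and member names are simplified stand-ins, not the gem5 classes. The 'retry pending' flag stays set until the scheduled event actually sends the retry, so any request arriving in between is still refused.

    #include <cassert>

    // Illustrative slave-port flow control: refuse requests while a retry is
    // pending, and only clear the flag at the point the retry is sent.
    class PortFlowControl {
      public:
        bool recvTimingReq(bool busy)
        {
            // If we already owe the peer a retry, or we are busy, refuse now.
            if (mustSendRetry || busy) {
                mustSendRetry = true;
                return false;           // peer will be retried later
            }
            return true;                // request accepted
        }

        // Called from the scheduled event, not from recvTimingReq itself.
        void processSendRetryEvent()
        {
            assert(mustSendRetry);
            mustSendRetry = false;      // reset only when the retry goes out
            sendRetry();
        }

      private:
        void sendRetry() { /* notify the peer it may retry its request */ }
        bool mustSendRetry = false;
    };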
2014-05-13 | cpu, mem: Make software prefetches non-blocking | Curtis Dunham
Previously, they were treated so much like loads that they could stall at the head of the ROB. Now they are always treated like L1 hits. If they actually miss, a new request is created at the L1 and tracked from the MSHRs there if necessary (i.e. if it didn't coalesce with an existing outstanding load).
2014-05-13 | mem: Refactor assignment of Packet types | Curtis Dunham
Put the packet type swizzling (that is currently done in a lot of places) into a refineCommand() member function.
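A sketch of what such a refineCommand() helper might look like; the command and flag names here are simplified stand-ins rather than gem5's MemCmd/Request types, but they show the intent of choosing the packet command once, from the request's properties, instead of swizzling it in many call sites.

    enum class MemCmd { ReadReq, ReadExReq, WriteReq, SwapReq,
                        LoadLockedReq, StoreCondReq, SoftPFReq };

    struct ReqFlags {
        bool isWrite, isSwap, isLLSC, isPrefetch, needsExclusive;
    };

    // Centralised command selection (illustrative stand-in for refineCommand()).
    MemCmd refineCommand(const ReqFlags &f)
    {
        if (f.isSwap)
            return MemCmd::SwapReq;
        if (f.isLLSC)
            return f.isWrite ? MemCmd::StoreCondReq : MemCmd::LoadLockedReq;
        if (f.isPrefetch)
            return MemCmd::SoftPFReq;
        if (f.isWrite)
            return MemCmd::WriteReq;
        return f.needsExclusive ? MemCmd::ReadExReq : MemCmd::ReadReq;
    }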
2014-09-03 | cache: Fix handling of LL/SC requests under contention | Geoffrey Blake
If a set of LL/SC requests contend on the same cache block we can get into a situation where CPUs will deadlock if they expect a failed SC to supply them data. This case happens when 3 or more cores are contending for a cache block using LL/SC and the system is configured such that 2 cores are connected to a local bus and the third is connected to a remote bus. If a core on the local bus sends an SCUpgrade and the core on the remote bus sends an SCUpgrade, they will race to see who will win the SC access. In the meantime, if the other core appends a read to one of the SCUpgrades, it will expect to be supplied data by that SCUpgrade transaction. If it happens that the SCUpgrade that was picked to supply the data has failed, it will drop the appended request for data and never respond, leaving the requesting core to deadlock. This patch makes all SCs behave as normal stores to prevent this case, but still makes sure to check whether it can perform the update.
2014-09-03 | mem: Packet queue clean up | Andreas Hansson
No change in functionality, just a bit of tidying up.
2014-09-03 | arch: Cleanup unused ISA traits constants | Andreas Hansson
This patch prunes unused values, and also unifies how the values are defined (not using an enum for ALPHA), aligning the use of int vs Addr etc. The patch also removes the duplication of PageBytes/PageShift and VMPageSize/LogVMPageSize. For all ISAs the two pairs had identical values and the latter has been removed.
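For reference, the surviving pair of constants looks along these lines per ISA; the values shown are just an example for a 4 KiB page, and the actual value differs between ISAs.

    #include <cstdint>

    using Addr = std::uint64_t;

    // One pair of page-size constants per ISA after the cleanup; the
    // VMPageSize/LogVMPageSize duplicates are gone. 4 KiB shown as an example.
    const Addr PageShift = 12;
    const Addr PageBytes = Addr(1) << PageShift;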
2014-09-01 | ruby: remove typedef of Index as int64 | Nilay Vaish
The Index type, defined as a typedef of int64, does not really provide any help since in most places we use primitive types instead of Index. Also, the name Index is so generic that it does not merit being used as a typename.
2014-09-01 | ruby: PerfectSwitch: moves code to a per vnet helper function | Nilay Vaish
This patch moves code from the wakeup() function to an operateVnet() helper function. The aim is to improve the readability of the code.
2014-09-01 | ruby: message buffers: significant changes | Nilay Vaish
This patch is the final patch in a series of patches. The aim of the series is to make ruby more configurable than it was. More specifically, changing the connections between controllers is not at all possible (unless one is ready to make significant changes to the coherence protocol). Moreover, the buffers themselves are magically connected to the network inside the slicc code; these connections are not part of the configuration file. This patch makes changes so that these connections will now be made in the python configuration files associated with the protocols. This requires each state machine to expose the message buffers it uses for input and output, so the patch makes these buffers configurable members of the machines. The patch drops the slicc code that used to connect these buffers to the network. Now these buffers are exposed to the python configuration system as Master and Slave ports. In the configuration files, any master port can be connected to any slave port. The file pyobject.cc has been modified to take care of allocating the actual message buffer. This is in line with how other port connections work.
2014-09-01 | build opts: add MI_example to NULL ISA | Nilay Vaish
A later changeset changes the file src/python/swig/pyobject.cc to include a header file that in turn includes a header file generated at build time depending on the PROTOCOL in use. Since the NULL ISA was not specifying any protocol, this resulted in compilation problems. Hence this changeset.
2014-09-01 | mem: change the namespace Message to ProtoMessage | Nilay Vaish
The namespace Message conflicts with the Message data type used extensively in Ruby. Since Ruby is being moved to the same Master/Slave ports based configuration style as the rest of gem5, this conflict needs to be resolved. Hence, the namespace is being renamed to ProtoMessage.
2014-09-01 | ruby: slicc: change the way configurable members are specified | Nilay Vaish
There are two changes this patch makes to the way configurable members of a state machine are specified in SLICC. The first change is that the data member declarations will need to be separated by a semi-colon instead of a comma. Secondly, the default value to be assigned would now use SLICC's assignment operator i.e. ':='.
2014-09-01 | ruby: slicc: improve the grammar | Nilay Vaish
This patch changes the grammar for SLICC so as to remove some of the redundant / duplicate rules. In particular rules for object/variable declaration and class member declaration have been unified. Similarly, the rules for a general function and a class method have been unified. One more change is in the priority of two rules. The first rule is on declaring a function with all the params typed and named. The second rule is on declaring a function with all the params only typed. Earlier the second rule had a higher priority. Now the first rule has a higher priority.
2014-09-01 | ruby: mesi three level: slight naming changes | Nilay Vaish
2014-09-01 | ruby: slicc: do not prefix machine name to variables | Nilay Vaish
This changeset does away with prefixing of member variables of state machines with the identity of the machine itself.
2014-09-01 | ruby: remove unused toString() from AbstractController | Nilay Vaish
2014-09-01 | ruby: network: move getNumNodes() to base class | Nilay Vaish
All the implementations were doing the same things.
2014-09-01 | ruby: eliminate type Time | Nilay Vaish
There is another Time type in src/base, which results in a conflict.
2014-09-01 | ruby: move files from ruby/system to ruby/structures | Nilay Vaish
The directory ruby/system is crowded and unorganized. Hence, the files that hold actual physical structures are being moved to the directory ruby/structures. This includes Cache Memory, Directory Memory, Memory Controller, Wire Buffer, TBE Table, Perfect Cache Memory, Timer Table, Bank Array. The directory ruby/system has the glue code that holds these structures together.
--HG--
rename : src/mem/ruby/system/MachineID.hh => src/mem/ruby/common/MachineID.hh
rename : src/mem/ruby/buffers/MessageBuffer.cc => src/mem/ruby/network/MessageBuffer.cc
rename : src/mem/ruby/buffers/MessageBuffer.hh => src/mem/ruby/network/MessageBuffer.hh
rename : src/mem/ruby/buffers/MessageBufferNode.cc => src/mem/ruby/network/MessageBufferNode.cc
rename : src/mem/ruby/buffers/MessageBufferNode.hh => src/mem/ruby/network/MessageBufferNode.hh
rename : src/mem/ruby/system/AbstractReplacementPolicy.hh => src/mem/ruby/structures/AbstractReplacementPolicy.hh
rename : src/mem/ruby/system/BankedArray.cc => src/mem/ruby/structures/BankedArray.cc
rename : src/mem/ruby/system/BankedArray.hh => src/mem/ruby/structures/BankedArray.hh
rename : src/mem/ruby/system/Cache.py => src/mem/ruby/structures/Cache.py
rename : src/mem/ruby/system/CacheMemory.cc => src/mem/ruby/structures/CacheMemory.cc
rename : src/mem/ruby/system/CacheMemory.hh => src/mem/ruby/structures/CacheMemory.hh
rename : src/mem/ruby/system/DirectoryMemory.cc => src/mem/ruby/structures/DirectoryMemory.cc
rename : src/mem/ruby/system/DirectoryMemory.hh => src/mem/ruby/structures/DirectoryMemory.hh
rename : src/mem/ruby/system/DirectoryMemory.py => src/mem/ruby/structures/DirectoryMemory.py
rename : src/mem/ruby/system/LRUPolicy.hh => src/mem/ruby/structures/LRUPolicy.hh
rename : src/mem/ruby/system/MemoryControl.cc => src/mem/ruby/structures/MemoryControl.cc
rename : src/mem/ruby/system/MemoryControl.hh => src/mem/ruby/structures/MemoryControl.hh
rename : src/mem/ruby/system/MemoryControl.py => src/mem/ruby/structures/MemoryControl.py
rename : src/mem/ruby/system/MemoryNode.cc => src/mem/ruby/structures/MemoryNode.cc
rename : src/mem/ruby/system/MemoryNode.hh => src/mem/ruby/structures/MemoryNode.hh
rename : src/mem/ruby/system/MemoryVector.hh => src/mem/ruby/structures/MemoryVector.hh
rename : src/mem/ruby/system/PerfectCacheMemory.hh => src/mem/ruby/structures/PerfectCacheMemory.hh
rename : src/mem/ruby/system/PersistentTable.cc => src/mem/ruby/structures/PersistentTable.cc
rename : src/mem/ruby/system/PersistentTable.hh => src/mem/ruby/structures/PersistentTable.hh
rename : src/mem/ruby/system/PseudoLRUPolicy.hh => src/mem/ruby/structures/PseudoLRUPolicy.hh
rename : src/mem/ruby/system/RubyMemoryControl.cc => src/mem/ruby/structures/RubyMemoryControl.cc
rename : src/mem/ruby/system/RubyMemoryControl.hh => src/mem/ruby/structures/RubyMemoryControl.hh
rename : src/mem/ruby/system/RubyMemoryControl.py => src/mem/ruby/structures/RubyMemoryControl.py
rename : src/mem/ruby/system/SparseMemory.cc => src/mem/ruby/structures/SparseMemory.cc
rename : src/mem/ruby/system/SparseMemory.hh => src/mem/ruby/structures/SparseMemory.hh
rename : src/mem/ruby/system/TBETable.hh => src/mem/ruby/structures/TBETable.hh
rename : src/mem/ruby/system/TimerTable.cc => src/mem/ruby/structures/TimerTable.cc
rename : src/mem/ruby/system/TimerTable.hh => src/mem/ruby/structures/TimerTable.hh
rename : src/mem/ruby/system/WireBuffer.cc => src/mem/ruby/structures/WireBuffer.cc
rename : src/mem/ruby/system/WireBuffer.hh => src/mem/ruby/structures/WireBuffer.hh
rename : src/mem/ruby/system/WireBuffer.py => src/mem/ruby/structures/WireBuffer.py
rename : src/mem/ruby/recorder/CacheRecorder.cc => src/mem/ruby/system/CacheRecorder.cc
rename : src/mem/ruby/recorder/CacheRecorder.hh => src/mem/ruby/system/CacheRecorder.hh
2014-08-28 | mem: adding architectural page table support for SE mode | Alexandru
This patch enables the use of page tables that are stored in system memory and respect the x86 specification, in SE mode. It defines an architectural page table for x86 as a MultiLevelPageTable class and puts in a placeholder class for the other ISAs' page tables, leaving the possibility of a future implementation.
2014-04-01 | mem: adding a multi-level page table class | Alexandru
This patch defines a multi-level page table class that stores the page table in system memory, consistent with ISA specifications. In this way, CPU models that use the actual hardware to execute (e.g. KvmCPU) are able to traverse the page table.
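A simplified walk over such an in-memory table; the radix and entry layout below are illustrative assumptions rather than the exact x86 format. Each level's entry holds the physical address of the next level, and the last level yields the page frame.

    #include <cstdint>
    #include <optional>

    using Addr = std::uint64_t;

    struct WalkParams {
        int levels;        // e.g. 4 levels
        int bitsPerLevel;  // e.g. 9 bits -> 512 entries per table
        int pageShift;     // e.g. 12 -> 4 KiB pages
    };

    // Walk a table stored in (simulated) system memory. 'readPhysMem64' reads
    // an 8-byte entry at a physical address. Returns the translated physical
    // address, or nothing if an entry's present bit is clear.
    template <typename ReadMem>
    std::optional<Addr>
    translate(Addr root, Addr vaddr, const WalkParams &p, ReadMem readPhysMem64)
    {
        Addr table = root;
        for (int level = p.levels - 1; level >= 0; --level) {
            int shift = p.pageShift + level * p.bitsPerLevel;
            Addr index = (vaddr >> shift) & ((Addr(1) << p.bitsPerLevel) - 1);
            std::uint64_t entry = readPhysMem64(table + index * 8);
            if (!(entry & 0x1))                              // present bit clear
                return std::nullopt;
            table = entry & ~((Addr(1) << p.pageShift) - 1); // next level / frame
        }
        return table | (vaddr & ((Addr(1) << p.pageShift) - 1));
    }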
2014-08-26 | mem: Fix DRAMSim2 cycle check when restoring from checkpoint | Andreas Hansson
This patch ensures the cycle check is still valid even when restoring from a checkpoint. In this case the DRAMSim2 cycle count is relative to the startTick rather than 0.
2014-08-26 | mem: Update DRAM controller comments | Andreas Hansson
Update comments and add a reference for more information.
2014-08-26 | mem: Fix address interleaving bug in DRAM controller | Andreas Hansson
This patch fixes a bug in the DRAM controller address decoding. In cases where the DRAM burst size (e.g. 32 bytes in a rank with a single LPDDR3 x32) was smaller than the channel interleaving size (e.g. systems with a 64-byte cache line), one address bit effectively got used as a channel bit when it should have been a low-order column bit. This patch adds a notion of "columns per stripe", and more clearly deals with the low-order column bits and high-order column bits. The patch also relaxes the granularity check such that it is possible to use interleaving granularities other than the cache line size. The patch also adds a missing M5_CLASS_VAR_USED to the tCK member as it is only used in the debug build for now.
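Roughly, decoding with a "columns per stripe" notion can be pictured as below; the field order is one illustrative mapping, not the controller's only option. The low-order column bits are consumed before the channel bits, so a burst smaller than the interleaving granularity no longer steals a column bit for the channel.

    #include <cstdint>

    using Addr = std::uint64_t;

    struct DecodedAddr { Addr colLow, channel, colHigh, bank, rank, row; };

    // Illustrative decoding with "columns per stripe": low column bits first,
    // then the channel bits, then the remaining (high) column bits. Assumes
    // stripeSize >= burstSize and columnsPerRowBuffer >= columnsPerStripe.
    DecodedAddr decode(Addr addr, Addr burstSize, Addr stripeSize,
                       Addr columnsPerRowBuffer, Addr channels,
                       Addr banks, Addr ranks)
    {
        DecodedAddr d{};
        addr /= burstSize;                        // drop the burst offset
        Addr columnsPerStripe = stripeSize / burstSize;

        d.colLow = addr % columnsPerStripe;       // low-order column bits
        addr /= columnsPerStripe;
        d.channel = addr % channels;              // channel interleaving bits
        addr /= channels;
        d.colHigh = addr % (columnsPerRowBuffer / columnsPerStripe);
        addr /= (columnsPerRowBuffer / columnsPerStripe);
        d.bank = addr % banks;
        addr /= banks;
        d.rank = addr % ranks;
        addr /= ranks;
        d.row = addr;                             // remaining bits select the row
        return d;
    }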
2014-08-13 | mem: Properly set cache block status fields on writebacks | Mitch Hayenga
When a cache line is written back to a lower-level cache, tags->insertBlock() sets various status parameters. However, these status bits were cleared immediately after the call. This patch makes it so that these status fields are not cleared, by moving their assignment outside of the tags->insertBlock() call.
2014-07-28 | mem: refactor LRU cache tags and add random replacement tags | Anthony Gutierrez
This patch implements a new tags class that uses a random replacement policy. These tags prefer to evict invalid blocks first; if none are available, a replacement candidate is chosen at random. The patch factors out the common code in the LRU class and creates a new abstract class, BaseSetAssoc. Any set associative tag class must implement the functionality related to the actual replacement policy in the following methods: accessBlock(), findVictim(), insertBlock(), and invalidate().
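The victim selection described above amounts to something like this sketch (class and function names simplified): prefer an invalid way, otherwise pick a way at random.

    #include <random>
    #include <vector>

    struct CacheBlk { bool valid = false; /* tag, data, status, ... */ };

    // Illustrative random-replacement victim choice for one cache set:
    // prefer an invalid way; otherwise evict a randomly chosen way.
    CacheBlk *findVictim(std::vector<CacheBlk *> &set, std::mt19937 &rng)
    {
        for (CacheBlk *blk : set)
            if (!blk->valid)
                return blk;                              // free way available
        std::uniform_int_distribution<std::size_t> pick(0, set.size() - 1);
        return set[pick(rng)];                           // no free way: random
    }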
2014-06-30 | mem: DRAMPower trace output | Andreas Hansson
This patch adds a DRAMPower flag to enable off-line DRAM power analysis using the DRAMPower tool; a follow-on patch adds a Python script to post-process the output and order it based on time stamps. The long-term goal is to link DRAMPower as a library and provide the commands through function calls to the model rather than first printing and then parsing the commands. At the moment it is also up to the user to ensure that the same DRAM configuration is used by the gem5 controller model and DRAMPower.
2014-06-30 | mem: Add bank and rank indices as fields to the DRAM bank | Andreas Hansson
This patch adds the index of the bank and rank as fields so that we can determine the identity of a given bank (reference or pointer) for the power tracing. We also take the opportunity to clean up the arguments used for identifying the bank when activating.
2014-06-30 | mem: Extend DRAM row bits from 16 to 32 for larger densities | Andreas Hansson
This patch extends the DRAM row bits to 32 to support larger density memories. Additional checks are also added to ensure the row fits in the 32 bits.
2014-05-31 | style: eliminate equality tests with true and false | Steve Reinhardt
Using '== true' in a boolean expression is totally redundant, and using '== false' is pretty verbose (and arguably less readable in most cases) compared to '!'. It's somewhat of a pet peeve, perhaps, but I had some time waiting for some tests to run and decided to clean these up. Unfortunately, SLICC appears not to have the '!' operator, so I had to leave the '== false' tests in the SLICC code.
2014-05-23 | ruby: slicc: remove unused ids DNUCA* | Nilay Vaish
2014-05-23 | ruby: remove old protocol documentation | Nilay Vaish
2014-05-23 | ruby: message buffer: drop dequeue_getDelayCycles() | Nilay Vaish
The functionality of updating and returning the delay cycles would now be performed by the dequeue() function itself.
2014-05-09 | mem: Update DDR3 and DDR4 based on datasheets | Andreas Hansson
This patch makes a more firm connection between the DDR3-1600 configuration and the corresponding datasheet, and also adds a DDR3-2133 and a DDR4-2400 configuration. At the moment there is also an ongoing effort to align the choice of datasheets to what is available in DRAMPower.
2014-05-09 | mem: Add DRAM cycle time | Andreas Hansson
This patch extends the current timing parameters with the DRAM cycle time. This is needed as the DRAMPower tool expects timestamps in DRAM cycles. At the moment we could get away with doing this in a post-processing step as the DRAMPower execution is separate from the simulation run. However, in the long run we want the tool to be called during the simulation, and then the cycle time is needed.
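The conversion the tool needs is a simple division; a sketch, assuming tCK is available in simulator ticks (the helper name is illustrative).

    #include <cstdint>

    using Tick = std::uint64_t;

    // Convert an absolute simulator time into DRAM clock cycles, given the
    // DRAM cycle time tCK expressed in ticks (illustrative helper).
    std::uint64_t ticksToDramCycles(Tick when, Tick tCK)
    {
        return (when + tCK - 1) / tCK;   // round up to the next DRAM cycle edge
    }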
2014-05-09 | mem: Simplify DRAM response scheduling | Andreas Hansson
This patch simplifies the DRAM response scheduling based on the assumption that they are always returned in order.