path: root/src/mem/ruby
Age  Commit message  Author
2015-08-14  ruby: rename variables Addr to addr  (Nilay Vaish)
Avoid clash between type Addr and variable name Addr.
2015-08-14  ruby: Expose MessageBuffers as SimObjects  (Joel Hestness)
Expose MessageBuffers from SLICC controllers as SimObjects that can be manipulated in Python. This patch has numerous benefits:

1) First and foremost, it exposes MessageBuffers as SimObjects that can be manipulated in Python code. This allows parameters to be set and checked in Python code to avoid obfuscating parameters within protocol files. Further, now as SimObjects, MessageBuffer parameters are printed to config output files as a way to track parameters across simulations (e.g. buffer sizes).

2) Cleans up special-case code for responseFromMemory buffers, and aligns their instantiation and use with mandatoryQueue buffers. These two special buffers are the only MessageBuffers that are exposed to components outside of SLICC controllers, and they're both slave ends of these buffers. They should be exposed outside of SLICC in the same way, and this patch does it.

3) Distinguishes buffer-specific parameters from buffer-to-network parameters. Specifically, buffer size, randomization, ordering, recycle latency, and ports are all specific to a MessageBuffer, while the virtual network ID and type are intrinsics of how the buffer is connected to network ports. The former are specified in the Python object, while the latter are specified in the controller *.sm files. Unlike buffer-specific parameters, which may need to change depending on the simulated system structure, buffer-to-network parameters can be specified statically for most or all different simulated systems.
2015-08-14  ruby: Change PerfectCacheMemory::lookup to return pointer  (Joel Hestness)
CacheMemory and DirectoryMemory lookup functions return pointers to entries stored in the memory. Bring PerfectCacheMemory in line with this convention, and clean up the SLICC code generation that existed solely to handle references like the one previously returned by PerfectCacheMemory::lookup.
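As a rough, self-contained illustration of the pointer-returning convention described above (the Entry type, map member, and class name below are invented stand-ins, not the actual Ruby classes):

    #include <cstdint>
    #include <unordered_map>

    using Addr = uint64_t;

    struct Entry { int state = 0; };   // toy stand-in for a cache entry

    struct PerfectCacheLikeMemory
    {
        std::unordered_map<Addr, Entry> m_map;

        // Return a pointer (nullptr on a miss) instead of a reference,
        // matching the CacheMemory/DirectoryMemory convention and letting
        // callers test for presence directly.
        Entry *lookup(Addr addr)
        {
            auto it = m_map.find(addr);
            return it == m_map.end() ? nullptr : &it->second;
        }
    };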
2015-08-14  ruby: Remove the RubyCache/CacheMemory latency  (Joel Hestness)
The RubyCache (CacheMemory) latency parameter is only used for top-level caches instantiated for Ruby coherence protocols. However, the top-level cache hit latency is assessed by the Sequencer as accesses flow through to the cache hierarchy. Further, protocol state machines should be enforcing these cache hit latencies, but RubyCaches do not expose their latency to any existing state machines through the SLICC/C++ interface. Thus, the RubyCache latency parameter is superfluous for all caches. This is confusing for users.

As a step toward pushing L0/L1 cache hit latency into the top-level cache controllers, move their latencies out of the RubyCache declarations and over to their Sequencers. Eventually, these Sequencer parameters should be exposed as parameters to the top-level cache controllers, which should assess the latency.

NOTE: Assessing these latencies in the cache controllers will require modifying each to eliminate instantaneous Ruby hit callbacks in transitions that finish accesses, which is likely a large undertaking.
2015-08-07  base: Declare a type for context IDs  (Andreas Sandberg)
Context IDs used to be declared ad hoc (usually as an int). This changeset introduces a typedef for context IDs and a constant for invalid context IDs.
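A sketch of what such a declaration amounts to; the names follow the description above and the exact header placement is assumed:

    // A dedicated type for context IDs plus an invalid-ID constant,
    // replacing the previous ad hoc use of plain int.
    typedef int ContextID;
    const ContextID InvalidContextID = -1;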
2015-08-03  ruby: Fix checkpointing and restore  (Timothy Jones)
There are two problems with the existing checkpoint and restore code in Ruby. The first is that when the event queue is altered by Ruby during serialization, some events that are currently scheduled cannot be found (e.g. the event to stop simulation that always lives on the queue), causing a panic. The second is that Ruby is sometimes serialized after the memory system, meaning that the dirty data in its cache is flushed back to memory too late and so isn't included in the checkpoint.

These are fixed by implementing memory writeback in Ruby, using the same technique of hijacking the event queue, but first descheduling all events that are currently on it. They are saved, along with their scheduled time, so that the event queue can be faithfully reconstructed after writeback has finished. Events with the AutoDelete flag set will delete themselves when they are descheduled, causing an error when attempting to schedule them again. This is fixed by simply not recording them when taking them off the queue.

Writeback is still implemented using flushing, so the cache recorder object, which is created to generate the trace and manage flushing, is kept around and used during serialization to write the trace to disk.

Committed by: Nilay Vaish <nilay@cs.wisc.edu>
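A heavily simplified, self-contained sketch of the descheduling idea; it models the event queue as a plain vector, and all names (Event, autoDelete, descheduleAll) are made up for illustration rather than taken from the gem5 event-queue API:

    #include <cstdint>
    #include <utility>
    #include <vector>

    using Tick = uint64_t;

    struct Event { Tick when; bool autoDelete; };   // toy event

    // Pull every event off the queue before writeback, remembering
    // (event, scheduled time) pairs so the queue can be rebuilt once
    // writeback has finished. AutoDelete-style events remove themselves
    // when descheduled, so they are deliberately not recorded.
    std::vector<std::pair<Event *, Tick>>
    descheduleAll(std::vector<Event *> &queue)
    {
        std::vector<std::pair<Event *, Tick>> saved;
        for (Event *ev : queue) {
            if (ev->autoDelete)
                delete ev;                       // mirrors self-deletion
            else
                saved.emplace_back(ev, ev->when);
        }
        queue.clear();
        return saved;
    }

    // After writeback: put the surviving events back at their old times.
    void rescheduleAll(std::vector<Event *> &queue,
                       const std::vector<std::pair<Event *, Tick>> &saved)
    {
        for (const auto &p : saved) {
            p.first->when = p.second;
            queue.push_back(p.first);
        }
    }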
2015-08-01  ruby: removed invalid assert in message comparator  (Brad Beckmann)
It is perfectly valid to compare a message with itself, and the greater-than operator should work correctly in that case.
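A toy illustration of why the assert was unnecessary; the field names are assumptions for the sketch, not the actual Ruby Message members:

    #include <cstdint>

    using Tick = uint64_t;

    struct Msg { Tick lastEnqueueTime; uint64_t msgCounter; };   // toy message

    // Comparing a message with itself is legitimate: the operator simply
    // returns false, so no assert on "the two messages differ" is needed.
    bool operator>(const Msg &a, const Msg &b)
    {
        if (a.lastEnqueueTime != b.lastEnqueueTime)
            return a.lastEnqueueTime > b.lastEnqueueTime;
        return a.msgCounter > b.msgCounter;
    }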
2015-07-20  ruby: improved stall and wait debugging  (Brad Beckmann)
Added DPRINTFs and asserts for identifying stall and wait bugs.
2015-07-20  ruby: change router pipeline stages to 2  (David Hashe)
This patch changes the router pipeline from 4 stages to 2. The canonical 4-stage router is conservative, while a lower-latency router with lookahead routing and speculative allocation is well established.
2015-07-20  ruby: change advance_stage for flit_d  (David Hashe)
Sets m_stage.second to the second parameter of the function. Then, for every place where advance_stage is called, adds a cycle to the argument being passed.
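A minimal sketch of the change; the Flit type and member names below are illustrative, not the garnet flit_d class itself:

    #include <cstdint>
    #include <utility>

    using Cycles = uint64_t;

    struct Flit
    {
        // (pipeline stage, cycle at which that stage becomes valid)
        std::pair<int, Cycles> m_stage;

        // m_stage.second is now set directly from the second parameter, so
        // every caller passes "current time plus one cycle" instead of
        // relying on advance_stage to add the cycle itself.
        void advance_stage(int stage, Cycles time)
        {
            m_stage.first = stage;
            m_stage.second = time;
        }
    };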
2015-07-20  ruby: expose access permission to replacement policies  (David Hashe)
This patch adds support that allows the replacement policy to identify each cache block's access permission. This information can be useful when making replacement decisions.
2015-07-20  ruby: adds size and empty apis to the msg buffer stallmap  (David Hashe)
2015-07-20  ruby: fix deadlock bug in banked array resource checks  (David Hashe)
The Ruby banked array resource checks (initiated from SLICC) did a check and allocate at the same time. If a transition needs more than one resource, then it might check/allocate resource #1, then fail to get resource #2. Another transition might then try to get the same resources, but in reverse order. Deadlock. This patch separates resource checking and resource reservation into two steps to avoid deadlock.
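A self-contained sketch of the two-step idea; the method names (tryAccess, reserve) follow the description above, but the class and its implementation are illustrative only:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using Addr = uint64_t;

    struct BankedArraySketch
    {
        std::vector<bool> reservedThisCycle;

        explicit BankedArraySketch(std::size_t banks)
            : reservedThisCycle(banks, false) {}

        std::size_t bank(Addr addr) const
        { return addr % reservedThisCycle.size(); }

        // Step 1: check only. A transition first asks about every resource
        // it needs without claiming any of them.
        bool tryAccess(Addr addr) const
        { return !reservedThisCycle[bank(addr)]; }

        // Step 2: reserve. Only after all checks succeed are the banks
        // actually claimed, so two transitions can no longer each hold one
        // resource while waiting forever for the other.
        void reserve(Addr addr)
        { reservedThisCycle[bank(addr)] = true; }
    };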
2015-07-20  ruby: Fix for stallAndWait bug  (David Hashe)
It was previously possible for a stalled message to be reordered after an incoming message. This patch ensures that any stalled message stays in its original request order.
2015-07-20  ruby: allocate a block in CacheMemory without updating LRU state  (David Hashe)
2015-07-20  ruby: speed up function used for cache walks  (David Hashe)
This patch adds a few helpful functions that allow .sm files to directly invalidate all cache blocks using a trigger queue rather than rely on each individual cache block to be invalidated via requests from the mandatory queue.
2015-07-20  ruby: initialize replacement policies with their own simobjs  (David Hashe)
This is in preparation for other replacement policies that take additional parameters.
2015-07-20  ruby: give access to cache tag/data latencies from SLICC  (David Hashe)
This patch exposes the tag and data array latencies to the SLICC state machines so that they can be used to determine the correct enqueue latency for response messages.
2015-07-20  slicc: support for multiple message types on the same buffer  (David Hashe)
This patch allows SLICC protocols to use more than one message type with a message buffer. For example, you can declare two in ports as such:

    in_port(ResponseQueue_in, ResponseMsg, responseFromDir, rank=3) { ... }
    in_port(tgtResponseQueue_in, TgtResponseMsg, responseFromDir, rank=2) { ... }

2015-07-20  mem: Hit callback delay fix  (David Hashe)
This patch was created by Bihn Pham during his internship at AMD. There is no need to delay hit callback response messages by a cycle because the response latency is already incurred in the Ruby protocol. This ensures correct timing of memory instructions.
2015-07-20  ruby: re-added the addressToInt slicc interface function  (Brad Beckmann)
This helper function is very useful for converting address offsets to integers that can be used for protocol-specific destination mapping.
2015-07-20  ruby: add useful dprints to sequencer  (Brad Beckmann)
Added two data block DPRINTFs that are useful when tracking down data check failures in the Ruby random tester.
2015-07-24  ruby: dma sequencer: removes redundant code  (Brandon Potter)
2015-07-22  ruby: network: NetworkLink inherits from Consumer now.  (Nilay Vaish)
2015-07-10  ruby: replace global g_abs_controls with per-RubySystem var  (Brandon Potter)
This is another step in the process of removing global variables from Ruby to enable multiple RubySystem instances in a single simulation. The list of abstract controllers is per-RubySystem and should be represented that way, rather than as a global. Since this is the last remaining Ruby global variable, the src/mem/ruby/Common/Global.* files are also removed.
2015-07-10  ruby: replace global g_system_ptr with per-object pointers  (Brandon Potter)
This is another step in the process of removing global variables from Ruby to enable multiple RubySystem instances in a single simulation. With possibly multiple RubySystem objects, we can no longer use a global variable to find "the" RubySystem object. Instead, each Ruby component has to carry a pointer to the RubySystem object to which it belongs.
2015-07-10  ruby: replace g_ruby_start with per-RubySystem m_start_cycle  (Brandon Potter)
This patch begins the process of removing global variables from the Ruby source with the goal of eventually allowing users to create multiple Ruby instances in a single simulation. Currently, users cannot do so because several global variables and static members are referenced by the RubySystem object in a way that assumes that there will only ever be a single RubySystem. These need to be replaced with per-RubySystem equivalents. This specific patch replaces the global var g_ruby_start, which is used to calculate throughput statistics for Throttles in simple networks and links in Garnet networks, with a RubySystem instance var m_start_cycle.
2015-07-10  ruby: remove extra whitespace and correct misspelled words  (Brandon Potter)
2015-07-07  sim: Refactor and simplify the drain API  (Andreas Sandberg)
The drain() call currently passes around a DrainManager pointer, which is now completely pointless since there is only ever one global DrainManager in the system. It also contains vestiges from the time when SimObjects had to keep track of their child objects that needed draining.

This changeset moves all of the DrainState handling to the Drainable base class and changes the drain() and drainResume() calls to reflect this. Particularly, the drain() call has been updated to take no parameters (the DrainManager argument isn't needed) and return a DrainState instead of an unsigned integer (there is no point returning anything other than 0 or 1 any more). Drainable objects should return either DrainState::Draining (equivalent to returning 1 in the old system) if they need more time to drain or DrainState::Drained (equivalent to returning 0 in the old system) if they are already in a consistent state. Returning DrainState::Running is considered an error.

Drain done signalling is now done through the signalDrainDone() method in the Drainable class instead of using the DrainManager directly. The new call checks if the state of the object is DrainState::Draining before notifying the drain manager. This means that it is safe to call signalDrainDone() without first checking if the simulator has requested draining. The intention here is to reduce the code needed to implement draining in simple objects.
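A sketch of a simple object under the new API; Drainable, DrainState, drain(), drainResume(), and signalDrainDone() are the interface described above, while the ToyBuffer class, its members, and the header path are assumptions made for illustration:

    #include <deque>

    #include "sim/drain.hh"   // assumed gem5 header providing Drainable/DrainState

    class ToyBuffer : public Drainable
    {
        std::deque<int> pending;

      public:
        // No DrainManager parameter any more; just report our own state.
        DrainState drain() override
        {
            // Drained if already consistent, Draining if we need more time.
            return pending.empty() ? DrainState::Drained
                                   : DrainState::Draining;
        }

        void drainResume() override {}

        // Called from normal operation when work retires; signalDrainDone()
        // is safe to call even if no drain has been requested.
        void retireOne()
        {
            if (!pending.empty())
                pending.pop_front();
            if (pending.empty())
                signalDrainDone();
        }
    };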
2015-07-07  sim: Decouple draining from the SimObject hierarchy  (Andreas Sandberg)
Draining is currently done by traversing the SimObject graph and calling drain()/drainResume() on the SimObjects. This is not ideal when non-SimObjects (e.g., ports) need draining since this means that SimObjects owning those objects need to be aware of this.

This changeset moves the responsibility for finding objects that need draining from SimObjects and the Python-side of the simulator to the DrainManager. The DrainManager now maintains a set of all objects that need draining. To reduce the overhead in classes owning non-SimObjects that need draining, objects inheriting from Drainable now automatically register with the DrainManager. If such an object is destroyed, it is automatically unregistered. This means that drain() and drainResume() should never be called directly on a Drainable object.

While implementing the new functionality, the DrainManager has now been made thread safe. In practice, this means that it takes a lock whenever it manipulates the set of Drainable objects since SimObjects in different threads may create Drainable objects dynamically. Similarly, the drain counter is now an atomic_uint, which ensures that it is manipulated correctly when objects signal that they are done draining.

A nice side effect of these changes is that it makes the drain state changes stricter, which the simulation scripts can exploit to avoid redundant drains.
2015-07-07  sim: Make the drain state a global typed enum  (Andreas Sandberg)
The drain state enum is currently a part of the Drainable interface. The same state machine will be used by the DrainManager to identify the global state of the simulator. Make the drain state a global typed enum to better cater for this usage scenario.
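The resulting enum is small; a sketch consistent with the drain() semantics described above (the comments are added here for illustration):

    // Global typed enum shared by Drainable objects and the DrainManager.
    enum class DrainState {
        Running,    // simulation running normally
        Draining,   // a drain has been requested; some objects need more time
        Drained     // everything is in a consistent, checkpointable state
    };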
2015-07-07  sim: Refactor the serialization base class  (Andreas Sandberg)
Objects that can be serialized are supposed to inherit from the Serializable class. This class is meant to provide a unified API for such objects. However, so far it has mainly been used by SimObjects due to some fundamental design limitations. This changeset redesigns the serialization interface to make it more generic and hide the underlying checkpoint storage. Specifically:

* Add a set of APIs to serialize into a subsection of the current object. Previously, objects that needed this functionality would use ad-hoc solutions using nameOut() and section name generation. In the new world, an object that implements the interface has the methods serializeSection() and unserializeSection() that serialize into a named /subsection/ of the current object. Calling serialize() serializes an object into the current section.

* Move the name() method from Serializable to SimObject as it is no longer needed for serialization. The fully qualified section name is generated by the main serialization code on the fly as objects serialize sub-objects.

* Add a scoped ScopedCheckpointSection helper class. Some objects need to serialize data structures, that are not deriving from Serializable, into subsections. Previously, this was done using nameOut() and manual section name generation. To simplify this, this changeset introduces a ScopedCheckpointSection() helper class. When this class is instantiated, it adds a new /subsection/ and subsequent serialization calls during the lifetime of this helper class happen inside this section (or a subsection in case of nested sections).

* The serialize() call is now const, which prevents accidental state manipulation during serialization. Objects that rely on modifying state can use the serializeOld() call instead. The default implementation simply calls serialize(). Note: The old-style calls need to be explicitly called using the serializeOld()/serializeSectionOld() style APIs. These are used by default when serializing SimObjects.

* Both the input and output checkpoints now use their own named types. This hides the underlying checkpoint implementation from objects that need checkpointing and makes it easier to change the underlying checkpoint storage code.
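A sketch of the new interface on a toy object; the const serialize()/unserialize() signatures, the named checkpoint types, and ScopedCheckpointSection follow the description above, while the ToyState class, its members, and the use of the paramOut/paramIn and SERIALIZE_SCALAR helpers are assumptions made for illustration:

    #include "sim/serialize.hh"   // assumed gem5 header declaring Serializable

    class ToyState : public Serializable
    {
        int counter = 0;
        int helperValue = 0;   // state we want in a named subsection

      public:
        void serialize(CheckpointOut &cp) const override
        {
            SERIALIZE_SCALAR(counter);

            // Everything serialized while 'sec' is alive goes into a named
            // subsection of the current object, replacing manual nameOut()
            // and section-name generation.
            ScopedCheckpointSection sec(cp, "helper");
            paramOut(cp, "value", helperValue);
        }

        void unserialize(CheckpointIn &cp) override
        {
            UNSERIALIZE_SCALAR(counter);
            ScopedCheckpointSection sec(cp, "helper");
            paramIn(cp, "value", helperValue);
        }
    };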
2015-07-04  ruby: drop NetworkMessage class  (Nilay Vaish)
This patch drops the NetworkMessage class. The relevant data members and functions have been moved to the Message class, which was the parent of NetworkMessage.
2015-07-04  ruby: remove message buffer node  (Nilay Vaish)
This structure's only purpose was to provide a comparison function for ordering messages in the MessageBuffer. The comparison function is now being moved to the Message class itself. So we no longer require this structure.
2015-07-03  mem: Split WriteInvalidateReq into write and invalidate  (Andreas Hansson)
WriteInvalidateReq ensures that a whole-line write does not incur the cost of first doing a read exclusive, only to later overwrite the data. This patch splits the existing WriteInvalidateReq into a WriteLineReq, which is done locally, and an InvalidateReq that is sent out throughout the memory system. The WriteLineReq re-uses the normal WriteResp.

The change allows us to better express the difference between the cache that is performing the write, and the ones that are merely invalidating. As a consequence, we no longer have to rely on the isTopLevel flag. Moreover, the actual memory in the system does not see the initial write, only the writeback. We were marking the written line as dirty already, so there is really no need to also push the write all the way to the memory.

The overall flow of the write-invalidate operation remains the same, i.e. the operation is only carried out once the response for the invalidate comes back. This patch adds the InvalidateResp for this very reason.
2015-06-25  ruby: message: remove a data member added by mistake  (Nilay Vaish)
I (Nilay) had mistakenly added a data member to the Message class in revision c1694b4032a6. The data member is being removed.
2015-06-25  Ruby: Remove assert in RubyPort retry list logic  (Jason Power)
Remove the assert when adding a port to the RubyPort retry list. Instead of asserting, just ignore the added port, since it's already on the list. Without this patch, Ruby+detailed fails for even the simplest tests.
2015-05-26  ruby: Deprecation warning for RubyMemoryControl  (Andreas Hansson)
A step towards removing RubyMemoryControl and shifting users to DRAMCtrl. The latter is faster, more representative, very versatile, and is integrated with power models.
2015-05-19  ruby: Fix RubySystem warm-up and cool-down scope  (Joel Hestness)
The processes of warming up and cooling down Ruby caches are simulation-wide processes, not just RubySystem instance-specific processes. Thus, the warm-up and cool-down variables should be globally visible to any Ruby components participating in either process. Make these variables static members and track the warm-up and cool-down processes as appropriate.

This patch also has two side benefits:

1) It removes references to the RubySystem g_system_ptr, which are problematic for allowing multiple RubySystem instances in a single simulation. Warm-up and cool-down variables being static (global) reduces the need for instance-specific dereferences through the RubySystem.

2) From the AbstractController, it removes local RubySystem pointers, which are used inconsistently with other uses of the RubySystem: 11 other uses reference the RubySystem with the g_system_ptr. Only sequencers have local pointers.
2015-04-29  ruby: set: replace long by unsigned long  (Nilay Vaish)
UBSan complains about a negative value being shifted.
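A two-line illustration of the complaint; this is generic C++ behaviour rather than Ruby-specific code:

    // Left-shifting a negative signed value is undefined behaviour, which
    // UBSan flags; the unsigned equivalent is well defined, so the Set
    // bit-mask arithmetic switches from long to unsigned long.
    long          s = ~0L;         // value is -1; "s << 1" would be flagged
    unsigned long u = ~0UL << 1;   // same bit pattern, well-defined shift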
2015-04-13  ruby: allow restoring from checkpoint when using DRAMCtrl  (Lena Olson)
Restoring from a checkpoint with ruby + the DRAMCtrl memory model was not working, because ruby and DRAMCtrl disagreed on the current tick during warmup. Since there is no reason to do timing requests during warmup, use functional requests instead. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-03-23  mem: rename Locked/LOCKED to LockedRMW/LOCKED_RMW  (Steve Reinhardt)
Makes x86-style locked operations even more distinct from LLSC operations. Using "locked" by itself should be obviously ambiguous now.
2015-03-02  mem: Split port retry for all different packet classes  (Andreas Hansson)
This patch fixes a long-standing issue with the port flow control. Before this patch the retry mechanism was shared between all different packet classes. As a result, a snoop response could get stuck behind a request waiting for a retry, even if the send/recv functions were split. This caused message-dependent deadlocks in stress-test scenarios.

The patch splits the retry into one per packet (message) class. Thus, sendTimingReq has a corresponding recvReqRetry, sendTimingResp has recvRespRetry, etc. Most of the changes to the code involve simply clarifying what type of request a specific object was accepting.

The biggest change in functionality is in the cache downstream packet queue, facing the memory. This queue was shared by requests and snoop responses, and it is now split into two queues, each with their own flow control, but the same physical MasterPort. These changes fix the previously seen deadlocks.
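A toy illustration of per-class retry state; the method names mirror the send/receive pairs mentioned above (sendTimingReq/recvReqRetry, sendTimingResp/recvRespRetry), but the struct itself is not the gem5 port API:

    // One "waiting for retry" flag per packet class instead of a single
    // shared flag, so a snoop response can never be stuck behind a request
    // that is still waiting for its own retry.
    struct PortRetryState
    {
        bool waitingForReqRetry  = false;
        bool waitingForRespRetry = false;

        void recvReqRetry()  { waitingForReqRetry  = false; /* resend the request  */ }
        void recvRespRetry() { waitingForRespRetry = false; /* resend the response */ }
    };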
2015-02-26  Ruby: Update backing store option to propagate through to all RubyPorts  (Jason Power)
Previously, the user would have to manually set access_backing_store=True on all RubyPorts (Sequencers) in the config files. Now, instead there is one global option that each RubyPort checks on initialization. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-01-22  mem: Always use SenderState for response routing in RubyPort  (Andreas Hansson)
This patch aligns how the response routing is done in the RubyPort, using the SenderState for both memory and I/O accesses. Before this patch, only the I/O used the SenderState, whereas the memory accesses relied on the src field in the packet. With this patch we shift to using SenderState in both cases, thus not relying on the src field any longer.
2015-01-22  mem: Clean up Request initialisation  (Andreas Hansson)
This patch tidies up how we create and set the fields of a Request. In essence it tries to use the constructor where possible (as opposed to setPhys and setVirt), thus avoiding spreading the information across a number of locations. In fact, setPhys is made private as part of this patch, and a number of places that previously called setVirt now use the appropriate constructor instead.
2014-12-02  mem: Add const getters for write packet data  (Andreas Hansson)
This patch takes a first step in tightening up how we use the data pointer in write packets. A const getter is added for the pointer itself (getConstPtr), and a number of member functions are also made const accordingly. In a range of places throughout the memory system the new member is used. The patch also removes the unused isReadWrite function.
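A short sketch of using the new accessor; Packet::getConstPtr is the const getter described above, while the surrounding function and header path are illustrative:

    #include <cstdint>

    #include "mem/packet.hh"   // assumed gem5 header declaring Packet

    // Read-only users fetch the data through getConstPtr<T>(), so the
    // compiler rejects accidental modification of the packet's payload.
    void inspectWriteData(const Packet *pkt)
    {
        const uint8_t *data = pkt->getConstPtr<uint8_t>();
        (void)data;   // e.g. dump or checksum the bytes here
    }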
2014-12-02  mem: Remove null-check bypassing in Packet::getPtr  (Andreas Hansson)
This patch removes the parameter that enables bypassing the null check in the Packet::getPtr method. A number of call sites assume the value to be non-null. The one odd case is the RubyTester, which issues zero-sized prefetches(!), and despite being reads they had no valid data pointer. This is now fixed, but the size oddity remains (unless anyone objects or has any good suggestions). Finally, in the Ruby Sequencer, appropriate checks are made for flush packets as they have no valid data pointer.
2014-11-12  mem: Delete unused variable in Garnet NetworkLink  (Mitch Hayenga)
With recent changes OSX clang compilation fails due to an unused variable.
2014-11-06  ruby: provide a backing store  (Nilay Vaish)
Ruby's functional accesses are not guaranteed to succeed as of now. While this is not a problem for the protocols that are currently in the mainline repo, it seems that coherence protocols for GPUs rely on a backing store to supply the correct data. The aim of this patch is to make this backing store configurable, i.e. it comes into play only when a particular option, --access-backing-store, is invoked.

The backing store has been there since M5 and GEMS were integrated. The only difference is that earlier the system used to maintain the backing store and Ruby's copy was write-only. Sometime last year, we moved to data being supplied by Ruby in SE mode simulations. And now we have patches on the reviewboard which remove Ruby's copy of memory altogether and rely completely on the system's memory to supply data.

This patch adds back a SimpleMemory member to RubySystem. This member is used only if the option access-backing-store is set to true. By default, the memory would not be accessed.