Only a small number of prefetches were issued, because the positive
feedback mechanism was not implemented. This commit adds a new
action, po_observeHit, which notifies the RubyPrefetcher of
successful prefetches and resets the prefetch flag.
When a cache line was replaced by a prefetch, the wrong queue could
be stalled. This commit adds a new event PF_L1_Replacement, which
stalls the correct queue.
The behavior when receiving a prefetch or instruction fetch while
in PF_IS_I (prefetch caused GETs, but got invalidated before the
response was received) was undefined. This was changed to drop the
prefetch request or change the state to non-prefetch, respectively.
This behavior is analogous to IS_I (non-prefetch caused GETs, but
got invalidated before the response was received) and the data case,
respectively.
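A rough sketch of the positive-feedback part, in plain C++ rather than
SLICC; the per-line prefetch flag and the hook names below are
illustrative, not the exact RubyPrefetcher/controller interface:
    #include <cstdint>
    #include <unordered_set>

    using Addr = uint64_t;

    // Hypothetical prefetcher hooks; gem5's RubyPrefetcher has hooks of
    // this flavor, but the exact names are not guaranteed here.
    struct SketchPrefetcher
    {
        void observePfHit(Addr line) { ++usefulPrefetches; }  // positive feedback
        void observeMiss(Addr line) {}  // train stride tables, issue prefetches
        uint64_t usefulPrefetches = 0;
    };

    struct SketchL1Controller
    {
        SketchPrefetcher pf;
        std::unordered_set<Addr> prefetchedLines;  // lines installed by prefetches

        // Analogous to the new po_observeHit action: on a demand hit to a
        // line that a prefetch installed, notify the prefetcher and clear
        // the flag.
        void onDemandHit(Addr line)
        {
            auto it = prefetchedLines.find(line);
            if (it != prefetchedLines.end()) {
                pf.observePfHit(line);
                prefetchedLines.erase(it);
            }
        }

        void onDemandMiss(Addr line) { pf.observeMiss(line); }
    };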
In my local branch, a major (20+%) performance increase can be
observed in SPEC2006 gobmk and leslie3d when the prefetcher is
enabled. Some other benchmarks, such as bwaves, GemsFDTD, sphinx and
wrf, show smaller (~10%) performance increases. Unfortunately, the
performance in most other SPEC benchmarks is still poor, most likely
because the prefetcher does not detect strides quickly or often
enough. To push the change in a timely manner (most benchmarks have
runtimes on the order of days on my machine, even with the smallest
parameters), I have only run gobmk on the base repository plus this
commit after checkout. The results match those of my local branch.
Change-Id: I9903a2fcd02060ea5e619b409f31f7d6fac47ae8
Reviewed-on: https://gem5-review.googlesource.com/8801
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Swapnil Haria <swapnilster@gmail.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
This changeset fixes a bug that was affecting the MOESI_CMP_token
protocol where setting the next timeout required an absolute tick in
the future.
Change-Id: Ibfdb59354e13c7e552cb3389e71bda010f333249
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/7163
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
The function map_Address_to_DMA was used to route responses to the
first (and assumed to be the only) DMA engine in the system. This
function is now unused as protocols handle responses and route them to
the right DMA engine.
Change-Id: I2fba913cf2f12321d1a1e38e7ee85bdf26b8a47a
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/7162
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
Previously the MESI_Two_Level protocol supported systems with a single
DMA engine and responses from the directory to DMA requests were
routed back to the only DMA engine. This changeset adds support for
multiple DMA engines in the system by routing the response to the DMA
engine that originally sent the request.
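Conceptually the change looks like the following C++ sketch (the
message and field names are illustrative, not the actual SLICC message
definitions): the request carries the identity of the issuing DMA
engine, and the directory echoes it into the response's destination
instead of assuming DMA engine 0.
    #include <cstdint>

    using Addr = uint64_t;
    using MachineID = uint32_t;  // stand-in for a {machine type, version} pair

    struct DmaRequestMsg {
        Addr      addr;
        MachineID requestor;    // which DMA engine issued the request
    };

    struct DmaResponseMsg {
        Addr      addr;
        MachineID destination;  // routed back to the originating engine
    };

    // Directory side: echo the requestor into the response destination
    // rather than hard-coding "the first DMA engine in the system".
    DmaResponseMsg makeDmaResponse(const DmaRequestMsg &req)
    {
        return DmaResponseMsg{req.addr, req.requestor};
    }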
Change-Id: I10ceda682ea29746636862ec8ef2a9c4220ca045
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/7161
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
Change-Id: Ica08e93f3873a7eafd02fe7d44c3bdbf0ce7f6b7
Reviewed-on: https://gem5-review.googlesource.com/5565
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
Previously the directory covered a flat address range that always
started from address 0. This change adds a vector of address ranges,
with interleaving and hashing, that each directory keeps track of,
plus the flexibility needed to support systems with non-contiguous
memory ranges.
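A simplified C++ sketch of the ownership check this enables; the
interleaving match below is deliberately minimal and only stands in
for gem5's AddrRange interleaving/hashing support:
    #include <cstdint>
    #include <vector>

    using Addr = uint64_t;

    // Simplified interleaved range: [start, end) plus a match on a few
    // address bits, so several directories can share one region of memory.
    struct SimpleRange {
        Addr start = 0, end = 0;
        unsigned intlvBit = 0;    // lowest address bit used for interleaving
        unsigned intlvBits = 0;   // number of interleave bits (0 = none)
        unsigned intlvMatch = 0;  // which interleaved slice this range owns

        bool contains(Addr a) const {
            if (a < start || a >= end)
                return false;
            if (intlvBits == 0)
                return true;
            unsigned slice = (a >> intlvBit) & ((1u << intlvBits) - 1);
            return slice == intlvMatch;
        }
    };

    // Each directory now keeps a vector of such ranges instead of assuming
    // one flat range starting at address 0.
    bool directoryOwns(const std::vector<SimpleRange> &ranges, Addr a)
    {
        for (const auto &r : ranges)
            if (r.contains(a))
                return true;
        return false;
    }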
Change-Id: I6ea1c629bdf4c5137b7d9c89dbaf6c826adfd977
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2903
Reviewed-by: Bradford Beckmann <brad.beckmann@amd.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
Multiple outstanding DMA requests introduced new DMA states that were
not accounted for in the SLICC code. This patch implements the missing
DMA state changes in the MOESI_CMP_directory protocol.
Change-Id: I700d441d76556b7e77e0d507904af6ec6ba59cc2
Signed-off-by: Michael LeBeane <michael.lebeane@amd.com>
Reviewed-on: https://gem5-review.googlesource.com/2380
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Michael LeBeane <Michael.Lebeane@amd.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
SequencerMsg is autogenerated by the SLICC scripts, and its
MessageSizeType is initialized to the maximum enum value by default.
The DMASequencer pushes this message to the mandatory queue, and since
the MessageSizeType is effectively uninitialized, the
string_to_MessageSizeType() function used by traces to print the
message fails with a panic. This patch avoids the problem by
initializing the MessageSizeType of SequencerMsg to Request_Control.
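The fix boils down to giving the size field a sane default instead of
the end-of-enum sentinel, roughly as in this simplified sketch (the
enum is abbreviated, not the full generated list):
    #include <cstdint>

    // Abbreviated stand-in for the generated MessageSizeType enum.
    enum class MessageSizeType : uint8_t {
        Control,
        Request_Control,
        Response_Data,
        NUM   // end-of-enum sentinel; not a printable value
    };

    struct SequencerMsgSketch {
        uint64_t lineAddress = 0;
        // Default to Request_Control so the to-string conversion used when
        // tracing never sees an out-of-range sentinel value.
        MessageSizeType messageSize = MessageSizeType::Request_Control;
    };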
|
|
DMA sequencers and protocols can currently only issue one DMA access at
a time. This patch implements the necessary functionality to support
multiple outstanding DMA requests in Ruby.
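Conceptually, the single in-flight request slot becomes a table of
outstanding requests, keyed here by line address for illustration (the
names are not the actual DMASequencer interface):
    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>

    using Addr = uint64_t;

    struct OutstandingDmaRequest {
        Addr     start = 0;
        uint64_t bytesLeft = 0;
        bool     isWrite = false;
    };

    class SketchDmaSequencer {
      public:
        // Accept a new request as long as nothing is in flight for the same
        // line, instead of rejecting whenever any request is outstanding.
        bool startRequest(Addr line, const OutstandingDmaRequest &req) {
            return m_outstanding.emplace(line, req).second;
        }

        // Completion looks up and erases the matching entry.
        void completeRequest(Addr line) { m_outstanding.erase(line); }

        std::size_t outstandingCount() const { return m_outstanding.size(); }

      private:
        std::unordered_map<Addr, OutstandingDmaRequest> m_outstanding;
    };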
|
|
Over the past six years, we have realized that this protocol is
essentially used to run the Garnet network in a standalone manner and
to feed standard synthetic traffic patterns through it.
|
|
Allow usage of the packet class in Ruby for convenience. This may be
used to access members of the packet/request class (e.g., via helper
functions) and/or to push protocol-specific information into the
packet's SenderState without needing to modify SLICC types and
protocols in multiple locations.
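As an illustration of the intent only, using stand-in types rather
than gem5's actual Packet/SenderState classes: a helper callable from
generated protocol code can attach protocol-specific state to a packet
without SLICC having to know the concrete type.
    #include <memory>
    #include <vector>

    // Small stand-in for Packet::SenderState: a stack of opaque,
    // caller-defined records carried along with the packet.
    struct SketchSenderState {
        virtual ~SketchSenderState() = default;
    };

    struct SketchPacket {
        std::vector<std::unique_ptr<SketchSenderState>> senderState;
    };

    // Protocol-specific record; SLICC code would only ever see the helper
    // function below, not this type.
    struct ProtocolInfo : SketchSenderState {
        int requestorId = -1;
    };

    inline void
    attachProtocolInfo(SketchPacket &pkt, int requestorId)
    {
        auto info = std::make_unique<ProtocolInfo>();
        info->requestorId = requestorId;
        pkt.senderState.push_back(std::move(info));
    }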
|
|
|
|
|
|
This patch adds support for write-combining in ruby.
|
|
mem: support for gpu-style RMWs in ruby
This patch adds support for GPU-style read-modify-write (RMW)
operations in Ruby. Such atomic operations are traditionally executed
at the memory controller (instead of through an L1 cache using
cache-line locking).
Currently, this patch works by propagating operation functors through
the memory system.
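The functor idea can be sketched as a small callable that travels with
the request and is applied to the target bytes at the point where the
atomic is serviced; the simplified, self-contained version below is
illustrative rather than the actual gem5 interface.
    #include <cstdint>
    #include <cstring>

    // Simplified atomic-operation functor: the operation travels with the
    // request and is applied to the addressed bytes where the RMW is
    // executed (e.g., at the memory controller rather than under an L1
    // line lock).
    struct AtomicOpFunctorSketch {
        virtual void operator()(uint8_t *mem) = 0;
        virtual ~AtomicOpFunctorSketch() = default;
    };

    template <typename T>
    struct AtomicAddSketch : AtomicOpFunctorSketch {
        T operand;
        explicit AtomicAddSketch(T v) : operand(v) {}
        void operator()(uint8_t *mem) override {
            T val;
            std::memcpy(&val, mem, sizeof(T));
            val += operand;
            std::memcpy(mem, &val, sizeof(T));
        }
    };

    // At the point of service, the memory-side code simply invokes the
    // functor on the addressed bytes.
    void serviceRmw(AtomicOpFunctorSketch &op, uint8_t *location)
    {
        op(location);
    }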
|
|
This patch adds support for marking memory requests/packets with
attributes defined in HSA, such as memory order and scope.
|
|
This patch is imported from reviewboard patch 2551 by Nilay.
This patch moves from a dynamically defined MachineType to a
statically defined one. The need for this change arose because a
dynamically defined type prevents us from having types for which no
machine definition exists.
The following changes have been made:
i. each machine definition now uses a type from the MachineType enumeration
instead of any random identifier. This required changing the grammar and the
*.sm files.
ii. MachineType enumeration defined statically in RubySlicc_Exports.sm.
* * *
normal protocol fixes for nilay's parser machine type fix
|
|
This patch is imported from reviewboard patch 2550 by Nilay.
It was possible to specify multiple machine types with a single state machine.
This seems unnecessary and is being removed.
|
|
Miscellaneous changes now that Address has become Addr, including an
int-to-address utility function.
|
|
The BoolVec typedef and the insertion operator overload simplify the
use of vectors of type bool.
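For reference, the addition is roughly of this shape (a typedef plus a
stream insertion operator; the exact output format is the part most
likely to differ from the real code):
    #include <ostream>
    #include <vector>

    typedef std::vector<bool> BoolVec;

    // Print a BoolVec as a compact sequence of 0/1 values.
    inline std::ostream &
    operator<<(std::ostream &os, const BoolVec &bv)
    {
        for (bool b : bv)
            os << (b ? '1' : '0');
        return os;
    }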
|
|
|
|
Changeset 4872dbdea907 replaced Address with Addr, but did not update
the print statements. As a result, addresses that were previously
printed in hex along with their line address were now being printed in
decimal. This patch adds a function printAddress(Addr) that can be
used to print an address in hex along with its line address. This
function has been put to use in some places; elsewhere, the code has
been changed to print just the address in hex.
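A sketch of such a helper, assuming a 64-byte cache line for
illustration (the real helper presumably derives the line address from
Ruby's configured block size):
    #include <cstdint>
    #include <sstream>
    #include <string>

    using Addr = uint64_t;

    // Format an address in hex together with its line address,
    // e.g. "[0x2a743, line 0x2a740]".
    inline std::string
    printAddress(Addr addr, Addr lineBytes = 64)
    {
        std::ostringstream out;
        Addr lineAddr = addr & ~(lineBytes - 1);
        out << std::hex << "[0x" << addr << ", line 0x" << lineAddr << "]";
        return out.str();
    }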
|
|
Some blocks in MOESI_hammer were not getting deallocated when they
were set to an idle state (e.g. by invalidate or other_getx/s
messages). While functionally correct, this caused some bad effects on
performance, such as blocks in I in the L1s getting sent to the L2
upon eviction, in turn evicting valid blocks. Also, if a valid block
was in the LRU position, it could be evicted rather than a block in I.
This patch adds the missing deallocations.
Committed by: Nilay Vaish<nilay@cs.wisc.edu>
|
|
This patch changes MessageBuffer and TimerTable, two structures used
for buffering messages by components in Ruby. These structures no
longer maintain pointers to clock objects. Their functions have been
changed to take the current time, in Ticks, as input. Similarly, these
structures no longer operate on Cycle-valued latencies for the
different operations; the components invoking the relevant functions
must now provide these latencies, also in Ticks.
I felt the need for these changes while trying to speed up ruby. The ultimate
aim is to eliminate Consumer class and replace it with an EventManager object in
the MessageBuffer and TimerTable classes. This object would be used for
scheduling events. The event itself would contain information on the object and
function to be invoked.
In hindsight, it seems I should have done this while I was moving away from use
of a single global clock in the memory system. That change led to introduction
of clock objects that replaced the global clock object. It never crossed my
mind that having clock object pointers is not a good design. And now I really
don't like the fact that we have separate consumer, receiver and sender
pointers in message buffers.
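At the interface level, the change means callers now pass times and
latencies in Ticks instead of the buffer consulting a clock object; a
simplified, signature-level sketch (not the full MessageBuffer API):
    #include <cstdint>
    #include <map>
    #include <memory>

    using Tick = uint64_t;
    struct Message {};
    using MsgPtr = std::shared_ptr<Message>;

    class SketchMessageBuffer {
      public:
        // The caller supplies the current time and the delay, both already
        // in Ticks, having converted any Cycles latency with its own clock
        // (e.g. delta = cyclesToTicks(latency)).
        void enqueue(MsgPtr msg, Tick current_time, Tick delta) {
            m_ready.emplace(current_time + delta, msg);
        }

        bool isReady(Tick current_time) const {
            return !m_ready.empty() && m_ready.begin()->first <= current_time;
        }

      private:
        std::multimap<Tick, MsgPtr> m_ready;  // messages keyed by ready time
    };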
|
|
|
|
Currently the sequencer calls the function setMRU that updates the replacement
policy structures with the first level caches. While functionally this is
correct, the problem is that this requires calling findTagInSet() which is an
expensive function. This patch removes the calls to setMRU from the sequencer.
All controllers should now update the replacement policy on their own.
The set and the way index for a given cache entry can be found within the
AbstractCacheEntry structure. Use these indices to update the
replacement policy structures.
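The shape of the change, in simplified form: the entry remembers its
set and way when it is allocated, so touching the replacement state is
an indexed update rather than a tag search (the class and member names
below are illustrative):
    #include <cstdint>
    #include <vector>

    using Tick = uint64_t;

    struct SketchCacheEntry {
        // Recorded when the entry is allocated (cf. AbstractCacheEntry).
        uint32_t set = 0;
        uint32_t way = 0;
    };

    class SketchCacheMemory {
      public:
        SketchCacheMemory(uint32_t sets, uint32_t ways)
            : m_lastTouch(sets, std::vector<Tick>(ways, 0)) {}

        // Called from the controller's own actions instead of from the
        // Sequencer: no findTagInSet(), just an indexed update.
        void setMRU(const SketchCacheEntry &e, Tick now) {
            m_lastTouch[e.set][e.way] = now;
        }

      private:
        std::vector<std::vector<Tick>> m_lastTouch;  // LRU-style timestamps
    };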
|
|
MessageBuffer is now a SimObject. There were protocols that still
declared some of the message buffers as variables of the controller,
rather than as input parameters, which required special handling for
these variables in the SLICC compiler. This patch changes this: all
message buffers are now declared as input parameters.
|
|
|
|
|
|
Currently the sequencer calls the function setMRU that updates the replacement
policy structures with the first level caches. While functionally this is
correct, the problem is that this requires calling findTagInSet() which is an
expensive function. This patch removes the calls to setMRU from the sequencer.
All controllers should now update the replacement policy on their own.
The set and the way index for a given cache entry can be found within the
AbstractCacheEntry structure. Use these indices to update the
replacement policy structures.
|
|
This is in preparation for adding a second argument to the lookup
function of the CacheMemory class. The change to the *.sm files was
made using the following sed command:
sed -i 's/\[\([0-9A-Za-z._()]*\)\]/.lookup(\1)/' src/mem/protocol/*.sm
|
|
This patch eliminates the type Address defined by the ruby memory system.
This memory system would now use the type Addr that is in use by the
rest of the system.
|
|
Avoid clash between type Addr and variable name Addr.
|
|
|
|
Expose MessageBuffers from SLICC controllers as SimObjects that can be
manipulated in Python. This patch has numerous benefits:
1) First and foremost, it exposes MessageBuffers as SimObjects that can be
manipulated in Python code. This allows parameters to be set and checked in
Python code to avoid obfuscating parameters within protocol files. Further, now
as SimObjects, MessageBuffer parameters are printed to config output files as a
way to track parameters across simulations (e.g. buffer sizes).
2) Cleans up special-case code for responseFromMemory buffers, and aligns their
instantiation and use with mandatoryQueue buffers. These two special buffers
are the only MessageBuffers that are exposed to components outside of SLICC
controllers, and they're both slave ends of these buffers. They should be
exposed outside of SLICC in the same way, and this patch does it.
3) Distinguishes buffer-specific parameters from buffer-to-network parameters.
Specifically, buffer size, randomization, ordering, recycle latency, and ports
are all specific to a MessageBuffer, while the virtual network ID and type are
intrinsics of how the buffer is connected to network ports. The former are
specified in the Python object, while the latter are specified in the
controller *.sm files. Unlike buffer-specific parameters, which may need to
change depending on the simulated system structure, buffer-to-network
parameters can be specified statically for most or all different simulated
systems.
|
|
CacheMemory and DirectoryMemory lookup functions return pointers to
entries stored in the memory. Bring PerfectCacheMemory in line with
this convention, and clean up the SLICC code generation that was in
place solely to handle references like the one returned by
PerfectCacheMemory::lookup.
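In essence, the convention being aligned to is the following (heavily
simplified; the real structures are the SLICC-facing template
classes):
    #include <cstdint>
    #include <unordered_map>

    using Addr = uint64_t;

    template <typename Entry>
    class SketchPerfectCacheMemory {
      public:
        // Return a pointer to the stored entry (nullptr if absent),
        // matching the CacheMemory/DirectoryMemory convention, instead of
        // a reference that needed special-case handling in the SLICC code
        // generator.
        Entry *lookup(Addr addr) {
            auto it = m_map.find(addr);
            return it == m_map.end() ? nullptr : &it->second;
        }

      private:
        std::unordered_map<Addr, Entry> m_map;
    };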
|
|
1. Eliminate state NP in the L0 and L1 caches: the two states 'NP'
and 'I' both mean that the cache block is not present in the cache.
'I' also means that the cache entry has been allocated. This causes
problems when we do not correctly initialize the cache entry when it
is re-used. Hence, this patch eliminates the state NP altogether.
Every time a new block comes into the cache, a cache entry is
allocated; every time a block leaves, the corresponding entry is
deallocated.
2. Separate transient state for instruction fetches: purely for
accounting purposes.
3. Drop state IS_I in the L1 cache and the message type STALE_DATA:
when an invalidation was received for a block in IS, the block used to
be moved to IS_I. This meant that the data arriving later would be
used but not stored, since the controller lost the permissions after
gaining them. This state is being dropped, and invalidation messages
are now not processed until the data has arrived. This also means that
the STALE_DATA type is no longer required.
|
|
The level 2 controller has a bug. In one particular action, the data
block was copied from a message irrespective of whether the block was
dirty or not. In cases where the L1 sends no data, the value copied
was incorrect.
|
|
For many years the SLICC symbol table has supported overloaded
functions in external classes. This patch extends that support to
functions that are not part of classes (i.e., that have no parent).
For example, this support allows SLICC to understand that
mapAddressToRange is overloaded and that the NodeID is an optional
parameter.
|
|
|
|
The Ruby banked array resource checks (initiated from SLICC) did a check and
allocate at the same time. If a transition needs more than one resource, then
it might check/allocate resource #1, then fail to get resource #2. Another
transition might then try to get the same resources, but in reverse order.
Deadlock.
This patch separates resource checking and resource reservation into two
steps to avoid deadlock.
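A minimal sketch of the two-phase idea: a transition first checks that
every banked-array resource it needs is free, and only if all checks
pass does it reserve them, so two transitions can no longer each hold
one resource while waiting for the other (names are illustrative):
    #include <utility>
    #include <vector>

    class SketchBankedArray {
      public:
        explicit SketchBankedArray(unsigned banks) : m_busy(banks, false) {}

        // Phase 1: a pure check with no side effects.
        bool tryAccess(unsigned bank) const { return !m_busy[bank]; }

        // Phase 2: reservation, done only after every needed check passed.
        void reserve(unsigned bank) { m_busy[bank] = true; }
        void release(unsigned bank) { m_busy[bank] = false; }

      private:
        std::vector<bool> m_busy;
    };

    // All-or-nothing acquisition of the resources a transition needs.
    inline bool
    tryTransition(const std::vector<std::pair<SketchBankedArray*,
                                              unsigned>> &needs)
    {
        for (const auto &n : needs)              // check everything first ...
            if (!n.first->tryAccess(n.second))
                return false;                    // stall; nothing was reserved
        for (const auto &n : needs)              // ... then reserve everything
            n.first->reserve(n.second);
        return true;
    }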
|
|
Add support for acquire and release requests. These synchronization operations
are commonly supported by several modern instruction sets.
|
|
|
|
This patch adds a few helpful functions that allow .sm files to directly
invalidate all cache blocks using a trigger queue rather than rely on each
individual cache block to be invalidated via requests from the mandatory
queue.
|
|
This patch exposes the tag and data array latencies to the SLICC
state machines so that they can be used to determine the correct
enqueue latency for response messages.
|
|
This patch allows SLICC protocols to use more than one message type with a
message buffer. For example, you can declare two in ports as such:
in_port(ResponseQueue_in, ResponseMsg, responseFromDir, rank=3) { ... }
in_port(tgtResponseQueue_in, TgtResponseMsg, responseFromDir, rank=2) { ... }
|
|
This helper function is very useful for converting address offsets to
integers that can be used for protocol-specific destination mapping.
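A sketch of the kind of helper meant here (the names and signatures
are illustrative, not the actual SLICC utility): take the offset of an
address from a base and return it as a plain integer that a protocol
can fold into a destination choice, e.g. modulo the number of
directories.
    #include <cstdint>

    using Addr = uint64_t;

    // Offset of 'addr' from 'base' as a plain integer.
    inline uint64_t
    addressOffset(Addr addr, Addr base)
    {
        return addr - base;
    }

    // Example use for protocol-specific destination mapping: interleave
    // line offsets across numDirs directories.
    inline unsigned
    mapOffsetToDirectory(Addr addr, Addr base, unsigned lineBytes,
                         unsigned numDirs)
    {
        return (addressOffset(addr, base) / lineBytes) % numDirs;
    }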
|
|
This patch drops the NetworkMessage class. The relevant data members and functions
have been moved to the Message class, which was the parent of NetworkMessage.
|
|
The accessor function getDestination() for the Destination variable
in the coherence message clashes with the getDestination() that is
part of the Message class. Hence the name change.
|