Age | Commit message | Author |
|
If the cache access mode is parallel, i.e. the "sequential_access"
parameter is set to "False", tags and data are accessed in parallel.
Therefore, the hit_latency is the maximum of tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. the "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency and data_latency.
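For illustration, a minimal sketch of this rule; the helper and its
arguments are hypothetical stand-ins for the parameters, not the cache
model's actual code:
  #include <algorithm>
  #include <cstdint>

  using Cycles = std::uint64_t;

  // Sketch only: mirrors the description above.
  Cycles hitLatency(bool sequential_access, Cycles tag_latency,
                    Cycles data_latency)
  {
      // Sequential: data follows the tag match.  Parallel: both arrays
      // are probed at once, so the slower one dominates.
      return sequential_access ? tag_latency + data_latency
                               : std::max(tag_latency, data_latency);
  }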
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
1. Delete unused variable from struct LinkEntry
2. Correct GarnetExtLink and GarnetIntLink inheritance
|
|
Not all uses of MachineID initialize its fields, so here we add a default
ctor.
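For illustration only, the shape of such a fix; the field names and types
here are assumptions, not Ruby's exact definition:
  #include <cstdint>

  // Hypothetical stand-ins for the SLICC-generated types.
  enum class MachineType : std::uint32_t { NULL_Type, Directory, L1Cache };
  using NodeID = std::uint32_t;

  struct MachineID {
      MachineType type = MachineType::NULL_Type;  // defined even if never set
      NodeID num = 0;

      MachineID() = default;
      MachineID(MachineType t, NodeID n) : type(t), num(n) {}
  };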
|
|
SequencerMsg is autogenerated by the SLICC scripts and its MessageSizeType is
initialized to the max enum value by default. The DMASequencer pushes this
message to the mandatory queue, and since the MessageSizeType is uninitialized,
the string_to_MessageSizeType() function used by traces to print the message
fails with a panic. This patch avoids the problem by initializing the
MessageSizeType of SequencerMsg to Request_Control.
|
|
Fixes to appease clang++. Tested on:
Ubuntu clang version 3.5.0-4ubuntu2~trusty2
(tags/RELEASE_350/final) (based on LLVM 3.5.0)
Ubuntu clang version 3.6.0-2ubuntu1~trusty1
(tags/RELEASE_360/final) (based on LLVM 3.6.0)
The fixes address the following five issues:
1) The exec continuations in gpu_static_inst.hh were marked
as protected when they should be public. Here we mark
them as public.
2) The Abs instruction uses std::abs() in its execute method.
Because Abs is templated, it can also operate on U32 and U64
types, which causes Abs::execute() to pass uint32_t and uint64_t
types to std::abs() respectively. This triggers a warning
because std::abs() has no effect in this case. To remedy this
we add a template specialization for the execute() method of Abs
when its template parameter is U32 or U64 (see the sketch after
this list).
3) Some protocols that utilize the code in cprintf.hh were missing
includes of BoolVec.hh, which defines operator<< for the BoolVec
type. This would cause issues when the generated code would try
to pass a BoolVec type to a method in cprintf.hh that used
operator<< on an instance of a BoolVec.
4) Surprise, clang doesn't like it when you clobber all the bits
in a newly allocated object. I.e., this code:
tlb = new GpuTlbEntry[size];
std::memset(tlb, 0, sizeof(GpuTlbEntry) * size);
Let's use std::vector to track the TLB entries in the GpuTlb now...
5) There were a few variables used only in DPRINTFs, so we mark them
with M5_VAR_USED.
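The specialization mentioned in item 2, condensed into a free-standing
sketch (absValue is a hypothetical name standing in for Abs::execute()):
  #include <cstdint>
  #include <cstdlib>

  // For signed types forward to std::abs(); the unsigned specializations
  // are no-ops, which is exactly what the clang warning pointed out.
  template <typename T>
  T absValue(T v) { return static_cast<T>(std::abs(v)); }

  template <>
  std::uint32_t absValue<std::uint32_t>(std::uint32_t v) { return v; }

  template <>
  std::uint64_t absValue<std::uint64_t>(std::uint64_t v) { return v; }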
|
|
DMA sequencers and protocols can currently only issue one DMA access at
a time. This patch implements the necessary functionality to support
multiple outstanding DMA requests in Ruby.
|
|
The RequestDesc was previously implemented as a std::pair, which made
the implementation overly complex and error prone. Here we encapsulate the
packet, primary, and secondary types all in a single data structure with
all members properly initialized in a ctor.
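A hedged sketch of the shape of that structure; the type names are
placeholders rather than the actual GPU coalescer types:
  class Packet;  // placeholder for the real packet class

  enum class ReqType { None, Read, Write, Atomic };

  struct RequestDesc {
      Packet *pkt = nullptr;                 // packet tracked by this descriptor
      ReqType primaryType = ReqType::None;
      ReqType secondaryType = ReqType::None;

      RequestDesc(Packet *p, ReqType primary, ReqType secondary)
          : pkt(p), primaryType(primary), secondaryType(secondary)
      { }
  };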
|
|
Change-Id: I763cffe0c69f5ebbbf6a6eb12bec5c13d5d0161d
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
|
|
Added power-down state transitions to the DRAM controller model.
Added per rank parameter, outstandingEvents, which tracks the number
of outstanding command events and is used to determine when the
controller should transition to a low power state.
The controller will only transition when there are no outstanding events
scheduled and the number of command entries for the given rank is 0.
The outstandingEvents parameter is incremented for every RD/WR burst,
PRE, and REF event scheduled. ACT is implicitly covered by RD/WR
since a burst will always issue and complete after a required ACT.
The parameter is decremented when the event is serviced (completed).
The controller will automatically transition to ACT power down,
PRE power down, or SREF.
Transition to ACT power down state scheduled from:
1) The respondEvent, where read data is received from the memory.
ACT power-down entry will be scheduled when one or more banks are
open, all commands for the rank have completed (no more commands
scheduled), and there are no commands in queue for the rank.
Transition to PRE power down scheduled from:
1) respondEvent, when all banks are closed, all commands have
completed, and there are no commands in queue for the rank.
2) prechargeEvent, when all banks are closed, all commands have
completed, and there are no commands in queue for the rank.
3) refreshEvent, after the refresh is complete when the previous
state was ACT power-down
4) refreshEvent, after the refresh is complete when the previous
state was PRE power-down and there are commands in the queue.
Transition to SREF will be scheduled from:
1) refreshEvent, after the refresh completes when the previous
state was PRE power-down with no commands in queue.
Power-down exit commands are scheduled from:
1) The refreshEvent, prior to issuing a refresh
2) doDRAMAccess, to wake up the rank for RD/WR command issue.
Self-refresh exit commands are scheduled from:
1) The next request event, when there are commands for the rank in
the readQueue, or there are commands for the rank in the
writeQueue and the bus state is WRITE.
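A condensed, hypothetical sketch of the respondEvent/prechargeEvent side of
this decision; names mirror the description above, not the controller's
real members, and the refresh-driven SREF path is omitted:
  enum class PowerState { ACTIVE, ACT_PDN, PRE_PDN };

  struct RankState {
      unsigned outstandingEvents = 0;  // ++ per RD/WR burst, PRE, REF; -- when serviced
      unsigned queuedCommands = 0;     // commands queued for this rank
      unsigned banksOpen = 0;
  };

  // Decide whether a power-down entry can be scheduled for this rank.
  PowerState nextPowerState(const RankState &r)
  {
      if (r.outstandingEvents != 0 || r.queuedCommands != 0)
          return PowerState::ACTIVE;                 // still busy: stay awake
      return r.banksOpen > 0 ? PowerState::ACT_PDN   // banks open: ACT power-down
                             : PowerState::PRE_PDN;  // all closed: PRE power-down
  }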
Change-Id: I6103f660776e36c686655e71d92ec7b5b752050a
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
|
|
The per-rank statistics are periodically updated based on
state transition and refresh events.
Add a method to update these when a dump event occurs, to
ensure they reflect accurate values.
Specifically, we need to ensure that the low-power state
durations, power, and energy are logged correctly.
Change-Id: Ib642a6668340de8f494a608bb34982e58ba7f1eb
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
|
|
Add a constraint that all ranks have to be in PWR_IDLE
before signaling drain complete.
This will ensure that the banks are all closed and the rank
has exited any low-power states.
On suspend, update the power stats to sync the DRAM power logic.
The logic maintains the location of the signalDrainDone
method, which is still triggered from either:
1) Read response event
2) Next request event
This ensures that the drain will complete in the READ bus
state and minimizes the changes required.
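Roughly, the added constraint amounts to the following check (illustrative
names, not the controller's real interface):
  #include <algorithm>
  #include <vector>

  enum class PwrState { PWR_IDLE, PWR_ACT, PWR_ACT_PDN, PWR_PRE_PDN, PWR_SREF };

  // Drain can only be signaled once every rank reports PWR_IDLE, i.e. all
  // banks closed and any low-power state exited.
  bool allRanksDrained(const std::vector<PwrState> &ranks)
  {
      return std::all_of(ranks.begin(), ranks.end(),
                         [](PwrState s) { return s == PwrState::PWR_IDLE; });
  }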
Change-Id: If1476e631ea7d5999fe50a0c9379c5967a90e3d1
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
|
|
Add a local variable to store commands to be issued.
These commands are in order within a single bank but will be out
of order across banks & ranks.
A new procedure, flushCmdList, sorts commands across banks/ranks
and flushes the sorted list, up to curTick(), to DRAMPower.
This is currently called in refresh, once all previous commands are
guaranteed to have completed. It could be called from other events,
like the powerEvent, as well.
By only flushing commands up to curTick(), the list will not get out
of sync when flushed at a periodic stats dump (done in a subsequent patch).
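A sketch of what such a flush could look like; the Command record and the
issueToDRAMPower callback are hypothetical, the real code hands DRAMPower
its own command type:
  #include <algorithm>
  #include <cstdint>
  #include <vector>

  using Tick = std::uint64_t;

  struct Command {
      Tick timestamp;
      int  bank;
      int  type;   // RD/WR/ACT/PRE/... encoded as an int here
  };

  // Commands are in order per bank but not across banks/ranks, so sort by
  // time, hand everything up to `now` (curTick()) to the power model, and
  // keep the rest for the next flush.
  void flushCmdList(std::vector<Command> &cmdList, Tick now,
                    void (*issueToDRAMPower)(const Command &))
  {
      std::stable_sort(cmdList.begin(), cmdList.end(),
                       [](const Command &a, const Command &b) {
                           return a.timestamp < b.timestamp;
                       });

      auto firstFuture = std::find_if(cmdList.begin(), cmdList.end(),
                                      [now](const Command &c) {
                                          return c.timestamp > now;
                                      });

      std::for_each(cmdList.begin(), firstFuture, [&](const Command &c) {
          issueToDRAMPower(c);   // flush everything that has already happened
      });

      cmdList.erase(cmdList.begin(), firstFuture);  // retain future commands
  }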
Change-Id: I4ac65a52407f64270db1e16a1fb04cfe7f638851
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
|
|
Change-Id: I8992ddc1664c3ed4b2d36d8a34e4ce8be113b9de
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
|
|
This removes errors when building gem5.fast
|
|
Revamped version of garnet with more optimized single-cycle routers,
more configurability, and cleaner code.
|
|
Only garnet2.0 will be supported henceforth.
|
|
This patch adds port direction names to the links during topology
creation, which can be used for better printed names for the links
or for users to code up their own adaptive routing algorithms.
It also adds support for every router to have an independent latency
value to support heterogeneous topologies with the subsequent
garnet2.0 patch.
|
|
This patch makes the internal links within the network topology
unidirectional, thus allowing any deadlock-free routing algorithms to
be specified from the topology itself using weights.
This patch also renames Mesh.py and MeshDirCorners.py to
Mesh_XY.py and MeshDirCorners_XY.py (Mesh with XY routing).
It also adds Mesh_westfirst.py and CrossbarGarnet.py topologies.
|
|
Over the past 6 years, we realized that the protocol is essentially used
to run the garnet network in a standalone manner and to feed standard
synthetic traffic patterns through it.
|
|
Fixed AbstractController::queueMemoryWritePartial to specify the
correct size for partial memory writes.
|
|
Print the number of bytes written as a decimal number, not hex.
|
|
Only map memories into the KVM guest address space that are
marked as usable by KVM. Create BackingStoreEntry class
containing flags for is_conf_reported, in_addr_map, and
kvm_map.
|
|
Previously printing an MSHR would trigger an assertion if the MSHR was
not in service or if the targets list was empty. This patch changes
the print function to bypass the accessor functions for
postInvalidate and postDowngrade and avoid the relevant assertions. It
also checks if the targets list is empty before calling print on it.
Change-Id: Ic18bee6cb088f63976112eba40e89501237cfe62
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Secure and non-secure data can coexist in the cache and therefore the
snoop filter should treat packets with secure and non-secure accesses
differently. This patch uses the lower bits of the line address to
keep track of whether the packet is addressing secure memory or not.
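For illustration, one way to fold that bit into the lookup key (the bit
position and helper name are assumptions):
  #include <cstdint>

  using Addr = std::uint64_t;

  // Line addresses are block-aligned, so the low bits are free: bit 0 can
  // carry the secure/non-secure distinction, keeping secure and non-secure
  // lines in separate snoop-filter entries.
  inline Addr snoopFilterKey(Addr line_addr, bool is_secure)
  {
      return line_addr | (is_secure ? 0x1 : 0x0);
  }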
Change-Id: I54a5e614dad566a5083582bede86c86896f2c2c1
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch changes the default behaviour of the SystemXBar, adding a
snoop filter. With the recent updates to the snoop filter allocation
behaviour this change no longer causes problems for the regressions
without caches.
Change-Id: Ibe0cd437b71b2ede9002384126553679acc69cc1
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch improves the snoop filter allocation decisions by not only
looking at whether a port is snooping or not, but also if the packet
actually came from a cache. The issue with only looking at isSnooping
is that the CPU ports, for example, are snooping, but not actually
caching. Previously we ended up incorrectly allocating entries in
systems without caches (such as the atomic and timing quick
regressions). Eventually these misguided allocations caused the snoop
filter to panic due to an excessive size.
On the request path we now include the fromCache check on the packet
itself, and for responses we check if we actually have a snoop-filter
entry.
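Condensed into a sketch, the allocation rule looks roughly like this (stub
types standing in for gem5's Packet and port classes):
  struct PacketStub { bool fromCache; };
  struct PortStub   { bool isSnooping; };

  // Request path: allocate an entry only if the port snoops AND the packet
  // actually came from a cache (CPU ports snoop but do not cache).
  inline bool allocateOnRequest(const PortStub &port, const PacketStub &pkt)
  {
      return port.isSnooping && pkt.fromCache;
  }

  // Response path: only update an entry that already exists; never allocate.
  inline bool updateOnResponse(bool entryExists)
  {
      return entryExists;
  }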
Change-Id: Idd2dbc4f00c7e07d331e9a02658aee30d0350d7e
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch takes yet another step in maintaining the clusivity, in
that it allows a mostly-inclusive cache to hold on to blocks even when
responding to a ReadExReq or UpgradeReq. Previously the cache simply
invalidated these blocks, but there is no strict need to do so.
The most important part of this patch is that we simply mark the block
clean when satisfying the upstream request where the cache is allowed
to keep the block. The only tricky part of the patch is in the memory
management of deferred snoops, where we need to distinguish the cases
where only the packet was copied (we expected to respond), and the
cases where we created an entirely new packet and request (we kept it
only to replay later).
The code in satisfyRequest is definitely ready for some refactoring
after this.
Change-Id: I201ddc7b2582eaa46fb8cff0c7ad09e02d64b0fc
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch changes how the mostly exclusive policy is enforced to
ensure that we drop blocks when we should. As part of this change, the
actual invalidation due to the clusivity enforcement is moved outside
the hit handling, to a separate method maintainClusivity. For the
timing mode that means we can deal with all MSHR targets before taking
any action and possibly dropping the block. The method
satisfyCpuSideRequest is also renamed satisfyRequest as part of this
change (since we only ever see requests from the cpu-side port).
Change-Id: If6f3d1e0c3e7be9a67b72a55e4fc2ec4a90fd3d2
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch adds a FromCache attribute to the packet, and updates a
number of the existing request commands to reflect that the request
originates from a cache. The attribute simplifies checking if a
request came from a cache or not, and this is used by both the cache
and snoop filter in follow-on patches.
Change-Id: Ib0a7a080bbe4d6036ddd84b46fd45bc7eb41cd8f
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Steve Reinhardt <stever@gmail.com>
|
|
There are cases where we want to put boot ROMs on the PIO bus. Ruby
currently doesn't support functional accesses to such memories since
functional accesses are always assumed to go to physical memory. Add
the required support for routing functional accesses to the PIO bus.
Change-Id: Ia5b0fcbe87b9642bfd6ff98a55f71909d1a804e3
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
Reviewed-by: Michael LeBeane <michael.lebeane@amd.com>
|
|
Change-Id: I70dd11c23b45dfc606ef08233d2e50fcc0817505
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Currently garnet will not run due to double statistic registration of new
stats in ClockedObject. This occurs because a temporary array named 'cls'
is being added as a child to garnet internal and external link SimObjects.
This patch simply renames the temporary array which prevents it from
being added as a child object and avoids the assertion that a statistic
was already registered.
Committed by Jason Lowe-Power <jason@lowepower.com>
|
|
Sync DRAMPower to external tool
This patch syncs the DRAMPower library of gem5 to the external
one on github (https://github.com/ravenrd/DRAMPower) of which
I am a maintainer.
The version used is commit
902a00a1797c48a9df97ec88868f20e847680ae6
from 7 May 2016.
Committed by Jason Lowe-Power <jason@lowepower.com>
|
|
In this new HMC configuration we have used existing gem5 components,
mainly SerialLink, NoncoherentXbar, and DRAMCtrl, to define 3 different
architectures for HMC.
Highlights:
1- It explores 3 different HMC architectures.
2- It creates 4 HMC crossbars and attaches 16 vault controllers to them.
This connects the vaults to the serial links.
3- Compared to the previous version, the HMCController with round-robin
functionality has been removed and all the serial links are accessible
directly from the user ports.
4- The latency previously incorporated by the HMCController is now
added to the SerialLink.
Committed by Jason Lowe-Power <jason@lowepower.com>
|
|
The snoop filter handles requests in two steps which precede and
follow the call to send the packet downstream. An address mapper could
possibly change the address of the packet when it is sent downstream,
breaking the snoop filter's assumption that the address is unchanged.
Change-Id: Ib2db755e9ebef4f2f7c0169a46b1b11185ffbe79
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Fix an issue with regStats() not calling the parent class method
for most SimObjects in gem5. This causes issues if one adds new
stats in the base class (since they are never initialized properly!).
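The pattern the fix enforces, as a minimal sketch (stub class names, not
gem5's actual hierarchy):
  #include <iostream>

  struct ClockedObjectStub {                 // stands in for the base SimObject
      virtual ~ClockedObjectStub() = default;
      virtual void regStats() { std::cout << "base stats registered\n"; }
  };

  struct MyObject : ClockedObjectStub {
      void regStats() override
      {
          ClockedObjectStub::regStats();     // the call that was missing
          std::cout << "derived stats registered\n";
      }
  };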
Change-Id: Iebc5aa66f58816ef4295dc8e48a357558d76a77c
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
We want to extend the stats of objects hierarchically, and thus it is
necessary to register the statistics of the base class(es) as well. For
now, these are empty, but generic stats will be added there.
Patch originally provided by Akash Bagdia at ARM Ltd.
|
|
This implements SwapReq for Ruby memory.
A SwapReq should be treated like a write, except that the response
packet contains the overwritten data.
Note that, in particular, the conditional checking for isStore/isLoad
needs to be reversed, as a SwapReq is both.
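A sketch of the swap semantics on a backing-store line; buffer handling
here is illustrative, not Ruby's actual access path:
  #include <cstddef>
  #include <cstring>
  #include <vector>

  // The response must carry the data that was overwritten, so the access is
  // both a load and a store.
  void doSwap(std::vector<unsigned char> &line, std::size_t offset,
              unsigned char *pkt_data, std::size_t size)
  {
      std::vector<unsigned char> new_data(pkt_data, pkt_data + size);
      std::memcpy(pkt_data, line.data() + offset, size);          // load: old data out
      std::memcpy(line.data() + offset, new_data.data(), size);   // store: new data in
  }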
|
|
This patch fixes a memory leak where deferred snoop packets never got
deallocated. On the call to MSHR::handleSnoop these snoops were
treated as if a response would be sent, as the MSHR was
pendingModified. Consequently, a copy of the packet was created and
added to the MSHR targets. However, a preceding target to the same
MSHR, originally from a CPU, was serviced before the snoop and caused
the block to be invalidated. This happens for ReadExReq and
UpgradeReq.
Note that the original snoop will receive a response, just not from
the cache in question, but instead from the cache upstream that issued
the ReadExReq or UpgradeReq.
Change-Id: I4ac012fbc8a46cf693ca390fe9476105d444e6f4
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
This patch changes the flow control for MSHR::handleSnoop to ensure
that we only set cacheResponding on the snoop packet if we are
actually responding. This avoids situations where a responder is
stalling indefinitely on a response that never arrives.
Change-Id: I691dd01755b614b30203581aa74fc743b350eacc
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
This patch fixes the type of the unique_ptr instances, to ensure that
the data that is allocated with new[] is also deleted with
delete[]. The issue was highlighted by ASAN.
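The fix in a nutshell, as a hedged example (buffer name and size are
arbitrary):
  #include <cstdint>
  #include <memory>

  // Data allocated with new[] must be owned as an array type so that
  // delete[] is used on destruction.
  std::unique_ptr<std::uint8_t[]> buf(new std::uint8_t[64]);    // OK: delete[]
  // std::unique_ptr<std::uint8_t> bad(new std::uint8_t[64]);   // wrong: plain delete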
Change-Id: I2c5510424959d862a9954d83e728d901bb18d309
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
Change-Id: Ia57cc104978861ab342720654e408dbbfcbe4b69
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I57b56771086e1e2f512977fb7248d93c171ab925
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I5042410be54935650b7d05c84d8d9efbfcc06e70
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I6d1feb164a958dde0da87a1cd2698096112c4a82
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Allow usage of the packet class in Ruby for convenience purposes. This may be
used to access members of the packet/request class (e.g., via helper
functions) and/or push protocol-specific information to the packet's
SenderState without needing to modify SLICC types and protocols in multiple
locations.
|
|
Somehow WriteLineReq was never added to the list of commands
considered demand.
|