|
blkAlign was defined as a separate function in both the set-associative
and the fully-associative tags classes, even though the two implementations
were identical. This patch moves blkAlign into the base tags class.
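For illustration, a rough standalone sketch of such a shared helper (the
class is illustrative and assumes power-of-two block sizes; it is not the
actual gem5 tags hierarchy):

    #include <cassert>
    #include <cstdint>

    // Block alignment is the same for any associativity, so it belongs in
    // the common base rather than being duplicated in the set-associative
    // and fully-associative subclasses.
    class BaseTagsSketch
    {
      public:
        explicit BaseTagsSketch(unsigned blk_size) : blkSize(blk_size)
        {
            // Block sizes are powers of two, which makes masking valid.
            assert(blk_size && (blk_size & (blk_size - 1)) == 0);
        }

        // Align an address to the start of its cache block.
        uint64_t blkAlign(uint64_t addr) const
        {
            return addr & ~static_cast<uint64_t>(blkSize - 1);
        }

      private:
        const unsigned blkSize; // cache block size in bytes
    };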
Change-Id: I3d415d0e62bddeec7ce0d559667e40a8c5fdc2d4
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Change-Id: I0ed4e528cb750a323facdc811dde7f0ed1ff228e
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
The slicc compiler currently treats && and || with the same precedence.
This is highly non-intuitive to people used to C, and was probably an
error. This patch makes && bind tighter than ||.
For example, previously:
if (A || B && C)
compiled to:
if ((A || B) && C)
With this patch, it compiles to:
if (A || (B && C))
Change-Id: Idbbd5b50cc86a8d6601045adc14a253284d7b791
Signed-off-by: Lena Olson (leolson@google.com)
Reviewed-on: https://gem5-review.googlesource.com/2168
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Joe Gross <criusx@gmail.com>
Reviewed-by: Sooraj Puthoor <puthoorsooraj@gmail.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
[ Rebased onto master ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Modifies the clone system call and adds the execve system call. Requires
allowing processes to steal thread contexts from other processes in the same
system object, and the ability to detach pieces of process state (such as
MemState) to allow dynamic sharing.
|
|
Change-Id: I6149290d6d2ac1a4bd6165871c93d7b7d6a980ad
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I29f45733c5fad822bdd0d8dcc7939d86b2e8c97b
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I79c2662fc81630ab321db8a75be6cd15fa07d372
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: If9ebb8488e8db587482ecfa99d2c12cfe5734fb9
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I4f3c2c027b1acaaf791a4c71086f34a9b9fbf4df
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Replacement policies like LRU need to be notified when a block is
invalidated; the helper function does this along with invalidating the block.
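As a rough standalone sketch of the pattern (the types and names are
illustrative, not the actual gem5 classes):

    // Hypothetical stand-ins for the real cache block and policy types.
    struct CacheBlk { bool valid = true; };

    struct LRUPolicy
    {
        // The policy clears its recency metadata for the invalidated block.
        void invalidate(const CacheBlk *blk) { (void)blk; }
    };

    struct TagsSketch
    {
        LRUPolicy policy;

        // One helper that both invalidates the block and notifies the
        // policy, so callers cannot forget either step.
        void invalidateBlock(CacheBlk *blk)
        {
            blk->valid = false;
            policy.invalidate(blk);
        }
    };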
Change-Id: I3ed59cf07938caa7f394ee6054b0af9e00b267ea
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
This changeset updates an assert in src/mem/cache/mshr.cc which was
erroneously catching invalidated prefetch requests. These requests can
become invalidated if another component writes (an exclusive access)
to this location during the time that the read request is in
flight. The original assert made the assumption that these cases can
only occur for reads generated by the CPU, and hence
prefetcher-generated requests would sometimes trip the assert.
Change-Id: If4f043273a688c2bab8f7a641192a2b583e7b20e
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Previously the writeback visitor did not consider or set the secure flag
for blocks written back to memory. This patch fixes that.
Change-Id: Ie1a425fa9211407a70a4343f2c6b3d073371378f
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I7ae5fa44a937f641a2ddd242a49e0cd23f68b9f2
Reviewed-by: Sudhanshu Jha <sudhanshu.jha@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
The traversal of drainable objects could potentially be
non-deterministic when using an unordered set containing object
pointers. To ensure that the iteration is deterministic, we switch to
a vector. Note that lookup and traversal of the drainable objects
are not performance critical, so the change has no negative consequences.
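The underlying point, as a small standalone sketch (the registry below is
illustrative, not gem5's actual DrainManager):

    #include <vector>

    struct Drainable
    {
        virtual void drain() = 0;
        virtual ~Drainable() = default;
    };

    class DrainRegistrySketch
    {
      public:
        // Objects are kept in registration order; iterating a vector is
        // deterministic across runs, unlike an unordered_set keyed on
        // pointer values.
        void regDrainable(Drainable *obj) { drainables.push_back(obj); }

        void drainAll()
        {
            for (Drainable *obj : drainables)
                obj->drain();
        }

      private:
        std::vector<Drainable *> drainables;
    };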
|
|
This patch fixes a bug where a deferred snoop ended up targeting a
partial, non-cache-line-aligned region, all due to how we copy the packet.
|
|
Fix compilation errors due to a missing include.
|
|
Signed-off-by: Pierre-Yves Péneau <pierre-yves.peneau@lirmm.fr>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed at http://reviews.gem5.org/r/3802/
|
|
Signed-off-by: Pierre-Yves Péneau <pierre-yves.peneau@lirmm.fr>
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed at http://reviews.gem5.org/r/3801/
|
|
An assertion in the respondEvent erroneously fired.
The assertion verifies that the controller has not moved to a low-power
state prior to receiving read data from the memory.
The original assertion triggered if the state was not PWR_IDLE or PWR_ACT.
In the failing case, a periodic refresh event occurred around the
read. The REF is stalled until the final read burst is issued
and the subsequent PRE closes the bank. While the PRE will temporarily
move the state to PWR_IDLE, the state will immediately transition to PWR_REF
due to the pending refresh operation. This state does not match the
assertion, which is subsequently triggered.
Fixed the assertion by explicitly checking that the state is not a
low-power state:
!PWR_SREF && !PWR_PRE_PDN && !PWR_ACT_PDN
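Expressed as a small standalone sketch (the power-state names follow the
description above; the enum and function are illustrative, not the actual
DRAM controller code):

    #include <cassert>

    // Illustrative subset of the controller power states.
    enum PowerState {
        PWR_IDLE, PWR_ACT, PWR_REF,
        PWR_SREF, PWR_PRE_PDN, PWR_ACT_PDN
    };

    void checkRespondEventState(PowerState pwr_state)
    {
        // Old check: too strict, fires when a pending refresh moves the
        // state to PWR_REF right after the closing precharge.
        //   assert(pwr_state == PWR_IDLE || pwr_state == PWR_ACT);

        // New check: only require that we are not in a low-power state.
        assert(pwr_state != PWR_SREF && pwr_state != PWR_PRE_PDN &&
               pwr_state != PWR_ACT_PDN);
    }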
Change-Id: I82921a733bbeac2bcb5a487c2f981448d41ed50b
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
|
|
Names of DRAM configurations were updated to reflect both
the channel and device data width.
Previous naming format was:
<DEVICE_TYPE>_<DATA_RATE>_<CHANNEL_WIDTH>
The following nomenclature is now used:
<DEVICE_TYPE>_<DATA_RATE>_<n>x<w>
where n = the number of devices per rank on the channel
      w = the device width
Total channel width can be calculated as n*w
Example:
A 64-bit DDR4, 2400 channel consisting of 4-bit devices:
n = 16
w = 4
The resulting configuration name is:
DDR4_2400_16x4
Updated scripts to match new naming convention.
Added unique configurations for DDR4 for:
1) 16x4
2) 8x8
3) 4x16
Change-Id: Ibd7f763b7248835c624309143cb9fc29d56a69d1
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
|
|
The round-robin (rr) arbiter pointer in garnet was getting updated on every
request, even if there was no grant. This was leading to a huge variance in
wait time at a router at high injection rates.
This patch corrects it to update only upon a grant.
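A minimal standalone sketch of a round-robin arbiter whose pointer only
advances on a grant (names are illustrative, not garnet's actual classes):

    #include <vector>

    class RoundRobinArbiterSketch
    {
      public:
        explicit RoundRobinArbiterSketch(int inputs)
            : numInputs(inputs), rrPointer(0) {}

        // Returns the granted input, or -1 if nothing can be granted.
        int arbitrate(const std::vector<bool> &request,
                      const std::vector<bool> &output_free)
        {
            for (int i = 0; i < numInputs; i++) {
                int candidate = (rrPointer + i) % numInputs;
                if (request[candidate] && output_free[candidate]) {
                    // Advance the pointer only on a grant; advancing on
                    // every request skews wait times at high injection
                    // rates.
                    rrPointer = (candidate + 1) % numInputs;
                    return candidate;
                }
            }
            return -1; // no grant: leave rrPointer unchanged
        }

      private:
        const int numInputs;
        int rrPointer;
    };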
|
|
Rather than having the first line on the Log line and every other line on its
own, add a newline so that all of them share a common format. This makes
parsing a lot easier.
Reviewed at http://reviews.gem5.org/r/3808/
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
The Request constructor requires a MasterID. However, an external
transactor has no way of getting a MasterID as it does not have a
pointer to the System. This patch adds a MasterID to ExternalMaster to
allow external modules to easily generate new Requests.
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Used cppclean to help identify useless includes and removed them. This
involved both erroneously included headers and cases where a forward
declaration could have been used rather than a full include.
|
|
The GPUCoalescer code is used in the Ruby profiler regardless of
whether or not the coalescer code has been compiled, which can
lead to link/run-time errors. Here we add #ifdefs to guard the
usage of the GPUCoalescer code. Eventually we should refactor this
code to use probe points.
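The guard pattern looks roughly like the sketch below (the BUILD_GPU macro
name is an assumption for illustration; the actual guard in gem5 may differ):

    // Only touch GPUCoalescer symbols when the coalescer sources were
    // actually compiled in.
    #ifdef BUILD_GPU
    #include "GPUCoalescer.hh"
    #endif

    void profileCoalescing()
    {
    #ifdef BUILD_GPU
        // Code that calls into the GPUCoalescer would live here.
    #else
        // Compiled out entirely, so no undefined GPUCoalescer symbols at
        // link time when the coalescer was not built.
    #endif
    }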
|
|
Garnet's NetworkInterface does not consider the size of MessageBuffers when
ejecting a Message from the network. Add a size check for the MessageBuffer
and only enqueue if space is available. If space is not available, the
message is placed in a queue and the credit is held. A callback from the
MessageBuffer is implemented to wake the NetworkInterface. If there are
messages in the stall queue, they are processed first, in FIFO order,
and if successfully ejected, the credit is finally sent back upstream. The
maximum size of the stall queue is equal to the number of valid VNETs
with MessageBuffers attached.
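A simplified standalone sketch of this eject path (member and function names
like spaceAvailable and stallQueue are illustrative, not garnet's exact API):

    #include <cstddef>
    #include <deque>
    #include <queue>

    struct Message { int vnet; };

    // Illustrative finite-capacity consumer-side buffer.
    struct MessageBufferSketch
    {
        std::size_t capacity;
        std::queue<Message> msgs;

        bool spaceAvailable() const { return msgs.size() < capacity; }
        void enqueue(const Message &m) { msgs.push(m); }
    };

    struct NetworkInterfaceSketch
    {
        MessageBufferSketch *outBuffer;
        std::deque<Message> stallQueue; // stalled messages holding credit

        // Called when a message is ejected from the network.
        void eject(const Message &m)
        {
            if (outBuffer->spaceAvailable()) {
                outBuffer->enqueue(m);
                sendCredit();
            } else {
                // No space: park the message and hold its credit.
                stallQueue.push_back(m);
            }
        }

        // Woken by the MessageBuffer when space frees up: drain stalled
        // messages first, in FIFO order, returning credits as they go.
        void wakeup()
        {
            while (!stallQueue.empty() && outBuffer->spaceAvailable()) {
                outBuffer->enqueue(stallQueue.front());
                stallQueue.pop_front();
                sendCredit();
            }
        }

        void sendCredit() { /* return the credit upstream (omitted) */ }
    };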
|
|
This patch is an updated version of /r/3297.
"The most important statistic for measuring memory hierarchy performance is
throughput, which is affected by independent variables, buffer sizing and
communication latency. It is difficult/impossible to debug performance issues
through a series of buffers without knowing which are the bottlenecks. For
finite buffers, this patch adds statistics for the average number of messages
in the buffer, the occupancy of the buffer slots, and the number of message
stalls."
|
|
The NetworkInterface wakeup currently iterates over all VNETs and breaks out
of the loop if a VNET is unable to allocate a VC. This can cause a deadlock
if a lower-numbered VNET is unable to allocate a VC while a higher-numbered
VNET has idle VCs. This seems like a bug, as Garnet 1.0 uses a while loop
rather than an if-statement, suggesting the break was intended for that while
loop. This patch removes the break statement, which allows up to one message
to be dequeued from a VNET and injected into the network.
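A rough sketch of the corrected loop shape (illustrative code, not garnet's
actual wakeup):

    #include <vector>

    struct VnetStateSketch { bool hasMessage; bool vcAvailable; };

    // Try every VNET each wakeup. Skipping a VNET that cannot get a VC
    // (continue) instead of stopping (break) lets higher-numbered VNETs
    // with idle VCs still make progress.
    void wakeupSketch(std::vector<VnetStateSketch> &vnets)
    {
        for (auto &vnet : vnets) {
            if (!vnet.hasMessage || !vnet.vcAvailable)
                continue;          // previously: break on !vcAvailable
            // allocate the VC and inject one message (omitted)
            vnet.hasMessage = false;
        }
    }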
|
|
|
|
When Ruby controllers stall messages in MessageBuffers, the buffer moves those
messages off the priority heap and into a per-address stall map. When buffers
are finite-sized, the test areNSlotsAvailable() only checks the size of the
priority heap, but ignores the stall map, so the map is allowed to grow
unbounded if the controller stalls numerous messages. This patch fixes the
problem by tracking the stall map size and testing the total number of messages
in the buffer appropriately.
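A small sketch of the corrected accounting (member names mirror the
description above but are illustrative):

    #include <cstddef>
    #include <map>
    #include <vector>

    struct MessageBufferSketch
    {
        std::vector<int> priorityHeap;              // ready messages
        std::map<long, std::vector<int>> stallMap;  // stalled, per address
        std::size_t stallMapSize = 0;               // total stalled messages
        std::size_t maxSize = 16;                   // finite buffer capacity

        // Count both ready and stalled messages; checking only the heap
        // would let the stall map grow without bound.
        bool areNSlotsAvailable(std::size_t n) const
        {
            return priorityHeap.size() + stallMapSize + n <= maxSize;
        }
    };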
|
|
|
|
The iterator declared in DMASequencer::ackCallback() is only used in an
assert, which causes clang to fail when building the fast variant. Here we
move the find call on the request table directly into the assert.
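The shape of the fix, as a standalone sketch (a variable used only inside an
assert becomes unused in fast builds, where asserts compile away):

    #include <cassert>
    #include <map>

    void ackCallbackSketch(std::map<long, int> &request_table, long addr)
    {
        // Before: the iterator exists only to feed the assert, so fast
        // builds (asserts disabled) see an unused variable.
        //   auto it = request_table.find(addr);
        //   assert(it != request_table.end());

        // After: fold the lookup into the assert itself.
        assert(request_table.find(addr) != request_table.end());

        request_table.erase(addr);
    }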
|
|
The Python wrappers generally assume that destructors are public. Make
the BaseXBar destructor public to avoid confusing the Python wrapper.
Change-Id: If958802409c0be74e875dd6e279742abfdb3ede1
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
|
|
This patch detects garnet network deadlock by monitoring
network interfaces. If a network interface continuously
fails to allocate virtual channels for a message, a
possible deadlock is detected.
|
|
This patch removes the deprecated RubyMemoryControl. The DRAMCtrl
module should be used instead.
|
|
Previously, when an InvalidateReq snooped a cache with a dirty block or
a pending modified MSHR, it would invalidate the block or set the
postInv flag. The cache would not send an InvalidateResp, though,
causing memory order violations. This patch changes this behavior,
making the cache with the dirty block or pending modified MSHR the
ordering point.
Change-Id: Ib4c31012f4f6693ffb137cd77258b160fbc239ca
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Previously an MSHR with one or more invalidating targets would first
service all targets in the MSHR TargetList and then invalidate the
block. As a result, any serviced snooping targets would look up the
block in the cache and incorrectly find it. This patch forces the
invalidation to happen when the first invalidating target is
encountered.
Change-Id: I9df15de24e1d351cd96f5a2c424d9a03d81c2cce
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
This patch changes an assertion that previously assumed that a
non-invalidating snoop request should never be serviced by an
InvalidateReq MSHR. The MSHR serves as the ordering point for the
snooping packet. When the InvalidateResp reaches the cache, the
snooping packet snoops the caches above to find the requested
block. One or more of the caches above will have the block, since
they have seen a WriteLineReq earlier.
Change-Id: I0c147c8b5d5019e18bd34adf9af0fccfe431ae07
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
When the snoopFilter receives a response, it updates its state using
the hasSharers flag (which indicates whether there is more than one copy
of the block in the caches above). The hasSharers flag of the packet
was previously populated when the request was traversing and snooping
the caches looking for the block.
1) When the response is coming from the memory-side port, its order
with respect to other responses is not necessarily preserved (e.g., a
request that arrived second at the xbar can get its response first). As
a result, the snoopFilter might process responses out of order, updating
its residency information using a no-longer-valid hasSharers flag which
was populated much earlier.
2) When the response is from an on-chip cache, the MSHRs preserve a
well-defined order and the hasSharers flag should contain valid
information.
This patch changes the snoopFilter to avoid using the hasSharers flag
when the response is from the memory-side port.
Change-Id: Ib2d22a5b7bf3eccac64445127d2ea20ee74bb25b
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
Previously, a WriteLineReq that missed in a cache would send out an
InvalidateReq if the block lookup failed, or an UpgradeReq if the
block lookup succeeded but the block had sharers. This change ensures
that a WriteLineReq always sends an InvalidateReq to invalidate all
copies of the block and satisfy the WriteLineReq.
Change-Id: I207ff5b267663abf02bc0b08aeadde69ad81be61
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Change-Id: Ie3beeef25331f84a0a5bcc17f7a791f4a829695b
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
This patch fixes an issue where an MSHR would incorrectly be perceived
to provide data to targets arriving after an InvalidateReq. To address
this the InvalidateReq is now treated as isForward, much like an
UpgradeReq that did not hit in the cache.
Change-Id: Ia878444d949539b5c33fd19f3e12b0b8a872275e
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
Previously DPRINTFs printing information about a packet would use ad hoc
formats. This patch changes all DPRINTFs to use the print function
defined by the packet class, making the packet printing format more
uniform and easier to change.
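The idea, as a standalone sketch (the packet type here is illustrative, not
gem5's Packet class):

    #include <iostream>
    #include <sstream>
    #include <string>

    // One canonical print format for every trace line.
    struct PacketSketch
    {
        std::string cmd;
        unsigned long addr;
        unsigned size;

        void print(std::ostream &os) const
        {
            os << cmd << " [0x" << std::hex << addr << std::dec
               << " size=" << size << "]";
        }

        // Convenience for debug macros: changing print() updates every
        // trace line at once.
        std::string toString() const
        {
            std::ostringstream ss;
            print(ss);
            return ss.str();
        }
    };

    int main()
    {
        PacketSketch pkt{"ReadReq", 0x1000, 64};
        // Stands in for: DPRINTF(Cache, "handling %s\n", ...);
        std::cout << "handling " << pkt.toString() << "\n";
        return 0;
    }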
Change-Id: Idd436a9758d4bf70c86a574d524648b2a2580970
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
A response to a ReadReq can either be a ReadResp or a
ReadRespWithInvalidate. As we add targets to an MSHR for a ReadReq, we
assume that the response will be a ReadResp. When the response is
invalidating (ReadRespWithInvalidate), servicing more than one target
can potentially violate memory ordering. This change fixes the way
we handle a ReadRespWithInvalidate. When a cache receives a
ReadRespWithInvalidate, we service only the first FromCPU target and
all the FromSnoop targets from the MSHR target list. The rest of the
FromCPU targets are deferred and serviced by a new request.
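A sketch of how the target list is split in this case (the target kinds
follow the description above; the data structures are illustrative):

    #include <deque>

    enum class TargetSource { FromCPU, FromSnoop };

    struct Target { TargetSource source; };

    // On a ReadRespWithInvalidate: service the first FromCPU target and
    // every FromSnoop target; defer the remaining FromCPU targets so they
    // are satisfied by a new request rather than a now-invalid block.
    void splitOnInvalidatingResponse(std::deque<Target> &targets,
                                     std::deque<Target> &serviced,
                                     std::deque<Target> &deferred)
    {
        bool first_cpu_serviced = false;
        for (const Target &t : targets) {
            if (t.source == TargetSource::FromSnoop) {
                serviced.push_back(t);
            } else if (!first_cpu_serviced) {
                serviced.push_back(t);
                first_cpu_serviced = true;
            } else {
                deferred.push_back(t);
            }
        }
        targets.clear();
    }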
Change-Id: I75c30c268851987ee5f8644acb46f440b4eeeec2
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
Previously, the information about whether a response was allocating or not
was a property of the MSHR. This change makes this flag a property of
the TargetList, as different TargetLists (e.g., the targets and the
deferred targets lists) might have different values. Additionally, the
information about whether each of the targets expects an allocating
response is stored inside the TargetList container. This allows the flag
to be repopulated in case some of the targets are removed.
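A rough sketch of keeping the flag per target so the summary can be rebuilt
(illustrative types, not the actual MSHR::TargetList):

    #include <list>

    struct TargetSketch { bool allocOnFill; };

    struct TargetListSketch
    {
        std::list<TargetSketch> targets;
        bool allocOnFill = false; // summary of the per-target values

        void add(bool alloc_on_fill)
        {
            targets.push_back({alloc_on_fill});
            allocOnFill = allocOnFill || alloc_on_fill;
        }

        // Because each target remembers its own value, the summary flag
        // can be repopulated after some targets are removed.
        void populateFlags()
        {
            allocOnFill = false;
            for (const auto &t : targets)
                allocOnFill = allocOnFill || t.allocOnFill;
        }
    };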
Change-Id: If3ec2516992f42a6d9da907009ffe3ab8d0d2021
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
This patch adds support for repopulating the flags of an MSHR
TargetList. The added functionality makes it possible to remove
targets from a TargetList without leaving it in an inconsistent state.
Change-Id: I3f7a8e97bfd3e2e49bebad056d11bbfb087aad91
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
In MessageBuffer the m_not_avail_count member is incremented but not used.
This causes an overflow reported by ASAN. This patch changes it from an int
to a Stats::Scalar, since the count is useful in debugging finite
MessageBuffers.
|
|
If the cache access mode is parallel, i.e. the "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum of tag_latency and data_latency. On the
other hand, if the cache access mode is sequential, i.e. the
"sequential_access" parameter is set to "True", tags and data are accessed
sequentially. Therefore, the hit_latency is the sum of tag_latency and
data_latency.
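As arithmetic, with hypothetical names (a sketch of the rule, not the actual
gem5 code):

    #include <algorithm>

    unsigned hitLatency(bool sequential_access, unsigned tag_latency,
                        unsigned data_latency)
    {
        // Sequential: the data access starts only after the tag check, so
        // the latencies add up. Parallel: tags and data overlap, so the
        // hit takes as long as the slower of the two.
        return sequential_access ? tag_latency + data_latency
                                 : std::max(tag_latency, data_latency);
    }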
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|