The secure bit should be set when we fill a block with data from a
secure location, as indicated by the packet that triggers the fill.
This patch fixes a bug in which the cache wouldn't populate the secure
bit when filling the temp block.
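As a rough illustration (simplified stand-in types, not the actual gem5
classes), the intent is that any fill, including one into the temp block,
copies the secure indication from the triggering packet:

    #include <cassert>
    #include <cstdint>

    enum BlkStatusBits : uint8_t { BlkValid = 0x1, BlkSecure = 0x2 };

    struct Blk {
        uint8_t status = 0;
        bool isSecure() const { return status & BlkSecure; }
    };

    struct Pkt {
        bool secure;
        bool isSecure() const { return secure; }
    };

    // Fill a block (regular or temporary) with data brought in by pkt.
    void doFill(Blk &blk, const Pkt &pkt)
    {
        blk.status |= BlkValid;
        if (pkt.isSecure())
            blk.status |= BlkSecure; // previously missed for the temp block
    }

    int main()
    {
        Blk tempBlock;
        doFill(tempBlock, Pkt{true});
        assert(tempBlock.isSecure());
    }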
Change-Id: I95c706146449804ff42b205b25dd79750f3e882a
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/8284
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
|
|
Change-Id: Ia9b825bcbb8d326705f74c15a93a88703153ba5a
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/8283
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
|
|
A writeclean packet writes a dirty block to the memory below and
therefore sets the dirty flag for the block when the memory below is a
cache. If the block was also marked as writable it can satisfy future
write requests without further requests/snoops. This can lead to
multiple copies of the same block marked as dirty which is not
allowed. This changeset clears the writable flag from the cleaned
block to prevent the cache from satisfying future write requests
without sending a downstream request.
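A minimal sketch of the fix (hypothetical names, not the gem5 code): when
the cache below takes in a WriteClean it keeps the dirty data but drops
permission to write:

    struct Blk { bool valid = false, dirty = false, writable = false; };

    // Downstream cache receiving a WriteClean (sketch).
    void handleWriteClean(Blk &blk)
    {
        blk.valid = true;
        blk.dirty = true;       // this copy now holds the dirty data
        blk.writable = false;   // must not satisfy writes without a new request
    }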
Change-Id: I14d3c62fd33f81b1a8ba62374c8565ccab00a6fe
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/7821
Maintainer: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Exclusive caches use the tempBlock to fill for responses from a
downstream cache. The reason for this is that they only pass the block
to the cache above without keeping a copy. When all requests are
serviced the block is immediately invalidated unless it is dirty, in
which case it has to be written back to the memory below.
To avoid unnecessary writebacks, this changeset forces mostly
exclusive caches to issue requests that can only fetch clean data
when possible.
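Illustrative sketch of the decision (simplified names, not the actual gem5
command encoding):

    enum class FetchCmd { ReadShared, ReadClean, ReadExclusive };
    enum class Clusivity { MostlyInclusive, MostlyExclusive };

    // Pick the downstream fetch command for a demand read miss (sketch).
    FetchCmd fetchCmdOnMiss(Clusivity clusivity, bool needsWritable)
    {
        if (needsWritable)
            return FetchCmd::ReadExclusive;   // writes still need ownership
        // A mostly exclusive cache forwards the block and drops it, so
        // fetching clean data avoids a pointless writeback of the temp block.
        return clusivity == Clusivity::MostlyExclusive ? FetchCmd::ReadClean
                                                       : FetchCmd::ReadShared;
    }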
Reported-by: Quereshi Muhammad Avais <avais@kaist.ac.kr>
Change-Id: I01b377563f5aa3e12d22f425a04db7c023071849
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5061
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
A clean packet request serving a cache maintenance operation (CMO)
visits all memories down to the specified xbar. The visited caches
invalidate their copy (if the CMO is invalidating) and if a dirty copy
is found a write packet writes the dirty data to the memory level
below the specified xbar. A response is sent back when all the caches
are clean and/or invalidated and the specified xbar has seen the write
packet.
This patch adds the following functionality in the xbar:
1) Accounts for the cache clean requests that go through the xbar
2) Generates the cache clean response when both the cache clean
request and the corresponding writeclean packet have crossed the
destination xbar.
Previously transactions in the xbar were identified using the pointer
of the original request. Cache clean transactions comprise two
different packets, the clean request and the writeclean, and therefore
have different request pointers. This patch adds support for custom
transaction IDs that by default take the value of the request pointer
but can be overridden by the constructor. This allows the clean request
and the writeclean to share the same id, which the coherent xbar uses to
coordinate them and send the response in a timely manner.
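The idea can be sketched as follows (simplified, hypothetical types; the
real xbar bookkeeping is more involved):

    #include <cassert>
    #include <cstdint>
    #include <unordered_map>

    struct CleanTxn { bool cleanReqSeen = false; bool writeCleanSeen = false; };

    struct DestXbar {
        std::unordered_map<uint64_t, CleanTxn> pending;

        // Returns true when the clean response can be generated, i.e. both
        // the clean request and the matching WriteClean have crossed here.
        bool observe(uint64_t txn_id, bool is_write_clean)
        {
            CleanTxn &txn = pending[txn_id];
            (is_write_clean ? txn.writeCleanSeen : txn.cleanReqSeen) = true;
            if (txn.cleanReqSeen && txn.writeCleanSeen) {
                pending.erase(txn_id);
                return true;
            }
            return false;
        }
    };

    int main()
    {
        DestXbar xbar;
        const uint64_t id = 0x42;            // same id on both packets
        assert(!xbar.observe(id, false));    // clean request arrives first
        assert(xbar.observe(id, true));      // writeclean completes the pair
    }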
Change-Id: I80db76386a1caded38dc66e6e18f930c3bb800ff
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5051
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
This change adds support for cache maintenance operations (CMOs) in the
cache. The supported memory operations clean and/or invalidate a cache
block, specified by its VA, down to the specified xbar (PoU, PoC).
A cache maintenance packet visits all memories down to the specified
xbar. Caches need to invalidate their copy if it is an invalidating
CMO. If it is (additionally) a cleaning CMO and a dirty copy exists,
the cache cleans it with a WriteClean request.
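In rough, illustrative C++ (hypothetical, simplified types), a cache's
reaction to a CMO hitting a block could look like:

    struct Blk { bool valid = true, dirty = false; };

    enum class CMOAction { None, WriteClean, Invalidate, WriteCleanAndInvalidate };

    // Sketch: what a cache does when a CMO packet hits one of its blocks.
    CMOAction handleCMO(Blk &blk, bool cleaning, bool invalidating)
    {
        const bool needsClean = cleaning && blk.dirty;
        if (needsClean)
            blk.dirty = false;          // dirty data leaves via a WriteClean
        if (invalidating)
            blk.valid = false;
        if (needsClean && invalidating)
            return CMOAction::WriteCleanAndInvalidate;
        if (needsClean)
            return CMOAction::WriteClean;
        if (invalidating)
            return CMOAction::Invalidate;
        return CMOAction::None;
    }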
Change-Id: Ibf31daa7213925898f3408738b11b1dd76c90b79
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5049
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
When a response indicates that there are no other sharers of the
block, the cache can promote its copy of the block to writable and
potentially service deferred targets even if the request didn't ask for
a writable copy.
Previously, a response would guarantee the presence of the block in
the cache. A response could either be filling, upgrading or a response
to an invalidation due to a pending whole line write. Responses to
cache maintenance invalidations break this assumption. This change
adds an extra check to make sure that the block was already valid or
that the response is filling before promoting the block.
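Roughly (hypothetical, simplified fields), the added check is:

    struct Blk { bool valid = false; bool writable = false; };

    // Sketch: promote to writable only if the block is (or is about to be)
    // present; a response to a CMO invalidation is neither.
    void maybePromote(Blk &blk, bool resp_has_no_sharers, bool resp_is_filling)
    {
        const bool blk_present = blk.valid || resp_is_filling;
        if (resp_has_no_sharers && blk_present) {
            blk.valid = true;
            blk.writable = true;   // we hold the only copy
        }
    }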
Change-Id: I6839f683a05d4dad4205c23f365a925b7b05e366
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Anouk Van Laer <anouk.vanlaer@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5048
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Previously, WriteClean packets would always write to the first memory
below unless the memory was unable to allocate in which case it would
be forwarded further below.
This change adds support for specifying the destination of a
WriteClean packet. The cache annotates the request with the specified
destination and marks the packet as write-through upon its
creation. The coherent xbar checks packets for their destination and
resets the write-through flag when necessary e.g., the coherent xbar
that is set as the PoC will reset the write-through flag for packets
to the PoC.
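A simplified sketch of the mechanism (the flag and field names here are
illustrative, not the actual gem5 ones):

    #include <cstdint>

    enum class Dest : uint8_t { None, PoU, PoC };

    struct Pkt {
        Dest dest = Dest::None;
        bool writeThrough = false;
    };

    // The cache creating the WriteClean annotates it (sketch).
    void makeWriteClean(Pkt &pkt, Dest dest)
    {
        pkt.dest = dest;
        pkt.writeThrough = true;      // keep pushing the write downstream
    }

    // A coherent xbar clears write-through once the packet reaches its
    // destination, e.g. the xbar configured as the PoC.
    void xbarForward(Pkt &pkt, bool xbar_is_poc)
    {
        if (pkt.dest == Dest::PoC && xbar_is_poc)
            pkt.writeThrough = false; // may now be absorbed by a memory below
    }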
Change-Id: I84b653f5cb6e46e97e09508649a3725d72d94606
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Anouk Van Laer <anouk.vanlaer@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5046
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
This change adds support for creating and handling WriteClean
packets. The WriteClean operation is almost identical to a
WritebackDirty with the exception that the cache generating a
WriteClean retains a copy of the block.
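The difference in the issuing cache can be summed up as (sketch,
hypothetical fields):

    struct Blk { bool valid = true, dirty = true; };

    // Sketch: both operations push the dirty data downstream; only the
    // WritebackDirty gives up the local copy.
    void issueWritebackDirty(Blk &blk) { blk.dirty = false; blk.valid = false; }
    void issueWriteClean(Blk &blk)     { blk.dirty = false; /* copy kept */ }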
Change-Id: I63c8de62919fad0f9547d412f8266aa4292ebecd
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Anouk Van Laer <anouk.vanlaer@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5045
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
This changeset adds support for checking whether the cache is
currently busy and whether a timing request would be rejected.
Change-Id: I5e37b011b2387b1fa1c9e687b9be545f06ffb5f5
Reviewed-on: https://gem5-review.googlesource.com/5042
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
These files aren't a collection of miscellaneous stuff; they're the
definition of the Logger interface and a few utility macros for
calling into that interface (panic, warn, etc.).
Change-Id: I84267ac3f45896a83c0ef027f8f19c5e9a5667d1
Reviewed-on: https://gem5-review.googlesource.com/6226
Reviewed-by: Brandon Potter <Brandon.Potter@amd.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
The Request and Packet objects for squashed HWPrefetches were not deleted.
Change-Id: I9b66bb01b8ed6a5ddfaaa8739a68165dc4a7006c
Signed-off-by: Pau Cabre <pau.cabre@metempsy.com>
Reviewed-on: https://gem5-review.googlesource.com/4340
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
NOTE: With this change there is a possibility for `DRAMCtrl::Rank`
event names to not properly match the rank they were generated by. This
could occur if the public rank member is modified after the Rank's
construction. A patch would mean refactoring Rank and `DRAMCtrl` to
privatize many of the members of Rank behind getters.
Change-Id: I7b8bd15086f4ffdfd3f40be4aeddac5e786fd78e
Signed-off-by: Sean Wilson <spwilson2@wisc.edu>
Reviewed-on: https://gem5-review.googlesource.com/3745
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Change-Id: I0ed4e528cb750a323facdc811dde7f0ed1ff228e
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Change-Id: I79c2662fc81630ab321db8a75be6cd15fa07d372
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Policies like the LRU need to be notified when a block is invalidated;
the helper function does this along with invalidating the block.
Change-Id: I3ed59cf07938caa7f394ee6054b0af9e00b267ea
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Previously the writeback visitor neither considered nor set the secure
flag for the blocks that are written back to memory. This patch fixes
this.
Change-Id: Ie1a425fa9211407a70a4343f2c6b3d073371378f
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Rather than having the first line on the log line and every other line
on its own, add a newline so that all of them have a common format.
This makes parsing a lot easier.
Reviewed at http://reviews.gem5.org/r/3808/
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Previously when an InvalidateReq snooped a cache with a dirty block or
a pending modified MSHR, it would invalidate the block or set the
postInv flag. The cache would not send an InvalidateResp, though,
causing memory order violations. This patch changes this behavior,
making the cache with the dirty block or pending modified MSHR the
ordering point.
Change-Id: Ib4c31012f4f6693ffb137cd77258b160fbc239ca
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Previously an MSHR with one or more invalidating targets would first
service all targets in the MSHR TargetList and then invalidate the
block. As a result any serviced snooping targets would look up the
block in the cache and incorrectly find it. This patch forces the
invalidation to happen when the first invalidating target is
encountered.
Change-Id: I9df15de24e1d351cd96f5a2c424d9a03d81c2cce
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
This patch changes an assertion that previously assumed that a
non-invalidating snoop request should never be serviced by an
InvalidateReq MSHR. The MSHR serves as the ordering point for the
snooping packet. When the InvalidateResp reaches the cache the
snooping packet snoops the caches above to find the requested
block. One or more of the caches above will have the block, since they
have earlier seen a WriteLineReq.
Change-Id: I0c147c8b5d5019e18bd34adf9af0fccfe431ae07
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Previously, a WriteLineReq that missed in a cache would send out an
InvalidateReq if the block lookup failed or an UpgradeReq if the
block lookup succeeded but the block had sharers. This change ensures
that a WriteLineReq always sends an InvalidateReq to invalidate all
copies of the block and satisfy the WriteLineReq.
Change-Id: I207ff5b267663abf02bc0b08aeadde69ad81be61
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
This patch fixes an issue where an MSHR would incorrectly be perceived
to provide data to targets arriving after an InvalidateReq. To address
this the InvalidateReq is now treated as isForward, much like an
UpgradeReq that did not hit in the cache.
Change-Id: Ia878444d949539b5c33fd19f3e12b0b8a872275e
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
Previously DPRINTFs printing information about a packet would use ad hoc
formats. This patch changes all DPRINTFs to use the print function
defined by the packet class, making the packet printing format more
uniform and easier to change.
Change-Id: Idd436a9758d4bf70c86a574d524648b2a2580970
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
A response to a ReadReq can either be a ReadResp or a
ReadRespWithInvalidate. As we add targets to an MSHR for a ReadReq we
assume that the response will be a ReadResp. When the response is
invalidating (ReadRespWithInvalidate), servicing more than one target
can potentially violate the memory ordering. This change fixes the way
we handle a ReadRespWithInvalidate. When a cache receives a
ReadRespWithInvalidate we service only the first FromCPU target and
all the FromSnoop targets from the MSHR target list. The rest of the
FromCPU targets are deferred and serviced by a new request.
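The split of the target list can be sketched as (hypothetical types, not
the gem5 MSHR code):

    #include <vector>

    enum class Source { FromCPU, FromSnoop };
    struct Target { Source source; };

    // On a ReadRespWithInvalidate: service the first FromCPU target and all
    // FromSnoop targets now; defer the remaining FromCPU targets so a new
    // request is sent for them.
    void splitOnInvalidatingResp(const std::vector<Target> &targets,
                                 std::vector<Target> &service_now,
                                 std::vector<Target> &deferred)
    {
        bool cpu_serviced = false;
        for (const Target &t : targets) {
            if (t.source == Source::FromSnoop) {
                service_now.push_back(t);
            } else if (!cpu_serviced) {
                service_now.push_back(t);
                cpu_serviced = true;
            } else {
                deferred.push_back(t);
            }
        }
    }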
Change-Id: I75c30c268851987ee5f8644acb46f440b4eeeec2
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
Previously the information about whether a response was allocating or
not was a property of the MSHR. This change makes this flag a property of
the TargetList. Different TargetLists, e.g. the targets and the
deferred targets lists, might have different values. Additionally, the
information about whether each of the targets expects an allocating
response is stored inside the TargetList container. This allows for
repopulating the flag in case some of the targets are removed.
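A sketch of the resulting structure (simplified; the real TargetList
carries more per-target state):

    #include <list>

    struct Target { bool allocOnFill; };

    // Per-list flag, kept consistent with the per-target information so it
    // can be repopulated if targets are removed.
    struct TargetList : public std::list<Target> {
        bool needsAllocOnFill = false;

        void add(bool alloc_on_fill)
        {
            push_back({alloc_on_fill});
            needsAllocOnFill |= alloc_on_fill;
        }

        void recomputeAllocOnFill()
        {
            needsAllocOnFill = false;
            for (const Target &t : *this)
                needsAllocOnFill |= t.allocOnFill;
        }
    };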
Change-Id: If3ec2516992f42a6d9da907009ffe3ab8d0d2021
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
|
|
This patch takes yet another step in maintaining the clusivity, in
that it allows a mostly-inclusive cache to hold on to blocks even when
responding to a ReadExReq or UpgradeReq. Previously the cache simply
invalidated these blocks, but there is no strict need to do so.
The most important part of this patch is that we simply mark the block
clean when satisfying the upstream request where the cache is allowed
to keep the block. The only tricky part of the patch is in the memory
management of deferred snoops, where we need to distinguish the cases
where only the packet was copied (we expected to respond), and the
cases where we created an entirely new packet and request (we kept it
only to replay later).
The code in satisfyRequest is definitely ready for some refactoring
after this.
Change-Id: I201ddc7b2582eaa46fb8cff0c7ad09e02d64b0fc
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch changes how the mostly exclusive policy is enforced to
ensure that we drop blocks when we should. As part of this change, the
actual invalidation due to the clusivity enforcement is moved outside
the hit handling, to a separate method maintainClusivity. For the
timing mode that means we can deal with all MSHR targets before taking
any action and possibly dropping the block. The method
satisfyCpuSideRequest is also renamed satisfyRequest as part of this
change (since we only ever see requests from the cpu-side port).
Change-Id: If6f3d1e0c3e7be9a67b72a55e4fc2ec4a90fd3d2
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch adds a FromCache attribute to the packet, and updates a
number of the existing request commands to reflect that the request
originates from a cache. The attribute simplifies checking if a
request came from a cache or not, and this is used by both the cache
and snoop filter in follow-on patches.
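Illustratively (the actual MemCmd attribute encoding in gem5 differs), the
attribute turns many per-command checks into a single predicate:

    #include <cstdint>

    // Hypothetical command attribute bits.
    enum CmdAttrib : uint32_t { IsRead = 1u << 0, IsWrite = 1u << 1,
                                FromCache = 1u << 2 };

    struct Cmd {
        uint32_t attribs;
        bool fromCache() const { return attribs & FromCache; }
    };

    // Requests issued by caches on a miss (e.g. shared/exclusive fetches)
    // would carry FromCache, so a snoop filter or cache can simply ask
    // cmd.fromCache() instead of enumerating command types.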
Change-Id: Ib0a7a080bbe4d6036ddd84b46fd45bc7eb41cd8f
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Steve Reinhardt <stever@gmail.com>
|
|
Change-Id: I70dd11c23b45dfc606ef08233d2e50fcc0817505
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
This patch fixes a memory leak where deferred snoop packets never got
deallocated. On the call to MSHR::handleSnoop these snoops were
treated as if a response would be sent, as the MSHR was
pendingModified. Consequently, a copy of the packet was created and
added to the MSHR targets. However, a preceding target to the same
MSHR, originally from a CPU, was serviced before the snoop, and caused
the block to be invalidated. This happens for ReadExReq and
UpgradeReq.
Note that the original snoop will receive a response, just not from
the cache in question, but instead from the cache upstream that issued
the ReadExReq or UpgradeReq.
Change-Id: I4ac012fbc8a46cf693ca390fe9476105d444e6f4
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Change-Id: I57b56771086e1e2f512977fb7248d93c171ab925
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I5042410be54935650b7d05c84d8d9efbfcc06e70
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I6d1feb164a958dde0da87a1cd2698096112c4a82
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
This patch removes the write-queue entry tracking previously used for
uncacheable writes. The write-queue entry is now deallocated as soon
as the packet is sent. As a result we also forego the stats for
uncacheable writes. Additionally, there is no longer a need to attach
the write-queue entry to the packet.
|
|
This patch makes the control flow more uniform in atomic and timing,
ultimately making the code easier to understand.
|
|
Added a stat to the cache to account for HardPF'ed blocks that are evicted
before being referenced (over-prefetching).
|
|
This patch breaks out the cache write buffer into a separate class,
without affecting any stats. The goal of the patch is to avoid
encumbering the much-simpler write queue with the complex MSHR
handling. In a follow-on patch this simplification allows us to
implement write combining.
The WriteQueue gets its own class, but shares a common ancestor, the
generic Queue, with the MSHRQueue.
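Structurally (a sketch, not the actual class hierarchy details), the result
is along the lines of:

    #include <list>

    // Generic ancestor holding what both queues need: a pool of free entries
    // and the list of entries currently in service.
    template <typename Entry>
    struct GenericQueue {
        std::list<Entry> freeList;
        std::list<Entry> allocatedList;
    };

    struct MSHR            { /* targets, deferred targets, forwarding state */ };
    struct WriteQueueEntry { /* just the block to write downstream */ };

    struct MSHRQueue  : GenericQueue<MSHR> {};
    struct WriteQueue : GenericQueue<WriteQueueEntry> {};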
|
|
This patch fixes an issue where an InvalidationReq only traversed one
level of the cache hierarchy, and was subsequently turned into a
ReadExReq, due to it needing a writable copy and the command not being
checked for explicitly.
|
|
This patch introduces the ability of making the coherent crossbar the
point of coherency. If so, the crossbar does not forward packets where
a cache with ownership has already committed to responding, and also
does not forward any coherency-related packets that are not intended
for a downstream memory controller. Thus, invalidations and upgrades
are turned around in the crossbar, and the memory controller only sees
normal reads and writes.
In addition this patch moves the express snoop promotion of a packet
to the crossbar, thus allowing the downstream cache to check the
express snoop flag (as it should) for bypassing any blocking, rather
than relying on whether a cache is responding or not.
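A rough sketch of the forwarding decision at a point-of-coherency crossbar
(hypothetical, heavily simplified):

    // Decide whether the crossbar forwards a packet downstream (sketch).
    // Invalidation/upgrade traffic and packets already claimed by an owning
    // cache are turned around at the PoC rather than reaching the memory
    // controller.
    bool forwardDownstream(bool xbar_is_poc, bool cache_responding,
                           bool needs_memory /* normal read/write */)
    {
        if (!xbar_is_poc)
            return true;         // not the PoC: keep forwarding as before
        if (cache_responding)
            return false;        // an owner has committed to respond
        return needs_memory;     // only plain reads/writes go below
    }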
|
|
Adopt the same flow as in timing mode, where the caches on the path to
memory get to keep the line (if present), and we use the
responderHadWritable flag to determine if we need to forward the
(invalidating) packet or not.
|
|
This patch unifies the snoop handling in case of hitting writebacks
with how we handle snoops hitting in the tags. As a result, we end up
using the same optimisation as the normal snoops, where we inform the
downstream cache if we encounter a line in Modified (writable and
dirty) state, which enables us to avoid sending out express snoops to
invalidate any Shared copies of the line. A few regressions
consequently change, as some transactions are sunk higher up in the
cache hierarchy.
|
|
Some of the DPRINTFs added to the classic cache in cset 45df88079f04,
while useful to those unfamiliar with the cache code, end up being
noise when you're familiar with the code but are trying to debug tricky
protocol issues. (Particularly getting two messages from each cache
as it receives a snoop request then declares that there was no match.)
This patch introduces a CacheVerbose debug flag, and moves a subset of
the added DPRINTFs into that category, so that Cache by itself returns
to being a more succinct summary of cache activity.
Also added a CacheAll compound flag to turn on all the cache-related
debug flags (other than CacheTags, which you *really* have to want badly
to turn on, IMO).
|
|
This patch looks at the request and response command to determine if
either actually has any data payload, and if not, we do not allocate
any space for packet data.
The only tricky case is where the command type is changed as part of
the MSHR functionality. In these cases where the original packet had
no data, but the new packet does, we need to explicitly call
allocate().
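In outline (hypothetical packet type, not the gem5 Packet class), the
allocation policy becomes:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Pkt {
        bool cmdHasData;                  // derived from the command type
        std::size_t size = 64;
        std::vector<uint8_t> data;        // left empty when there is no payload

        void allocate()
        {
            if (cmdHasData && data.empty())
                data.resize(size);
        }
    };

    // When the MSHR machinery changes a data-less command into one with a
    // payload, the new packet must call allocate() explicitly.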
|
|
This patch ensures we do not respond with a Modified (dirty and
writable) line if the request is uncacheable, and that the cache
responding retains the line without modifying the state (even if
responding).
|
|
This patch changes the name of a bunch of packet flags and MSHR member
functions and variables to make the coherency protocol easier to
understand. In addition the patch adds and updates lots of
descriptions, explicitly spelling out assumptions.
The following name changes are made:
* the packet memInhibit flag is renamed to cacheResponding
* the packet sharedAsserted flag is renamed to hasSharers
* the packet NeedsExclusive attribute is renamed to NeedsWritable
* the packet isSupplyExclusive is renamed to responderHadWritable
* the MSHR pendingDirty is renamed to pendingModified
The cache states, Modified, Owned, Exclusive, Shared are also called
out in the cache and MSHR code to make it easier to understand.
|
|
This patch removes the unused squash function from the MSHR queue, and
the associated (and also unused) threadNum member from the MSHR.
|
|
The checks made before sending out a HardPFReq were unnecessarily
complex, and checked for cases that never occur. This patch
tidies it up.
|
|
This patch changes how the cache tracks which snoops are forwarded,
and which ones are created locally. Previously the identification was
based on an empty sender state of a specific class, but this method
fails to distinguish which cache actually attached the sender
state. Instead we use the same mechanism as the crossbar, and keep
track of the requests that have outstanding snoops.
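The bookkeeping can be pictured as (a sketch with hypothetical names; the
crossbar uses an equivalent structure):

    #include <cstdint>
    #include <unordered_set>

    // Track the requests for which this cache created a snoop, so a snoop
    // seen later with the same request can be recognised as one we forwarded.
    struct SnoopTracker {
        std::unordered_set<uintptr_t> outstandingSnoop;

        void createdSnoopFor(uintptr_t req) { outstandingSnoop.insert(req); }

        bool wasCreatedHere(uintptr_t req)
        {
            return outstandingSnoop.erase(req) != 0;
        }
    };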
|
|
This patch addresses a bug in how the cache attached the MSHR as a
sender state. Rather than overwriting any existing sender state it now
pushes a new one. The handling of upward snoops is also clarified.
|