Since the tag classes are subclasses of SimObject, they inherit an
init function which does generic initialization at simulation startup
and which doesn't take any parameters. A new function was added which
does take a parameter, and which is just for doing tag-specific
initialization as triggered by the base cache. These two names clashed,
and clang complained that the tags' local name was hiding the SimObject
name (which it was).
Change-Id: I399775aceaf8f1a8e2646d434facef22e6d3e7d0
Reviewed-on: https://gem5-review.googlesource.com/c/13875
Reviewed-by: Gabe Black <gabeblack@google.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Gabe Black <gabeblack@google.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
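As an illustration of the shape of the fix, a minimal sketch with assumed names (the renamed entry point and its signature are illustrative, not necessarily what the tree uses):

    class BaseTags : public SimObject
    {
      public:
        // Inherited from SimObject: generic initialization at simulation
        // startup; takes no parameters.
        void init() override;

        // Tags-specific initialization triggered by the base cache. Naming
        // this init(BaseCache*) would hide SimObject::init() and trigger
        // clang's -Woverloaded-virtual, hence a distinct name (assumed).
        virtual void tagsInit(BaseCache *cache);
    };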
|
|
Classes were using memcpy instead of the Packet functions
created for writing to/from the packet. Using those functions
allows these accesses to be better checked and tracked.
This also fixes a bug in MemCheckerMonitor, which was using
the incorrect type for the packet pointer.
Change-Id: I5bbc8a24e59464e8219bb6d54af8209e6d4ee1af
Signed-off-by: Daniel R. Carvalho <odanrc@yahoo.com.br>
Reviewed-on: https://gem5-review.googlesource.com/c/13695
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
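For illustration, a sketch of the intended pattern, using the Packet block helpers the message refers to (blk/blkSize are illustrative local names):

    // Before: raw memcpy, invisible to any packet-level checking:
    //   std::memcpy(blk->data + offset, pkt->getConstPtr<uint8_t>(),
    //               pkt->getSize());

    // After: let the Packet compute the offset and perform the copy, so
    // the access can be checked and tracked in one place.
    if (pkt->isWrite())
        pkt->writeDataToBlock(blk->data, blkSize);  // packet -> block
    else if (pkt->isRead())
        pkt->setDataFromBlock(blk->data, blkSize);  // block -> packet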
|
|
Block was being invalidated twice when not a tempBlock.
Make explicit that the else case is only to be applied
when handling the tempBlock, as otherwise the Tags
should be taking care of the invalidation.
Change-Id: Ie7603fdbe156c54e94bbdc83541b55e66f8d250f
Signed-off-by: Daniel R. Carvalho <odanrc@yahoo.com.br>
Reviewed-on: https://gem5-review.googlesource.com/c/13895
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Note that PRFM currently supports only PLD, but PST (prefetch for
store) is also important for latency hiding. We also fix a bug in the
disassembler so that prfop is displayed correctly.
Change-Id: I9144e7233900aa2d555e1c1a6a2c2e41d837aa13
Signed-off-by: Yuetsu Kodama <yuetsu.kodama@riken.jp>
Reviewed-on: https://gem5-review.googlesource.com/c/13675
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Move evictBlock(CacheBlk*, PacketList&) to the base cache,
as both sub-classes' implementations are identical.
Change-Id: I80fbd16813bfcc4938fb01ed76abe29b3f8b3018
Signed-off-by: Daniel R. Carvalho <odanrc@yahoo.com.br>
Reviewed-on: https://gem5-review.googlesource.com/c/13656
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Enable the cache to detect contiguous writes and hold on to the MSHR
long enough to allow the entire line to be written. If the whole line
is written, the MSHR will be sent out as an invalidation request, as
it is part of a whole-line write, i.e. no-fetch-on-write.
The cache is also able to switch to a write-no-allocate policy on the
actual completion of the writes, instead using the tempBlock and
turning the write operation into a writeback.
These policies are all well-known, and described in works such as
Jouppi, Cache Write Policies and Performance, vol 21, no 2, ACM, 1993.
Change-Id: I19792f2970b3c6798c9b2b493acdd156897284ae
Reviewed-on: https://gem5-review.googlesource.com/c/12907
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
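A rough sketch of the contiguous-write detection, under assumed member names (the actual write-allocate machinery is more involved):

    // Track a run of contiguous writes; once the run covers the whole
    // line, the MSHR can be sent out as an invalidation rather than a
    // fetch (no-fetch-on-write).
    void
    coalesceWrite(const PacketPtr pkt)
    {
        if (pkt->getAddr() == nextAddr)
            byteCount += pkt->getSize();  // run continues
        else
            byteCount = pkt->getSize();   // run broken, start over
        nextAddr = pkt->getAddr() + pkt->getSize();
        wholeLineWritten = (byteCount >= blkSize);
    }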
|
|
This patch changes how we deal with whole-line writes and their
responses. With these changes, we use the MSHR tracking to determine
if a whole line is written, and on a fill we simply handle the
invalidation response, with the actual writes taking place as part of
satisfying the CPU-side hit.
Change-Id: I9a18e41a95db3c20b97f8bca7d95ff33d35a578b
Reviewed-on: https://gem5-review.googlesource.com/c/12905
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Encapsulate and virtualize block print, so that relevant
information can be easily printed anywhere.
Change-Id: I91109c29c126755183a0fd2b4446f5335e64076b
Signed-off-by: Daniel R. Carvalho <odanrc@yahoo.com.br>
Reviewed-on: https://gem5-review.googlesource.com/c/13415
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Having the blocks initialized in the constructor makes it harder
to apply inheritance in the tags classes. This patch decouples
the block initialization functionality from the constructor by
using an init() function. It also sets the parent cache.
Change-Id: I0da7fdaae492b1177c7cc3bda8639f79921fbbeb
Reviewed-on: https://gem5-review.googlesource.com/c/11509
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
Decouple Tags from Packets, only extracting the necessary
functionality for block insertion. As a side effect, create
a new function to update common insertion statistics.
Change-Id: I5c58f7c17de3255beee531f72a3fd25a30d74c90
Reviewed-on: https://gem5-review.googlesource.com/c/11098
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
This change is because I want to make CacheBlk::data private, so that
I can track all the places which write to it. But to keep that commit
smaller (it is pretty big, because of all the places which might
change it), I have split this into a commit of its own.
Change-Id: I15a2fc1752085ff3681f5c74ec90be3828a559ea
Reviewed-on: https://gem5-review.googlesource.com/11829
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Packet::checkFunctional also wrote data to/from the packet depending
on whether it was a read/write, respectively, although the 'check' in
the name suggests otherwise. This renames it to doFunctional, which is
more suggestive. It also renames any function called checkFunctional
that calls Packet::checkFunctional. These are:
- Bridge::BridgeMasterPort::checkFunctional
- calls Packet::checkFunctional
- MSHR::checkFunctional
- calls Packet::checkFunctional
- MSHR::TargetList::checkFunctional
- calls Packet::checkFunctional
- Queue<>::checkFunctional
(of src/mem/cache/queue.hh, not src/cpu/minor/buffers.h)
- Instantiated with Queue<WriteQueueEntry> and Queue<MSHR>
- WriteQueueEntry
- calls Packet::checkFunctional
- WriteQueueEntry::TargetList
- calls Packet::checkFunctional
- MemDelay::checkFunctional
- calls QueuedSlavePort/QueuedMasterPort::checkFunctional
- Packet::checkFunctional
- PacketQueue::checkFunctional
- calls Packet::checkFunctional
- QueuedSlavePort::checkFunctional
- calls PacketQueue::doFunctional
- QueuedMasterPort::checkFunctional
- calls PacketQueue::doFunctional
- SerialLink::SerialLinkMasterPort::checkFunctional
- calls Packet::doFunctional
Change-Id: Ieca2579c020c329040da053ba8e25820801b62c5
Reviewed-on: https://gem5-review.googlesource.com/11810
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
The writebacks happen before anything below, not after.
Change-Id: I7eaefbbf33aa17c496255dedd964a56118a28741
Reviewed-on: https://gem5-review.googlesource.com/11749
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
While a cache clean operation is pending, all requests to the
corresponding block get deferred. When the response to a cache clean
operation is received, if the block is present and the response is not
invalidating, we can service all deferred targets that didn't require
a writable copy. This change implements that functionality.
Change-Id: Ief47e74d07749a6a9736ab450eb46eefa53464a2
Reviewed-on: https://gem5-review.googlesource.com/11018
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
AtomicOpFunctor can be used to implement atomic memory operations.
AtomicOpFunctor is captured inside a memory request and executed directly
in the memory hierarchy in a single step.
This patch enables AtomicOpFunctor pointers to be included in a memory
request and executed in a single step in the classic cache system.
This patch also makes the copy constructor of the Request class do a
deep copy of the AtomicOpFunctor object. This prevents a copy of a
Request object from accessing a deleted AtomicOpFunctor object.
Change-Id: I6649532b37f711e55f4552ad26893efeb300dd37
Reviewed-on: https://gem5-review.googlesource.com/8185
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
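As an illustration, a minimal functor in the spirit of this interface (a call operator on raw memory plus a clone() used for the deep copy is assumed):

    #include <cstring>

    // Sketch: an atomic fetch-and-add executed in place, in a single
    // step, by the memory hierarchy.
    struct AtomicOpAdd : public AtomicOpFunctor
    {
        uint32_t addend;
        explicit AtomicOpAdd(uint32_t a) : addend(a) {}

        void
        operator()(uint8_t *mem) override
        {
            uint32_t val;
            std::memcpy(&val, mem, sizeof(val));
            val += addend;
            std::memcpy(mem, &val, sizeof(val));
        }

        // Lets Request's copy constructor deep-copy the functor, so the
        // copy never points at a deleted object.
        AtomicOpFunctor *clone() override { return new AtomicOpAdd(*this); }
    };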
|
|
When a block is being replaced during an allocation, if successful,
the new block will be inserted. Therefore we move the insertion
functionality into allocateBlock(), whose signature has been modified
to allow this.
Change-Id: I60d17a83ff4f3021fdc976378868ccde6c7507bc
Reviewed-on: https://gem5-review.googlesource.com/10812
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
This patch changes the underlying type of RequestPtr from Request*
to shared_ptr<Request>. Having memory requests managed by smart
pointers simplifies the code; it also prevents memory leaks and
dangling pointers.
Change-Id: I7749af38a11ac8eb4d53d8df1252951e0890fde3
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/10996
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
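Illustratively (the constructor arguments are placeholders), the change amounts to:

    // Before: raw pointer with manual lifetime management.
    //   RequestPtr req = new Request(addr, size, flags, masterId);
    //   ...
    //   delete req;  // easy to leak or to leave dangling copies behind

    // After: shared ownership; the Request is freed automatically when
    // the last holder releases it.
    RequestPtr req = std::make_shared<Request>(addr, size, flags, masterId);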
|
|
Every usage of Request* in the code has been replaced with the
RequestPtr alias. This is a preparatory patch for when RequestPtr is
typedefed to a smart pointer to Request rather than a raw pointer to
Request.
Change-Id: I73cbaf2d96ea9313a590cdc731a25662950cd51a
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/10995
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Maintainer: Anthony Gutierrez <anthony.gutierrez@amd.com>
|
|
Change the tag check to an address check, for compatibility with the
sector design. The cache should not rely on the tag alone, as sector
sub-blocks share it, which could lead to wrong accesses.
Change-Id: Id1fa26f417595f475c5b5c07ae1f02f5fa0684ba
Reviewed-on: https://gem5-review.googlesource.com/10723
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Sector caches must know whether there was a sector hit in order
to decide whether a victim's sector must be fully evicted to make
room for a new sector. To do so, they need the tag and secure
information.
Change-Id: Ib554169e25fa131d6bf986561f7970b787c56874
Reviewed-on: https://gem5-review.googlesource.com/10722
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
For both sector and compressed caches, multiple blocks may need
to be evicted in order to make room for a new block.
For example, when replacing a sector, all the blocks in this
sector must be evicted. A replacement, however, does not always
need to evict multiple blocks, as in the case of the insertion
of a block whose sector is already present in the cache
(i.e., its corresponding entry in the sector had not been brought
in yet, so it was invalid).
This patch creates the cache framework for that to happen.
Change-Id: I77bedf69637cf899fef4d9432eb6da8529ea398b
Reviewed-on: https://gem5-review.googlesource.com/10142
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
tempBlock has its member variables manually set in order to allow
it to be used in the block address regeneration function. This is
not necessary, as it can simply be given the address, so it does
not need to be aware of set and tag. This will simplify the
implementation of sector and skewed caches.
Change-Id: Iaffb10c323509722cd5589fe1030b818d43336d6
Reviewed-on: https://gem5-review.googlesource.com/9961
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Secure bit was being updated outside insertion.
Change-Id: I83d9b010e8cf64013bbea9bae3ea68b0c414a189
Reviewed-on: https://gem5-review.googlesource.com/10622
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
This change modifies the forEachBlk tags function to accept an
std::function as a parameter. It also adds an anyBlk tags function
that, given a condition, iterates through the blocks and returns
whether the condition is met by any of them.
Finally, it uses forEachBlk to implement the print, computeStats and
cleanupRefs functions so that they also work for the FALRU class.
Change-Id: I2f75f4baa1fdd5a1d343a63ecace3eb9458fbf03
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/10621
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
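A sketch of the resulting interface and a caller, with the signatures assumed from the description:

    // Visit every block; FALRU can implement this too, so print,
    // computeStats and cleanupRefs work uniformly across tags classes.
    virtual void forEachBlk(std::function<void(CacheBlk &)> visitor) = 0;

    // Return whether any block satisfies the given condition.
    virtual bool anyBlk(std::function<bool(CacheBlk &)> visitor) = 0;

    // Example caller: does the cache currently hold any dirty block?
    bool hasDirty = tags->anyBlk([](CacheBlk &blk) {
        return blk.isValid() && blk.isDirty();
    });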
|
|
Cache bypass is necessary for CPU models like the KvmCPU. Previously
the bypass would happen in the cache classes. With this change the
bypassing happens directly at the ports.
Change-Id: I34de9fc63383aee8590643e169501ea6060d2d62
Reviewed-on: https://gem5-review.googlesource.com/10432
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
This patch changes what goes into the BaseCache and what goes into the
Cache, to make it easier to add a NoncoherentCache with as much re-use
as possible. A number of redundant members and definitions are also
removed in the process.
This is a modified version of a changeset put together by Andreas
Hansson <andreas.hansson@arm.com>
Change-Id: Ie9dd73c4ec07732e778e7416b712dad8b4bd5d4b
Reviewed-on: https://gem5-review.googlesource.com/10431
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
Change-Id: I25dbcfcddfe1c422a76eb1af3f726c1360d8d110
Reviewed-on: https://gem5-review.googlesource.com/10426
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Daniel Carvalho <odanrc@yahoo.com.br>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Replacement policies (LRU, Random) are currently treated as array
indexing methods, but have completely different functionalities:
- Array indexers determine the possible locations for block allocation.
This information is used to generate replacement candidates when
conflicts happen.
- Replacement policies determine which of the replacement candidates
should be evicted to make room for new allocations.
For this reason, they were split into different classes. Advantages:
- Easier and more straightforward to implement other replacement
policies (RRIP, LFU, ARC, ...)
- Allow easier future implementation of cache organization schemes
Since we can no longer assume the use of sets, the previous way to
create a true LRU is not viable. A timestamp_bits parameter now
controls how many bits are dedicated to the timestamp, and a true LRU
can be achieved with an arbitrarily large number of bits (although a
few bits suffice in practice).
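A sketch of the resulting division of labor (both interfaces are illustrative):

    // The indexing scheme proposes where a block may live...
    std::vector<CacheBlk *> candidates =
        indexingPolicy->getPossibleLocations(addr);

    // ...and the replacement policy independently picks the victim, so a
    // new policy (RRIP, LFU, ARC, ...) only has to implement this choice.
    CacheBlk *victim = replacementPolicy->getVictim(candidates);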
Change-Id: I23750db121f1474d17831137e6ff618beb2b3eda
Reviewed-on: https://gem5-review.googlesource.com/8501
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
NOTE: With this change there is a possibility for a `DRAMCtrl::Rank`'s
event names to not properly match the rank they were generated by. This
could occur if the public rank member is modified after the Rank's
construction. A patch would mean refactoring Rank and `DRAMCtrl` to
privatize many of the members of Rank behind getters.
Change-Id: I7b8bd15086f4ffdfd3f40be4aeddac5e786fd78e
Signed-off-by: Sean Wilson <spwilson2@wisc.edu>
Reviewed-on: https://gem5-review.googlesource.com/3745
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
If the cache access mode is parallel, i.e. the "sequential_access"
parameter is set to "False", tags and data are accessed in parallel.
Therefore, the hit_latency is the maximum of tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. the "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency and data_latency.
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
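In code, the resulting hit latency is roughly the following (variable names illustrative):

    #include <algorithm>

    // Sequential: the tag lookup completes before the data access starts.
    // Parallel: both proceed at once, so the slower of the two dominates.
    const Cycles lat = sequentialAccess
        ? Cycles(tagLatency + dataLatency)
        : std::max(tagLatency, dataLatency);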
|
|
We want to extend the stats of objects hierarchically and thus it is necessary
to register the statistics of the base-class(es), as well. For now, these are
empty, but generic stats will be added there.
Patch originally provided by Akash Bagdia at ARM Ltd.
|
|
Change-Id: Ia57cc104978861ab342720654e408dbbfcbe4b69
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I5042410be54935650b7d05c84d8d9efbfcc06e70
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I6d1feb164a958dde0da87a1cd2698096112c4a82
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Somehow WriteLineReq was never added to the list of commands
considered demand.
|
|
Prune cache stats that are never actually used.
|
|
Added a stat to the cache to account for HardPF'ed blocks that are
evicted before being referenced (over-prefetching).
|
|
The cache queue reserve is there as an overflow to give us enough
headroom based on when we block the cache, and how many transactions
we may already have accepted before actually blocking. The previous
values were probably chosen to be "big enough", when we actually know
that we check the MSHRs after every single allocation, and for the
write buffers we know that we implicitly may need one entry for every
outstanding MSHR.
|
|
This patch breaks out the cache write buffer into a separate class,
without affecting any stats. The goal of the patch is to avoid
encumbering the much-simpler write queue with the complex MSHR
handling. In a follow on patch this simplification allows us to
implement write combining.
The WriteQueue gets its own class, but shares a common ancestor, the
generic Queue, with the MSHRQueue.
|
|
This patch changes how the cache determines if snoops should be
forwarded from the memory side to the CPU side. Instead of having a
parameter, the cache now looks at the port connected on the CPU side,
and if it is a snooping port, then snoops are forwarded. Less error
prone, and fewer parameters to worry about.
The patch also tidies up the CPU classes to ensure that their I-side
port is not snooping, by removing overrides to the snoop request
handler, such that snoop requests will panic via the default
MasterPort implementation.
|
|
Open up for other subclasses of BaseCache and transition to using the
explicit Cache subclass.
--HG--
rename : src/mem/cache/BaseCache.py => src/mem/cache/Cache.py
|
|
Draining is currently done by traversing the SimObject graph and
calling drain()/drainResume() on the SimObjects. This is not ideal
when non-SimObjects (e.g., ports) need draining since this means that
SimObjects owning those objects need to be aware of this.
This changeset moves the responsibility for finding objects that need
draining from SimObjects and the Python-side of the simulator to the
DrainManager. The DrainManager now maintains a set of all objects that
need draining. To reduce the overhead in classes owning non-SimObjects
that need draining, objects inheriting from Drainable now
automatically register with the DrainManager. If such an object is
destroyed, it is automatically unregistered. This means that drain()
and drainResume() should never be called directly on a Drainable
object.
While implementing the new functionality, the DrainManager has now
been made thread safe. In practice, this means that it takes a lock
whenever it manipulates the set of Drainable objects since SimObjects
in different threads may create Drainable objects
dynamically. Similarly, the drain counter is now an atomic_uint, which
ensures that it is manipulated correctly when objects signal that they
are done draining.
A nice side effect of these changes is that it makes the drain state
changes stricter, which the simulation scripts can exploit to avoid
redundant drains.
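The automatic registration described above boils down to something like this (the manager's API names are assumed):

    // Drainable objects register themselves on construction and
    // unregister on destruction, so owners never track them manually and
    // never call drain()/drainResume() directly.
    Drainable::Drainable() : _drainManager(DrainManager::instance())
    {
        _drainManager.registerDrainable(this);
    }

    Drainable::~Drainable()
    {
        _drainManager.unregisterDrainable(this);
    }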
|
|
The drain state enum is currently a part of the Drainable
interface. The same state machine will be used by the DrainManager to
identify the global state of the simulator. Make the drain state a
global typed enum to better cater for this usage scenario.
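A sketch of such a global typed enum, with state names assumed along these lines:

    enum class DrainState {
        Running,   // simulation proceeding normally
        Draining,  // drain requested; objects flushing in-flight state
        Drained    // everything quiesced; safe to checkpoint or switch
    };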
|
|
This patch takes the final step in removing the is_top_level parameter
from the cache. With the recent changes to read requests and write
invalidations, the parameter is no longer needed, and consequently
removed.
This also means that asymmetric cache hierarchies are now fully
supported (and we are actually using them already with L1 caches, but
no table-walker caches, connected to a shared L2).
|
|
This patch adds two new read requests packets:
ReadCleanReq - For a cache to explicitly request clean data. The
response is thus exclusive or shared, but not owned or modified. The
read-only caches (see previous patch) use this request type to ensure
they do not get dirty data.
ReadSharedReq - We add this to distinguish cache read requests from
those issued by other masters, such as devices and CPUs. Thus, devices
use ReadReq, and caches use ReadCleanReq, ReadExReq, or
ReadSharedReq. For the latter, the response can be any state, shared,
exclusive, owned or even modified.
Both ReadCleanReq and ReadSharedReq re-use the normal ReadResp. The
two transactions are aligned with the emerging cache-coherent TLM
standard and the AMBA nomenclature.
With this change, the normal ReadReq should never be used by a cache,
and is reserved for the actual (non-caching) masters in the system. We
thus have a way of identifying if a request came from a cache or
not. The introduction of ReadSharedReq thus removes the need for the
current isTopLevel hack, and also allows us to stop relying on
checking the packet size to determine if the source is a cache or
not. This is fixed in follow-on patches.
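Expressed as code, the command a requester picks under this scheme might look as follows (the selection logic merely paraphrases the description above):

    MemCmd cmd;
    if (!isCache)             // CPUs and devices: plain read
        cmd = MemCmd::ReadReq;
    else if (isReadOnly)      // e.g. I-cache: must never see dirty data
        cmd = MemCmd::ReadCleanReq;
    else if (needsWritable)   // the cache intends to modify the line
        cmd = MemCmd::ReadExReq;
    else                      // cache read; any state is acceptable
        cmd = MemCmd::ReadSharedReq;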
|
|
This patch adds a parameter to the BaseCache to enable a read-only
cache, for example for the instruction cache, or table-walker cache
(not for x86). A number of checks are put in place in the code to
ensure a read-only cache does not end up with dirty data.
A follow-on patch adds suitable read requests to allow a read-only
cache to explicitly ask for clean data.
|
|
This patch takes a last step in fixing issues related to uncacheable
accesses. We do not separate uncacheable memory from uncacheable
devices, and in cases where it is really memory, there are valid
scenarios where we need to snoop since we do not support cache
maintenance instructions (yet). On snooping an uncacheable access we
thus provide data if possible. In essence this makes uncacheable
accesses IO coherent.
The snoop filter is also queried to steer the snoops, but not updated
since the uncacheable accesses do not allocate a block.
|
|
This patch changes the cache implementation to rely on virtual methods
rather than using the replacement policy as a template argument.
There is no impact on the simulation performance, and overall the
changes make it easier to modify (and subclass) the cache and/or
replacement policy.
|
|
Avoid redundant inclusion of the name in the DPRINTF string.
|
|
This patch fixes a long-standing issue with the port flow
control. Before this patch the retry mechanism was shared between all
different packet classes. As a result, a snoop response could get
stuck behind a request waiting for a retry, even if the send/recv
functions were split. This caused message-dependent deadlocks in
stress-test scenarios.
The patch splits the retry into one per packet (message) class. Thus,
sendTimingReq has a corresponding recvReqRetry, sendTimingResp has
recvRespRetry etc. Most of the changes to the code involve simply
clarifying what type of request a specific object was accepting.
The biggest change in functionality is in the cache downstream packet
queue, facing the memory. This queue was shared by requests and snoop
responses, and it is now split into two queues, each with their own
flow control, but the same physical MasterPort. These changes fix
the previously seen deadlocks.
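A sketch of the per-class flow control from the master's perspective (member names illustrative):

    // A failed request now only blocks the request channel; a snoop
    // response on the same port has its own retry and can no longer get
    // stuck behind it, avoiding the message-dependent deadlock.
    if (!memSidePort.sendTimingReq(pkt)) {
        retryReqPkt = pkt;  // resent when recvReqRetry() fires
    }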
|