Age | Commit message | Author |
|
Over the past six years, we have realized that the protocol is essentially used
to run the Garnet network in a standalone manner and to feed standard synthetic
traffic patterns through it.
|
|
Adding details (e.g., rip, rsp) to the KVM pagefault exit when in SE mode.
|
|
Instead of scheduling another event, this patch adds a warning in case gdb
is attached multiple times and the first attachment event has not been
processed yet.
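A rough standalone sketch of the idea (made-up names, not the gem5 GDB stub
code): track whether an attach event is already pending and warn instead of
scheduling a second one.

    // Toy illustration only; GdbStub and its members are hypothetical
    // stand-ins, not the real gem5 classes.
    #include <cstdio>

    struct GdbStub {
        bool attachEventPending = false;  // set when an attach event is scheduled

        void scheduleAttachEvent() {
            if (attachEventPending) {
                // Previously a second event would have been scheduled here.
                std::fprintf(stderr,
                    "warn: GDB attach requested while a previous attach "
                    "event is still pending; ignoring\n");
                return;
            }
            attachEventPending = true;
            // ... schedule the real attach event on the event queue ...
        }

        void processAttachEvent() { attachEventPending = false; }
    };

    int main() {
        GdbStub stub;
        stub.scheduleAttachEvent();  // schedules the event
        stub.scheduleAttachEvent();  // warns instead of double-scheduling
        stub.processAttachEvent();
        return 0;
    }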
|
|
This patch adds a method to the Wavefront class to compute the actual workgroup
size. This can be different from the maximum workgroup size specified when
launching the kernel through the NDRange object. The current solution is still not
optimal, as we compute these values for each wavefront, while the dispatcher also
needs this information and can't actually call
Wavefront::computeActualWgSz before the wavefronts are created. A long-term
solution would be to have a Workgroup class that deals with all these
details.
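A rough standalone sketch of the computation described above (1-D only, with
made-up names; the real code works per dimension on the NDRange object):

    #include <algorithm>
    #include <cstdio>

    int main() {
        // Hypothetical 1-D example; the real code does this per dimension.
        int gridSz = 100;   // total work-items in this dimension
        int wgSz   = 64;    // maximum workgroup size requested at launch
        int wgId   = 1;     // index of the workgroup this wavefront belongs to

        // The last workgroup in a dimension may be truncated when the grid
        // size is not a multiple of the workgroup size.
        int actualWgSz = std::min(wgSz, gridSz - wgId * wgSz);

        std::printf("actual workgroup size: %d\n", actualWgSz);  // prints 36
        return 0;
    }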
|
|
When loading a checkpoint, it's sometimes desirable to be able to test
whether an entry within a section exists. This is currently done
automatically in the UNSERIALIZE_OPT_SCALAR macro, but it isn't
possible to do for arrays, containers, or enums. Instead of adding
even more macros, add a helper function (CheckpointIn::entryExists())
that tests for the presence of an entry.
Change-Id: I4b4646b03276b889fd3916efefff3bd552317dbc
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
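A toy stand-in for the helper described above (not the real CheckpointIn
class), showing why a presence test is handy for optional arrays and
containers:

    #include <map>
    #include <string>
    #include <cstdio>

    struct ToyCheckpointIn {
        // section -> entry -> serialized value
        std::map<std::string, std::map<std::string, std::string>> data;

        bool entryExists(const std::string &section,
                         const std::string &entry) const {
            auto sec = data.find(section);
            return sec != data.end() && sec->second.count(entry) != 0;
        }
    };

    int main() {
        ToyCheckpointIn cp;
        cp.data["system.cpu"]["optionalVector"] = "1 2 3";

        if (cp.entryExists("system.cpu", "optionalVector")) {
            std::printf("entry present, unserialize it\n");
        } else {
            std::printf("entry missing, fall back to a default\n");
        }
        return 0;
    }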
|
|
Fixed AbstractController::queueMemoryWritePartial to specify the
correct size for partial memory writes.
|
|
Print the number of bytes written as a decimal number, not hex.
|
|
Change-Id: If19b9c593b48ded1ea848f2d3710d4369ec8a221
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
The drain did not wait until stages were ready again. Therefore, as a
result of messages in the TimeBuffer being drained, the state after the
drain was not consistent, and asserts fired in some places when draining
happened after a stage became blocked but before the notification
arrived at the previous stages.
Change-Id: Ib50b3b40b7f745b62c1eba2931dec76860824c71
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
|
|
|
|
This patch adds methods to serialize the context of a particular wavefront
to the simulated system memory. Context serialization is used when a wavefront
is preempted (i.e., on a context switch).
|
|
|
|
This patch introduces DPRINTFs for reads from and writes to the vector
register file.
|
|
std::stack has no iterators; therefore, the reconvergence stack can't be
iterated without popping elements off. We will be using std::list instead so
the stack can be iterated for saving and restoring purposes.
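A small standalone illustration of the point above:

    // std::stack exposes no iterators, so its contents cannot be walked for
    // serialization without popping; std::list can be iterated in place.
    #include <list>
    #include <stack>
    #include <cstdio>

    int main() {
        std::stack<int> s;
        s.push(1); s.push(2); s.push(3);
        // No s.begin()/s.end(): the only way to see every element is to pop,
        // which destroys the reconvergence state we want to save.

        std::list<int> l = {1, 2, 3};
        // A list with the same stack discipline (push_back/pop_back) can be
        // iterated without disturbing it, which is what checkpointing needs.
        for (int v : l)
            std::printf("%d ", v);
        std::printf("\n");
        return 0;
    }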
|
|
Adding runtime support for determining the memory required by a SIMD engine
when executing a particular wavefront.
|
|
Renaming members of the Wavefront class in accordance with the style guide.
|
|
The WFContext struct is currently unused and is no longer useful for
saving and restoring the context of a Wavefront. The Wavefront class should be
sufficient for that purpose, and the runtime can figure out the memory size
it will need to allocate for a Wavefront through an IOCTL.
|
|
Change-Id: I3e282baeb969b6bb9534813a2f433d68246c0669
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: Id2acbc09772be310a0eb9e33295afab07e08a4fa
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
This change adds a Trace CPU param to exit simulation early,
i.e. when the first (any one) trace execution is complete. With
this change, the user can configure the exit to occur either
when the last CPU finishes replay (the default) or when the
first CPU finishes. Configuring an early exit enables simulating
and measuring stats strictly while memory-system resources are
being stressed by all Trace CPUs.
Change-Id: I3998045fdcc5cd343e1ca92d18dd7f7ecdba8f1d
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
This change subtracts the time offset present in the trace from
all the event times when nodes and requests are sent, so that the
replay starts immediately when the simulation starts. This makes
the stats accurate when the time offset in traces is large, for
example when traces are generated in the middle of a workload
execution. It also solves the problem of unnecessary DRAM
refresh events that would keep occurring during the large time
offset before even a single request is replayed into the system.
Change-Id: Ie0898842615def867ffd5c219948386d952af7f7
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
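A standalone sketch of the adjustment (illustrative names only):

    // Subtract the first timestamp in the trace from every event time so
    // replay starts at simulation time zero.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    using Tick = std::uint64_t;

    int main() {
        // Event times as recorded in a trace captured mid-workload.
        std::vector<Tick> eventTimes = {5'000'000, 5'000'250, 5'001'000};

        Tick timeOffset = eventTimes.front();  // offset present in the trace
        for (Tick &t : eventTimes)
            t -= timeOffset;                   // replay starts immediately

        for (Tick t : eventTimes)
            std::printf("%llu ", static_cast<unsigned long long>(t));
        std::printf("\n");                     // prints: 0 250 1000
        return 0;
    }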
|
|
This change adds a simple feature to scale the frequency of
the Trace CPU.
The compute delays in the input traces provide timing. This
change adds a frequency multiplier parameter to the Trace CPU,
set to 1.0 by default. The compute delay is scaled so that the
nodes become ready at the effective frequency, thus scaling the
frequency of the Trace CPU.
Change-Id: Iaabbd57806941ad56094fcddbeb38fcee1172431
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
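A standalone sketch of the scaling (illustrative names only):

    // Dividing the recorded compute delay by the frequency multiplier makes
    // nodes ready sooner (multiplier > 1.0) or later (multiplier < 1.0).
    #include <cstdint>
    #include <cstdio>

    using Tick = std::uint64_t;

    int main() {
        Tick computeDelay = 1000;       // delay between nodes in the trace
        double freqMultiplier = 2.0;    // 1.0 by default, i.e. no scaling

        Tick scaledDelay = static_cast<Tick>(computeDelay / freqMultiplier);

        std::printf("scaled compute delay: %llu ticks\n",
                    static_cast<unsigned long long>(scaledDelay));  // 500
        return 0;
    }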
|
|
This patch enables timing accesses for the KVM CPU. A new state,
RunningMMIOPending, is added to indicate that there are outstanding timing
requests generated by KVM in the system. KVM's tick() is disabled and the
simulation does not enter KVM until all outstanding timing requests have
completed. The main motivation for this is to allow the KVM CPU to perform MMIO
in Ruby, since Ruby does not support atomic accesses.
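A toy sketch of the gating described above (made-up types, not the gem5 KVM
CPU): count outstanding timing MMIO requests and only allow re-entry into KVM
once the count drops back to zero.

    #include <cassert>
    #include <cstdio>

    enum class Status { Running, RunningMMIOPending };

    struct ToyKvmCpu {
        Status status = Status::Running;
        int pendingMMIOs = 0;

        void sendTimingMMIO() {
            ++pendingMMIOs;
            status = Status::RunningMMIOPending;  // tick() stays descheduled
        }

        void mmioResponse() {
            assert(pendingMMIOs > 0);
            if (--pendingMMIOs == 0)
                status = Status::Running;         // safe to enter KVM again
        }

        bool mayEnterKvm() const { return status == Status::Running; }
    };

    int main() {
        ToyKvmCpu cpu;
        cpu.sendTimingMMIO();
        std::printf("may enter KVM: %d\n", cpu.mayEnterKvm());  // 0
        cpu.mmioResponse();
        std::printf("may enter KVM: %d\n", cpu.mayEnterKvm());  // 1
        return 0;
    }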
|
|
Normal MMAPPED_IPR requests are allowed to execute speculatively under the
assumption that they have no side effects. The special case of m5ops that are
treated like MMAPPED_IPR should not be allowed to execute speculatively, since
they can have side-effects. Adding the STRICT_ORDER flag to these requests
blocks execution until the associated instruction hits the ROB head.
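A toy model of the effect described above (not the gem5 O3 pipeline): an
instruction whose request carries a strict-order flag is held until it reaches
the ROB head, so it can never execute speculatively.

    #include <cstddef>
    #include <cstdio>
    #include <deque>

    struct Inst {
        const char *name;
        bool strictOrder;  // set for m5ops treated like MMAPPED_IPR
    };

    bool mayExecute(const std::deque<Inst> &rob, std::size_t idx) {
        const Inst &inst = rob[idx];
        // Ordinary IPR accesses may run speculatively; strict-order ones must
        // wait until everything older has left the ROB.
        return !inst.strictOrder || idx == 0;
    }

    int main() {
        std::deque<Inst> rob = {{"load", false}, {"m5op", true}};
        std::printf("m5op may execute now: %d\n", mayExecute(rob, 1));  // 0
        rob.pop_front();  // older instruction retires
        std::printf("m5op may execute now: %d\n", mayExecute(rob, 0));  // 1
        return 0;
    }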
|
|
The quiesce family of magic ops can be simplified by the inclusion of
quiesceTick() and quiesce() functions on ThreadContext. This patch also
gets rid of the FS guards, since suspending a CPU is also a valid
operation for SE mode.
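A toy sketch of the simplification (made-up classes; only the function names
quiesce()/quiesceTick() come from the message above, and the tick conversion
is an arbitrary assumption):

    #include <cstdint>
    #include <cstdio>

    using Tick = std::uint64_t;

    struct ToyThreadContext {
        bool suspended = false;
        Tick wakeupTick = 0;

        void quiesce() { suspended = true; }   // suspend indefinitely
        void quiesceTick(Tick resume) {        // suspend until 'resume'
            suspended = true;
            wakeupTick = resume;
        }
    };

    // What a quiesce-family pseudo-op handler reduces to with the helpers.
    void quiesceNsPseudoOp(ToyThreadContext *tc, Tick now, Tick ns) {
        tc->quiesceTick(now + ns * 1000);  // assuming 1000 ticks per ns
    }

    int main() {
        ToyThreadContext tc;
        quiesceNsPseudoOp(&tc, 0, 5);
        std::printf("suspended=%d wakeup=%llu\n", tc.suspended,
                    static_cast<unsigned long long>(tc.wakeupTick));
        return 0;
    }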
|
|
This patch introduces the DmaCallback helper class, which registers a callback
to fire after a sequence of (potentially non-contiguous) DMA transfers on a
DmaPort completes.
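A toy version of the idea (not the actual DmaCallback API): count outstanding
chunks and fire the callback when the last one completes.

    #include <cstdio>
    #include <functional>

    class ToyDmaCallback {
      public:
        explicit ToyDmaCallback(std::function<void()> cb) : callback(cb) {}

        // Called once per DMA chunk issued on the port.
        void chunkIssued() { ++outstanding; }

        // Called from each chunk's completion event.
        void chunkDone() {
            if (--outstanding == 0)
                callback();  // whole (possibly scattered) transfer finished
        }

      private:
        int outstanding = 0;
        std::function<void()> callback;
    };

    int main() {
        ToyDmaCallback cb([] { std::printf("all DMA chunks complete\n"); });
        cb.chunkIssued();
        cb.chunkIssued();
        cb.chunkDone();
        cb.chunkDone();  // fires the callback
        return 0;
    }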
|
|
Add support for calling mmap on an EmulatedDriver file descriptor.
|
|
Connecting basic blocks would stop too early in kernels where ret was not the
last instruction. This patch allows basic blocks after the ret instruction
to be properly connected.
|
|
The receiver thread in dist_iface is allowed to directly exit the simulation.
This can cause exit to be called twice if the main thread simultaneously wants
to exit the simulation. Therefore, have the receiver thread enqueue a request
to exit on the primary event queue for the main simulation thread to handle.
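A toy sketch of the new flow (made-up event queue, not gem5's): the receiver
thread enqueues the exit request and only the main thread acts on it, so exit
cannot be triggered twice.

    #include <cstdio>
    #include <functional>
    #include <mutex>
    #include <queue>

    struct ToyPrimaryEventQueue {
        std::mutex m;
        std::queue<std::function<void()>> events;

        void schedule(std::function<void()> ev) {
            std::lock_guard<std::mutex> g(m);
            events.push(std::move(ev));
        }

        void serviceOne() {
            std::function<void()> ev;
            {
                std::lock_guard<std::mutex> g(m);
                if (events.empty())
                    return;
                ev = std::move(events.front());
                events.pop();
            }
            ev();  // runs on the main simulation thread only
        }
    };

    int main() {
        ToyPrimaryEventQueue primaryQueue;

        // Receiver thread side: request an exit instead of exiting itself.
        primaryQueue.schedule([] { std::printf("exit simulation loop\n"); });

        // Main thread side: the request is handled once, here.
        primaryQueue.serviceOne();
        return 0;
    }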
|
|
Ethernet devices are currently only hooked up if running in FS mode. Much of
the Ethernet networking code is generic and can be used to build non-Ethernet
device models. Some of these device models do not require a complex driver
stack and can be built to use an EmulatedDriver in SE mode. This patch enables
Ethernet interfaces to connect properly regardless of whether the simulation
is in FS or SE mode.
|
|
Currently only 'start' and 'end' of AddrRange are printed in config.ini.
This causes address ranges to overlap when loading a C++-only
config with interleaved addresses using CxxConfigManager. This patch also
prints the interleave and XOR bits to config.ini so that address
ranges are set up properly with the C++ config.
|
|
Add a customizable NoMali GPU model and an example Mali T760
configuration. Unlike the normal NoMali model (NoMaliGpu), the
NoMaliCustomGpu model exposes all the important GPU ID registers to
Python. This makes it possible to implement custom GPU configurations
without changing the underlying NoMali library.
Change-Id: I4fdba05844c3589893aa1a4c11dc376ec33d4e9e
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Only map memories into the KVM guest address space that are
marked as usable by KVM. Create BackingStoreEntry class
containing flags for is_conf_reported, in_addr_map, and
kvm_map.
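An illustrative sketch of the bookkeeping (simplified; the field names follow
the message above but the class layout is assumed):

    #include <cstdio>
    #include <vector>

    struct BackingStoreEntry {
        // Address range and host buffer omitted for brevity.
        bool confReported;  // is_conf_reported: appears in the config
        bool inAddrMap;     // in_addr_map: part of the global address map
        bool kvmMap;        // kvm_map: KVM may map it into guest memory
    };

    int main() {
        std::vector<BackingStoreEntry> store = {
            {true,  true,  true },   // normal DRAM: mapped into the guest
            {true,  false, false},   // e.g. a ROM shadow: skipped by KVM
        };

        for (const auto &e : store)
            std::printf("map into KVM guest: %d\n", e.kvmMap);
        return 0;
    }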
|
|
This changeset reverts the changeset "dev, sim: Added missing override
keywords to fix CLANG compilation (OSX)", which was incorrectly rebased.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: Ic37311443ca11ee6d95bceffea599e054e7aa110
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Previously, printing an MSHR would trigger an assertion if the MSHR was
not in service or if the targets list was empty. This patch changes
the print function to bypass the accessor functions for
postInvalidate and postDowngrade and thereby avoid the relevant assertions. It
also checks whether the targets list is empty before calling print on it.
Change-Id: Ic18bee6cb088f63976112eba40e89501237cfe62
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: Ice5fa11e77d06576eaa42149f5fa340a769d8b01
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I183b9942929c873c3272ce6d1abd4ebc472c7132
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Secure and non-secure data can coexist in the cache, and therefore the
snoop filter should treat packets with secure and non-secure accesses
differently. This patch uses the lower bits of the line address to
keep track of whether the packet is addressing secure memory or not.
Change-Id: I54a5e614dad566a5083582bede86c86896f2c2c1
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
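A standalone sketch of the encoding trick (illustrative only): the line
address is block-aligned, so a low-order bit is free to record the security
state, keeping secure and non-secure lines in separate snoop-filter entries.

    #include <cstdint>
    #include <cstdio>

    using Addr = std::uint64_t;

    int main() {
        const Addr blkSize = 64;                // cache line size in bytes
        Addr addr = 0x1234;                     // request address
        bool isSecure = true;                   // security state of the access

        Addr lineAddr = (addr & ~(blkSize - 1)) // block-align the address
                      | (isSecure ? 1 : 0);     // tag it with the secure bit

        std::printf("lookup key: %#llx\n",
                    static_cast<unsigned long long>(lineAddr));  // 0x1201
        return 0;
    }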
|
|
This patch changes the default behaviour of the SystemXBar, adding a
snoop filter. With the recent updates to the snoop filter allocation
behaviour this change no longer causes problems for the regressions
without caches.
Change-Id: Ibe0cd437b71b2ede9002384126553679acc69cc1
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch improves the snoop filter allocation decisions by not only
looking at whether a port is snooping or not, but also if the packet
actually came from a cache. The issue with only looking at isSnooping
is that the CPU ports, for example, are snooping, but not actually
caching. Previously we ended up incorrectly allocating entries in
systems without caches (such as the atomic and timing quick
regressions). Eventually these misguided allocations caused the snoop
filter to panic due to an excessive size.
On the request path we now include the fromCache check on the packet
itself, and for responses we check if we actually have a snoop-filter
entry.
Change-Id: Idd2dbc4f00c7e07d331e9a02658aee30d0350d7e
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch takes yet another step in maintaining the clusivity, in
that it allows a mostly-inclusive cache to hold on to blocks even when
responding to a ReadExReq or UpgradeReq. Previously the cache simply
invalidated these blocks, but there is no strict need to do so.
The most important part of this patch is that we simply mark the block
clean when satisfying the upstream request where the cache is allowed
to keep the block. The only tricky part of the patch is in the memory
management of deferred snoops, where we need to distinguish the cases
where only the packet was copied (we expected to respond), and the
cases where we created an entirely new packet and request (we kept it
only to replay later).
The code in satisfyRequest is definitely ready for some refactoring
after this.
Change-Id: I201ddc7b2582eaa46fb8cff0c7ad09e02d64b0fc
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch changes how the mostly exclusive policy is enforced to
ensure that we drop blocks when we should. As part of this change, the
actual invalidation due to the clusivity enforcement is moved outside
the hit handling, to a separate method maintainClusivity. For the
timing mode that means we can deal with all MSHR targets before taking
any action and possibly dropping the block. The method
satisfyCpuSideRequest is also renamed satisfyRequest as part of this
change (since we only ever see requests from the cpu-side port).
Change-Id: If6f3d1e0c3e7be9a67b72a55e4fc2ec4a90fd3d2
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch adds a FromCache attribute to the packet, and updates a
number of the existing request commands to reflect that the request
originates from a cache. The attribute simplifies checking if a
request came from a cache or not, and this is used by both the cache
and snoop filter in follow-on patches.
Change-Id: Ib0a7a080bbe4d6036ddd84b46fd45bc7eb41cd8f
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Steve Reinhardt <stever@gmail.com>
|
|
When using a Ruby memory system, the Ruby configuration scripts expect
to get a list of DMA ports to create the necessary DMA sequencers. Add
support in the utility functions that wire up devices to append DMA
ports to a list instead of connecting them to the IO bus. These
functions are currently only used by the VExpress_GEM5_V1 platform.
Change-Id: I46059e46b0f69e7be5f267e396811bd3caa3ed63
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
|
|
There are cases where we want to put boot ROMs on the PIO bus. Ruby
currently doesn't support functional accesses to such memories since
functional accesses are always assumed to go to physical memory. Add
the required support for routing functional accesses to the PIO bus.
Change-Id: Ia5b0fcbe87b9642bfd6ff98a55f71909d1a804e3
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
Reviewed-by: Michael LeBeane <michael.lebeane@amd.com>
|
|
The boot ROM shouldn't be used as a memory by the kernel. Memories
have a flag to indicate this which is set for some platforms. Update
all platforms to consistently set this flag to indicate that the boot
ROM shouldn't be reported as normal memory.
Change-Id: I2bf0273e99d2a668e4e8d59f535c1910c745aa7b
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
|
|
This patch fixes issues with changeset 11593:
use the host's pwrite() syscall for pwrite64Func(),
as opposed to pwrite64(), because pwrite64() does
not work well on all distros.
It also undoes the enabling of fstatfs, as we will add this
in a separate patch.
|