Age | Commit message | Author |
|
std::stack has no iterators, so the reconvergence stack cannot be
iterated without popping elements off. Use std::list instead so the
stack can be iterated for saving and restoring purposes.
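A minimal sketch of the idea (the entry fields are assumptions, not the
gem5 source): std::stack hides its elements, while std::list keeps LIFO
behaviour via push_back()/pop_back() and still exposes iterators for
checkpointing.

    #include <cstdint>
    #include <list>
    #include <vector>

    struct ReconvergenceStackEntry {
        uint64_t pc;        // hypothetical fields
        uint64_t rpc;
        uint64_t execMask;
    };

    using ReconvergenceStack = std::list<ReconvergenceStackEntry>;

    // Copy out every entry without disturbing the stack.
    std::vector<ReconvergenceStackEntry>
    saveStack(const ReconvergenceStack &stack)
    {
        return {stack.begin(), stack.end()};
    }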
|
|
Adding runtime support for determining the memory required by a SIMD engine
when executing a particular wavefront.
|
|
Renaming members of the Wavefront class in accordance with the style guide.
|
|
The WFContext struct is currently unused and is no longer useful for
saving and restoring the context of a Wavefront. The Wavefront class
should be sufficient for that purpose, and the runtime can determine
the memory size it needs to allocate for a Wavefront through an IOCTL.
|
|
Change-Id: I3e282baeb969b6bb9534813a2f433d68246c0669
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: Id2acbc09772be310a0eb9e33295afab07e08a4fa
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
This change adds a Trace CPU param to exit the simulation
early, i.e. when the first (any one) trace execution is
complete. With this change the user can configure the exit to
occur either when the last CPU finishes replay (the default)
or when the first CPU finishes. Configuring an early exit
makes it possible to simulate and measure stats strictly while
memory-system resources are being stressed by all Trace CPUs.
Change-Id: I3998045fdcc5cd343e1ca92d18dd7f7ecdba8f1d
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
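A sketch of the exit choice (the hook, flag, and counter names are
assumptions; exitSimLoop() is the standard gem5 call to end the event
loop):

    #include <string>

    // Assumed declaration; in gem5 this lives in sim/sim_exit.hh.
    void exitSimLoop(const std::string &message, int exit_code = 0);

    // End simulation on the first completed trace when the early-exit
    // parameter is set, otherwise wait until the last Trace CPU is done.
    void
    onTraceComplete(bool enable_early_exit, int &traces_remaining)
    {
        if (enable_early_exit)
            exitSimLoop("Done simulation of first trace");
        else if (--traces_remaining == 0)
            exitSimLoop("Done simulation of all traces");
    }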
|
|
This change subtracts the time offset present in the trace from
all the event times when nodes and requests are sent, so that
replay starts immediately when the simulation starts. This makes
the stats accurate when the time offset in traces is large, for
example when traces are generated in the middle of a workload
execution. It also solves the problem of unnecessary DRAM
refresh events that would keep occurring during the large time
offset before even a single request is replayed into the system.
Change-Id: Ie0898842615def867ffd5c219948386d952af7f7
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
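A minimal sketch of the rebasing (Tick stands in for gem5's simulation
time type):

    #include <cstdint>

    using Tick = uint64_t;

    // Subtract the time of the first record in the trace (the offset)
    // from every node/request time so replay begins at the start of
    // the simulation.
    Tick
    rebase(Tick trace_time, Tick trace_offset)
    {
        return trace_time - trace_offset;
    }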
|
|
This change adds a simple feature to scale the frequency of
the Trace CPU.
Timing is derived from the compute delays in the input traces.
This change adds a frequency multiplier parameter to the Trace
CPU, set to 1.0 by default. The compute delay is scaled by this
multiplier to change the rate at which nodes become ready and
thus effectively scale the frequency of the Trace CPU.
Change-Id: Iaabbd57806941ad56094fcddbeb38fcee1172431
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
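A sketch of the scaling (the parameter name is an assumption):

    #include <cstdint>

    using Tick = uint64_t;

    // Dividing the traced compute delay by the multiplier makes nodes
    // ready proportionally earlier or later; a multiplier of 2.0
    // effectively doubles the Trace CPU's frequency.
    Tick
    scaledDelay(Tick compute_delay, double freq_multiplier = 1.0)
    {
        return static_cast<Tick>(compute_delay / freq_multiplier);
    }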
|
|
This patch enables timing accesses for the KVM CPU. A new state,
RunningMMIOPending, is added to indicate that there are outstanding timing
requests generated by KVM in the system. KVM's tick() is disabled and the
simulation does not re-enter KVM until all outstanding timing requests have
completed. The main motivation for this is to allow the KVM CPU to perform
MMIO in Ruby, since Ruby does not support atomic accesses.
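A sketch of the gating (the class is a stand-in; only the state name
comes from this description):

    struct KvmCpuSketch
    {
        enum Status { Running, RunningMMIOPending } status = Running;
        int outstandingMMIO = 0;

        void tick()
        {
            if (status == RunningMMIOPending) {
                // Timing requests issued on KVM's behalf are still in
                // flight; do not re-enter KVM until they complete.
                return;
            }
            // ... enter KVM and run the guest ...
        }

        void mmioComplete()
        {
            if (--outstandingMMIO == 0)
                status = Running;   // safe to enter KVM again
        }
    };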
|
|
Normal MMAPPED_IPR requests are allowed to execute speculatively under the
assumption that they have no side effects. The special case of m5ops that
are treated like MMAPPED_IPR should not be allowed to execute speculatively,
since they can have side effects. Adding the STRICT_ORDER flag to these
requests blocks execution until the associated instruction hits the ROB head.
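A sketch of the flag selection (the flag values here are illustrative
stand-ins for gem5's Request::MMAPPED_IPR and Request::STRICT_ORDER):

    #include <cstdint>

    constexpr uint32_t MMAPPED_IPR  = 1u << 0;  // stand-in value
    constexpr uint32_t STRICT_ORDER = 1u << 1;  // stand-in value

    // Ordinary MMAPPED_IPR accesses may execute speculatively; m5ops
    // routed through the same path also get STRICT_ORDER so they are
    // held until the instruction reaches the head of the ROB.
    uint32_t
    iprRequestFlags(bool is_m5_op)
    {
        uint32_t flags = MMAPPED_IPR;
        if (is_m5_op)
            flags |= STRICT_ORDER;  // m5ops can have side effects
        return flags;
    }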
|
|
The quiesce family of magic ops can be simplified by the inclusion of
quiesceTick() and quiesce() functions on ThreadContext. This patch also
gets rid of the FS guards, since suspending a CPU is also a valid
operation for SE mode.
|
|
This patch introduces the DmaCallback helper class, which registers a callback
to fire after a sequence of (potentially non-contiguous) DMA transfers on a
DmaPort completes.
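A generic sketch of the pattern the helper implements (this is not the
gem5 class itself):

    // Count outstanding DMA chunks and invoke a user-supplied hook once
    // the last, possibly non-contiguous, transfer has completed.
    class DmaDoneTracker
    {
      public:
        virtual ~DmaDoneTracker() = default;

        void chunkIssued() { ++pending; }

        void chunkComplete()
        {
            if (--pending == 0)
                allDone();          // fires after the final chunk
        }

      protected:
        virtual void allDone() = 0; // completion hook

      private:
        int pending = 0;
    };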
|
|
Add support for calling mmap on an EmulatedDriver file descriptor.
|
|
Connecting basic blocks would stop too early in kernels where ret was not the
last instruction. This patch allows basic blocks after the ret instruction
to be properly connected.
|
|
The receiver thread in dist_iface is allowed to directly exit the simulation.
This can cause exit to be called twice if the main thread simultaneously wants
to exit the simulation. Therefore, have the receiver thread enqueue a request
to exit on the primary event queue for the main simulation thread to handle.
|
|
Ethernet devices are currently only hooked up if running in FS mode. Much of
the Ethernet networking code is generic and can be used to build non-Ethernet
device models. Some of these device models do not require a complex driver
stack and can be built to use an EmulatedDriver in SE mode. This patch enables
Ethernet interfaces to connect properly regardless of whether the simulation
is in FS or SE mode.
|
|
Currently only the 'start' and 'end' of an AddrRange are printed in
config.ini. This causes address ranges to overlap when loading a
C++-only config with interleaved addresses using CxxConfigManager.
This patch also prints the interleave and XOR bits to config.ini so
that address ranges are set up properly with the cxx config.
|
|
Add a customizable NoMali GPU model and an example Mali T760
configuration. Unlike the normal NoMali model (NoMaliGpu), the
NoMaliCustomGpu model exposes all the important GPU ID registers to
Python. This makes it possible to implement custom GPU configurations
without changing the underlying NoMali library.
Change-Id: I4fdba05844c3589893aa1a4c11dc376ec33d4e9e
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
|
|
Only map memories into the KVM guest address space that are
marked as usable by KVM. Create a BackingStoreEntry class
containing flags for is_conf_reported, in_addr_map, and
kvm_map.
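A sketch of the bookkeeping (flag names from the description above; the
remaining members and layout are assumptions):

    #include <cstdint>

    struct BackingStoreEntry
    {
        uint8_t *pmem = nullptr;   // host backing store for this range
        bool confReported = false; // is_conf_reported
        bool inAddrMap = false;    // in_addr_map
        bool kvmMap = false;       // kvm_map: eligible for KVM mapping
    };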
|
|
This changeset reverts the changeset "dev, sim: Added missing override
keywords to fix CLANG compilation (OSX)", which was incorrectly rebased.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: Ic37311443ca11ee6d95bceffea599e054e7aa110
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Previously, printing an MSHR would trigger an assertion if the MSHR was
not in service or if the targets list was empty. This patch changes
the print function to bypass the accessor functions for postInvalidate
and postDowngrade, avoiding the relevant assertions. It also checks
that the targets list is non-empty before calling print on it.
Change-Id: Ic18bee6cb088f63976112eba40e89501237cfe62
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: Ice5fa11e77d06576eaa42149f5fa340a769d8b01
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I183b9942929c873c3272ce6d1abd4ebc472c7132
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Secure and non-secure data can coexist in the cache and therefore the
snoop filter should treat packets with secure and non-secure accesses
differently. This patch uses the lower bits of the line address to
keep track of whether the packet is addressing secure memory or not.
Change-Id: I54a5e614dad566a5083582bede86c86896f2c2c1
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
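A sketch of the encoding (helper name assumed): line addresses are
block-aligned, so a low-order bit is free to separate the secure and
non-secure address spaces in the snoop-filter lookup.

    #include <cstdint>

    using Addr = uint64_t;

    Addr
    snoopFilterKey(Addr addr, bool is_secure, unsigned blk_size)
    {
        Addr line_addr = addr & ~Addr(blk_size - 1);
        if (is_secure)
            line_addr |= 1;     // tag secure accesses
        return line_addr;
    }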
|
|
This patch changes the default behaviour of the SystemXBar, adding a
snoop filter. With the recent updates to the snoop filter allocation
behaviour this change no longer causes problems for the regressions
without caches.
Change-Id: Ibe0cd437b71b2ede9002384126553679acc69cc1
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch improves the snoop filter allocation decisions by not only
looking at whether a port is snooping or not, but also if the packet
actually came from a cache. The issue with only looking at isSnooping
is that the CPU ports, for example, are snooping, but not actually
caching. Previously we ended up incorrectly allocating entries in
systems without caches (such as the atomic and timing quick
regressions). Eventually these misguided allocations caused the snoop
filter to panic due to an excessive size.
On the request path we now include the fromCache check on the packet
itself, and for responses we check if we actually have a snoop-filter
entry.
Change-Id: Idd2dbc4f00c7e07d331e9a02658aee30d0350d7e
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
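A sketch of the refined allocation rule (helper name assumed):

    // Allocate an entry only when the requesting port snoops *and* the
    // packet really came from a cache, so snooping-but-not-caching CPU
    // ports no longer allocate entries.
    bool
    shouldAllocateEntry(bool port_is_snooping, bool pkt_from_cache)
    {
        return port_is_snooping && pkt_from_cache;
    }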
|
|
This patch takes yet another step in maintaining the clusivity, in
that it allows a mostly-inclusive cache to hold on to blocks even when
responding to a ReadExReq or UpgradeReq. Previously the cache simply
invalidated these blocks, but there is no strict need to do so.
The most important part of this patch is that we simply mark the block
clean when satisfying the upstream request where the cache is allowed
to keep the block. The only tricky part of the patch is the memory
management of deferred snoops, where we need to distinguish between
the cases where only the packet was copied (we expected to respond)
and the cases where we created an entirely new packet and request
(we kept it only to replay later).
The code in satisfyRequest is definitely ready for some refactoring
after this.
Change-Id: I201ddc7b2582eaa46fb8cff0c7ad09e02d64b0fc
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch changes how the mostly exclusive policy is enforced to
ensure that we drop blocks when we should. As part of this change, the
actual invalidation due to the clusivity enforcement is moved outside
the hit handling, to a separate method maintainClusivity. For the
timing mode that means we can deal with all MSHR targets before taking
any action and possibly dropping the block. The method
satisfyCpuSideRequest is also renamed satisfyRequest as part of this
change (since we only ever see requests from the cpu-side port).
Change-Id: If6f3d1e0c3e7be9a67b72a55e4fc2ec4a90fd3d2
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
|
|
This patch adds a FromCache attribute to the packet and updates a
number of the existing request commands to reflect that the request
originates from a cache. The attribute simplifies checking whether a
request came from a cache or not, and this is used by both the cache
and the snoop filter in follow-on patches.
Change-Id: Ib0a7a080bbe4d6036ddd84b46fd45bc7eb41cd8f
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Steve Reinhardt <stever@gmail.com>
|
|
When using a Ruby memory system, the Ruby configuration scripts expect
to get a list of DMA ports to create the necessary DMA sequencers. Add
support in the utility functions that wire up devices to append DMA
ports to a list instead of connecting them to the IO bus. These
functions are currently only used by the VExpress_GEM5_V1 platform.
Change-Id: I46059e46b0f69e7be5f267e396811bd3caa3ed63
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
|
|
There are cases where we want to put boot ROMs on the PIO bus. Ruby
currently doesn't support functional accesses to such memories since
functional accesses are always assumed to go to physical memory. Add
the required support for routing functional accesses to the PIO bus.
Change-Id: Ia5b0fcbe87b9642bfd6ff98a55f71909d1a804e3
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
Reviewed-by: Michael LeBeane <michael.lebeane@amd.com>
|
|
The boot ROM shouldn't be used as a memory by the kernel. Memories
have a flag to indicate this which is set for some platforms. Update
all platforms to consistently set this flag to indicate that the boot
ROM shouldn't be reported as normal memory.
Change-Id: I2bf0273e99d2a668e4e8d59f535c1910c745aa7b
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
|
|
This patch fixes issues with changeset 11593:
use the host's pwrite() syscall for pwrite64Func(),
as opposed to pwrite64(), because pwrite64() does
not work well on all distros.
It also undoes the enabling of fstatfs, which we will
add in a separate patch.
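A sketch of the approach (wrapper name assumed): the emulation calls
the host's pwrite() rather than pwrite64(); on an LP64 host off_t is
already 64 bits wide, so nothing is lost.

    #include <unistd.h>

    ssize_t
    hostPWrite(int fd, const void *buf, size_t count, off_t offset)
    {
        return pwrite(fd, buf, count, offset);  // not pwrite64()
    }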
|
|
This patch adds an implementation of the pwrite64 syscall and
enables it for x86_64, and also enables fstatfs for x86_64.
|
|
Factored out of the larger banked register change.
Change-Id: I947dbdb9c00b4678bea9d4f77b913b7014208690
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Updated according to GICv2 documentation.
Change-Id: I5d926d1abf665eecc43ff0f7d6e561e1ee1c390a
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Introduce and use a lookup table.
Using fetchDescriptor() rather than DMA cleanly handles nested paging.
Change-Id: I69ec762f176bd752ba1040890e731826b58d15a6
|
|
During host bootup, KVM reads/writes to CNTHCTL_EL2. Because this
miscreg has not been implemented, the simulation would end there. This
patch causes the simulation to warn about the read/write instead of
failing.
Change-Id: If034bfd0818a9a5e50c5fe86609e945258c96fa3
|
|
This fixes a bug where stage 2 lookups used the AArch32
permissions rules even if we were executing in AArch64 mode.
Change-Id: Ia40758f0599667ca7ca15268bd3bf051342c24c1
|
|
This patch corrects IPA reporting if the translation faults in a
stage 2 lookup.
Change-Id: I0b914527f8a9f98a5e980a131cf9d03e5584b4e9
|
|
This patch adds support for stage 2 TLBI instructions
such as TLBI IPAS2E1_Xt.
Change-Id: I0cd5e8055b0c1003e03439aa5183252f50ea0a88
|
|
Change-Id: I14c93a5460550051a12129e792a9a9bd522a145c
|
|
This patch only traps to the hypervisor if we are in the correct
exception level for the trap to occur.
Change-Id: I0a382b6a572ef835ea36d2702b8a81b633bd3df0
|
|
Faults that could potentially be routed to the hypervisor checked
whether they were in a secure state without first checking whether
security was enabled. This caused faults to be routed incorrectly.
This patch makes the secure-state check first ask whether security
is enabled.
Change-Id: I179e9b181b27f552734c9bab2b18d05ac579a119
|
|
We recompute whether we are doing a stage 2 walk inside the table
walker even though we have already determined it in the TLB. Pass the
information into the walk instead of recomputing it.
Change-Id: I39637ce99309b2ddbc30344d45ac9ebf6a203401
|
|
The functional case is already handled within the fetchDescriptor()
function. We can thus use that function for both atomic and functional
mode when we start the table walk.
Change-Id: Iacaed28cd9024d259fd37a58150efd00ff94d86e
|
|
This patch adds the option for faults to be routed to the hypervisor
using the pre-existing routeToHyp() functions that are present in each
fault type.
Change-Id: I9735512c094457636b9870456a5be5432288e004
|