Age | Commit message | Author |
|
Remove the assertion that we always need to add a delta larger than
zero as that does not seem to be true when we hit it in the
'PMU reset cycle counter to zero' case.
Committed by Jason Lowe-Power <power.jg@gmail.com>
|
|
A few warnings (and thus errors) pop up after they are added to -Wall:
1. -Wmisleading-indentation
In the auto-generated code there were instances of if/else blocks that
were not indented to gcc's liking. This is addressed by adding braces.
2. -Wshift-negative-value
gcc is clever enough to consider ~0 a negative constant, and
rightfully complains. This is addressed by using mask() which
explicitly casts to unsigned before shifting.
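A minimal standalone sketch of the second pattern (this mask() is an illustrative stand-in, not gem5's actual helper): the all-ones value is built in unsigned arithmetic, so nothing negative is ever shifted.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical mask() helper: build the low-nbits mask from an explicitly
// unsigned all-ones value, so no negative value is ever shifted.
static inline uint64_t
mask(unsigned nbits)
{
    return nbits >= 64 ? ~0ULL : (1ULL << nbits) - 1;
}

int
main()
{
    // int bad = ~0 << 8;      // -Wshift-negative-value: ~0 is a negative int
    uint64_t good = mask(8);   // 0xff, computed purely in unsigned arithmetic
    std::cout << std::hex << good << std::endl;
    return 0;
}
```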
That is all. Porting done.
|
|
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.
This is a re-spin of 20264eb after the revert (bd1c6789) and includes
some fixes of that commit.
|
|
|
|
The following patches had unexpected interactions with the current
upstream code and have been reverted for now:
e07fd01651f3: power: Add support for power models
831c7f2f9e39: power: Low-power idle power state for idle CPUs
4f749e00b667: power: Add power states to ClockedObject
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
--HG--
extra : amend_source : 0b6fb073c6bbc24be533ec431eb51fbf1b269508
|
|
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.
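A minimal sketch of the "ContextID offset from the base ContextID" idea, using hypothetical types rather than the gem5 classes:

```cpp
#include <cassert>
#include <iostream>

// Because ContextIDs are allocated sequentially per hardware thread, a CPU
// can recover the requesting thread from the ContextID alone.
using ContextID = int;
using ThreadID = int;

struct MtCpu
{
    ContextID baseContextId;   // ContextID of thread 0 on this CPU
    int numThreads;

    ThreadID
    threadFromContext(ContextID ctx) const
    {
        ThreadID tid = ctx - baseContextId;   // offset from the base ContextID
        assert(tid >= 0 && tid < numThreads);
        return tid;
    }
};

int
main()
{
    MtCpu cpu{4, 2};   // an SMT core whose threads were given ContextIDs 4 and 5
    std::cout << cpu.threadFromContext(5) << std::endl;   // prints 1
    return 0;
}
```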
|
|
Add 4 power states to the ClockedObject and provide the necessary access
functions to check and update the power state. The default power state is
UNDEFINED; it is the responsibility of the respective simulation model to
provide the startup state and any other logic for state changes.
Add a stat for the number of transitions.
Add a distribution of time spent in the clock-gated state.
Add a power state residency stat.
Add a dump callback function to allow stats updates of the distribution and
residency stats.
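A hedged sketch of the kind of interface described above (the class and state names are illustrative, not the actual ClockedObject API): four power states plus UNDEFINED as the default, simple accessors, and a transition counter maintained on every state change.

```cpp
#include <cassert>

namespace sketch {

enum class PwrState { UNDEFINED, ON, CLK_GATED, SRAM_RETENTION, OFF };

class ClockedObjectLike
{
    PwrState _state = PwrState::UNDEFINED;  // default until the model sets it
    unsigned numTransitions = 0;            // "number of transitions" stat

  public:
    PwrState pwrState() const { return _state; }

    void
    pwrState(PwrState newState)
    {
        if (newState == _state)
            return;
        ++numTransitions;  // residency/distribution stats would be updated here too
        _state = newState;
    }
};

} // namespace sketch

int
main()
{
    sketch::ClockedObjectLike obj;
    assert(obj.pwrState() == sketch::PwrState::UNDEFINED);
    obj.pwrState(sketch::PwrState::ON);   // the model supplies the startup state
    return 0;
}
```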
|
|
After all this it turns out we don't even use it.
|
|
The openFlagTable and mmapFlagTables for emulated Linux
platforms are basically identical, but are specified
repetitively for every platform. Use a common file
that gets included for each platform so that we only
have one copy, making them more consistent and simplifying
changes (like adding #ifdefs).
In the process, made some minor fixes for issues that had slipped through
due to previous inconsistencies, and added more #ifdefs
to try to fix building on alternative hosts.
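A hedged sketch of the shared-table idea; the target-side values here are made up for illustration and the real table maps many more flags.

```cpp
#include <fcntl.h>

#include <cstdio>

// In the real change the table body lives in one common file included by
// every emulated-Linux platform; the #ifdef keeps it building on hosts that
// lack a given flag.
struct FlagEntry { int hostFlag; int targetFlag; };

static const FlagEntry openFlagTable[] = {
    { O_RDONLY, 0x0000 },
    { O_WRONLY, 0x0001 },
    { O_CREAT,  0x0040 },
#ifdef O_DIRECT
    { O_DIRECT, 0x4000 },
#endif
};

int
main()
{
    for (const auto &e : openFlagTable)
        std::printf("host 0x%x -> target 0x%x\n", e.hostFlag, e.targetFlag);
    return 0;
}
```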
|
|
Refactor the TLB and page table walker test interface to use a dynamic
registration mechanism. Instead of patching a couple of empty methods
to wire up a TLB tester, this change allows such testers to register
themselves using the setTestInterface() method.
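A minimal sketch of such a registration hook (illustrative class and method names, not the exact gem5 signatures): the TLB only calls out to a tester if one has registered itself.

```cpp
#include <iostream>

class TlbTestInterface
{
  public:
    virtual ~TlbTestInterface() = default;
    virtual void notifyTranslation(unsigned long vaddr) = 0;
};

class Tlb
{
    TlbTestInterface *tester = nullptr;  // no tester wired up by default

  public:
    void setTestInterface(TlbTestInterface *t) { tester = t; }

    void
    translate(unsigned long vaddr)
    {
        // ... normal translation work would happen here ...
        if (tester)                      // only call out if a tester registered
            tester->notifyTranslation(vaddr);
    }
};

class PrintingTester : public TlbTestInterface
{
  public:
    void
    notifyTranslation(unsigned long vaddr) override
    {
        std::cout << "translated 0x" << std::hex << vaddr << std::endl;
    }
};

int
main()
{
    Tlb tlb;
    PrintingTester tester;
    tlb.setTestInterface(&tester);  // register instead of patching empty methods
    tlb.translate(0x1000);
    return 0;
}
```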
|
|
Libraries are loaded into the process address space using the
mmap system call. Conveniently, this happens to be a good
time to update the process symbol table with the library's
incoming symbols so we handle the table update from within the
system call.
This works just like an application's normal symbols. The only
difference between a dynamic library and a main executable is
when the symbol table update occurs. The symbol table update for
an executable happens at program load time and is finished before
the process ever begins executing. Since dynamic linking happens
at runtime, the symbol loading happens after the library is
first loaded into the process address space. The library binary
is examined at this time for a symbol section and that section
is parsed for symbol types with specific bindings (global,
local, weak). Subsequently, these symbols are added to the table
and are available for use by gem5 for things like trace
generation.
Checkpointing should work just as it did previously. The address
space (and therefore the library) will be recorded and the symbol
table will be entirely recorded. (It's not possible to do anything
clever like checkpoint a program and then load the program back
with different libraries with LD_LIBRARY_PATH, because the
library becomes part of the address space after being loaded.)
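A hedged sketch of the symbol-table update described above, with a toy map standing in for gem5's symbol table and object-file classes: the library's symbols are relocated by the mmap base and merged into the process-wide table.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

using Addr = std::uint64_t;
using SymbolTable = std::map<Addr, std::string>;

// Pretend these came from parsing the library's symbol section (global,
// local and weak bindings); addresses are library-relative.
static const SymbolTable librarySymbols = {
    { 0x0040, "memcpy" },
    { 0x0100, "malloc" },
};

void
addLibrarySymbols(SymbolTable &process, const SymbolTable &lib, Addr loadBase)
{
    for (const auto &sym : lib)
        process[loadBase + sym.first] = sym.second;  // relocate by mmap base
}

int
main()
{
    SymbolTable processSymbols = { { 0x400000, "main" } };  // executable symbols
    addLibrarySymbols(processSymbols, librarySymbols, 0x7f0000000000ULL);

    for (const auto &sym : processSymbols)
        std::cout << std::hex << sym.first << " " << sym.second << "\n";
    return 0;
}
```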
|
|
|
|
|
|
The mmapGrowsDown() method was a static method on the OperatingSystem
class (and derived classes), which worked OK for the templated syscall
emulation methods, but made it hard to access elsewhere. This patch
moves the method to be a virtual function on the LiveProcess method,
where it can be overridden for specific platforms (for now, Alpha).
This patch also changes the value of mmapGrowsDown() from being false
by default and true only on X86Linux32 to being true by default and
false only on Alpha, which seems closer to reality (though in reality
most people use ASLR and this doesn't really matter anymore).
In the process, also got rid of the unused mmap_start field on
LiveProcess and the OperatingSystem mmapGrowsUp variable.
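A sketch of the virtual-function arrangement (simplified, illustrative class names; the real classes are LiveProcess and its platform-specific subclasses), with the new default of true and the Alpha-only override:

```cpp
#include <iostream>

class ProcessLike
{
  public:
    virtual ~ProcessLike() = default;
    // New default: the mmap region grows down on most platforms.
    virtual bool mmapGrowsDown() const { return true; }
};

class AlphaProcessLike : public ProcessLike
{
  public:
    // Alpha is the one platform where the region grows up instead.
    bool mmapGrowsDown() const override { return false; }
};

int
main()
{
    ProcessLike generic;
    AlphaProcessLike alpha;
    std::cout << generic.mmapGrowsDown() << " " << alpha.mmapGrowsDown() << "\n";
    return 0;
}
```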
|
|
|
|
For O3, which has a stat that counts reg reads, there is an additional
reg read per mmap() call since there's an arg we no longer ignore.
Otherwise, stats should not be affected.
|
|
|
|
The structure definition was named with only the open system call's flag set in
mind, so we rename it here with the intention of using it to define additional
tables to translate flags for other system calls in the future.
|
|
Fix the printDataInst function to properly print the immediate value.
|
|
This changeset adds support for changing the simulator output
directory. This can be useful when the simulation goes through several
stages (e.g., a warming phase, a simulation phase, and a verification
phase) since it allows the output from each stage to be located in a
different directory. Relocation is done by calling core.setOutputDir()
from Python or simout.setOutputDirectory() from C++.
This change affects several parts of the design of gem5's output
subsystem. First, files returned by an OutputDirectory instance (e.g.,
simout) are of the type OutputStream instead of a std::ostream. This
allows us to do some more bookkeeping and control re-opening of files
when the output directory is changed. Second, new subdirectories are
OutputDirectory instances, which should be used to create files in
that sub-directory.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
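A self-contained sketch of the bookkeeping idea (illustrative classes, not the actual gem5 OutputDirectory/OutputStream API): the directory hands out wrapper objects instead of raw std::ostream references, so it can re-open every file in the new location when the output directory changes.

```cpp
#include <fstream>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

class OutStream
{
    std::string name;
    std::ofstream file;

  public:
    explicit OutStream(const std::string &n) : name(n) {}
    void reopen(const std::string &dir) { file.close(); file.open(dir + "/" + name); }
    std::ostream &stream() { return file; }
};

class OutDir
{
    std::string dir;
    std::vector<std::shared_ptr<OutStream>> streams;   // tracked for re-opening

  public:
    explicit OutDir(const std::string &d) : dir(d) {}

    std::shared_ptr<OutStream>
    create(const std::string &name)
    {
        auto s = std::make_shared<OutStream>(name);
        s->reopen(dir);
        streams.push_back(s);
        return s;
    }

    void
    setOutputDirectory(const std::string &newDir)
    {
        dir = newDir;
        for (auto &s : streams)
            s->reopen(dir);   // relocate files already handed out
    }
};

int
main()
{
    OutDir simout(".");
    auto stats = simout.create("stats.txt");
    stats->stream() << "warming phase\n";
    simout.setOutputDirectory("/tmp");   // e.g. between simulation stages
    stats->stream() << "simulation phase\n";
    return 0;
}
```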
|
|
This patch adds assertions that enforce that only invalidating snoops
will ever reach into the logic that tracks in-order load completion and
also invalidation of LL/SC (and MONITOR / MWAIT) monitors. Also adds
some comments to MSHR::replaceUpgrades().
|
|
Properly done for the ERET instruction in v8, but not for v7.
Many control register changes are only visible after explicit
instruction synchronization barriers or exception entry/exit.
This means mode changing instructions should squash any
younger in-flight speculative instructions.
|
|
Make clang happy...again.
|
|
Since the last round of fixes a few new issues have snuck in. We
should consider switching the regression runs to clang.
|
|
This patch implements the clock_getres() system call for ARM and x86 in Linux
SE mode.
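A hedged sketch of what such an SE-mode handler has to do: report the timer resolution the emulated OS provides. The 1 ns value is an assumption for illustration, not necessarily what gem5 reports.

```cpp
#include <cstdio>
#include <ctime>

static int
emulatedClockGetres(int /* clk_id */, struct timespec *res)
{
    if (!res)
        return -1;        // a real handler would return the guest's EFAULT
    res->tv_sec = 0;
    res->tv_nsec = 1;     // resolution of the simulated clock
    return 0;
}

int
main()
{
    timespec res;
    emulatedClockGetres(0, &res);
    std::printf("resolution: %lds %ldns\n", (long)res.tv_sec, (long)res.tv_nsec);
    return 0;
}
```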
|
|
The previous implementation did a pair of nested RMW operations,
which isn't compatible with the way that locked RMW operations are
implemented in the cache models. It was convenient though in that
it didn't require any new micro-ops, and supported cmpxchg16b using
64-bit memory ops. It also worked in AtomicSimpleCPU where
atomicity was guaranteed by the core and not by the memory system.
It did not work with timing CPU models though.
This new implementation defines new 'split' load and store micro-ops
which allow a single memory operation to use a pair of registers as
the source or destination, then uses a single ldsplit/stsplit RMW
pair to implement cmpxchg. This patch requires support for 128-bit
memory accesses in the ISA (added via a separate patch) to support
cmpxchg16b.
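A conceptual sketch (plain C++, not gem5 microcode) of the cmpxchg16b semantics the split micro-ops implement: one 128-bit memory location, compared and possibly replaced as a pair of 64-bit halves in a single read-modify-write step, which is why a register pair is needed on each side.

```cpp
#include <cstdint>
#include <cstdio>

struct U128 { std::uint64_t lo, hi; };

// Returns true (ZF set) if *mem matched 'expected' and was replaced by
// 'desired'. Per the description above, the load half maps to a single
// 'ldsplit' micro-op and the store half to a single 'stsplit' micro-op.
static bool
cmpxchg16bLike(U128 *mem, U128 &expected, const U128 &desired)
{
    if (mem->lo == expected.lo && mem->hi == expected.hi) {
        *mem = desired;
        return true;
    }
    expected = *mem;   // on failure, the old value comes back in the pair
    return false;
}

int
main()
{
    U128 m{1, 2}, exp{1, 2}, des{3, 4};
    bool swapped = cmpxchg16bLike(&m, exp, des);
    std::printf("swapped=%d mem={%llu,%llu}\n", swapped,
                (unsigned long long)m.lo, (unsigned long long)m.hi);
    return 0;
}
```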
|
|
Although the cache models support wider accesses, the ISA descriptions
assume that (for the most part) memory operands are integer types,
which makes it difficult to define instructions that do memory accesses
larger than 64 bits.
This patch adds some generic support for memory operands that are arrays
of uint64_t, and specifically a 'u2qw' operand type for x86 that is an
array of 2 uint64_ts (128 bits). This support is unused at this point,
but will be needed shortly for cmpxchg16b. Ideally the 128-bit SSE
memory accesses will also be rewritten to use this support.
Support for 128-bit accesses could also have been added using the gcc
__int128_t extension, which would have been less disruptive. However,
although clang also supports __int128_t, it's still non-standard.
Also, more importantly, this approach creates a path to defining
256- and 512-bit operands as well, which will be useful for eventual
AVX support.
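A small sketch of what the "array of uint64_t" representation buys (the U2qw alias is illustrative, not the generated operand class): a 128-bit operand is two 64-bit chunks moved as one access, so nothing has to fit in a single integer type.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <cstring>

using U2qw = std::array<std::uint64_t, 2>;   // the 'u2qw' shape: 2 quadwords

static U2qw
read128(const void *mem)
{
    U2qw val;
    std::memcpy(val.data(), mem, sizeof(val));   // one 16-byte access
    return val;
}

int
main()
{
    std::uint64_t backing[2] = { 0x1122334455667788ULL, 0x99aabbccddeeff00ULL };
    U2qw v = read128(backing);
    std::printf("%llx %llx\n", (unsigned long long)v[0], (unsigned long long)v[1]);
    return 0;
}
```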
|
|
MemOperand variables were being initialized to 0
"to avoid 'uninitialized variable' errors" but these
no longer seem to be a problem (with the exception of
one use case in POWER that is arguably broken and
easily fixed here).
Getting rid of the initialization is necessary to
set up a subsequent patch which extends memory
operands to possibly not be scalars, making the
'= 0' initialization no longer feasible.
|
|
Writing 16 bytes from an 8-byte source value is a bad idea.
This doesn't appear to have broken anything, but showed up
as spurious differences when tracediffing runs.
|
|
Result of running 'hg m5style --skip-all --fix-control -a' to get
rid of '== true' comparisons, plus trivial manual edits to get
rid of '== false'/'== False' comparisons.
Left a couple of explicit comparisons in where they didn't seem
unreasonable:
invalid boolean comparison in src/arch/mips/interrupts.cc:155
>> DPRINTF(Interrupt, "Interrupts OnCpuTimerINterrupt(tc) == true\n");<<
invalid boolean comparison in src/unittest/unittest.hh:110
>> "EXPECT_FALSE(" #expr ")", (expr) == false)<<
|
|
In the process of trying to get rid of an '== false' comparison,
it became apparent that a slightly more involved solution was
needed. Split this out into its own changeset since it's not
a totally trivial local change like the others.
|
|
Result of running 'hg m5style --skip-all --fix-control -a'.
|
|
Result of running 'hg m5style --skip-all --fix-white -a'.
|
|
|
|
For historical reasons, the ExecContext interface had a single
function, readMem(), that did two different things depending on
whether the ExecContext supported atomic memory mode (i.e.,
AtomicSimpleCPU) or timing memory mode (all the other models).
In the former case, it actually performed a memory read; in the
latter case, it merely initiated a read access, and the read
completion did not happen until later when a response packet
arrived from the memory system.
This led to some confusing things, including timing accesses
being required to provide a pointer for the return data even
though that pointer was only used in atomic mode.
This patch splits this interface, adding a new initiateMemRead()
function to the ExecContext interface to replace the timing-mode
use of readMem().
For consistency and clarity, the readMemTiming() helper function
in the ISA definitions is renamed to initiateMemRead() as well.
For x86, where the access size is passed in explicitly, we can
also get rid of the data parameter at this level. For other ISAs,
where the access size is determined from the type of the data
parameter, we have to keep the parameter for that purpose.
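A hedged sketch of the split interface (signatures heavily simplified; the real ExecContext methods take flags, build a request, and return a Fault): atomic mode completes the read immediately, while timing mode only initiates it and therefore needs no destination pointer.

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>

using Addr = std::uint64_t;

struct ExecContextLike
{
    // Atomic mode: performs the access and returns the data now.
    virtual int readMem(Addr addr, std::uint8_t *data, unsigned size) = 0;
    // Timing mode: only starts the access; the data arrives later with the
    // response packet, so no data pointer is required here.
    virtual int initiateMemRead(Addr addr, unsigned size) = 0;
    virtual ~ExecContextLike() = default;
};

struct ToyAtomicContext : ExecContextLike
{
    std::uint8_t memory[64] = { 42 };

    int
    readMem(Addr addr, std::uint8_t *data, unsigned size) override
    {
        std::memcpy(data, memory + addr, size);   // access completes in line
        return 0;
    }

    int initiateMemRead(Addr, unsigned) override { return 0; }  // unused here
};

int
main()
{
    ToyAtomicContext ctx;
    std::uint8_t byte;
    ctx.readMem(0, &byte, 1);
    std::cout << int(byte) << std::endl;
    return 0;
}
```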
|
|
The readMemAtomic/writeMemAtomic helper functions were calling
readMemTiming/writeMemTiming respectively. This is functionally
correct, since the *Timing functions are doing the same access
initiation operation as the *Atomic functions (just that the
*Atomic versions also complete the access in line). It also
provides for some (very minimal) code reuse. Unfortunately,
it's potentially pretty confusing, since it makes it look like
the atomic accesses are somehow being converted to timing
accesses. It also gets in the way of specializing the timing
interface (as will be done in a future patch).
|
|
|
|
By ignoring SIGTRAP, using --debug-break <N> when not connected to
a debugger becomes a no-op. Apparently this was intended to be a
feature, though the rationale is not clear.
If we don't ignore SIGTRAP, then using --debug-break <N> when not
connected to a debugger causes the simulation process to terminate
at tick N. This is occasionally useful, e.g., if you just want to
collect a trace for a specific window of execution then you can combine
this with --debug-start to do exactly that.
In addition to not ignoring the signal, this patch also updates
the --debug-break help message and deletes a handful of unprotected
calls to Debug::breakpoint() that relied on the prior behavior.
|
|
Make best use of the compiler, and enable -Wextra as well as
-Wall. There are a few issues that had to be resolved, but they are
all trivial.
|
|
The key parameter can be used to read out various config parameters from
within the simulated software.
|
|
Currently, the wire format of register values in g- and G-packets is
modelled using a union of uint8/16/32/64 arrays. The offset positions
of each register are expressed as a "register count" scaled according
to the width of the register in question. This results in
counterintuitive and error-prone "register count arithmetic", and some
formats would even be altogether unrepresentable in such model, e.g.
a 64-bit register following a 32-bit one would have a fractional index
in the regs64 array.
Another difficulty is that the array is allocated before the actual
architecture of the workload is known (and therefore before the correct
size for the array can be calculated).
With this patch I propose a simpler mechanism for expressing the
register set structure. In the new code, GdbRegCache is an abstract
class; its subclasses contain straightforward structs reflecting the
register representation. The determination whether to use e.g. the
AArch32 vs. AArch64 register set (or SPARCv8 vs SPARCv9, etc.) is made
by polymorphically dispatching getregs() to the concrete subclass.
The subclass is not instantiated until it is needed for actual
g-/G-packet processing, when the mode is already known.
This patch is not meant to be merged in on its own, because it changes
the contract between src/base/remote_gdb.* and src/arch/*/remote_gdb.*,
so as it stands right now, it would break the other architectures.
In this patch only the base and the ARM code are provided for review;
once we agree on the structure, I will provide src/arch/*/remote_gdb.*
for the other architectures; those patches could then be merged in
together.
Review Request: http://reviews.gem5.org/r/3207/
Pushed by Joel Hestness <jthestness@gmail.com>
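A sketch of the structure proposed above (simplified and illustrative; real subclasses also implement setRegs() and the exact g-packet layouts): an abstract register cache whose concrete subclass is a plain struct matching one mode's wire format, instantiated only once the mode is known.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <memory>

class GdbRegCacheLike
{
  public:
    virtual ~GdbRegCacheLike() = default;
    virtual const void *data() const = 0;   // raw bytes sent in the g-packet
    virtual std::size_t size() const = 0;
    virtual void getRegs() = 0;             // fill from the thread context
};

class AArch64RegCacheLike : public GdbRegCacheLike
{
    struct Regs
    {
        std::uint64_t x[31];   // the struct simply mirrors the wire format
        std::uint64_t sp, pc;
        std::uint32_t cpsr;    // a 32-bit register after 64-bit ones is fine here
    };
    Regs r = {};

  public:
    const void *data() const override { return &r; }
    std::size_t size() const override { return sizeof(r); }
    void getRegs() override { r.pc = 0x80000000; /* would read the ThreadContext */ }
};

int
main()
{
    // Instantiated only when g-/G-packet processing needs it, e.g. for AArch64.
    std::unique_ptr<GdbRegCacheLike> regs = std::make_unique<AArch64RegCacheLike>();
    regs->getRegs();
    std::cout << regs->size() << " bytes in the g-packet payload\n";
    return 0;
}
```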
|
|
Add support for automatically discovering available platforms. The
Python side uses functionality similar to what we use when
auto-detecting available CPU models. The machine IDs have been updated
to match the platform configurations. If there isn't a matching
machine ID, the configuration scripts default to -1 which Linux uses
for device tree only platforms.
|
|
Add support for automatically selecting a boot loader that matches the
guest system's kernel. Instead of accepting a single boot loader, the
ArmSystem class now accepts a vector of boot loaders. When
initializing a system, we now look for the first boot loader with
an architecture that matches the kernel.
This changeset makes it possible to use the same system for both
64-bit and 32-bit kernels.
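A sketch of the selection logic described above (illustrative types; the real code compares the architectures reported by the loaded object files): given a list of boot loaders, pick the first one whose architecture matches the kernel's.

```cpp
#include <iostream>
#include <string>
#include <vector>

enum class Arch { Arm, Arm64 };

struct Loader { std::string path; Arch arch; };

static const Loader *
selectBootLoader(const std::vector<Loader> &loaders, Arch kernelArch)
{
    for (const auto &l : loaders)
        if (l.arch == kernelArch)
            return &l;            // first match wins
    return nullptr;               // no compatible boot loader found
}

int
main()
{
    std::vector<Loader> loaders = {
        { "boot.arm", Arch::Arm },
        { "boot.arm64", Arch::Arm64 },
    };
    if (const Loader *l = selectBootLoader(loaders, Arch::Arm64))
        std::cout << "using " << l->path << "\n";
    return 0;
}
```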
|
|
Appease clang.
|
|
As per the x86 architecture specification, matching TLB entries need to be
invalidated on a page fault. For instance, after a page fault due to inadequate
protection bits on a TLB hit, the TLB entry needs to be invalidated. This
behavior is clearly specified in the x86 architecture manuals from both AMD and
Intel. This invalidation is currently missing in gem5, due to which Linux
kernel versions 3.8 and up cannot be simulated efficiently. This is exposed by
a Linux optimisation in commit e4a1cc56e4d728eb87072c71c07581524e5160b1, which
removes a tlb flush on updating page table entries in x86.
Testing: Linux kernel versions 3.8 onwards were booting very slowly in FS mode,
due to repeated page faults (~300000 before the first print statement in a
bash file). Ensured that the page fault rate drops drastically and observed a
reduction in boot time from the order of hours to minutes for Linux kernels v3.8
and v3.11.
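A toy sketch of the required behaviour (hypothetical TinyTlb, not the gem5 x86 TLB code): when a fault is raised for an access that hit in the TLB, the matching entry is invalidated so the retry re-walks the possibly updated page table.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

using Addr = std::uint64_t;
constexpr Addr PageBytes = 4096;

struct TinyTlb
{
    std::unordered_map<Addr, bool> entries;   // vpn -> writable

    void demapPage(Addr vaddr) { entries.erase(vaddr / PageBytes); }

    void
    handleProtectionFault(Addr vaddr)
    {
        // Without this invalidation the stale entry keeps hitting and the
        // guest takes the same fault over and over.
        demapPage(vaddr);
        // ... then deliver the page fault to the guest as before ...
    }
};

int
main()
{
    TinyTlb tlb;
    tlb.entries[0x1000 / PageBytes] = false;   // read-only mapping cached
    tlb.handleProtectionFault(0x1234);         // write hit with bad permissions
    std::cout << tlb.entries.size() << " entries left\n";
    return 0;
}
```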
|
|
doCpuid() has two identical warn messages about unimplemented functions. Add
the family to the log message to make them distinguishable.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
|
|
Remove the SPARC V8 TBR register from the list of registers since it is not part
of SPARC V9. This brings the number of registers in sync with what gdb expects.
Without this patch gdb complains about a received packet being too long;
with this patch gdb is able to work properly with gem5 for remote debugging.
Note: gdb is version 7.8
Note: gdb is configured with --target=sparc64-sun-solaris2.8
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
|
|
The checkpoint changes, along with the SMT patches, have changed a
number of APIs. Adapt the ArmKvmCPU accordingly.
|