Age | Commit message | Author |
|
If two bitfields are of the same type, which also implies that they have the
same first and last bit positions, the existing implementation would copy the
entire bitfield. That includes the __data member, which is shared among all
the bitfields, effectively overwriting the entire bitunion.
This change also adjusts the write-only signed bitfield assignment operator to
match the unsigned version, pulling in the underlying implementation with a
"using" declaration instead of reimplementing it.
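A minimal sketch of the failure mode (hypothetical and much simplified from
gem5's src/base/bitunion.hh, which generates bitfields via macros): because
every bitfield in a bitunion aliases the same __data storage, a member-wise
copy assignment writes all of __data rather than just the field's bits.

    #include <cassert>
    #include <cstdint>

    template <int First, int Last>
    struct Bitfield
    {
        uint64_t __data;  // aliases the whole union's storage

        static constexpr uint64_t mask(int shift)
        {
            return ((1ULL << (Last - First + 1)) - 1) << shift;
        }

        operator uint64_t() const { return (__data >> First) & mask(0); }

        // The fix: assign only the bits this field covers. Without this,
        // the implicit operator= would do __data = other.__data and
        // clobber every other bitfield sharing the union.
        Bitfield &operator=(const Bitfield &other)
        {
            __data = (__data & ~mask(First)) | (other.__data & mask(First));
            return *this;
        }
    };

    union Status
    {
        uint64_t __data;
        Bitfield<0, 3> low;   // bits 3:0
        Bitfield<4, 7> high;  // bits 7:4
    };

    int main()
    {
        Status a, b;
        a.__data = 0xff;  // low = 0xf, high = 0xf
        b.__data = 0x30;  // low = 0x0, high = 0x3
        a.high = b.high;  // must touch only bits 7:4 of a
        assert(a.low == 0xf);      // would fail with a member-wise copy
        assert(a.__data == 0x3f);
        return 0;
    }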
|
|
These are for the monitor/mwait instructions, SSSE3, and XSAVE.
|
|
That change enables CPUID bits for features that aren't implemented in gem5.
If a simulated system tries to use those features because it was told it
could, bad things can happen.
|
|
Minor was reporting data cache accesses as ".inst" accesses.
This just switches the MasterPortID to dataMasterPortId.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
Added ARM aarch64 unlinkat syscall support, modeled on other <xxx>at syscalls.
This gets all of the cpu2006 int workloads passing in SE mode on aarch64.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
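For reference, a small host-side illustration of the unlinkat(2) semantics
being modeled (standard POSIX calls, not gem5's actual syscall handler): the
dirfd argument anchors relative paths, and AT_FDCWD reproduces plain unlink()
behavior.

    #include <fcntl.h>   // AT_FDCWD, O_DIRECTORY, open()
    #include <stdio.h>   // perror()
    #include <unistd.h>  // unlinkat(), close()

    int main()
    {
        // Relative to the current working directory, same as unlink().
        if (unlinkat(AT_FDCWD, "scratch.txt", 0) == -1)
            perror("unlinkat(AT_FDCWD, ...)");

        // Relative to an open directory file descriptor instead.
        int dirfd = open("/tmp", O_RDONLY | O_DIRECTORY);
        if (dirfd != -1) {
            if (unlinkat(dirfd, "scratch.txt", 0) == -1)
                perror("unlinkat(dirfd, ...)");
            close(dirfd);
        }
        return 0;
    }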
|
|
This patch implements the simd128 ADDSUBPD instruction for the x86 architecture.
Tested with a simple program in assembly language which executes the
instruction. Checked that different versions of the instruction are executed
by using the execution tracing option.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
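For reference, the instruction's effect shown with the matching SSE3
intrinsic (a host-side check of what the simulated instruction must compute;
compile with -msse3): the low lane is subtracted and the high lane is added.

    #include <pmmintrin.h>  // SSE3: _mm_addsub_pd
    #include <cstdio>

    int main()
    {
        __m128d a = _mm_set_pd(4.0, 3.0);  // a = {hi = 4.0, lo = 3.0}
        __m128d b = _mm_set_pd(1.0, 2.0);  // b = {hi = 1.0, lo = 2.0}
        // ADDSUBPD: r.lo = a.lo - b.lo, r.hi = a.hi + b.hi
        __m128d r = _mm_addsub_pd(a, b);
        double out[2];
        _mm_storeu_pd(out, r);
        std::printf("lo = %f, hi = %f\n", out[0], out[1]);  // 1.0, 5.0
        return 0;
    }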
|
|
This change includes edits to the MC146818 timer to prevent RTC events from
firing before startup, complying with the SimObject initialization call
sequence.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
According to Linux man pages, if writev is successful, it returns the total
number of bytes written. Otherwise, it returns an error code. Instead of
returning 0, return the result from the actual call to writev in the system
call.
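A sketch of the corrected pattern (function and variable names hypothetical;
gem5's real handler has more plumbing): propagate the host writev() result,
mapping failure to a negative errno.

    #include <sys/uio.h>  // writev(), struct iovec
    #include <cerrno>

    // Hypothetical tail end of an SE-mode writev handler.
    long emulatedWritev(int hostFd, const struct iovec *iov, int iovcnt)
    {
        ssize_t result = writev(hostFd, iov, iovcnt);
        if (result == -1)
            return -errno;  // error code back to the simulated program
        return result;      // total number of bytes written
    }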
|
|
Prefetchers previously used rand() to generate random numbers.
|
|
Without this tweak, a prefetcher will happily prefetch data that will
promptly be invalidated and overwritten by a WriteInvalidate.
|
|
The cache's MemSidePacketQueue schedules a sendEvent based upon
nextMSHRReadyTime(), which is the time when the next MSHR is ready or when a
future prefetch is ready. However, a prefetch being ready does not guarantee
that it can obtain an MSHR. So, when all MSHRs are full, the simulation ends
up unnecessarily scheduling a sendEvent every picosecond until an MSHR is
finally freed and the prefetch can happen.
This patch fixes this by not signaling the prefetch ready time if the prefetch
could not be generated. The event is rescheduled as soon as an MSHR becomes
available.
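A sketch of the guard (all names hypothetical; the real cache consults its
MSHR queue and prefetcher objects): the prefetcher's ready time is only
folded into the next-event time when an MSHR could actually be allocated for
the prefetch.

    #include <algorithm>
    #include <cstdint>

    using Tick = std::uint64_t;

    Tick nextMSHRReadyTime(Tick mshrReady, Tick prefetchReady,
                           bool mshrAvailableForPrefetch)
    {
        Tick when = mshrReady;
        // Don't advertise a prefetch as ready while all MSHRs are full;
        // the send event is rescheduled once an MSHR is freed.
        if (mshrAvailableForPrefetch)
            when = std::min(when, prefetchReady);
        return when;
    }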
|
|
Previously the code commented about an unhandled case where it might be
possible for a writeback to arrive after a prefetch was generated but
before it was sent to the memory system. I hit that case. Luckily the
prefetchSquash() logic already in the code handles dropping prefetch
requests in certain circumstances.
|
|
Re-organizes the prefetcher class structure. Previously the
BasePrefetcher forced multiple assumptions on the prefetchers that
inherited from it. This patch makes the BasePrefetcher class truly
representative of base functionality. For example, the base class no
longer enforces FIFO order. Instead, prefetchers with FIFO requests
(like the existing stride and tagged prefetchers) now inherit from a
new QueuedPrefetcher base class.
Finally, the stride-based prefetcher now uses a customizable lookup table
(sets/ways) rather than the previous fully associative structure.
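A structural sketch of the resulting hierarchy (simplified; the method and
member names are assumptions, not gem5's exact API):

    #include <cstdint>
    #include <list>

    // Base class: notification interface only, no queueing policy.
    class BasePrefetcher
    {
      public:
        virtual ~BasePrefetcher() {}
        virtual void notify(uint64_t addr) = 0;
    };

    // Carries the FIFO request queue that previously lived in the base.
    class QueuedPrefetcher : public BasePrefetcher
    {
      protected:
        std::list<uint64_t> pfQueue;  // outstanding prefetch addresses
    };

    // Stride detection over a set-associative (sets x ways) PC table
    // instead of the old fully associative structure.
    class StridePrefetcher : public QueuedPrefetcher
    {
      public:
        void notify(uint64_t addr) override
        {
            (void)addr;  // index the table, compute stride, queue prefetches
        }
    };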
|
|
Adds a new parameter that reserves some number of MSHR entries for demand
accesses. This helps prevent prefetchers from taking all MSHRs, forcing demand
requests from the CPU to stall.
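A sketch of how such a reserve can gate prefetch allocation (names
hypothetical):

    // A prefetch may only take an MSHR if at least demandReserve entries
    // would remain free for demand misses.
    bool canPrefetch(int allocated, int numEntries, int demandReserve)
    {
        return allocated < numEntries - demandReserve;
    }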
|
|
This patch adds table walker stats for:
- Walk events
- Instruction vs Data
- Page size histogram
- Wait time and service time histograms
- Pending requests histogram (per cycle) - measures the distribution of the
number of in-flight walks L (p(1..) = how often busy, p(0) = how often idle)
- Squashes, before starting and after completion
|
|
This patch gives the user direct influence over the number of DRAM
ranks to make it easier to tune the memory density without affecting
the bandwidth (previously the only means of scaling the device count
was through the number of channels).
The patch also adds some basic sanity checks to ensure that the number
of ranks is a power of two (since we rely on bit slices in the address
decoding).
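The check itself amounts to the classic bit trick (sketch; gem5 provides an
isPowerOf2() helper in base/intmath.hh):

    #include <cstdio>
    #include <cstdlib>

    static bool isPowerOf2(unsigned n)
    {
        return n != 0 && (n & (n - 1)) == 0;
    }

    void checkRanks(unsigned ranksPerChannel)
    {
        // The address decoding slices rank bits directly, so a
        // non-power-of-two rank count would corrupt the mapping.
        if (!isPowerOf2(ranksPerChannel)) {
            std::fprintf(stderr, "rank count %u is not a power of 2\n",
                         ranksPerChannel);
            std::exit(1);
        }
    }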
|
|
This patch addresses an issue seen with the KVM CPU where the refresh
events scheduled by the DRAM controller forces the simulator to switch
out of the KVM mode, thus killing performance.
The current patch works around the fact that we currently have no
proper API to inform a SimObject of the mode switches. Instead we rely
on drainResume being called after any switch, and cache the previous
mode locally to be able to decide on appropriate actions.
The switcheroo regression requires a minor stats bump as a result.
|
|
This patch adds rank-wise refresh to the controller, as opposed to the
channel-wide refresh currently in place. In essence each rank can be
refreshed independently, and for this to be possible the controller
is extended with a state machine per rank.
Without this patch the data bus is always idle during a refresh, as
all the ranks are refreshing at the same time. With the rank-wise
refresh it is possible to use one rank while another one is
refreshing, and thus the data bus can be kept busy.
The patch introduces a Rank class to encapsulate the state per rank,
and also shifts all the relevant banks, activation tracking etc to the
rank. The arbitration is also updated to consider the state of the rank.
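A structural sketch of the per-rank encapsulation (state names illustrative,
not necessarily gem5's):

    #include <vector>

    struct Bank { /* per-bank row and activation state */ };

    // One instance per rank; each rank advances its refresh state
    // machine independently, so other ranks can keep the data bus busy.
    class Rank
    {
      public:
        enum RefreshState {
            REF_IDLE,   // no refresh in progress
            REF_DRAIN,  // wait for in-flight accesses to this rank
            REF_PRE,    // precharge all banks in the rank
            REF_RUN     // refresh executing, rank unavailable
        };

        RefreshState refreshState = REF_IDLE;
        std::vector<Bank> banks;      // per-bank state moved into the rank
        unsigned numBanksActive = 0;  // activation tracking, per rank
    };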
|
|
Fix a minor issue that affects multi-rank systems.
|
|
This patch adds the stack distance calculator to the CommMonitor. The
stats are disabled by default.
|
|
This patch adds a stand-alone stack distance calculator. The stack
distance calculator is a passive SimObject that observes the addresses
passed to it. It calculates stack distances (LRU Distances) of
incoming addresses based on the partial sum hierarchy tree algorithm
described by Almási et al.: http://doi.acm.org/10.1145/773039.773043.
For each transaction a hashtable look-up is performed. At every
non-unique transaction the tree is traversed from the leaf at the
returned index to the root, the old node is deleted from the tree, and
the sums (to the right) are collected and decremented. The collected
sum represents the stack distance of the found node. At every unique
transaction the stack distance is returned as
numeric_limits<uint64_t>::max().
In addition to the basic stack distance calculation, a feature to mark
an old node in the tree is added. This is useful if it is required to
see the reuse pattern. For example, writebacks to the lower level (e.g. to
the membus from the L2) can be marked instead of being removed from the
stack (the Node's isMarked flag set to true). If the same address is later
accessed (by the L1), the isMarked flag will be true. This gives some
insight into how the writeback policy of the lower level affects the
read/write accesses in an application.
Debugging is enabled by setting the verify flag to true. Debugging is
implemented using a dummy stack that behaves in a naive way, using STL
vectors. Note that this has a large impact on run time.
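A naive reference implementation of the definition, in the spirit of the
dummy verification stack (not the tree algorithm): the distance of an access
is its depth in the LRU stack, i.e. the number of distinct addresses touched
since the previous access to the same address.

    #include <cstdint>
    #include <cstdio>
    #include <initializer_list>
    #include <limits>
    #include <vector>

    uint64_t stackDistance(std::vector<uint64_t> &stack, uint64_t addr)
    {
        for (size_t i = 0; i < stack.size(); ++i) {
            if (stack[i] == addr) {
                // Found at depth i: report i and move to the top (LRU).
                stack.erase(stack.begin() + i);
                stack.insert(stack.begin(), addr);
                return i;
            }
        }
        stack.insert(stack.begin(), addr);  // first (unique) access
        return std::numeric_limits<uint64_t>::max();
    }

    int main()
    {
        std::vector<uint64_t> stack;
        for (uint64_t a : {0xA0ULL, 0xB0ULL, 0xC0ULL, 0xA0ULL}) {
            uint64_t d = stackDistance(stack, a);
            std::printf("addr %#llx -> distance %llu\n",
                        (unsigned long long)a, (unsigned long long)d);
        }
        // The second access to 0xA0 has distance 2 (0xB0 and 0xC0
        // intervened); the three first accesses are unique -> max().
        return 0;
    }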
|
|
This patch adds the MemChecker and MemCheckerMonitor classes. While
MemChecker can be integrated anywhere in the system and is independent,
the most convenient usage is through the MemCheckerMonitor; this, however,
limits where the MemChecker is able to observe read/write transactions.
|
|
We currently don't handle unaligned PCs correctly. There is one check
for unaligned PCs in the TLB when running in aarch64 mode, but this
check does not cover cases where the CPU does not do a TLB lookup when
decoding an instruction (e.g., a branch stays within the same cache
line). Additionally, the Decoder class sometimes throws an assertion
for unaligned PCs which breaks speculation.
This changeset introduces a decoder fault bit field in the ExtMachInst
structure. This field can be used to signal a decoder failure. If set,
the decoder generates an internal gem5fault instruction instead of a
normal instruction. This instruction in turn either panics (fault type
PANIC), returns a PCAlignmentFault (fault type UNALIGNED, aarch64), or
returns a PrefetchAbort (fault type UNALIGNED, aarch32).
The patch causes minor changes to the realview64 regressions, and a
stats bump will follow.
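A sketch of the fault bit field's shape (enumerator values illustrative):

    #include <cstdint>

    // Carried in the ExtMachInst; any value other than OK makes the
    // decoder emit the internal gem5fault instruction instead of a
    // normal decoding.
    enum class DecoderFault : std::uint8_t {
        OK,         // no fault, decode normally
        UNALIGNED,  // unaligned PC: PCAlignmentFault (aarch64) or
                    // PrefetchAbort (aarch32) at execute time
        PANIC,      // internal decoder error
    };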
|
|
This changeset adds more documentation to the ArmISA::Decoder class
and restructures it slightly to make API groups more obvious.
|
|
This patch adds support for filtering events in the PMU. In order to
do so, it updates the ISADevice base class to forward an ISA pointer
to ISA devices. This enables such devices to access the MiscReg file
to determine the current execution level.
|
|
The aarch64 system register decoder is currently not decoding
PMXEVTYPER_EL0 and PMCCFILTR_EL0 correctly. This changeset updates the
decoder so that they are decoded using the values in table C5-6 in ARM
DDI 0487A.c.
|
|
Add an assert in the PioPort that checks if a response packet from a
device has the right flags set before passing it on to the rest of the
memory system.
|
|
The VirtIO devices didn't correctly set the response flags in memory
packets. This changeset adds the required Packet::makeResponse()
calls.
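The pattern the devices were missing (Packet::makeResponse() is the real
gem5 call; the surrounding handler and the stand-in types below are
illustrative):

    #include <cstdint>

    using Tick = std::uint64_t;

    // Minimal stand-in for gem5's Packet (mem/packet.hh).
    struct Packet
    {
        bool response = false;
        void makeResponse() { response = true; }  // real API flips the cmd
    };
    using PacketPtr = Packet *;

    // Illustrative device read handler: convert the packet into a
    // response before sending it back, or the new PioPort assert fires.
    Tick deviceRead(PacketPtr pkt, Tick pioDelay)
    {
        // ... fill pkt's data from device registers ...
        pkt->makeResponse();
        return pioDelay;
    }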
|
|
The new single stepping implementation for x86 doesn't rely on any ISA
specific properties or functionality. This change pulls out the per ISA
implementation of those functions and promotes the X86 implementation to the
base class.
One drawback of that implementation is that the CPU might stop on an
instruction twice if it's affected by both breakpoints and single stepping.
While that might be a little surprising, it's harmless and would only happen
under somewhat unlikely circumstances.
|
|
This stub should allow remote debugging of 32 bit and 64 bit targets. Single
stepping seems to work, as do breakpoints. If both breakpoints and single
stepping affect an instruction, gdb will stop at the instruction twice before
continuing. That's a little surprising, but is generally harmless.
|
|
These can be used to simplify the implementation of single step in derived
classes.
|
|
The "Event" name is the same as the base event class. That's a bit confusing,
and makes it a little awkward to add other event types.
|
|
Use the comInstEventQueue to ensure GDB interrupts the simulation at an
instruction boundary and not in the middle of a macroop, memory access, etc.
|
|
Only the instruction address is actually checked, so there's no need to check
repeatedly while we're working through the microops of a macroop and that's
not changing.
|
|
Not all ISAs have 64 bit sized registers, so it's not always very convenient
to access the GDB register cache in 64 bit sized chunks. This change makes it
accessible in 8, 16, 32, or 64 bit chunks. The MIPS and ARM implementations
were working around that limitation by bundling and unbundling 32 bit values
into 64 bit values. That code has been removed.
|
|
Instead of counting the number of opcode bytes in an instruction and recording
each byte before the actual opcode, we can represent the path we took to get to
the actual opcode byte by using a type code. That has a couple of advantages.
First, we can disambiguate the properties of opcodes of the same length which
have different properties. Second, it reduces the amount of data stored in an
ExtMachInst, making them slightly easier/faster to create and process. This
also adds some flexibility as far as how different types of opcodes are
handled, which might come in handy if we decide to support VEX or XOP
instructions.
This change also adds tables to support properly decoding 3 byte opcodes.
Before we would fall off the end of some arrays, on top of the ambiguity
described above.
This change doesn't measurably affect performance on the twolf benchmark.
--HG--
rename : src/arch/x86/isa/decoder/three_byte_opcodes.isa => src/arch/x86/isa/decoder/three_byte_0f38_opcodes.isa
rename : src/arch/x86/isa/decoder/three_byte_opcodes.isa => src/arch/x86/isa/decoder/three_byte_0f3a_opcodes.isa
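A sketch of the type-code idea (enumerator names assumed): instead of
recording each escape byte on the way to the opcode, record which opcode
table the opcode came from.

    enum OpcodeType {
        BadOpcode,
        OneByteOpcode,        // op
        TwoByteOpcode,        // 0x0F op
        ThreeByte0F38Opcode,  // 0x0F 0x38 op
        ThreeByte0F3AOpcode   // 0x0F 0x3A op
    };

    struct Opcode
    {
        OpcodeType type;   // replaces the byte count and prefix bytes
        unsigned char op;  // the actual opcode byte
    };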
|
|
The value in a "bitfield" or in an ExtMachInst structure member may not be a
literal value; it might select from an arbitrary collection of options. Instead
of using the raw value of those constants in the decoder, it's easier to tell
what's going on if they can be referred to as a symbolic constant/enum.
To support that, the ISA description language is extended slightly so that in
addition to integer literals, the case value for decode blobs can also be a
string literal. It's up to the ISA author to ensure that the string evaluates
to a legal constant value when interpreted as C++.
|
|
|
The check which makes sure the length of the breakpoint being written is the
same as a MachInst is only correct on fixed instruction width ISAs. Instead of
incorrectly applying that check to all ISAs, this change makes that the
default check and lets ISA specific GDB classes override it.
|
|
This command is supposed to set up a timer which will put the drive into a
standby mode if it isn't sent a command within a given timeout. Since most of
the timeouts are generally significantly longer than a simulation would run
anyway, and we don't have an implementation for standby mode to begin with,
we can accept the command, do nothing, and report success.
|
|
This is used primarily for VNC.
|
|
This patch adds sorting based on the SimObject name or parameter name
for all situations where we iterate over dictionaries. This should
ensure a deterministic and consistent order across the host systems
and hopefully avoid regression results differing across python
versions.
|
|
This patch takes a clean-slate approach to providing WriteInvalidate
(write streaming, full cache line writes without first reading)
support.
Unlike the prior attempt, which took an aggressive approach of directly
writing into the cache before handling the coherence actions, this
approach follows the existing cache flows as closely as possible.
|
|
Prepares for a different implementation that follows in the next patch.
|
|
This patch fixes a case where a store in Minor's store buffer never
leaves the store buffer because it is prematurely counted as having been
issued, leading to the store buffer idling.
LSQ::StoreBuffer::numUnissuedAccesses should count the number of accesses
either in memory, or still in the store buffer after being completed.
For stores which are also barriers, the store will stay in the store
buffer for a cycle after it is completed and will be cleaned up by the
barrier clearing code (to ensure that barriers are completed in-order).
To achieve this, numUnissuedAccesses is not decremented when a store-barrier
is issued to memory, but when its barrier effect is cleared.
Without this patch, the correct behaviour happens when a memory transaction
is immediately accepted, but not if it needs a retry.
|
|
This patch fixes the checking of the number of memory instructions issued
per cycle in the Minor CPU.
|
|
This patch fixes a case where the Minor CPU can deadlock due to the lack
of a response to a TLB request, caused by a bug in fault handling in the ARM
table walker.
TableWalker::processWalkWrapper is the scheduler-called wrapper which
handles deferred walks that calls to TableWalker::walk cannot immediately
process. The handling of faults generated by the processWalk{AArch64,LPAE,}
calls in those two functions is different: processWalkWrapper ignores
fault returns from processWalk..., which can lead to ::finish not being
called on a translation.
This fix provides fault handling in processWalkWrapper similar to that
found in the leaf functions which call BaseTLB::Translation::finish.
|
|
In case the memory subsystem sends a combined response with invalidate
(e.g. ReadRespWithInvalidate), we cannot ignore the invalidate part
of the response.
If we were to ignore the invalidate part, under certain circumstances
this effectively leads to reordering of loads to the same address
which is not permitted under any memory consistency model implemented
in gem5.
Consider the case where a later load's address is computed before an
earlier load in program order, and is therefore sent to the memory
subsystem first. At some point the earlier load's address is computed
and in doing so correctly marks the later load as a
possibleLoadViolation. In the meantime some other node writes and
sends invalidations to all other nodes. The invalidation races with
the later load's ReadResp, and arrives before ReadResp and is
deferred. Upon receipt of the ReadResp, the response is changed to
ReadRespWithInvalidate, and sent to the CPU. If we ignore the
invalidate part of the packet, we let the later load read the old
value of the address. Eventually the earlier load's ReadResp arrives,
but with new data. As there was no invalidate snoop (sunk into the
ReadRespWithInvalidate), and if we did not process the invalidate of
the ReadRespWithInvalidate, we obtain a load reordering.
A similar scenario can be constructed where the earlier load's address
is computed after ReadRespWithInvalidate arrives for the younger
load. In this case hitExternalSnoop needs to be set to true on the
ReadRespWithInvalidate, so that upon knowing the address of the
earlier load, checkViolations will cause the later load to be
squashed.
Finally we must account for the case where both loads are sent to the
memory subsystem (reordered), a snoop invalidate arrives and correctly
sets the later load's fault to ReExec. However, before the CPU
processes the fault, the later load's ReadResp arrives and the
writeback discards the outstanding fault. We must add a check to
ensure that we do not skip any unprocessed faults.
|
|
Ensure the snoop address check is always using a cache-block aligned
address. This patch updates Alpha and Mips to match the other ISAs.
|