Age | Commit message | Author |
|
These can be used to simplify the implementation of single step in derived
classes.
|
|
The "Event" name is the same as the base event class. That's a bit confusing,
and makes it a little awkward to add other event types.
|
|
Use the comInstEventQueue to ensure GDB interrupts the simulation at an
instruction boundary and not in the middle of a macroop, memory access, etc.
|
|
Only the instruction address is actually checked, so there's no need to check
repeatedly while we're working through the microops of a macroop, since the
address isn't changing.
|
|
Not all ISAs have 64 bit sized registers, so it's not always very convenient
to access the GDB register cache in 64 bit sized chunks. This change makes it
accessible in 8, 16, 32, or 64 bit chunks. The MIPS and ARM implementations
were working around that limitation by bundling and unbundling 32 bit values
into 64 bit values. That code has been removed.
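A minimal sketch of the idea, with illustrative names rather than the actual
gem5 class: keep the cache as raw bytes and let each ISA view it at whatever
chunk size its registers use.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct GdbRegCache
    {
        std::vector<uint8_t> bytes;   // raw register cache

        // T is uint8_t, uint16_t, uint32_t or uint64_t
        template <typename T>
        T &
        chunk(size_t idx)
        {
            return reinterpret_cast<T *>(bytes.data())[idx];
        }
    };

A 32-bit ISA can then fill the cache with e.g. cache.chunk<uint32_t>(i) = value;
without any 64-bit bundling.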
|
|
Instead of counting the number of opcode bytes in an instruction and recording
each byte before the actual opcode, we can represent the path we took to get to
the actual opcode byte by using a type code. That has a couple of advantages.
First, we can disambiguate opcodes of the same length which have different
properties. Second, it reduces the amount of data stored in an
ExtMachInst, making them slightly easier/faster to create and process. This
also adds some flexibility as far as how different types of opcodes are
handled, which might come in handy if we decide to support VEX or XOP
instructions.
This change also adds tables to support properly decoding 3 byte opcodes.
Before we would fall off the end of some arrays, on top of the ambiguity
described above.
This change doesn't measurably affect performance on the twolf benchmark.
--HG--
rename : src/arch/x86/isa/decoder/three_byte_opcodes.isa => src/arch/x86/isa/decoder/three_byte_0f38_opcodes.isa
rename : src/arch/x86/isa/decoder/three_byte_opcodes.isa => src/arch/x86/isa/decoder/three_byte_0f3a_opcodes.isa
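A hedged illustration of what such a type code can look like (the exact
enumerators in gem5 may differ):

    // One value per path taken to reach the actual opcode byte.
    enum OpcodeType
    {
        BadOpcode,
        OneByteOpcode,
        TwoByteOpcode,         // 0F xx
        ThreeByte0F38Opcode,   // 0F 38 xx
        ThreeByte0F3AOpcode    // 0F 3A xx
    };

ExtMachInst then stores a single OpcodeType value instead of a byte count plus
the leading opcode bytes.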
|
|
The value in a "bitfield" or in an ExtMachInst structure member may not be a
literal value; it might select from an arbitrary collection of options. Instead
of using the raw value of those constants in the decoder, it's easier to tell
what's going on if they can be referred to as a symbolic constant/enum.
To support that, the ISA description language is extended slightly so that in
addition to integer literals, the case value for decode blocks can also be a
string literal. It's up to the ISA author to ensure that the string evaluates
to a legal constant value when interpreted as C++.
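A hedged sketch of what the extension amounts to in the generated C++ decoder
(the member and function names are illustrative): the string case value is
emitted verbatim as the case label, so it must name a constant the compiler can
see, such as an enumerator.

    StaticInstPtr
    decodeOpcode(const ExtMachInst &machInst)
    {
        switch (machInst.opcode.type) {
          case TwoByteOpcode:   // came from the string literal "TwoByteOpcode"
            return decodeTwoByteOpcodes(machInst);
          default:
            return new Unknown(machInst);
        }
    }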
|
|
|
|
The check which makes sure the length of the breakpoint being written is the
same as a MachInst is only correct on fixed instruction width ISAs. Instead of
incorrectly applying that check to all ISAs, this change makes that the
default check and lets ISA specific GDB classes override it.
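A minimal sketch of such a hook, assuming a virtual member function along these
lines (the base class keeps the fixed-width assumption as the default):

    // Default: breakpoints must be exactly one machine instruction wide.
    virtual bool
    checkBpLen(size_t len)
    {
        return len == sizeof(MachInst);
    }

A variable-length ISA such as x86 can then override this to accept other
breakpoint sizes.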
|
|
This command is supposed to set up a timer which will put the drive into a
standby mode if it isn't sent a command within a given timeout. Since most of
the timeouts are generally significantly longer than a simulation would run
anyway, and we don't have an implementation for standby mode to begin with,
we can accept the command, do nothing, and report success.
|
|
This is used primarily for VNC.
|
|
This patch adds sorting based on the SimObject name or parameter name
for all situations where we iterate over dictionaries. This should
ensure a deterministic and consistent order across the host systems
and hopefully avoid regression results differing across Python
versions.
|
|
This patch takes a clean-slate approach to providing WriteInvalidate
(write streaming, full cache line writes without first reading)
support.
Unlike the prior attempt, which took an aggressive approach of directly
writing into the cache before handling the coherence actions, this
approach follows the existing cache flows as closely as possible.
|
|
Prepare for a different implementation, which follows in the next patch.
|
|
This patch fixes a case where a store in Minor's store buffer never
leaves the store buffer as it is prematurely counted as having been
issued, leading to the store buffer idling.
LSQ::StoreBuffer::numUnissuedAccesses should count the number of accesses
either in memory, or still in the store buffer after being completed.
For stores which are also barriers, the store will stay in the store
buffer for a cycle after it is completed and will be cleaned up by the
barrier clearing code (to ensure that barriers are completed in-order).
To achieve this, numUnissuedAccesses is not decremented when a store-barrier
is issued to memory, but when its barrier effect is cleared.
Without this patch, the correct behaviour happens when a memory transaction
is immediately accepted, but not if it needs a retry.
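A minimal, illustrative restatement of the counting rule in code (not the
actual Minor LSQ; member and function names are simplified):

    void
    StoreBuffer::issueToMemory(SlotPtr store)
    {
        sendToMemory(store);
        // Do not decrement numUnissuedAccesses here for store-barriers; the
        // store stays in the buffer until its barrier effect is cleared.
    }

    void
    StoreBuffer::clearBarrier(SlotPtr store)
    {
        numUnissuedAccesses--;   // the barrier's store has now fully left
    }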
|
|
This patch fixes the checking of the number of memory instructions issued
per cycle in the Minor CPU.
|
|
This patch fixes a case where the Minor CPU can deadlock due to the lack
of a response to a TLB request because of a bug in fault handling in the ARM
table walker.
TableWalker::processWalkWrapper is the scheduler-called wrapper which
handles deferred walks which calls to TableWalker::wait cannot immediately
process. The handling of faults generated by processWalk{AArch64,LPAE,}
calls in those two functions is different. processWalkWrapper ignores
fault returns from processWalk... which can lead to ::finish not being
called on a translation.
This fix provides fault handling in processWalkWrapper similar to that
found in the leaf functions which call BaseTLB::Translation::finish.
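A simplified sketch of the fix, assuming the member names used above (the
control flow and the longDescFormat check are illustrative stand-ins, not the
exact gem5 code):

    void
    TableWalker::processWalkWrapper()
    {
        Fault fault = currState->aarch64 ? processWalkAArch64() :
                      longDescFormat     ? processWalkLPAE() :
                                           processWalk();
        if (fault != NoFault) {
            // Previously the fault was dropped here, so finish() was never
            // called and the CPU waited forever for the translation.
            currState->transState->finish(fault, currState->req,
                                          currState->tc, currState->mode);
        }
    }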
|
|
In case the memory subsystem sends a combined response with invalidate
(e.g. ReadRespWithInvalidate), we cannot ignore the invalidate part
of the response.
If we were to ignore the invalidate part, under certain circumstances
this effectively leads to reordering of loads to the same address
which is not permitted under any memory consistency model implemented
in gem5.
Consider the case where a later load's address is computed before an
earlier load in program order, and is therefore sent to the memory
subsystem first. At some point the earlier load's address is computed
and in doing so correctly marks the later load as a
possibleLoadViolation. In the meantime some other node writes and
sends invalidations to all other nodes. The invalidation races with
the later load's ReadResp, and arrives before ReadResp and is
deferred. Upon receipt of the ReadResp, the response is changed to
ReadRespWithInvalidate, and sent to the CPU. If we ignore the
invalidate part of the packet, we let the later load read the old
value of the address. Eventually the earlier load's ReadResp arrives,
but with new data. As there was no invalidate snoop (sunk into the
ReadRespWithInvalidate), and if we did not process the invalidate of
the ReadRespWithInvalidate, we obtain a load reordering.
A similar scenario can be constructed where the earlier load's address
is computed after ReadRespWithInvalidate arrives for the younger
load. In this case hitExternalSnoop needs to be set to true on the
ReadRespWithInvalidate, so that upon knowing the address of the
earlier load, checkViolations will cause the later load to be
squashed.
Finally we must account for the case where both loads are sent to the
memory subsystem (reordered), a snoop invalidate arrives and correctly
sets the later load's fault to ReExec. However, before the CPU
processes the fault, the later load's ReadResp arrives and the
writeback discards the outstanding fault. We must add a check to
ensure that we do not skip any unprocessed faults.
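A minimal, illustrative sketch of the receiving end (not the actual O3 LSQ
code; the helper names are simplified): the invalidate part of the response is
fed through the same path as an external snoop, and the writeback is guarded so
it cannot discard a pending fault.

    void
    LSQUnit::completeDataAccess(PacketPtr pkt, DynInstPtr inst)
    {
        // Do not ignore the invalidate part of e.g. ReadRespWithInvalidate:
        // treat it like an external snoop so younger loads to the same line
        // are squashed or marked with hitExternalSnoop.
        if (pkt->isInvalidate())
            checkSnoop(pkt);

        // Do not let the writeback silently drop a fault (e.g. ReExec) that
        // an earlier snoop set but the CPU has not processed yet.
        if (inst->getFault() == NoFault)
            writeback(inst, pkt);
    }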
|
|
Ensure the snoop address check is always using a cache-block aligned
address. This patch updates Alpha and MIPS to match the other ISAs.
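For reference, the usual cache-block alignment (assuming Addr is gem5's 64-bit
address type and blkSize is a power of two):

    Addr blk_addr = addr & ~(Addr(blkSize) - 1);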
|
|
Move the packet deallocations in the O3 CPU so that the completeDataAccess
deals only with the LSQ specific parts and the generic recvTimingResp frees the
packet in all other cases.
|
|
This patch allows objects to get the src/dest of a packet even if it
is not set to a valid port id. This simplifies (ab)using the bridge as
a buffer and latency adapter in situations where the neighbouring
MemObjects are not crossbars.
The checks that were done in the packet are now shifted to the
crossbar where the fields are used to index into the port
arrays. Thus, the carrier of the information is not burdened with
checking, and the crossbar can check not only that the destination is
set, but also that the port index is within limits.
|
|
This patch attempts to make the rules for data allocation in the
packet explicit, understandable, and easy to verify. The constructor
that copies a packet is extended with an additional flag "alloc_data"
to enable the call site to explicitly say whether the newly created
packet is short-lived (a zero-time snoop), or has an unknown life-time
and therefore should allocate its own data (or copy a static pointer
in the case of static data).
The tricky case is the static data. In essence this is a
copy-avoidance scheme where the original source of the request (DMA,
CPU etc) does not ask the memory system to return data as part of the
packet, but instead provides a pointer, and then the memory system
carries this pointer around, and copies the appropriate data to the
location itself. Thus any derived packet actually never copies any
data. As the original source does not copy any data from the response
packet when arriving back at the source, we must maintain the copy of
the original pointer to not break the system. We might want to revisit
this one day and pay the price for a few extra memcpy invocations.
All in all this patch should make it easier to grok what is going on
in the memory system and how data is actually copied (or not).
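A simplified sketch of the rule the copy constructor follows (the flag names
match gem5, but the code is illustrative rather than the exact implementation):

    Packet(const Packet &other, bool alloc_data) :
        // copies of cmd, req, addr, size, flags, etc. omitted
        data(nullptr)
    {
        if (other.flags.isSet(STATIC_DATA)) {
            data = other.data;                    // keep the original pointer
        } else if (alloc_data) {
            data = new uint8_t[other.getSize()];  // unknown lifetime: own a copy
            std::memcpy(data, other.data, other.getSize());
        } else {
            data = other.data;   // zero-time snoop: borrow, never outlive it
        }
    }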
|
|
This patch cleans up the use of hasData and checkFunctional in the
packet. The hasData function unfortunately suggests that it checks
whether the packet has a valid data pointer, when in fact it only
checks whether the specific packet type is specified to have a data
payload. The confusion led to a bug in checkFunctional. The latter
function is also tidied up to avoid name overloading.
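The distinction in a nutshell (the first line mirrors what the packet actually
does; the second name is hypothetical, only to show the other question one
might want to ask):

    bool hasData() const { return cmd.hasData(); }         // command carries a payload
    bool hasValidData() const { return data != nullptr; }  // pointer is actually set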
|
|
This adds a basic level of sanity checking to the packet by ensuring
that a request is not modified once the packet is created. The only
issue that had to be worked around is the relaying of
software-prefetches in the cache. The specific situation is now solved
by first copying the request, and then creating a new packet
accordingly.
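A minimal sketch of the workaround (raw pointers as in the gem5 of this era;
simplified): copy the request first, then build the new packet from the copy,
so the request attached to the original packet is left untouched.

    Request *prefetch_req = new Request(*pkt->req);         // copy, do not modify
    PacketPtr prefetch_pkt = new Packet(prefetch_req, pkt->cmd);
    prefetch_pkt->allocate();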
|
|
This patch tidies up the Request class, making all getters const. The
odd one out is incAccessDepth, which is called by the memory system as
packets carry the request around. This is also const to enable the
packet to hold on to a const Request.
|
|
|
|
This patch simplifies how we deal with dynamically allocated data in
the packet, always assuming that it is array allocated, and hence
should be array deallocated (delete[] as opposed to delete). The only
uses of dataDynamic was in the Ruby testers.
The ARRAY_DATA flag in the packet is removed accordingly. No
defragmentation of the flags is done at this point, leaving a gap in
the bit masks.
As the last part of the patch, it renames dataDynamicArray to dataDynamic.
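With only one dynamic case left, the cleanup boils down to something like this
(an illustrative sketch, not the exact gem5 code):

    void
    deleteData()
    {
        if (flags.isSet(DYNAMIC_DATA))
            delete [] data;       // always array-allocated, so delete[]
        flags.clear(STATIC_DATA | DYNAMIC_DATA);
        data = nullptr;
    }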
|
|
This patch cleans up the packet memory allocation confusion. The data
is always allocated at the requesting side, when a packet is created
(or copied), and there is never a need for any device to allocate any
space if it is merely responding to a packet. This behaviour is in line
with how SystemC and TLM works as well, thus increasing
interoperability, and matching established conventions.
The redundant calls to Packet::allocate are removed, and the checks in
the function are tightened up to make sure data is only ever allocated
once. There are still some oddities in the packet copy constructor
where we copy the data pointer if it is static (without ownership),
and allocate new space if the data is dynamic (with ownership). The
latter is being worked on further in a follow-on patch.
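A sketch of the tightened allocate, close in spirit to what is described above
(illustrative, not the exact code):

    void
    allocate()
    {
        // data must only ever be allocated once, by the requesting side
        assert(!flags.isSet(STATIC_DATA | DYNAMIC_DATA));
        flags.set(DYNAMIC_DATA);
        data = new uint8_t[getSize()];
    }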
|
|
This patch changes the various write functions in the port proxies
to use const pointers for all sources (similar to how memcpy works).
The one unfortunate aspect is the need for a const_cast in the packet,
to avoid having to juggle a const and a non-const data pointer. This
design decision can always be re-evaluated at a later stage.
|
|
This patch takes a first step in tightening up how we use the data
pointer in write packets. A const getter is added for the pointer
itself (getConstPtr), and a number of member functions are also made
const accordingly. In a range of places throughout the memory system
the new member is used.
The patch also removes the unused isReadWrite function.
|
|
This patch removes the parameter that enables bypassing the null check
in the Packet::getPtr method. A number of call sites assume the value
to be non-null.
The one odd case is the RubyTester, which issues zero-sized
prefetches(!), and despite being reads they had no valid data
pointer. This is now fixed, but the size oddity remains (unless anyone
objects or has any good suggestions).
Finally, in the Ruby Sequencer, appropriate checks are made for flush
packets as they have no valid data pointer.
|
|
This patch adds a first cut GDDR5 config to accommodate the users
combining gem5 and GPUSim. The config is based on a SK Hynix
datasheet, and the Nvidia GTX580 specification. Someone from the
GPUSim user-camp should tweak the default page-policy and static
frontend and backend latencies.
|
|
Mostly addressing uninitialised members.
|
|
This patch adds uncacheable/cacheable and read-only/read-write attributes to
the map method of PageTableBase. It also modifies the constructor of TlbEntry
structs for all architectures to consider the new attributes.
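Illustrative signature only; the flag names here are hypothetical, but the
shape of the change is as described above:

    // Callers can now ask for uncacheable and/or read-only mappings.
    void map(Addr vaddr, Addr paddr, int64_t size,
             uint64_t flags = 0);   // e.g. Uncacheable | ReadOnly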
|
|
The multi level page table was giving false positives for already mapped
translations. This patch fixes the bogus behavior.
|
|
Trimmed down all the lines greater than 78 characters.
|
|
This patch sets up low and high privilege code and data segments and places
them in the GDT in the following order: cs low, ds low, ds, cs. Additionally, a
syscall and page fault handler for KvmCPU in SE mode are defined. The order of
the segment selectors in GDT is required in this manner for interrupt handling
to work properly. Segment initialization is done for all the thread
contexts.
|
|
This patch adds methods in KvmCPU model to handle KVM exits caused by syscall
instructions and page faults. These types of exits will be encountered if
KvmCPU is run in SE mode.
|
|
This patch adds more features to CPUID in order to support running the
KvmCPU in SE mode.
|
|
There was already a stub device at 0x80, the port traditionally used for an IO
delay. 0x80 is also the port used for POST codes sent by firmware, and that
may have prompted adding this port as a second option.
|
|
|
|
|
|
The data size used for actually writing the base value for the segment was the
default size, but really it should set the entire value without any possible
truncation.
|
|
The far pointer should be shifted right to get the selector value, not left.
Also, when calculating the width of the offset, the wrong register was used in
one spot.
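The arithmetic in question, sketched on a far pointer packed as
selector:offset in a single value (offset_width depends on the operand size):

    void
    splitFarPointer(uint64_t far_ptr, int offset_width,
                    uint16_t &selector, uint64_t &offset)
    {
        selector = far_ptr >> offset_width;               // right shift, not left
        offset   = far_ptr & ((1ULL << offset_width) - 1);
    }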
|
|
Otherwise the IPI which isn't sent will never arrive, and the deliveryStatus
bit will never be cleared.
|
|
The getRegArrayBit function extracts a bit from a series of registers which
are treated as a single large bit array. A previous change had modified the
logic which figured out which bit to extract from ">> 5" to "% 5" which seems
wrong, especially when other, similar functions were changed to use "% 32".
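The intended indexing, illustrated on a plain array of 32-bit registers treated
as one long bit vector (a sketch, not the actual interrupt-controller code):

    bool
    getRegArrayBit(const uint32_t *regs, int number)
    {
        // ">> 5" selects the 32-bit register, "% 32" selects the bit in it.
        return (regs[number >> 5] >> (number % 32)) & 1;
    }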
|
|
The value in EAX has an 8 bit field for the linear address size and one for
the physical address size when calling that function. A recent change
implemented it but returned 0xff for both of those fields. That implies that
linear and physical addresses are 255 bits wide which is wrong. When using the
KVM CPU model this causes an error, presumably because some of those bits are
actually reserved, or the CPU or kernel realizes 255 bits is a bad value.
This change makes those values 48.
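For reference, the intended encoding of the low bits of EAX for that leaf, with
both widths set to 48 (physical address size in bits 7..0, linear in bits
15..8):

    uint32_t eax = (48 << 8) | 48;   // 0x3030, instead of the bogus 0xffff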
|
|
Another churn to clean up undefined behaviour, mostly ARM, but some
parts also touching the generic part of the code base.
Most of the fixes simply ensure proper initialisation. One
of the more subtle changes is the return type of the sign-extension,
which is changed to uint64_t. This is to avoid shifting negative
values (undefined behaviour) in the ISA code.
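A minimal sketch of the idea, not the exact gem5 helper: do the work on
unsigned values and return uint64_t, so no negative value is ever left-shifted.

    static inline uint64_t
    signExtend(uint64_t val, unsigned bits)   // assumes 0 < bits <= 64
    {
        const uint64_t sign_bit = 1ULL << (bits - 1);
        val &= (sign_bit << 1) - 1;           // keep only the low 'bits' bits
        return (val ^ sign_bit) - sign_bit;   // branch-free sign extension
    }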
|
|
|
|
With recent changes OSX clang compilation fails due to an unused variable.
|