Age | Commit message | Author |
|
When each load or store is sent to the LSQ, we check whether it will cross a
cache line boundary and, if so, split it in two. This creates two TLB
translations and two memory requests. Care has to be taken if the first
packet of a split load is sent but the second is blocked by the cache. Similarly,
for a store, if the first packet cannot be sent, we must store the second
one somewhere and retry it later.
This modifies the LSQSenderState class to record both packets in a split
load or store.
Finally, a new const variable, HasUnalignedMemAcc, is added to each ISA
to indicate whether unaligned memory accesses are allowed. This is used
throughout the changed code so that the compiler can optimise away code dealing
with split requests for ISAs that don't need them.
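As a rough sketch of the splitting check (stand-in types and a hard-coded line size, not the actual LSQ code), an access is split when its first and last bytes fall in different cache lines, which yields the two requests described above:

    #include <cstdint>
    #include <utility>

    // Stand-ins: in the real code these come from the ISA and the cache.
    static const bool HasUnalignedMemAcc = true;
    static const uint64_t cacheLineSize = 64;

    struct MemReq { uint64_t vaddr; unsigned size; };

    // Returns the request pair; 'split' is set when two requests (and thus
    // two translations and two packets) are needed.
    std::pair<MemReq, MemReq>
    splitRequest(uint64_t vaddr, unsigned size, bool &split)
    {
        uint64_t firstLine = vaddr & ~(cacheLineSize - 1);
        uint64_t lastLine  = (vaddr + size - 1) & ~(cacheLineSize - 1);
        split = HasUnalignedMemAcc && firstLine != lastLine;

        if (!split)
            return {MemReq{vaddr, size}, MemReq{0, 0}};

        unsigned firstSize = lastLine - vaddr;         // bytes up to the boundary
        return {MemReq{vaddr, firstSize},              // first cache line
                MemReq{lastLine, size - firstSize}};   // remainder in the next line
    }

Because HasUnalignedMemAcc is a compile-time constant, the split path is dead code for ISAs that set it to false, and the compiler can remove it.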
|
|
1) Move alpha-specific code out of page_table.cc:serialize().
2) Begin serializing and unserializing M5_pid, and add a function to do an optional paramIn so that old checkpoints don't need to be fixed up (a sketch follows below).
3) Fix up the alpha startup code so that the unserialized M5_pid value is properly written to DTB_IPR_ASN.
4) Fix the memory unserialize that I somehow forgot in the last changeset.
5) Add an agg_se.py to handle aggregated checkpoints. --bench foo-bar plus positional arguments foo bar are the only changes in usage from se.py.
Note that this aggregation support has only been tested on Alpha, though it should take very little work to get it working with another ISA.
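A minimal sketch of what such an optional-read helper can look like (the flat-map checkpoint here is a stand-in for the real checkpoint object, and the name optParamIn is illustrative):

    #include <map>
    #include <sstream>
    #include <string>

    // Stand-in for the real checkpoint object: a flat map of "section.name" keys.
    using Checkpoint = std::map<std::string, std::string>;

    // Read a parameter only if it exists in the checkpoint, leaving the caller's
    // default untouched otherwise, so checkpoints written before the parameter
    // existed still unserialize cleanly.
    template <typename T>
    bool
    optParamIn(const Checkpoint &cp, const std::string &section,
               const std::string &name, T &param)
    {
        auto it = cp.find(section + "." + name);
        if (it == cp.end())
            return false;                  // old checkpoint: keep the default
        std::istringstream(it->second) >> param;
        return true;
    }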
|
|
|
|
|
|
When accessing arguments for a syscall, the position of an argument depends on
the policies of the ISA, how much space preceding arguments took up, and the
"alignment" of the index for this particular argument into the number of
possible storage locations. This change adjusts getSyscallArg to take its
index parameter by reference instead of value and to adjust it to point to the
possible location of the next argument on the stack, basically just after the
current one. This way, the rules for the new argument can be applied locally
without knowing about other arguments since those have already been taken into
account implicitly.
All system calls have also been changed to reflect the new interface. In a
number of cases this made the implementation clearer since it encourages
arguments to be collected in one place in order and then used as necessary
later, as opposed to scattering them throughout the function or using them in
place in long expressions. It also discourages using getSyscallArg over and
over to retrieve the same value when a temporary would do the job.
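A minimal sketch of the convention (the ThreadContext below is a stand-in, and a real ISA might advance the index by more than one slot for wide arguments):

    #include <cstdint>

    // Illustrative thread context with a few integer argument registers; the
    // real implementation reads the ISA's argument registers or the stack.
    struct ThreadContext {
        uint64_t intArgReg[6] = {};
        uint64_t readIntArg(int idx) const { return intArgReg[idx]; }
    };

    // The index is taken by reference and advanced past the storage the current
    // argument consumed, so the caller simply pulls arguments in order.
    uint64_t
    getSyscallArg(ThreadContext *tc, int &i)
    {
        // An ISA where a 64-bit argument occupies two 32-bit slots would
        // advance i by two here.
        return tc->readIntArg(i++);
    }

    // Typical handler use: pull the arguments in order, once, up front.
    void
    exampleHandler(ThreadContext *tc)
    {
        int index = 0;
        uint64_t fd     = getSyscallArg(tc, index);
        uint64_t bufPtr = getSyscallArg(tc, index);
        uint64_t count  = getSyscallArg(tc, index);
        (void)fd; (void)bufPtr; (void)count;
    }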
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This file is for register indices, Num* constants, and register types.
copyRegs and copyMiscRegs were moved to utility.hh and utility.cc.
--HG--
rename : src/arch/alpha/regfile.hh => src/arch/alpha/registers.hh
rename : src/arch/arm/regfile.hh => src/arch/arm/registers.hh
rename : src/arch/mips/regfile.hh => src/arch/mips/registers.hh
rename : src/arch/sparc/regfile.hh => src/arch/sparc/registers.hh
rename : src/arch/x86/regfile.hh => src/arch/x86/registers.hh
|
|
regfile.hh.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This object encapsulates (or will eventually) the identity and characteristics
of the ISA in the CPU.
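Roughly, one can picture it as a per-CPU object holding ISA identity and ISA-private state; the members below are illustrative only, not the actual interface:

    #include <cstdint>

    class ISA
    {
      public:
        // Characteristics of the ISA as seen by this CPU.
        int numIntRegs() const { return 32; }
        int numFloatRegs() const { return 32; }

        // ISA-private state, e.g. miscellaneous (control) registers.
        uint64_t readMiscReg(int idx) const { return miscRegs[idx]; }
        void setMiscReg(int idx, uint64_t val) { miscRegs[idx] = val; }

        void clear() { for (auto &r : miscRegs) r = 0; }

      private:
        uint64_t miscRegs[64] = {};
    };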
|
|
|
|
|
|
|
|
--HG--
rename : src/sim/host.hh => src/base/types.hh
|
|
|
|
TLBUnit is no longer used, and we also get rid of the memAccSize and memAccFlags functions added to the ISA and StaticInst,
since the TLB is not a separate resource to acquire. Instead, TLB access is done before any read or write to memory,
and the result is checked before the request is sent out to memory.
* * *
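In sketch form (stand-in types; the real code goes through gem5's Request/Fault/TLB classes), the ordering is translate, check the fault, and only then touch memory:

    #include <cstdint>

    using Addr = uint64_t;
    enum class Fault { NoFault, PageFault };

    struct TLB {
        // Translate a virtual address; returns NoFault on success.
        Fault translate(Addr vaddr, Addr &paddr) {
            paddr = vaddr;                 // identity mapping for the sketch
            return Fault::NoFault;
        }
    };

    struct Memory {
        uint8_t bytes[1 << 12] = {};
        void read(Addr paddr, uint8_t *data, unsigned size) {
            for (unsigned i = 0; i < size; ++i)
                data[i] = bytes[(paddr + i) % sizeof(bytes)];
        }
    };

    // TLB access happens inline, before the read; only a successful
    // translation is allowed to reach memory.
    Fault
    readMem(TLB &tlb, Memory &mem, Addr vaddr, uint8_t *data, unsigned size)
    {
        Addr paddr;
        Fault fault = tlb.translate(vaddr, paddr);
        if (fault != Fault::NoFault)
            return fault;                  // check the result before sending to memory
        mem.read(paddr, data, size);
        return Fault::NoFault;
    }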
|
|
InOrder was incorrectly storing FP values and confusing the integer/FP storage view of floating-point operations. A big issue was trying to infer
whether an access was single or double precision, because that determines the size of the value to store (32 or 64 bits). This isn't exactly straightforward,
since Alpha uses all 64-bit registers while MIPS/SPARC use a dual-register view. By getting this size from the actual floating-point register file, the model can figure out what it needs to store.
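A hypothetical sketch of the idea (the names here are made up for illustration): the register file, not the instruction, reports how wide the value is, and the store size follows from that.

    #include <cstdint>

    struct FloatRegFile {
        // Width in bits of the value held in a register slot: 64 on Alpha's
        // flat 64-bit registers, 32 or 64 under a dual-register view.
        int widthOf(int regIdx) const { return widths[regIdx]; }
        int widths[32] = {};
    };

    // Ask the register file how wide the value is instead of guessing
    // single vs. double precision from the instruction.
    unsigned
    fpStoreSize(const FloatRegFile &regs, int srcReg)
    {
        return regs.widthOf(srcReg) == 32 ? 4 : 8;   // bytes to store
    }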
|
|
|
|
|
|
Remove the eaComp/memAcc sub-instructions, since they are unused in the CPU models. Instead, create an eaComp that is visible from the StaticInst object. This gives the InOrder model the capability of generating the address without actually initiating the access.
* * *
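Illustratively (stand-in types, not the real StaticInst interface), the effective-address computation becomes something the model can call on its own, with no TLB lookup and no memory request:

    #include <cstdint>

    using Addr = uint64_t;

    struct ExecContext {
        uint64_t intReg[32] = {};
        uint64_t readIntReg(int idx) const { return intReg[idx]; }
    };

    struct StaticInst {
        int baseReg = 0;
        int64_t displacement = 0;

        // Compute the effective address only; the memory access itself is
        // initiated separately, if and when the model decides to.
        Addr eaComp(const ExecContext &xc) const {
            return xc.readIntReg(baseReg) + displacement;
        }
    };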
|
|
Changes so that InOrder can work with a non-delay-slot ISA like Alpha. Mostly, the changes have to do with handling misspeculated branches at different points in the pipeline.
|
|
Edit AlphaISA to support the InOrder model. Mostly alternate constructor functions and also a few skeleton multithreading support functions.
* * *
Remove the namespace from the header file; it causes compiler issues that are hard to find.
* * *
Separate the TLB from the CPU and allow it to live in the TLBUnit resource. Give the CPU accessor functions for access, and bind them at construction time (sketched below).
* * *
Expose the memory access size and flags through the instruction object
(temporary memAccSize and memFlags to get the TLB stuff working).
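A sketch of the last two parts (illustrative stand-ins, not the actual gem5 classes): the CPU holds TLB pointers bound when the resource is constructed, and the instruction object exposes the size and flags the lookup needs.

    #include <cstdint>

    using Addr = uint64_t;

    struct TLB {
        bool lookup(Addr vaddr, unsigned size, unsigned flags, Addr &paddr) {
            (void)size; (void)flags;
            paddr = vaddr;                 // identity translation for the sketch
            return true;
        }
    };

    struct StaticInst {
        unsigned memAccSize() const { return accSize; }    // bytes accessed
        unsigned memFlags()   const { return accFlags; }   // e.g. uncacheable
        unsigned accSize = 8;
        unsigned accFlags = 0;
    };

    struct InOrderCPU {
        // Bound when the TLBUnit resource is constructed.
        void bindTLBs(TLB *i, TLB *d) { itb = i; dtb = d; }
        TLB *getITBPtr() { return itb; }
        TLB *getDTBPtr() { return dtb; }
        TLB *itb = nullptr;
        TLB *dtb = nullptr;
    };

    // The TLB resource uses the CPU accessors plus the instruction's size and
    // flags to perform the data-side lookup (assumes bindTLBs() was called).
    bool
    dataLookup(InOrderCPU &cpu, const StaticInst &inst, Addr vaddr, Addr &paddr)
    {
        return cpu.getDTBPtr()->lookup(vaddr, inst.memAccSize(),
                                       inst.memFlags(), paddr);
    }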
|
|
|
|
This patch adds limited multithreading support in syscall-emulation
mode by using the clone system call. The clone system call works
for Alpha, SPARC and x86, and multithreaded applications run
correctly on Alpha and SPARC.
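For context, the kind of workload this targets is an ordinary pthreads program; on Linux, pthread_create() sits on top of clone, so something like the following exercises the new support when run under syscall emulation (build with -pthread, typically statically linked for SE mode):

    #include <pthread.h>
    #include <cstdio>

    static void *worker(void *arg)
    {
        std::printf("hello from thread %ld\n", (long)arg);
        return nullptr;
    }

    int main()
    {
        pthread_t threads[4];
        for (long i = 0; i < 4; ++i)
            pthread_create(&threads[i], nullptr, worker, (void *)i);
        for (int i = 0; i < 4; ++i)
            pthread_join(threads[i], nullptr);
        return 0;
    }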
|
|
|
|
|
|
|
|
|
|
|
|
|
|
(See cset d35d2b28df38 for x86 fix.)
|
|
object.
|
|
|
|
|
|
|
|
the timing simple CPU to use it.
|
|
|
|
|
|
been tested on Alpha; it compiles for the other ISAs but has not been tested. I don't see why it wouldn't work, though.
|