This DPRINTF shouldn't be necessary since it shows the operands and
results of the instruction, which the trace should already make
available. Also, by passing the destination register to DPRINTF, the ISA
parser will assume that it's also a source when tracking dependencies.
Change-Id: I820387c82578bdbb8d2e3d91652a6c0185077f54
Reviewed-on: https://gem5-review.googlesource.com/c/14475
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
The ISA parser had been assuming these microops were all FloatAddOp,
which is usually not correct.
Change-Id: Ic54881d16f16b50c3d6a8c74b94bff9ae3b1f43e
Reviewed-on: https://gem5-review.googlesource.com/10541
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Tariq Azmy <tariqslayer01@gmail.com>
Maintainer: Anthony Gutierrez <anthony.gutierrez@amd.com>
|
|
These are non-temporal packed SSE stores.
Change-Id: I526cd6551b38d6d35010bc6173f23d017106b466
Reviewed-on: https://gem5-review.googlesource.com/9861
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
This percolates down to the memory request object which will have its
"UNCACHEABLE" flag set.
Change-Id: Ie73f4249bfcd57f45a473f220d0988856715a9ce
Reviewed-on: https://gem5-review.googlesource.com/9881
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Anthony Gutierrez <anthony.gutierrez@amd.com>
|
|
Add bitfields which can gather/scatter base and limit fields within
"normal" segment descriptors, and in TSS descriptors which have the
same bitfields in the same positions for those two values.
This centralizes the code which manages those bitfields and makes it
less likely that a local implementation will be buggy.
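For reference, the layout those bitfields gather looks roughly like the
standalone sketch below; it only illustrates the scattered fields and is
not the bitfield definitions this change adds.

    #include <cstdint>

    // In a "normal" segment descriptor, base[23:0] sits in descriptor
    // bits 39:16 and base[31:24] in bits 63:56; limit[15:0] sits in bits
    // 15:0 and limit[19:16] in bits 51:48.
    uint32_t gatherBase(uint64_t desc)
    {
        return (uint32_t)((desc >> 16) & 0xffffff) |
               (uint32_t)((desc >> 56) & 0xff) << 24;
    }

    uint32_t gatherLimit(uint64_t desc)
    {
        return (uint32_t)(desc & 0xffff) |
               (uint32_t)((desc >> 48) & 0xf) << 16;
    }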
Change-Id: I9809aa626fc31388595c3d3b225c25a0ec6a1275
Reviewed-on: https://gem5-review.googlesource.com/7661
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
These instructions originally read the TSC into t1 and then unpacked it
into eax and edx using a move, a right shift, and then another move.
We can combine the shift and the second move. The shift will move the
upper 32 bits into the lower 32 bits, and clear the upper 32 bits to
zero. This has the same effect as moving the lower 32 bits post-shift
into another register, since the upper 32 bits will be cleared to zero
based on x86 partial register access semantics.
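In C++ terms, the equivalence relied on here is roughly the following
standalone sketch (not the microcode itself):

    #include <cassert>
    #include <cstdint>

    // The right shift already leaves the high half in the low 32 bits and
    // zeroes bits 63:32, which is exactly what a 32-bit partial-register
    // write of the shifted value would have produced.
    void unpackTsc(uint64_t tsc)
    {
        uint64_t rax = tsc & 0xffffffffULL;            // low half
        uint64_t rdxOld = (tsc >> 32) & 0xffffffffULL; // shift, then move
        uint64_t rdxNew = tsc >> 32;                   // shift alone
        assert(rdxOld == rdxNew && rax == (uint32_t)tsc);
    }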
Change-Id: Iba85e501c7e84147ad0047f5c555e61bdf8f032b
Reviewed-on: https://gem5-review.googlesource.com/9044
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
This is very similar to RDTSC, except that it requires all older
instructions to complete before it executes, and it writes the TSC_AUX
MSR into ECX. I've added an mfence as an initial microop to ensure
that memory accesses complete before RDTSCP runs, and added an rdval
microop at the end to read the TSC_AUX value into ECX.
Change-Id: I9766af562b7fd0c22e331b56e06e8818a9e268c9
Reviewed-on: https://gem5-review.googlesource.com/9043
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
Change-Id: I20bf6a57ea4354aac9267845bb37b70b83d6fcde
Reviewed-on: https://gem5-review.googlesource.com/9042
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
This makes it explicit which type of serialization you want, and also
makes it possible to make a macroop serialize before. The old
serializing directive was renamed .serialize_after in the microcode
assembler, and throughout the microcode implementation, and its
behavior is unchanged. More specifically, it still marks the last
microop within the macroop as IsSerializing and IsSerializeAfter.
The new .serialize_before directive does something similar and marks
the first microop as IsSerializing and IsSerializeBefore.
Change-Id: Ia53466c734c651c65400809de7ef903c4a6c3e7e
Reviewed-on: https://gem5-review.googlesource.com/9041
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
This patch adds support for cache flushing instructions in x86.
It piggybacks on support for similar instructions in the ARM ISA
added by Nikos Nikoleris. I have tested each instruction using
microbenchmarks.
Change-Id: I72b6b8dc30c236a21eff7958fa231f0663532d7d
Reviewed-on: https://gem5-review.googlesource.com/7401
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
That particular ExtMachInst is a convenient placeholder, but a value
of 0 in RISCV or a static uninitialized ExtMachInst (which will
therefore be all zeroes) on x86 works just as well, and removes the
need for an ISA specific constant.
Also, the idea of a universal Nop doesn't always make sense, since
exactly what counts as "doing nothing" may depend on context that would
be lost in a constant ExtMachInst value. For instance, the ExtMachInst
value that makes sense might depend on what mode the CPU was in, etc.
Change-Id: I1f1a43a5c607a667e11b79bcf6e059e4f7141b3f
Reviewed-on: https://gem5-review.googlesource.com/6825
Reviewed-by: Gabe Black <gabeblack@google.com>
Reviewed-by: Alec Roelke <ar4jc@virginia.edu>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
GCC 7.2 is much stricter than previous GCC versions. The following changes
are needed:
* There is now a warning if there is an implicit fallthrough between two
  case statements. C++17 adds the [[fallthrough]]; declaration. However,
  to support pre-C++17 standards (i.e., C++11), we use M5_FALLTHROUGH.
  M5_FALLTHROUGH expands to [[fallthrough]] on a compliant C++17 compiler
  and to nothing otherwise, so older compilers generate no warnings (see
  the sketch after this list).
* The above uncovered a couple of bugs, which are noted in the review
  request on gerrit.
* throw() for dynamic exception specification is deprecated.
* There were a couple of new uninitialized variable warnings.
* Bitwise operations can no longer be performed on a bool.
* <functional> must now be included for std::function.
* Compiler bug for a void* lambda. Changed to auto as a workaround. See
  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82878
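A rough sketch of the M5_FALLTHROUGH idea and its use follows; the real
definition lives in base/compiler.hh, and the details here are
illustrative only.

    // Expand to [[fallthrough]] when the compiler understands it, and to
    // nothing otherwise, so pre-C++17 compilers see a plain fallthrough.
    #ifdef __has_cpp_attribute
    #  if __has_cpp_attribute(fallthrough)
    #    define M5_FALLTHROUGH [[fallthrough]]
    #  endif
    #endif
    #ifndef M5_FALLTHROUGH
    #  define M5_FALLTHROUGH
    #endif

    int classify(int value)
    {
        int result = 0;
        switch (value) {
          case 1:
            result += 1;
            M5_FALLTHROUGH;  // intentional: case 1 also does case 2's work
          case 2:
            result += 2;
            break;
          default:
            result = -1;
        }
        return result;
    }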
Change-Id: I5d4c782a4e133fa4cdb119e35d9aff68c6e2958e
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-on: https://gem5-review.googlesource.com/5802
Reviewed-by: Gabe Black <gabeblack@google.com>
|
|
This means the instruction is treated as cmpxchg8b when the effective
operand size is 16 bits.
Change-Id: I4d9bb295f96097e1746a9bbccb2c579d14738fab
Reviewed-on: https://gem5-review.googlesource.com/6603
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
Replace them with std::array<>s.
Change-Id: I76624c87a1cd9b21c386a96147a18de92b8a8a34
Reviewed-on: https://gem5-review.googlesource.com/6602
Maintainer: Gabe Black <gabeblack@google.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
|
|
Explicitly separate the way the data is represented in the underlying
representation from how it's represented in the instruction.
In order to make the ISA parser happy, the Mem operand needs to have
a single, particular type. To handle that with scalar types, we just
used uint64_ts and then worked with values that were smaller than the
maximum we could hold. To work with these new array values, we also
use an underlying uint64_t for each element.
To make accessing the underlying memory system more natural, when we
go to actually read or write values, we translate the access into an
array of the actual, correct underlying type. That way we don't have
non-exact asserts which confuse gcc, or weird endianness conversion
which assumes that the data should be flipped 8 bytes at a time.
Because the functions involved are generally inline, the syntactic
niceness should all boil off, and the final implementation in the
binary should be simple and efficient for the given data types.
Change-Id: I14ce7a2fe0dc2cbaf6ad4a0d19f743c45ee78e26
Reviewed-on: https://gem5-review.googlesource.com/6582
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
The FSW and TOP values are technically part of the same register, but
they have very different behaviors. One of them can be renamed and
float along without affecting global state, while the other requires
serialization. They just need to *look* like the same register when
read by the user.
Also, there was a missing break in setMiscRegNoEffect.
Change-Id: If58de0f566f65068208240f4001209fb9e1826d6
Reviewed-on: https://gem5-review.googlesource.com/6441
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
The microcode for those instructions needs a directive which overrides
that setting in the instruction's emulation environment.
Reported-by: Matt Sinclair <mattdsinclair@gmail.com>
Change-Id: I474d938c0b3cf01da92ec817a58b08de783f1967
Reviewed-on: https://gem5-review.googlesource.com/6301
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
These files aren't a collection of miscellaneous stuff, they're the
definition of the Logger interface, and a few utility macros for
calling into that interface (panic, warn, etc.).
Change-Id: I84267ac3f45896a83c0ef027f8f19c5e9a5667d1
Reviewed-on: https://gem5-review.googlesource.com/6226
Reviewed-by: Brandon Potter <Brandon.Potter@amd.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
In the ISA instruction definitions, some classes were declared with
execute, etc., functions outside of the main template because they
had CPU specific signatures and would need to be duplicated with
each CPU plugged into them. Now that the instructions always just
use an ExecContext, there's no reason for those templates to be
separate. This change folds those templates together.
Change-Id: I13bda247d3d1cc07c0ea06968e48aa5b4aace7fa
Reviewed-on: https://gem5-review.googlesource.com/5401
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Alec Roelke <ar4jc@virginia.edu>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
The ISA parser used to generate different copies of exec functions
for each exec context class a particular CPU wanted to use. That's
since been changed so that those functions take a pointer to the base
ExecContext, so the code which would generate those extra functions
can be removed, and some functions which used to be templated on an
ExecContext subclass can be untemplated, or minimally less templated.
Now that some functions aren't going to be instantiated multiple times
with different signatures, there are also opportunities to collapse
templates and make many instruction definitions simpler within the
parser. Since those changes will be less mechanical, they're left for
later changes and will probably be done in smaller increments.
Change-Id: I0015307bb02dfb9c60380b56d2a820f12169ebea
Reviewed-on: https://gem5-review.googlesource.com/5381
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
MOV Rd,Cd is MR encoded, but the control register is operand 2, not
operand 1, hence this needs to be MODRM_REG, not MODRM_RM.
While MOV Cd,Rd is RM encoded, its registers are also swapped, so it
also needs to be MODRM_REG (as it already correctly is).
This fixes incorrect UD2 reports, which led to invalid traps in O3 on
X86 FS, introduced with 4e939a7.
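For reference (illustrative only, not the decoder code): the ModRM byte
packs mod[7:6], reg[5:3] and rm[2:0], and for the MR-encoded form the
control register index travels in the reg field.

    #include <cstdint>

    void splitModRM(uint8_t modrm, uint8_t &reg, uint8_t &rm)
    {
        reg = (modrm >> 3) & 0x7; // MOV Rd,Cd: the control register index
        rm  = modrm & 0x7;        // MOV Rd,Cd: the general purpose register
    }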
Change-Id: Ib33c8ba87b00e0264d33da44fff64ed9e4d2d9d8
Reviewed-on: https://gem5-review.googlesource.com/4861
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
The condition is whether the control register index is valid.
Change-Id: I8a225fcfd4955032b5bbf7d3392ee5bcc7d6bc64
Reviewed-on: https://gem5-review.googlesource.com/4581
Reviewed-by: Michael LeBeane <Michael.Lebeane@amd.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
A condition can be specified which will tell the decoder whether to return
the instruction being requested, or, if the condition fails, UD2.
Change-Id: I0f1c075deb10754ce1dd88be1726a196294e41fd
Reviewed-on: https://gem5-review.googlesource.com/4580
Reviewed-by: Michael LeBeane <Michael.Lebeane@amd.com>
Maintainer: Gabe Black <gabeblack@google.com>
|
|
With the hierarchical RegId there are a lot of functions that are now
redundant.
The idea behind the simplification is that instead of the caller picking
a function for a particular kind of register (read/write/rename/lookup/
etc.) and that function panic_if'ing when the regId is not of the
appropriate type, we provide an interface that decides which kind of
register to access based on the register class of the given regId.
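A minimal standalone sketch of the resulting style of interface; the
types and names here are illustrative, not gem5's.

    #include <cstdint>
    #include <stdexcept>

    enum RegClass { IntRegClass, FloatRegClass, CCRegClass };

    struct RegId { RegClass cls; unsigned index; };

    struct RegFile {
        uint64_t intRegs[16] = {};
        uint64_t floatRegs[16] = {};
        uint64_t ccRegs[4] = {};

        // The class carried by the RegId picks the backing storage, so
        // callers no longer choose a class-specific accessor that panics
        // on a mismatched RegId.
        uint64_t read(const RegId &id) const
        {
            switch (id.cls) {
              case IntRegClass:   return intRegs[id.index];
              case FloatRegClass: return floatRegs[id.index];
              case CCRegClass:    return ccRegs[id.index];
            }
            throw std::invalid_argument("unknown register class");
        }
    };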
Change-Id: I7d52e9e21fc01205ae365d86921a4ceb67a57178
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
[ Fix RISCV build issues ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2702
|
|
Replace the unified register mapping with a structure associating
a class and an index. It is now much easier to know which class of
register the index is referring to. Also, when adding a new class
there is no need to modify existing ones.
Change-Id: I55b3ac80763702aa2cd3ed2cbff0a75ef7620373
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
[ Fix RISCV build issues ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2700
|
|
This change does several things:
* Remove redundant information from the ExtMachInst.
* Hash the vex information so the decode cache works properly.
* Print the vex info when printing an ExtMachInst, and consider it when
  comparing two ExtMachInsts.
* Fold the info from the vex prefixes into existing settings and remove
  redundant decode code.
* Handle vex prefixes one byte at a time rather than building up the
  entire prefix, and let instructions that care about vex use it in
  their implementation instead of developing an entire parallel decode
  tree.
This also eliminates the error prone vex immediate decode table which was
incomplete and would result in an out of bounds access for incorrectly
encoded instructions or when the CPU was mispeculating, as it was (as far
as I can tell) redundant with the tables that already existed for two and
three byte opcodes. There were differences, but I think those may have
been mistakes based on the documentation I found.
Also, in 32 bit mode, the VEX prefixes might actually be LDS or LES
instructions which are still legal in that mode. A valid VEX prefix would
look like an LDS/LES with an otherwise invalid modrm encoding, so use that
as a signal to abort processing the VEX and turn the instruction into an
LES/LDS as appropriate.
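The check described above amounts to something like this standalone
sketch (not the decoder's actual code):

    #include <cstdint>

    // Outside 64-bit mode, 0xC4/0xC5 only start a VEX prefix if the next
    // byte has its top two bits set, i.e. a ModRM mod field of 3, which
    // would be an illegal register-form operand for LES/LDS.
    bool startsVexPrefix(bool mode64, uint8_t opcode, uint8_t nextByte)
    {
        if (opcode != 0xc4 && opcode != 0xc5)
            return false;
        return mode64 || (nextByte & 0xc0) == 0xc0;
    }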
Change-Id: Icb367eaaa35590692df1c98862f315da4c139f5c
Reviewed-on: https://gem5-review.googlesource.com/3501
Reviewed-by: Joe Gross <joe.gross@amd.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Anthony Gutierrez <anthony.gutierrez@amd.com>
|
|
If the operands were 64 bit, an intermediate calculation could lose a
carry bit. This change rearranges that intermediate calculation if the
operand width is large, and reworks the microop implementation in general
in an attempt to make it easier to understand.
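As a standalone illustration of the failure mode (not the microop code
itself): once the operands are 64 bits wide, the carry out of an
intermediate addition no longer fits in the intermediate value and has
to be recovered by comparison.

    #include <cstdint>

    uint64_t addWithCarryOut(uint64_t a, uint64_t b, bool &carryOut)
    {
        uint64_t sum = a + b;
        carryOut = sum < a; // true iff the 64-bit addition wrapped
        return sum;
    }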
Change-Id: Ib36333f3f2695a33cd9623e43682de22ebd2e7ea
Reviewed-on: https://gem5-review.googlesource.com/3381
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Maintainer: Anthony Gutierrez <anthony.gutierrez@amd.com>
|
|
This changeset adds functionality that allows system calls to retry without
affecting thread context state such as the program counter or register values
for the associated thread context (when system calls return with a retry
fault).
This functionality is needed to solve problems with blocking system calls
in multi-process or multi-threaded simulations where information is passed
between processes/threads. Blocking system calls can cause deadlock because
the simulator itself is single threaded: there is only one thread
servicing the event queue, so the simulation deadlocks if that thread
hits a blocking system call instruction.
To illustrate the problem, consider two processes using the producer/consumer
sharing model. The processes can use file descriptors and the read and write
calls to pass information to one another. If the consumer calls the blocking
read system call before the producer has produced anything, the call will
block the event queue (while executing the system call instruction) and
deadlock the simulation.
The solution implemented in this changeset is to recognize that the system
calls will block and then generate a special retry fault. The fault will
be sent back up through the function call chain until it is exposed to the
cpu model's pipeline where the fault becomes visible. The fault will trigger
the cpu model to replay the instruction at a future tick where the call has
a chance to succeed without actually going into a blocking state.
In subsequent patches, we recognize that a syscall will block by calling a
non-blocking poll (from inside the system call implementation) and checking
for events. When events show up during the poll, it signifies that the call
would not have blocked and the syscall is allowed to proceed (calling an
underlying host system call if necessary). If no events are returned from the
poll, we generate the fault and try the instruction for the thread context
at a distant tick. Note that retrying every tick is not efficient.
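The probe described above boils down to something like the following
sketch; the names are illustrative and not gem5's code.

    #include <poll.h>

    // A zero timeout means poll() returns immediately, so the simulator
    // thread never blocks; a return value of 0 means the real read() or
    // write() would block, and the syscall should be retried later.
    bool wouldBlock(int hostFd, short events /* POLLIN or POLLOUT */)
    {
        struct pollfd pfd;
        pfd.fd = hostFd;
        pfd.events = events;
        pfd.revents = 0;
        return poll(&pfd, 1, 0) == 0;
    }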
As an aside, the simulator has some multi-threading support for the event
queue, but it is not used by default and needs work. Even if the event queue
was completely multi-threaded, meaning that there is a hardware thread on
the host servicing each simulator thread context with a 1:1 mapping
between them, it's still possible to run into deadlock due to the event queue
barriers on quantum boundaries. The solution of replaying at a later tick
is the simplest solution and solves the problem generally.
|
|
When in 64-bit mode, if the stack is accessed implicitly by an instruction
the alternate address prefix should be ignored if present.
This patch adds an extra flag to the ldstop which signifies when the
address override should be ignored. Then, for all of the affected
instructions, this patch adds two options to the ld and st opcode to use
the current stack addressing mode for all addresses and to ignore the
AddressSizeFlagBit. Finally, this patch updates the x86 TLB to not
truncate the address if it is in 64-bit mode and the IgnoreAddrSizeFlagBit
is set.
This fixes a problem when calling __libc_start_main with a binary that is
linked with a recent version of ld. This version of ld uses the address
override prefix (0x67) on the call instruction instead of a nop.
Note: This has not been tested in compatibility mode and only the call
instruction with the address override prefix has been tested.
See [1] page 9 (pdf page 45)
For instructions that are affected see [1] page 519 (pdf page 555).
[1] http://support.amd.com/TechDocs/24594.pdf
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
UBSAN flags this operation because it detects that arg is being cast
directly to an unsigned type, argBits. This patch fixes that by first
casting the value to a signed int type, then reinterpreting the raw bits
of the signed int as argBits.
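A standalone illustration of the fix, assuming arg is a floating point
value (which is what UBSAN flags for direct conversion to unsigned); the
names follow the description above, not necessarily the source.

    #include <cstdint>
    #include <cstring>

    uint64_t toArgBits(double arg)
    {
        // Converting a negative floating point value straight to an
        // unsigned integer is undefined behaviour; convert to the signed
        // type first, then reinterpret that value's raw bits.
        int64_t asSigned = static_cast<int64_t>(arg);
        uint64_t argBits;
        std::memcpy(&argBits, &asSigned, sizeof(argBits));
        return argBits;
    }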
|
|
This patch adds the ability for an application to request dist-gem5 to begin/
end synchronization using an m5 op. When toggling on sync, all nodes agree
on the next sync point based on the maximum of all nodes' ticks. CPUs are
suspended until the sync point to avoid sending network messages until sync has
been enabled. Toggling off sync acts like a global execution barrier, where
all CPUs are disabled until every node reaches the toggle off point. This
avoids tricky situations such as one node hitting a toggle off followed by a
toggle on before the other nodes hit the first toggle off.
|
|
The previous implementation did a pair of nested RMW operations,
which isn't compatible with the way that locked RMW operations are
implemented in the cache models. It was convenient though in that
it didn't require any new micro-ops, and supported cmpxchg16b using
64-bit memory ops. It also worked in AtomicSimpleCPU where
atomicity was guaranteed by the core and not by the memory system.
It did not work with timing CPU models though.
This new implementation defines new 'split' load and store micro-ops
which allow a single memory operation to use a pair of registers as
the source or destination, then uses a single ldsplit/stsplit RMW
pair to implement cmpxchg. This patch requires support for 128-bit
memory accesses in the ISA (added via a separate patch) to support
cmpxchg16b.
|
|
Although the cache models support wider accesses, the ISA descriptions
assume that (for the most part) memory operands are integer types,
which makes it difficult to define instructions that do memory accesses
larger than 64 bits.
This patch adds some generic support for memory operands that are arrays
of uint64_t, and specifically a 'u2qw' operand type for x86 that is an
array of 2 uint64_ts (128 bits). This support is unused at this point,
but will be needed shortly for cmpxchg16b. Ideally the 128-bit SSE
memory accesses will also be rewritten to use this support.
Support for 128-bit accesses could also have been added using the gcc
__int128_t extension, which would have been less disruptive. However,
although clang also supports __int128_t, it's still non-standard.
Also, more importantly, this approach creates a path to defining
256- and 512-bit operands as well, which will be useful for eventual
AVX support.
|
|
Result of running 'hg m5style --skip-all --fix-white -a'.
|
|
For historical reasons, the ExecContext interface had a single
function, readMem(), that did two different things depending on
whether the ExecContext supported atomic memory mode (i.e.,
AtomicSimpleCPU) or timing memory mode (all the other models).
In the former case, it actually performed a memory read; in the
latter case, it merely initiated a read access, and the read
completion did not happen until later when a response packet
arrived from the memory system.
This led to some confusing things, including timing accesses
being required to provide a pointer for the return data even
though that pointer was only used in atomic mode.
This patch splits this interface, adding a new initiateMemRead()
function to the ExecContext interface to replace the timing-mode
use of readMem().
For consistency and clarity, the readMemTiming() helper function
in the ISA definitions is renamed to initiateMemRead() as well.
For x86, where the access size is passed in explicitly, we can
also get rid of the data parameter at this level. For other ISAs,
where the access size is determined from the type of the data
parameter, we have to keep the parameter for that purpose.
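Roughly, the split looks like this; the types are simplified stand-ins,
while the real signatures use gem5's Addr, Fault and Request::Flags.

    #include <cstdint>

    using Addr = uint64_t;
    using Fault = int;       // stand-in for gem5's Fault handle
    using MemFlags = unsigned;

    struct ExecContextSketch {
        // Atomic mode: perform the whole read now, copying the result
        // into data before returning.
        virtual Fault readMem(Addr addr, uint8_t *data, unsigned size,
                              MemFlags flags) = 0;
        // Timing mode: only initiate the access; the data arrives later
        // in a response packet, so no data pointer is needed.
        virtual Fault initiateMemRead(Addr addr, unsigned size,
                                      MemFlags flags) = 0;
        virtual ~ExecContextSketch() = default;
    };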
|
|
The key parameter can be used to read out various config parameters from
within the simulated software.
|
|
These are packed single-precision approximate reciprocal operations,
vector and scalar versions, respectively.
This code was basically developed by copying the code for
sqrtps and sqrtss. The mrcp micro-op was simplified relative to
msqrt since there are no double-precision versions of this operation.
|
|
fild loads an integer value into the x87 top of stack register.
fucomi/fucomip compare two x87 register values (the latter
also doing a stack pop).
These instructions are used by some versions of GNU libstdc++.
|
|
Added explicit data sizes and an opcode type for correct execution.
|
|
This patch updates the x86 decoder so that it can decode instructions with vex
prefix. It also updates the ISA with opcodes from vex opcode maps 1, 2 and 3.
Note that none of the instructions have been implemented yet. The
implementations will be provided in due course.
|
|
All x87 misc registers are implemented in an array of 64-bit values,
but in real hardware the size of some of these registers is smaller.
Previously all 64 bits were incorrectly set and then later read. To
ensure correctness we mask the value in setMiscRegNoEffect to write
only the valid bits.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
The same exception is raised whether a division by zero is performed or the
quotient is greater than the maximum value that the provided space can hold.
Divide-by-Zero is the AMD terminology, while Divide-Error is Intel's.
|
|
|
|
When running with the Exec flag, the mwait instruction attempted
to print out its source registers, which were never actually
initialized. This led to sporadic assertion failures when the
value stored there was invalid.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
Makes x86-style locked operations even more distinct from
LLSC operations. Using "locked" by itself should be
obviously ambiguous now.
|
|
This patch corrects the FXSAVE and FXRSTOR macroops. The actual code used for
saving/restoring the FP registers is in the file, but it was not used.
The FXSAVE and FXRSTOR instructions are used in the kernel for saving and
loading the state of the mmx, xmm and fpu registers.
This operation is triggered in FS by issuing a Device Not Available fault. The
cr0 register has a TS flag that is set upon each context change. Every time a
task accesses any FP-related register (SIMD as well), if the TS flag is set to
one, the Device Not Available fault is issued. The kernel saves the current
state of the registers and restores the previous state of the currently
running task.
Right now gem5 lacks this capability: the Device Not Available fault is never
issued, leading to several problems when different threads share the same
CPU and SMT is not used. The PARSEC Ferret benchmark is an example of this
behavior.
In order to test this, a hack was added to the atomic CPU code to detect
whether a static instruction has any FP operands while the cr0 TS bit is set.
This check really belongs in ISA-dependent code, but it seems to be tricky to
access the cr0 register while executing an instruction.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
This patch implements the simd128 ADDSUBPD instruction for the x86 architecture.
Tested with a simple program in assembly language which executes the
instruction. Checked that different versions of the instruction are executed
by using the execution tracing option.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
Instead of counting the number of opcode bytes in an instruction and recording
each byte before the actual opcode, we can represent the path we took to get to
the actual opcode byte by using a type code. That has a couple of advantages.
First, we can disambiguate opcodes of the same length which have
different properties. Second, it reduces the amount of data stored in an
ExtMachInst, making them slightly easier/faster to create and process. This
also adds some flexibility as far as how different types of opcodes are
handled, which might come in handy if we decide to support VEX or XOP
instructions.
This change also adds tables to support properly decoding 3 byte opcodes.
Before, we would fall off the end of some arrays, on top of the ambiguity
described above.
This change doesn't measurably affect performance on the twolf benchmark.
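The type code is essentially an enum naming the path taken through the
escape bytes, roughly along these lines (illustrative; see the decoder's
types header for the real definition):

    enum OpcodeType {
        BadOpcode,
        OneByteOpcode,       // no escape byte
        TwoByteOpcode,       // 0F xx
        ThreeByte0F38Opcode, // 0F 38 xx
        ThreeByte0F3AOpcode, // 0F 3A xx
    };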
--HG--
rename : src/arch/x86/isa/decoder/three_byte_opcodes.isa => src/arch/x86/isa/decoder/three_byte_0f38_opcodes.isa
rename : src/arch/x86/isa/decoder/three_byte_opcodes.isa => src/arch/x86/isa/decoder/three_byte_0f3a_opcodes.isa
|
|
The data size used for actually writing the base value for the segment was the
default size, but really it should set the entire value without any possible
truncation.
|
|
The far pointer should be shifted right to get the selector value, not left.
Also, when calculating the width of the offset, the wrong register was used in
one spot.
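In other words (a standalone sketch, not the microcode):

    #include <cstdint>

    // A far pointer packs the 16-bit selector above the offset, so the
    // selector is recovered with a right shift by the offset width (here
    // offsetSize is 2 or 4 bytes), and the offset by masking that width.
    void splitFarPointer(uint64_t farPtr, unsigned offsetSize,
                         uint16_t &selector, uint64_t &offset)
    {
        const unsigned offsetBits = offsetSize * 8;
        selector = farPtr >> offsetBits;
        offset = farPtr & ((1ULL << offsetBits) - 1);
    }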
|