path: root/src/arch/x86
Age  Commit message  Author
2016-10-04  kvm: Adding details to kvm page fault in x86  (Alexandru Dutu)
Adds details, e.g. rip, rsp, etc., to the KVM page fault exit when in SE mode.
2016-09-13  x86: Force strict ordering for memory mapped m5ops  (Michael LeBeane)
Normal MMAPPED_IPR requests are allowed to execute speculatively under the assumption that they have no side effects. The special case of m5ops that are treated like MMAPPED_IPR should not be allowed to execute speculatively, since they can have side-effects. Adding the STRICT_ORDER flag to these requests blocks execution until the associated instruction hits the ROB head.
2016-08-15  cpu, arch: fix the type used for the request flags  (Nikos Nikoleris)
Change-Id: I183b9942929c873c3272ce6d1abd4ebc472c7132
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-08-05  sim: fix issues with pwrite(); don't enable fstatfs  (Tony Gutierrez)
This patch fixes issues with changeset 11593: use the host's pwrite() syscall for pwrite64Func(), as opposed to pwrite64(), because pwrite64() does not work well on all distros. Undo the enabling of fstatfs, as we will add this in a separate patch.
2016-08-04  x86, sim: add some syscalls to X86  (Tony Gutierrez)
This patch adds an implementation of the pwrite64 syscall and enables it, along with fstatfs, for x86_64.
2016-05-19  config, x86: Properly space pad the X86IntelMPBus Entry descriptions  (Bjoern A. Zeeb)
According to the Intel Multi Processor Specification rev 1.4 (-006) (*), section 4.3.2 Bus Entries, Bus type strings are >>6-character ASCII (blank-filled) strings<<. This patch properly pads the entries with the missing spaces at the end. (*) http://www.intel.com/design/pentium/datashts/24201606.pdf Committed by Jason Lowe-Power <power.jg@gmail.com>
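A minimal standalone sketch (not the gem5 code itself) of producing a blank-filled 6-character bus type string as the spec describes; the helper name is hypothetical:

    #include <iomanip>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Pad a bus type string to exactly 6 characters with trailing blanks,
    // as required by the MP spec's bus entry format (hypothetical helper).
    std::string padBusType(const std::string &type)
    {
        std::ostringstream os;
        os << std::left << std::setw(6) << type.substr(0, 6);
        return os.str();
    }

    int main()
    {
        std::cout << '[' << padBusType("PCI") << "]\n";   // prints [PCI   ]
        std::cout << '[' << padBusType("ISA") << "]\n";   // prints [ISA   ]
        return 0;
    }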
2016-05-19  x86, dev: properly space the APIC registers  (Bjoern A. Zeeb)
Registers are 0x10 and not 0x8 apart. The latter leads to an invalid index calculation into the register array, which in turn means that the OS will not find the interrupt it was notified of. Committed by Jason Lowe-Power <power.jg@gmail.com>
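An illustrative sketch, with assumed names and values, of why the stride matters when turning a register offset into an array index:

    #include <cassert>
    #include <cstdint>

    // Hypothetical constant: the local APIC maps one register every 0x10
    // bytes, so the array index must be derived with a 0x10 stride.
    constexpr uint64_t ApicRegStride = 0x10;

    int regIndex(uint64_t offset)
    {
        // Using a stride of 0x8 here would map offsets to the wrong slots,
        // so lookups for a delivered interrupt would miss.
        return static_cast<int>(offset / ApicRegStride);
    }

    int main()
    {
        assert(regIndex(0x00) == 0);
        assert(regIndex(0x10) == 1);
        assert(regIndex(0x20) == 2);
        return 0;
    }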
2016-04-01  syscall_emul: remove mmapFlagTable  (Steve Reinhardt)
After all this it turns out we don't even use it.
2016-04-01  syscall_emul: factor out flag tables into common file  (Steve Reinhardt)
The openFlagTable and mmapFlagTables for emulated Linux platforms are basically identical, but are specified repetitively for every platform. Use a common file that gets included for each platform so that we only have one copy, making them more consistent and simplifying changes (like adding #ifdefs). In the process, made some minor fixes that slipped through due to previous inconsistencies, and added more #ifdefs to try to fix building on alternative hosts.
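A hedged sketch of the general technique (the flag values and names below are illustrative, not the actual gem5 table): a single flag-translation table shared across platforms, with #ifdef guards for flags that do not exist on every host.

    #include <fcntl.h>
    #include <cstdio>

    // Hypothetical mapping from an emulated target's open(2) flag values
    // to the host's flag values; guarded so it still builds on hosts that
    // lack some of the flags.
    struct FlagMapping { int targetFlag; int hostFlag; };

    static const FlagMapping openFlagMappings[] = {
        { 00000001, O_WRONLY },
        { 00000002, O_RDWR   },
        { 00000100, O_CREAT  },
    #ifdef O_DIRECT
        { 00040000, O_DIRECT },      // not available on every host libc
    #endif
    #ifdef O_LARGEFILE
        { 00100000, O_LARGEFILE },
    #endif
    };

    int main()
    {
        for (const auto &m : openFlagMappings)
            std::printf("target 0%o -> host %#x\n",
                        m.targetFlag, (unsigned)m.hostFlag);
        return 0;
    }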
2016-03-17  base: support dynamic loading of Linux ELF objects in SE mode  (Brandon Potter)
2016-03-17  syscall_emul: update x86 mmap base address  (Brandon Potter)
2016-03-17  syscall_emul: move mmapGrowsDown() to LiveProcess  (Steve Reinhardt)
The mmapGrowsDown() method was a static method on the OperatingSystem class (and derived classes), which worked OK for the templated syscall emulation methods, but made it hard to access elsewhere. This patch moves the method to be a virtual function on the LiveProcess class, where it can be overridden for specific platforms (for now, Alpha). This patch also changes the value of mmapGrowsDown() from being false by default and true only on X86Linux32 to being true by default and false only on Alpha, which seems closer to reality (though in reality most people use ASLR and this doesn't really matter anymore). In the process, it also gets rid of the unused mmap_start field on LiveProcess and the mmapGrowsUp variable on OperatingSystem.
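A simplified, standalone sketch of the design described above (the class and method names follow the description, but the bodies are illustrative only): a virtual mmapGrowsDown() that defaults to true and is overridden for Alpha.

    #include <iostream>
    #include <memory>

    // Illustrative stand-ins for the real classes.
    class LiveProcess
    {
      public:
        virtual ~LiveProcess() = default;
        // Default: the mmap region grows downward (toward lower addresses).
        virtual bool mmapGrowsDown() const { return true; }
    };

    class AlphaLiveProcess : public LiveProcess
    {
      public:
        // Alpha is the exception: its mmap region grows upward.
        bool mmapGrowsDown() const override { return false; }
    };

    int main()
    {
        std::unique_ptr<LiveProcess> p = std::make_unique<AlphaLiveProcess>();
        std::cout << std::boolalpha << p->mmapGrowsDown() << '\n';  // false
        return 0;
    }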
2016-03-17  syscall_emul: fix bugs for mmap2 system call and x86-32 syscalls  (Brandon Potter)
2016-03-17  syscall_emul: extend mmap system call to support file backed mmaps  (Brandon Potter)
For O3, which has a stat that counts reg reads, there is an additional reg read per mmap() call since there's an arg we no longer ignore. Otherwise, stats should not be affected.
2016-03-17  syscall_emul: add many Linux kernel flags  (Brandon Potter)
2016-03-17  syscall_emul: rename OpenFlagTransTable struct  (Brandon Potter)
The structure definition only had the open system call flag set in mind when it was named, so we rename it here with the intention of using it to define additional tables to translate flags for other system calls in the future.
2016-02-13  syscall_emul: Implement clock_getres() system call  (Michael LeBeane)
This patch implements the clock_getres() system call for ARM and x86 in Linux SE mode.
2016-02-06  x86: revamp cmpxchg8b/cmpxchg16b implementation  (Alexandru Dutu)
The previous implementation did a pair of nested RMW operations, which isn't compatible with the way that locked RMW operations are implemented in the cache models. It was convenient though in that it didn't require any new micro-ops, and supported cmpxchg16b using 64-bit memory ops. It also worked in AtomicSimpleCPU where atomicity was guaranteed by the core and not by the memory system. It did not work with timing CPU models though. This new implementation defines new 'split' load and store micro-ops which allow a single memory operation to use a pair of registers as the source or destination, then uses a single ldsplit/stsplit RMW pair to implement cmpxchg. This patch requires support for 128-bit memory accesses in the ISA (added via a separate patch) to support cmpxchg16b.
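For reference, a hedged sketch of the semantics that the single locked ldsplit/stsplit RMW pair has to implement; the 128-bit operand is modelled here as two 64-bit halves, mirroring the register-pair view described above (this is plain C++, not gem5 microcode).

    #include <cstdint>
    #include <cstdio>

    struct U128 { uint64_t lo, hi; };   // two 64-bit halves, as a register pair

    // cmpxchg16b semantics: if *mem == rdx:rax, store rcx:rbx into *mem and
    // set ZF; otherwise load *mem into rdx:rax and clear ZF.  Real hardware
    // (and the cache model) must do this as one locked read-modify-write.
    bool cmpxchg16b(U128 *mem, U128 &rdx_rax, const U128 &rcx_rbx)
    {
        if (mem->lo == rdx_rax.lo && mem->hi == rdx_rax.hi) {
            *mem = rcx_rbx;
            return true;           // ZF = 1
        }
        rdx_rax = *mem;
        return false;              // ZF = 0
    }

    int main()
    {
        U128 m{1, 2}, expected{1, 2}, desired{3, 4};
        std::printf("swapped: %d, mem = {%llu, %llu}\n",
                    cmpxchg16b(&m, expected, desired),
                    (unsigned long long)m.lo, (unsigned long long)m.hi);
        return 0;
    }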
2016-02-06  arch, x86: add support for arrays as memory operands  (Steve Reinhardt)
Although the cache models support wider accesses, the ISA descriptions assume that (for the most part) memory operands are integer types, which makes it difficult to define instructions that do memory accesses larger than 64 bits. This patch adds some generic support for memory operands that are arrays of uint64_t, and specifically a 'u2qw' operand type for x86 that is an array of 2 uint64_ts (128 bits). This support is unused at this point, but will be needed shortly for cmpxchg16b. Ideally the 128-bit SSE memory accesses will also be rewritten to use this support. Support for 128-bit accesses could also have been added using the gcc __int128_t extension, which would have been less disruptive. However, although clang also supports __int128_t, it's still non-standard. Also, more importantly, this approach creates a path to defining 256- and 512-bit operands as well, which will be useful for eventual AVX support.
2016-02-06  syscall_emul: fix bug in aux vector initialization  (Steve Reinhardt)
Writing 16 bytes from an 8-byte source value is a bad idea. This doesn't appear to have broken anything, but showed up as spurious differences when tracediffing runs.
2016-02-06  x86: create function to check miscreg validity  (Steve Reinhardt)
In the process of trying to get rid of an '== false' comparison, it became apparent that a slightly more involved solution was needed. Split this out into its own changeset since it's not a totally trivial local change like the others.
2016-02-06  style: fix missing spaces in control statements  (Steve Reinhardt)
Result of running 'hg m5style --skip-all --fix-control -a'.
2016-02-06  style: remove trailing whitespace  (Steve Reinhardt)
Result of running 'hg m5style --skip-all --fix-white -a'.
2016-01-17  cpu, arch: add initiateMemRead() to ExecContext interface  (Steve Reinhardt)
For historical reasons, the ExecContext interface had a single function, readMem(), that did two different things depending on whether the ExecContext supported atomic memory mode (i.e., AtomicSimpleCPU) or timing memory mode (all the other models). In the former case, it actually performed a memory read; in the latter case, it merely initiated a read access, and the read completion did not happen until later when a response packet arrived from the memory system. This led to some confusing things, including timing accesses being required to provide a pointer for the return data even though that pointer was only used in atomic mode. This patch splits this interface, adding a new initiateMemRead() function to the ExecContext interface to replace the timing-mode use of readMem(). For consistency and clarity, the readMemTiming() helper function in the ISA definitions is renamed to initiateMemRead() as well. For x86, where the access size is passed in explicitly, we can also get rid of the data parameter at this level. For other ISAs, where the access size is determined from the type of the data parameter, we have to keep the parameter for that purpose.
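A hedged, interface-only sketch of the split (the method names follow the description, but the signatures and types are simplified assumptions, not the real ExecContext API): atomic-mode callers get the data back immediately, while timing-mode callers only start the access and complete it later when the response arrives.

    #include <cstdint>

    using Addr  = uint64_t;
    using Fault = int;           // simplified stand-in for gem5's Fault type

    // Simplified stand-in for the ExecContext memory interface after the split.
    class ExecContextSketch
    {
      public:
        virtual ~ExecContextSketch() = default;

        // Atomic mode: performs the read and returns the data through 'data'.
        virtual Fault readMem(Addr addr, uint8_t *data, unsigned size,
                              unsigned flags) = 0;

        // Timing mode: only initiates the access; no data pointer is needed,
        // because completion happens later, when the response packet arrives.
        virtual Fault initiateMemRead(Addr addr, unsigned size,
                                      unsigned flags) = 0;
    };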
2016-01-17  arch: don't call *Timing functions from *Atomic versions  (Steve Reinhardt)
The readMemAtomic/writeMemAtomic helper functions were calling readMemTiming/writeMemTiming respectively. This is functionally correct, since the *Timing functions are doing the same access initiation operation as the *Atomic functions (just that the *Atomic versions also complete the access in line). It also provides for some (very minimal) code reuse. Unfortunately, it's potentially pretty confusing, since it makes it look like the atomic accesses are somehow being converted to timing accesses. It also gets in the way of specializing the timing interface (as will be done in a future patch).
2016-01-17  arch: get rid of unused LargestRead typedef  (Steve Reinhardt)
2016-01-07  pseudo inst,util: Add optional key to initparam pseudo instruction  (Gabor Dozsa)
The key parameter can be used to read out various config parameters from within the simulated software.
2015-12-18  arm: remote GDB: rationalize structure of register offsets  (Boris Shingarov)
Currently, the wire format of register values in g- and G-packets is modelled using a union of uint8/16/32/64 arrays. The offset positions of each register are expressed as a "register count" scaled according to the width of the register in question. This results in counter- intuitive and error-prone "register count arithmetic", and some formats would even be altogether unrepresentable in such model, e.g. a 64-bit register following a 32-bit one would have a fractional index in the regs64 array. Another difficulty is that the array is allocated before the actual architecture of the workload is known (and therefore before the correct size for the array can be calculated). With this patch I propose a simpler mechanism for expressing the register set structure. In the new code, GdbRegCache is an abstract class; its subclasses contain straightforward structs reflecting the register representation. The determination whether to use e.g. the AArch32 vs. AArch64 register set (or SPARCv8 vs SPARCv9, etc.) is made by polymorphically dispatching getregs() to the concrete subclass. The subclass is not instantiated until it is needed for actual g-/G-packet processing, when the mode is already known. This patch is not meant to be merged in on its own, because it changes the contract between src/base/remote_gdb.* and src/arch/*/remote_gdb.*, so as it stands right now, it would break the other architectures. In this patch only the base and the ARM code are provided for review; once we agree on the structure, I will provide src/arch/*/remote_gdb.* for the other architectures; those patches could then be merged in together. Review Request: http://reviews.gem5.org/r/3207/ Pushed by Joel Hestness <jthestness@gmail.com>
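A hedged structural sketch of the scheme described (the types and register layout below are illustrative, not the actual remote_gdb code): an abstract register-cache base class whose concrete subclasses are plain structs matching the wire layout, chosen polymorphically once the mode is known.

    #include <cstddef>
    #include <cstdint>
    #include <iostream>

    // Abstract cache of register values in GDB wire order (illustrative).
    class GdbRegCache
    {
      public:
        virtual ~GdbRegCache() = default;
        virtual char *data() = 0;              // raw bytes for the g/G packet
        virtual std::size_t size() const = 0;
    };

    // Concrete layout for a hypothetical 64-bit register set: the struct
    // mirrors the packet layout directly, so no "register count arithmetic"
    // is needed, even with a 32-bit register following 64-bit ones.
    class Aarch64RegCacheSketch : public GdbRegCache
    {
      public:
        struct Regs {
            uint64_t x[31];
            uint64_t sp;
            uint64_t pc;
            uint32_t cpsr;
        };
        Regs r = {};

        char *data() override { return reinterpret_cast<char *>(&r); }
        std::size_t size() const override { return sizeof(r); }
    };

    int main()
    {
        Aarch64RegCacheSketch cache;           // chosen once the mode is known
        std::cout << "g-packet payload bytes: " << cache.size() << '\n';
        return 0;
    }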
2015-11-16  x86: Invalidating TLB entry on page fault  (Swapnil Haria)
As per the x86 architecture specification, matching TLB entries need to be invalidated on a page fault. For instance, after a page fault due to inadequate protection bits on a TLB hit, the TLB entry needs to be invalidated. This behavior is clearly specified in the x86 architecture manuals from both AMD and Intel. This invalidation is currently missing in gem5, due to which Linux kernel versions 3.8 and up cannot be simulated efficiently. This is exposed by a Linux optimisation in commit e4a1cc56e4d728eb87072c71c07581524e5160b1, which removes a TLB flush on updating page table entries in x86. Testing: Linux kernel versions 3.8 onwards were booting very slowly in FS mode, due to repeated page faults (~300000 before the first print statement in a bash file). Ensured that the page fault rate drops drastically and observed a reduction in boot time from the order of hours to minutes for Linux kernel v3.8 and v3.11.
2015-11-16  x86: cpuid: add family to warn() message  (Bjoern A. Zeeb)
doCpuid() has two identical warn messages about unimplemented functions. Add the family to the log message to make them distinguishable. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-11-16  x86: pagetable walker: fix typo in comment  (Bjoern A. Zeeb)
2015-10-23  x86: Add missing explicit overrides for X86 devices  (Andreas Hansson)
Make clang >= 3.5 happy when compiling build/X86/gem5.opt on OSX.
2015-10-12  misc: Remove redundant compiler-specific defines  (Andreas Hansson)
This patch moves away from using M5_ATTR_OVERRIDE and the m5::hashmap (and similar) abstractions, as these are no longer needed with gcc 4.7 and clang 3.1 as minimum compiler versions.
2015-10-09  isa: Add parameter to pick different decoder inside ISA  (Rekai Gonzalez Alberquilla)
The decoder is responsible for splitting instructions into micro operations (uops). Given that different microarchitectures may split operations differently, this patch allows specifying which microarchitecture each ISA implements, so different cores in the system can split instructions differently, also decoupling uop splitting (microArch) from the ISA (Arch). This is done by making the decode calls templates that receive a type 'DecoderFlavour' that maps the name of the operation to the class that implements it. This way there is only one selection point (converting the command line enum to the appropriate DecodeFeatures object). In addition, there is no explicit code replication: template instantiation hides that, and the compiler should be able to resolve a number of things at compile-time.
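A hedged sketch of the dispatch technique in generic C++ (the micro-op and flavour names are made up, not the actual gem5 decoder): a 'flavour' type maps an operation name to the class that implements it, and the decode path is a template parameterised on that flavour.

    #include <iostream>

    // Two hypothetical micro-op splittings of the same architectural op.
    struct AddUopA { static void emit() { std::cout << "flavour A: 1 uop\n"; } };
    struct AddUopB { static void emit() { std::cout << "flavour B: 2 uops\n"; } };

    // Each "decoder flavour" maps operation names to implementing classes.
    struct FlavourA { using Add = AddUopA; };
    struct FlavourB { using Add = AddUopB; };

    // The decode path is a template on the flavour; the compiler resolves
    // the mapping at compile time, so there is no explicit code replication.
    template <typename DecoderFlavour>
    void decodeAdd()
    {
        using Impl = typename DecoderFlavour::Add;
        Impl::emit();
    }

    int main()
    {
        decodeAdd<FlavourA>();   // one core splits the op one way
        decodeAdd<FlavourB>();   // another core splits it differently
        return 0;
    }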
2015-10-06  x86: implement rcpps and rcpss SSE insts  (Steve Reinhardt)
These are single-precision approximate reciprocal operations, in packed (vector) and scalar versions, respectively. This code was basically developed by copying the code for sqrtps and sqrtss. The mrcp micro-op was simplified relative to msqrt since there are no double-precision versions of this operation.
2015-10-06  x86: implement fild, fucomi, and fucomip x87 insts  (Steve Reinhardt)
fild loads an integer value into the x87 top of stack register. fucomi/fucomip compare two x87 register values (the latter also doing a stack pop). These instructions are used by some versions of GNU libstdc++.
2015-09-30  cpu,isa,mem: Add per-thread wakeup logic  (Mitch Hayenga)
Changes wakeup functionality so that only specific threads on SMT-capable CPUs are woken.
2015-09-30  isa,cpu: Add support for FS SMT Interrupts  (Mitch Hayenga)
Adds per-thread interrupt controllers and thread/context logic so that interrupts properly get routed in SMT systems.
2015-07-20  x86: x86 instruction-implementation bug fixes  (David Hashe)
Added explicit data sizes and an opcode type for correct execution.
2015-07-20  syscall: Add readlink to x86 with special case /proc/self/exe  (David Hashe)
This patch implements the correct behavior.
2015-07-28  revert 5af8f40d8f2c  (Nilay Vaish)
2015-07-26  cpu: implements vector registers  (Nilay Vaish)
This adds a vector register type. The type is defined as a std::array of a fixed number of uint64_ts. isa_parser.py has been modified to parse vector register operands and generate the required code. Different CPUs now have vector register files.
2015-07-17  x86: decode instructions with vex prefix  (Nilay Vaish)
This patch updates the x86 decoder so that it can decode instructions with a VEX prefix. It also updates the ISA with opcodes from VEX opcode maps 1, 2 and 3. Note that none of the instructions have been implemented yet. The implementations will be provided in due course.
2015-07-07  sim: Refactor the serialization base class  (Andreas Sandberg)
Objects that can be serialized are supposed to inherit from the Serializable class. This class is meant to provide a unified API for such objects. However, so far it has mainly been used by SimObjects due to some fundamental design limitations. This changeset redesigns the serialization interface to make it more generic and hide the underlying checkpoint storage. Specifically:
* Add a set of APIs to serialize into a subsection of the current object. Previously, objects that needed this functionality would use ad-hoc solutions using nameOut() and section name generation. In the new world, an object that implements the interface has the methods serializeSection() and unserializeSection() that serialize into a named /subsection/ of the current object. Calling serialize() serializes an object into the current section.
* Move the name() method from Serializable to SimObject as it is no longer needed for serialization. The fully qualified section name is generated by the main serialization code on the fly as objects serialize sub-objects.
* Add a scoped ScopedCheckpointSection helper class. Some objects need to serialize data structures that do not derive from Serializable into subsections. Previously, this was done using nameOut() and manual section name generation. To simplify this, this changeset introduces a ScopedCheckpointSection() helper class. When this class is instantiated, it adds a new /subsection/, and subsequent serialization calls during the lifetime of this helper class happen inside this section (or a subsection in case of nested sections).
* The serialize() call is now const, which prevents accidental state manipulation during serialization. Objects that rely on modifying state can use the serializeOld() call instead. The default implementation simply calls serialize(). Note: The old-style calls need to be explicitly called using the serializeOld()/serializeSectionOld() style APIs. These are used by default when serializing SimObjects.
* Both the input and output checkpoints now use their own named types. This hides the underlying checkpoint implementation from objects that need checkpointing and makes it easier to change the underlying checkpoint storage code.
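A hedged, self-contained sketch of the scoped-section idea (the classes below are illustrative stand-ins, not the real checkpoint code): an RAII helper pushes a subsection name on construction and pops it on destruction, so serialization calls made during its lifetime land in that subsection.

    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative stand-in for the checkpoint output stream.
    class CheckpointOutSketch
    {
      public:
        std::vector<std::string> sections;   // current section path

        void pushSection(const std::string &name) { sections.push_back(name); }
        void popSection() { sections.pop_back(); }

        void write(const std::string &key, const std::string &val) const
        {
            for (const auto &s : sections) std::cout << s << '.';
            std::cout << key << '=' << val << '\n';
        }
    };

    // RAII helper: entering a scope adds a subsection, leaving it removes it.
    class ScopedSectionSketch
    {
        CheckpointOutSketch &cp;
      public:
        ScopedSectionSketch(CheckpointOutSketch &c, const std::string &name)
            : cp(c) { cp.pushSection(name); }
        ~ScopedSectionSketch() { cp.popSection(); }
    };

    int main()
    {
        CheckpointOutSketch cp;
        ScopedSectionSketch top(cp, "cpu0");
        cp.write("numCycles", "42");         // cpu0.numCycles=42
        {
            ScopedSectionSketch sub(cp, "itb");
            cp.write("entries", "64");       // cpu0.itb.entries=64
        }
        cp.write("status", "ok");            // cpu0.status=ok
        return 0;
    }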
2015-07-04  x86: Adjust the size of the values written to the x87 misc registers  (Nikos Nikoleris)
All x87 misc registers are implemented in an array of 64-bit values, but in real hardware the size of some of these registers is smaller. Previously all 64 bits were incorrectly set and then later read. To ensure correctness we mask the value in setMiscRegNoEffect to write only the valid bits. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
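A tiny illustrative sketch of the masking idea (the register name and width here are assumptions for the example, not the actual x87 layout): writes are masked to the architecturally valid bits before being stored in the 64-bit backing array.

    #include <cassert>
    #include <cstdint>

    // Hypothetical example: a control-style register that architecturally
    // holds only 16 valid bits, backed by a 64-bit storage slot.
    constexpr uint64_t FCW_VALID_MASK = 0xffffULL;

    uint64_t setMiscRegSketch(uint64_t val)
    {
        // Keep only the bits that exist in real hardware.
        return val & FCW_VALID_MASK;
    }

    int main()
    {
        assert(setMiscRegSketch(0xdeadbeef12340037ULL) == 0x0037);
        return 0;
    }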
2015-05-15  misc: Appease gcc 5.1  (Andreas Hansson)
Three minor issues are resolved:
1. Apparently gcc 5.1 does not like negation of booleans followed by bitwise AND.
2. Somehow the compiler also gets confused and warns about NoopMachInst being unused (removing it causes compilation errors though). Most likely a compiler bug.
3. There seems to be a number of instances where loop unrolling causes false positives for the array-bounds check. For now, switch to std::array. Potentially we could disable the warning for newer gcc versions, but switching to std::array is probably a good move in any case.
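For the first issue, a minimal sketch of the kind of pattern gcc 5.1 objects to and one way to spell the intent explicitly (the variable names are illustrative):

    #include <cstdint>
    #include <iostream>

    int main()
    {
        bool a = true;
        uint32_t mask = 0x2;

        // The pattern gcc 5.1 complains about: negating a boolean and then
        // combining it with bitwise AND.
        // bool suspect = !a & mask;        // triggers the warning
        //
        // Spelling out the intended logical test keeps the compiler quiet:
        bool ok = !a && (mask != 0);

        std::cout << std::boolalpha << ok << '\n';
        return 0;
    }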
2015-05-05  syscall_emul: fix warn_once behavior  (Steve Reinhardt)
The current ignoreWarnOnceFunc doesn't really work as expected, since it will only generate one warning total, for whichever "warn-once" syscall is invoked first. This patch fixes that behavior by keeping a "warned" flag in the SyscallDesc object, allowing suitably flagged syscalls to warn exactly once per syscall.
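A hedged sketch of the per-descriptor approach (simplified types and names, not the real SyscallDesc): because the "warned" flag lives in the descriptor, each flagged syscall warns exactly once.

    #include <iostream>
    #include <string>
    #include <vector>

    // Simplified stand-in for a syscall descriptor with a per-entry flag.
    struct SyscallDescSketch
    {
        std::string name;
        bool warnOnce = false;
        bool warned = false;     // per-descriptor, not global
    };

    void ignoreWarnOnceSketch(SyscallDescSketch &desc)
    {
        if (desc.warnOnce && !desc.warned) {
            std::cerr << "warn: ignoring unimplemented syscall "
                      << desc.name << '\n';
            desc.warned = true;  // further calls to this syscall stay quiet
        }
    }

    int main()
    {
        std::vector<SyscallDescSketch> table = {
            {"times", true}, {"sched_yield", true}};
        ignoreWarnOnceSketch(table[0]);   // warns
        ignoreWarnOnceSketch(table[0]);   // silent
        ignoreWarnOnceSketch(table[1]);   // still warns: separate descriptor
        return 0;
    }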
2015-05-05  mem, cpu: Add a separate flag for strictly ordered memory  (Andreas Sandberg)
The Request::UNCACHEABLE flag currently has two different functions. The first, and obvious, function is to prevent the memory system from caching data in the request. The second function is to prevent reordering and speculation in CPU models. This changeset gives the order/speculation requirement a separate flag (Request::STRICT_ORDER). This flag prevents CPU models from doing the following optimizations:
* Speculation: CPU models are not allowed to issue speculative loads.
* Write combining: CPU models and caches are not allowed to merge writes to the same cache line.
Note: The memory system may still reorder accesses unless the UNCACHEABLE flag is set. It is therefore expected that the STRICT_ORDER flag is combined with the UNCACHEABLE flag to prevent this behavior.
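A small illustrative sketch of combining the two flags (the flag values and enum below are stand-ins, not gem5's actual Request flags):

    #include <cassert>
    #include <cstdint>

    // Stand-in flag bits; the real Request class defines its own values.
    enum ReqFlag : uint32_t {
        UNCACHEABLE  = 1u << 0,   // bypass the caches
        STRICT_ORDER = 1u << 1,   // no speculation or write combining in the CPU
    };

    int main()
    {
        // As described above, an access that must not be reordered anywhere
        // sets both flags: STRICT_ORDER constrains the CPU models, while
        // UNCACHEABLE prevents reordering in the memory system.
        uint32_t flags = UNCACHEABLE | STRICT_ORDER;

        assert(flags & UNCACHEABLE);
        assert(flags & STRICT_ORDER);
        return 0;
    }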
2015-05-05  arch, cpu: Do not forward snoops to table walker  (Andreas Hansson)
This patch simplifies the overall CPU by changing the TLB caches such that they do not forward snoops to the table walker port(s). Note that only ARM and X86 are affected. There is no reason for the ports to snoop as they do not actually take any action, and from a performance point of view we are better off not snooping more than we have to. Should it at a later point be required to snoop for a particular TLB design, it is easy enough to add it back.
2015-04-29  x86: change divide-by-zero fault to divide-error  (Nilay Vaish)
The same exception is raised whether a division by zero is performed or the quotient is greater than the maximum value that the provided space can hold. Divide-by-Zero is the AMD terminology, while Divide-Error is Intel's.