2015-07-26  cpu: implements vector registers  (Nilay Vaish)
This adds a vector register type. The type is defined as a std::array of a fixed number of uint64_ts. The isa_parser.py has been modified to parse vector register operands and generate the required code. Different cpus have vector register files now.
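A minimal sketch of what such a vector register type could look like, assuming an illustrative element count and register-file size (the real widths and names in the commit may differ):

    // Hypothetical sketch: a vector register backed by a std::array of
    // 64-bit chunks, plus a simple per-CPU register file around it.
    #include <array>
    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t NumVecElemPerReg = 4;   // assumed register width
    constexpr std::size_t NumVecRegs = 32;        // assumed register count

    using VecReg = std::array<uint64_t, NumVecElemPerReg>;

    struct VecRegFile {
        std::array<VecReg, NumVecRegs> regs{};

        const VecReg &readReg(std::size_t idx) const { return regs[idx]; }
        void setReg(std::size_t idx, const VecReg &val) { regs[idx] = val; }
    };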
2015-07-26  cpu: o3: slight correction to indentation in rename_impl.hh  (Nilay Vaish)
2015-07-24  style: change Process function calls to use camelCase  (Brandon Potter)
The Process class methods were using an improper style and this subsequently bled into the system call code. The following regular expressions should be helpful if someone transitions private system call patches on top of these changesets:

    s/alloc_fd/allocFD/
    s/sim_fd(/simFD(/
    s/sim_fd_obj/getFDEntry/
    s/fix_file_offsets/fixFileOffsets/
    s/find_file_offsets/findFileOffsets/
2015-07-24  syscall_emul: standardize file descriptor names and add return checks  (Brandon Potter)
The patch clarifies whether file descriptors are host file descriptors or target file descriptors in the system call code. (Host file descriptors are file descriptors which have been allocated through real system calls, whereas target file descriptors are allocated from an array in the Process class.)
2015-07-24  base: refactor process class (specifically FdMap and friends)  (Brandon Potter)
This patch extends the previous patch's alterations around fd_map. It cleans up some of the uglier code in the process file and replaces it with a more concise C++11 version. As part of the changes, the FdMap class is pulled out of the Process class and receives its own file.
2015-07-24  syscall_emul: file descriptor interface changes  (Brandon Potter)
This patch gets rid of the unused Process::dup_fd method and does minor refactoring in the process class files. The file descriptor max has been changed to be the number of file descriptors since this clarifies the loop boundary condition and cleans up the code a bit. The fd_map field has been altered to be dynamically allocated as opposed to being an array; the intention here is to build on this in subsequent patches to allow processes to share their file descriptors with the clone system call.
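A hedged sketch of how a dynamically allocated, shareable fd table might look (FDEntry, NumFDs and the helper names are illustrative assumptions, not the committed interface):

    #include <array>
    #include <memory>

    struct FDEntry {
        int hostFD = -1;   // fd returned by the real host system call
        // flags, file name, offset, ... would live here too
    };

    static constexpr int NumFDs = 1024;            // assumed table size
    using FDMap = std::array<FDEntry, NumFDs>;

    struct Process {
        // Heap-allocated rather than a plain member array, so that a
        // cloned process can either share or copy the table.
        std::shared_ptr<FDMap> fd_map = std::make_shared<FDMap>();

        void cloneFDMap(const Process &parent, bool shareFiles) {
            fd_map = shareFiles ? parent.fd_map
                                : std::make_shared<FDMap>(*parent.fd_map);
        }
    };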
2015-07-24  ruby: dma sequencer: removes redundant code  (Brandon Potter)
2015-07-22  ruby: network: NetworkLink inherits from Consumer now.  (Nilay Vaish)
2015-07-21  configs: network test: remove redundant physical memory  (Nilay Vaish)
2015-07-18  stats: x86: updates due to patch on vex  (Nilay Vaish)
2015-07-17  x86: decode instructions with vex prefix  (Nilay Vaish)
This patch updates the x86 decoder so that it can decode instructions with the vex prefix. It also updates the isa with opcodes from vex opcode maps 1, 2 and 3. Note that none of the instructions have been implemented yet. The implementations will be provided in due course.
2015-07-15  dev: add support for multi gem5 runs  (Gabor Dozsa)
Multi gem5 is an extension to gem5 to enable parallel simulation of a distributed system (e.g. simulation of a pool of machines connected by Ethernet links). A multi gem5 run consists of separate gem5 processes running in parallel (potentially on different hosts/slots on a cluster). Each gem5 process executes the simulation of a component of the simulated distributed system (e.g. a multi-core board with an Ethernet NIC).

The patch implements the "distributed" Ethernet link device (src/dev/multi_etherlink.[hh,cc]). This device will send/receive (simulated) Ethernet packets to/from peer gem5 processes. The interface to talk to the peer gem5 processes is defined in src/dev/multi_iface.hh and in tcp_iface.hh. There is also a central message server process (util/multi/tcp_server.[hh,cc]) which acts like an Ethernet switch and transfers messages among the gem5 peers.

A multi gem5 simulation can be kicked off by the util/multi/gem5-multi.sh wrapper script.

Checkpoints are supported by multi gem5. The checkpoint must be initiated by a single gem5 process. E.g., the gem5 process with rank 0 can take a checkpoint from the bootscript just before it invokes 'mpirun' to launch an MPI test. The message server process will notify all the other peer gem5 processes and make them take a checkpoint, too (after completing a global synchronisation to ensure that there are no inflight messages among the gem5 processes).
2015-07-13  mem: Fix (ab)use of emplace to avoid temporary object creation  (Andreas Hansson)
2015-07-13  mem: Updated DRAMSim2 wrapper to new drain API  (Andreas Hansson)
Somehow this one slipped through without being updated.
2015-07-10  ruby: replace global g_abs_controls with per-RubySystem var  (Brandon Potter)
This is another step in the process of removing global variables from Ruby to enable multiple RubySystem instances in a single simulation. The list of abstract controllers is per-RubySystem and should be represented that way, rather than as a global. Since this is the last remaining Ruby global variable, the src/mem/ruby/Common/Global.* files are also removed.
2015-07-10  ruby: replace global g_system_ptr with per-object pointers  (Brandon Potter)
This is another step in the process of removing global variables from Ruby to enable multiple RubySystem instances in a single simulation. With possibly multiple RubySystem objects, we can no longer use a global variable to find "the" RubySystem object. Instead, each Ruby component has to carry a pointer to the RubySystem object to which it belongs.
2015-07-10  ruby: replace g_ruby_start with per-RubySystem m_start_cycle  (Brandon Potter)
This patch begins the process of removing global variables from the Ruby source with the goal of eventually allowing users to create multiple Ruby instances in a single simulation. Currently, users cannot do so because several global variables and static members are referenced by the RubySystem object in a way that assumes that there will only ever be a single RubySystem. These need to be replaced with per-RubySystem equivalents. This specific patch replaces the global var g_ruby_start, which is used to calculate throughput statistics for Throttles in simple networks and links in Garnet networks, with a RubySystem instance var m_start_cycle.
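A hedged sketch of the idea, with m_start_cycle from the commit and the remaining names assumed for illustration:

    #include <cstdint>

    using Cycles = uint64_t;

    class RubySystem {
      public:
        Cycles curCycle() const { return m_cur_cycle; }      // assumed accessor
        Cycles startCycle() const { return m_start_cycle; }

      private:
        Cycles m_cur_cycle = 0;
        Cycles m_start_cycle = 0;   // captured when stats are reset
    };

    // A throttle/link carries a pointer to its own RubySystem instead of
    // consulting a global, and derives throughput statistics from it.
    double linkUtilization(const RubySystem *rs, uint64_t busyCycles) {
        Cycles elapsed = rs->curCycle() - rs->startCycle();
        return elapsed ? double(busyCycles) / double(elapsed) : 0.0;
    }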
2015-07-10  ruby: remove extra whitespace and correct misspelled words  (Brandon Potter)
2015-07-07  dev, arm: Add a device model that uses the NoMali model  (Andreas Sandberg)
Add a simple device shim that interfaces with the NoMali model library. The gem5 side of the interface supports Mali T60x/T62x/T760 GPUs. This device model pretends to be a Mali GPU, but doesn't render anything and executes in zero time.
2015-07-07  ext: Add the NoMali GPU no-simulation library  (Andreas Sandberg)
Add revision 9adf9d6e2d889a483a92136c96eb8a434d360561 of NoMali-model from https://github.com/ARM-software/nomali-model. This library implements the register interface of the Mali T6xx/T7xx series GPUs, but doesn't do any rendering. It can be used to hide the effects of software rendering.
2015-07-07  stats: Update pc-switcheroo stats  (Andreas Sandberg)
The pc-switcheroo test cases have slightly different timing after decoupling draining from the SimObject hierarchy. This is expected since objects aren't drained in the exact same order as before.
2015-07-07  sim: Refactor and simplify the drain API  (Andreas Sandberg)
The drain() call currently passes around a DrainManager pointer, which is now completely pointless since there is only ever one global DrainManager in the system. It also contains vestiges from the time when SimObjects had to keep track of their child objects that needed draining.

This changeset moves all of the DrainState handling to the Drainable base class and changes the drain() and drainResume() calls to reflect this. In particular, the drain() call has been updated to take no parameters (the DrainManager argument isn't needed) and to return a DrainState instead of an unsigned integer (there is no point returning anything other than 0 or 1 any more). Drainable objects should return either DrainState::Draining (equivalent to returning 1 in the old system) if they need more time to drain or DrainState::Drained (equivalent to returning 0 in the old system) if they are already in a consistent state. Returning DrainState::Running is considered an error.

Drain done signalling is now done through the signalDrainDone() method in the Drainable class instead of using the DrainManager directly. The new call checks if the state of the object is DrainState::Draining before notifying the drain manager. This means that it is safe to call signalDrainDone() without first checking if the simulator has requested draining. The intention here is to reduce the code needed to implement draining in simple objects.
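A minimal sketch of the new interface from the perspective of a simple object (only drain(), drainResume(), signalDrainDone() and the DrainState values come from the commit; everything else is an illustrative assumption):

    enum class DrainState { Running, Draining, Drained };

    // Simplified stand-in for the real Drainable base class.
    class Drainable {
      public:
        virtual ~Drainable() = default;
        virtual DrainState drain() = 0;
        virtual void drainResume() {}

      protected:
        // In the real class this notifies the DrainManager, and only does
        // so if the object is currently in DrainState::Draining, which is
        // what makes unconditional calls safe.
        void signalDrainDone() {}
    };

    // A hypothetical device with some outstanding work to finish.
    class SimpleDevice : public Drainable {
      public:
        DrainState drain() override {
            if (outstanding == 0)
                return DrainState::Drained;   // already in a consistent state
            return DrainState::Draining;      // need more time; signal later
        }

        void requestCompleted() {
            if (--outstanding == 0)
                signalDrainDone();   // safe even if no drain was requested
        }

      private:
        unsigned outstanding = 0;
    };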
2015-07-07  sim: Decouple draining from the SimObject hierarchy  (Andreas Sandberg)
Draining is currently done by traversing the SimObject graph and calling drain()/drainResume() on the SimObjects. This is not ideal when non-SimObjects (e.g., ports) need draining since this means that SimObjects owning those objects need to be aware of this.

This changeset moves the responsibility for finding objects that need draining from SimObjects and the Python-side of the simulator to the DrainManager. The DrainManager now maintains a set of all objects that need draining. To reduce the overhead in classes owning non-SimObjects that need draining, objects inheriting from Drainable now automatically register with the DrainManager. If such an object is destroyed, it is automatically unregistered. This means that drain() and drainResume() should never be called directly on a Drainable object.

While implementing the new functionality, the DrainManager has now been made thread safe. In practice, this means that it takes a lock whenever it manipulates the set of Drainable objects since SimObjects in different threads may create Drainable objects dynamically. Similarly, the drain counter is now an atomic_uint, which ensures that it is manipulated correctly when objects signal that they are done draining.

A nice side effect of these changes is that it makes the drain state changes stricter, which the simulation scripts can exploit to avoid redundant drains.
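A hedged sketch of the bookkeeping this describes: a single manager with a mutex-protected set of Drainable objects and an atomic drain counter (all member names are assumptions):

    #include <atomic>
    #include <mutex>
    #include <unordered_set>

    class Drainable;

    class DrainManager {
      public:
        static DrainManager &instance() {
            static DrainManager mgr;    // the one global manager
            return mgr;
        }

        // Called from the Drainable constructor/destructor so that every
        // drainable object is tracked (and untracked) automatically.
        void registerDrainable(Drainable *obj) {
            std::lock_guard<std::mutex> lock(mutex_);
            drainables_.insert(obj);
        }

        void unregisterDrainable(Drainable *obj) {
            std::lock_guard<std::mutex> lock(mutex_);
            drainables_.erase(obj);
        }

        // Objects decrement the counter as they finish draining; the
        // atomic makes concurrent completions safe.
        void signalDrainDone() {
            if (--pendingDrains_ == 0) {
                // everything has drained; the simulator may proceed
            }
        }

      private:
        std::mutex mutex_;                            // guards drainables_
        std::unordered_set<Drainable *> drainables_;
        std::atomic_uint pendingDrains_{0};
    };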
2015-07-07  sim: Move mem(Writeback|Invalidate) to SimObject  (Andreas Sandberg)
The memWriteback() and memInvalidate() calls used to live in the Serializable interface. In this series of patches, the Serializable interface will be redesigned to make serialization independent of the object graph and always work on the entire simulator. This means that the Serialization interface won't be useful to perform maintenance of the caches in a sub-graph of the entire SimObject graph. This changeset moves these memory maintenance methods to the SimObject interface instead.
2015-07-07  sim: Make the drain state a global typed enum  (Andreas Sandberg)
The drain state enum is currently a part of the Drainable interface. The same state machine will be used by the DrainManager to identify the global state of the simulator. Make the drain state a global typed enum to better cater for this usage scenario.
2015-07-07  python: Remove redundant drain when changing memory modes  (Andreas Sandberg)
When the Python helper code switches CPU models, it sometimes also needs to change the memory mode of the simulator. When this happens, it accidentally tried to drain the simulator despite having done so already. This changeset removes the redundant drain.
2015-07-07  sim: Add macros to serialize objects into a section  (Andreas Sandberg)
Add the SERIALIZE_OBJ / UNSERIALIZE_OBJ macros that serialize an object into a subsection of the current checkpoint section.
2015-07-07  base: Add serialization support to Pixels and FrameBuffer  (Andreas Sandberg)
Serialize pixels as unsigned 32 bit integers by adding the required to_number() and stream operators. This is used by the FrameBuffer, which now implements the Serializable interface. Users of frame buffers are expected to serialize them into their own sections by calling serializeSection().
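A hedged sketch of packing a pixel into an unsigned 32-bit value for checkpointing (the RGBA layout and exact helper signatures are assumptions):

    #include <cstdint>
    #include <ostream>

    struct Pixel {
        uint8_t red, green, blue, padding;   // assumed layout
    };

    // Pack the pixel into a single unsigned 32-bit integer.
    inline uint32_t to_number(const Pixel &p) {
        return (uint32_t(p.red) << 24) | (uint32_t(p.green) << 16) |
               (uint32_t(p.blue) << 8) | uint32_t(p.padding);
    }

    // Stream operator so generic parameter-output code can emit pixels.
    inline std::ostream &operator<<(std::ostream &os, const Pixel &p) {
        return os << to_number(p);
    }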
2015-07-07  sim: Fix broken event unserialization  (Andreas Sandberg)
Events were expected to be unserialized using an event-specific unserializeEvent call. This call was never actually used, which meant the events relying on it never got unserialized (or scheduled after unserialization). Instead of relying on a custom call, we now use the normal serialization code again. In order to schedule the event correctly, the parent object is expected to use the EventQueue::checkpointReschedule() call. This happens automatically for events that are serialized using the AutoSerialize mechanism.
2015-07-07  sim: Refactor the serialization base class  (Andreas Sandberg)
Objects that can be serialized are supposed to inherit from the Serializable class. This class is meant to provide a unified API for such objects. However, so far it has mainly been used by SimObjects due to some fundamental design limitations. This changeset redesigns the serialization interface to make it more generic and hide the underlying checkpoint storage. Specifically:

* Add a set of APIs to serialize into a subsection of the current object. Previously, objects that needed this functionality would use ad-hoc solutions using nameOut() and section name generation. In the new world, an object that implements the interface has the methods serializeSection() and unserializeSection() that serialize into a named /subsection/ of the current object. Calling serialize() serializes an object into the current section.

* Move the name() method from Serializable to SimObject as it is no longer needed for serialization. The fully qualified section name is generated by the main serialization code on the fly as objects serialize sub-objects.

* Add a scoped ScopedCheckpointSection helper class. Some objects need to serialize data structures that do not derive from Serializable into subsections. Previously, this was done using nameOut() and manual section name generation. To simplify this, this changeset introduces a ScopedCheckpointSection() helper class. When this class is instantiated, it adds a new /subsection/ and subsequent serialization calls during the lifetime of this helper class happen inside this section (or a subsection in case of nested sections). (See the usage sketch after this list.)

* The serialize() call is now const which prevents accidental state manipulation during serialization. Objects that rely on modifying state can use the serializeOld() call instead. The default implementation simply calls serialize(). Note: The old-style calls need to be explicitly called using the serializeOld()/serializeSectionOld() style APIs. These are used by default when serializing SimObjects.

* Both the input and output checkpoints now use their own named types. This hides the underlying checkpoint implementation from objects that need checkpointing and makes it easier to change the underlying checkpoint storage code.
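A hedged usage sketch of the subsection APIs described above; the checkpoint types, paramOut() helper and the buffer structure are simplified stand-ins, not the real gem5 definitions:

    #include <cstddef>
    #include <string>
    #include <vector>

    struct CheckpointOut { /* sink for key/value pairs */ };

    // Entering the scope pushes a named subsection; leaving it pops back
    // to the enclosing section.
    struct ScopedCheckpointSection {
        ScopedCheckpointSection(CheckpointOut &, const std::string &) {}
        ~ScopedCheckpointSection() {}
    };

    void paramOut(CheckpointOut &, const std::string &, int) {}

    // A hypothetical object serializing non-Serializable helper entries
    // into their own subsections.
    struct BufferEntry { int id; int data; };

    struct MyObject {
        std::vector<BufferEntry> buffer;

        void serialize(CheckpointOut &cp) const {
            paramOut(cp, "numEntries", int(buffer.size()));
            for (std::size_t i = 0; i < buffer.size(); ++i) {
                // Each entry lands in a subsection such as "entry0", "entry1".
                ScopedCheckpointSection sec(cp, "entry" + std::to_string(i));
                paramOut(cp, "id", buffer[i].id);
                paramOut(cp, "data", buffer[i].data);
            }
        }
    };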
2015-07-07  tests: Skip SPARC tests if the required binaries are missing  (Andreas Sandberg)
The full-system SPARC tests depend on several binaries that aren't generally available to the wider community. Flag the tests as skipped instead of failed if these binaries can't be found.
2015-07-07  sim: Add serialization macros for std containers  (Andreas Sandberg)
2015-07-06  mem: Cleanup CommMonitor in preparation for probe support  (Andreas Sandberg)
Make configuration parameters constant and get rid of an unnecessary dependency on the Time class.
2015-07-05  stats: x86: update stats missed out on in previous changeset  (Nilay Vaish)
2015-07-04  stats: update stale config.ini files, eio and a few other stats.  (Nilay Vaish)
2015-07-04  x86: Adjust the size of the values written to the x87 misc registers  (Nikos Nikoleris)
All x87 misc registers are implemented in an array of 64-bit values but in real hardware the size of some of these registers is smaller. Previously all 64 bits were incorrectly set and then later read. To ensure correctness we mask the value in setMiscRegNoEffect to write only the valid bits.

Committed by: Nilay Vaish <nilay@cs.wisc.edu>
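A hedged sketch of the masking described (the register indices, widths and the exact setMiscRegNoEffect signature are assumptions):

    #include <cstdint>

    enum MiscRegIndex { MISCREG_FCW, MISCREG_FSW, MISCREG_FTW, NUM_MISCREGS };

    // Per-register masks covering only the architecturally valid bits;
    // e.g. the x87 control and status words are 16 bits wide even though
    // the backing storage is 64 bits.
    static const uint64_t miscRegMask[NUM_MISCREGS] = {
        0xffffULL,   // FCW
        0xffffULL,   // FSW
        0xffffULL,   // FTW
    };

    struct MiscRegFile {
        uint64_t regs[NUM_MISCREGS] = {};

        // Mask the incoming value so only valid bits are ever stored.
        void setMiscRegNoEffect(int idx, uint64_t val) {
            regs[idx] = val & miscRegMask[idx];
        }
    };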
2015-07-04  config: Update location of ruby topologies in help  (David Hashe)
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-07-04  o3: correct the number of cc registers in rename map  (Nilay Vaish)
2015-07-04  mem: packet: Add const to constructor argument  (Nilay Vaish)
2015-07-04  ruby: drop NetworkMessage class  (Nilay Vaish)
This patch drops the NetworkMessage class. The relevant data members and functions have been moved to the Message class, which was the parent of NetworkMessage.
2015-07-04  ruby: mesi three level: name change to avoid clash  (Nilay Vaish)
The accessor function getDestination() for the Destination variable in the coherence message clashes with the getDestination() that is part of the Message class. Hence the name change.
2015-07-04  ruby: remove message buffer node  (Nilay Vaish)
This structure's only purpose was to provide a comparison function for ordering messages in the MessageBuffer. The comparison function is now being moved to the Message class itself. So we no longer require this structure.
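A hedged sketch of what the relocated comparison might look like on the Message class itself (field and method names are assumptions; ordering by the time a message becomes ready is the usual criterion):

    #include <cstdint>

    using Tick = uint64_t;

    class Message {
      public:
        explicit Message(Tick readyTime) : m_ready_time(readyTime) {}

        // The ordering now lives on Message: "greater" means ready later,
        // so a min-ordered container pops the earliest-ready message first.
        bool operator>(const Message &other) const {
            return m_ready_time > other.m_ready_time;
        }

      private:
        Tick m_ready_time;
    };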
2015-07-03  stats: Update stats for cache, crossbar and DRAM changes  (Andreas Hansson)
This update includes the changes to whole-line writes, the refinement of Read to ReadClean and ReadShared, the introduction of CleanEvict for snoop-filter tracking, and updates to the DRAM command scheduler for bank-group-aware scheduling. Needless to say, almost every regression is affected.
2015-07-03  mem: Increase the default buffer sizes for the DDR4 controller  (Andreas Hansson)
This patch increases the default read/write buffer sizes for the DDR4 controller config to values that are more suitable for the high bandwidth and high bank count.
2015-07-03  mem: Update DRAM command scheduler for bank groups  (Wendy Elsasser)
This patch updates the command arbitration so that bank group timing as well as rank-to-rank delays will be taken into account. The resulting arbitration no longer selects commands (prepped or not) that cannot issue seamlessly if there are commands that can issue back-to-back, minimizing the effect of rank-to-rank (tCS) & same bank group (tCCD_L) delays.

The arbitration selects a new command based on the following priority. Within each priority band, the arbitration will use FCFS to select the appropriate command:

1) Bank is prepped and burst can issue seamlessly, without a bubble

2) Bank is not prepped, but can prep and issue seamlessly, without a bubble

3) Bank is prepped but burst cannot issue seamlessly. In this case, a bubble will occur on the bus

Thus, to enable more parallelism in subsequent selections, an unprepped packet is given higher priority if the bank prep can be hidden. If the bank prep cannot be hidden, the selection logic will choose a prepped packet that cannot issue seamlessly if one exists. Otherwise, the default selection will choose the packet with the minimum bank prep delay.
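A hedged sketch of the three priority bands over an FCFS-ordered queue (the packet fields, timing helpers and the surrounding controller are illustrative assumptions):

    #include <cstdint>
    #include <vector>

    using Tick = uint64_t;

    struct DRAMPacket {
        bool bankPrepped;   // row already activated
        Tick readyAt;       // earliest tick the burst could be put on the bus
        Tick prepDoneAt;    // earliest tick an activate/prep would complete
    };

    // queue is assumed to be in FCFS order; busFreeAt is when the data bus
    // next becomes available.
    const DRAMPacket *chooseNext(const std::vector<DRAMPacket> &queue,
                                 Tick busFreeAt)
    {
        const DRAMPacket *band2 = nullptr;    // unprepped, prep can be hidden
        const DRAMPacket *band3 = nullptr;    // prepped, but causes a bubble
        const DRAMPacket *minPrep = nullptr;  // default: minimum prep delay
        Tick bestPrepDelay = ~Tick(0);

        for (const auto &pkt : queue) {
            bool seamless = pkt.readyAt <= busFreeAt;   // no bubble on the bus
            if (pkt.bankPrepped && seamless)
                return &pkt;                            // band 1: take it (FCFS)
            if (!pkt.bankPrepped && seamless && !band2)
                band2 = &pkt;                           // band 2 candidate
            if (pkt.bankPrepped && !seamless && !band3)
                band3 = &pkt;                           // band 3 candidate

            Tick prepDelay = (!pkt.bankPrepped && pkt.prepDoneAt > busFreeAt)
                                 ? pkt.prepDoneAt - busFreeAt : 0;
            if (prepDelay < bestPrepDelay) {
                bestPrepDelay = prepDelay;
                minPrep = &pkt;
            }
        }
        if (band2) return band2;
        if (band3) return band3;
        return minPrep;   // nothing better: minimise the bank prep delay
    }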
2015-07-03  mem: Avoid DRAM write queue iteration for merging and read lookup  (Andreas Hansson)
This patch adds a simple lookup structure to avoid iterating over the write queue to find read matches, and for the merging of write bursts. Instead of relying on iteration we simply store a set of currently-buffered write-burst addresses and compare against these. For the reads we still perform the iteration if we have a match. For the writes, we rely entirely on the set. Note that there are corner-cases where sub-bursts would actually not be mergeable without a read-modify-write. We ignore these cases and opt for speed.
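A hedged sketch of the lookup structure (the container choice and names are assumptions):

    #include <cstdint>
    #include <unordered_set>

    using Addr = uint64_t;

    struct WriteQueueIndex {
        // Burst-aligned addresses currently buffered in the write queue.
        std::unordered_set<Addr> isInWriteQueue;

        void addWrite(Addr burstAddr) { isInWriteQueue.insert(burstAddr); }
        void removeWrite(Addr burstAddr) { isInWriteQueue.erase(burstAddr); }

        // Writes: the set alone decides whether an incoming burst merges.
        bool canMerge(Addr burstAddr) const {
            return isInWriteQueue.count(burstAddr) != 0;
        }

        // Reads: only on a set hit do we fall back to iterating the write
        // queue to locate the matching entry for forwarding.
        bool mayForward(Addr burstAddr) const {
            return isInWriteQueue.count(burstAddr) != 0;
        }
    };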
2015-07-03  mem: Delay responses in the crossbar before forwarding  (Andreas Hansson)
This patch changes how the crossbar classes deal with responses. Instead of forwarding responses directly and burdening the neighbouring modules with paying for the latency (through the pkt->headerDelay), we now queue them before sending them. The coherency protocol is not affected as requests and any snoop requests/responses are still passed on in zero time. Thus, the responses end up paying for any header delay accumulated when passing through the crossbar. Any latency incurred on the request path will be paid for on the response side, if no other module has dealt with it.

As a result of this patch, responses are returned at a later point. This affects the number of outstanding transactions, and quite a few regressions see an impact in blocking due to no MSHRs, increased cache-miss latencies, etc. Going forward we should be able to use the same concept also for snoop responses, and any request that is not an express snoop.
2015-07-03  mem: Remove redundant is_top_level cache parameter  (Andreas Hansson)
This patch takes the final step in removing the is_top_level parameter from the cache. With the recent changes to read requests and write invalidations, the parameter is no longer needed, and consequently removed. This also means that asymmetric cache hierarchies are now fully supported (and we are actually using them already with L1 caches, but no table-walker caches, connected to a shared L2).
2015-07-03  mem: Split WriteInvalidateReq into write and invalidate  (Andreas Hansson)
WriteInvalidateReq ensures that a whole-line write does not incur the cost of first doing a read exclusive, only to later overwrite the data. This patch splits the existing WriteInvalidateReq into a WriteLineReq, which is done locally, and an InvalidateReq that is sent out throughout the memory system. The WriteLineReq re-uses the normal WriteResp.

The change allows us to better express the difference between the cache that is performing the write, and the ones that are merely invalidating. As a consequence, we no longer have to rely on the isTopLevel flag. Moreover, the actual memory in the system does not see the initial write, only the writeback. We were marking the written line as dirty already, so there is really no need to also push the write all the way to the memory.

The overall flow of the write-invalidate operation remains the same, i.e. the operation is only carried out once the response for the invalidate comes back. This patch adds the InvalidateResp for this very reason.
2015-07-03  mem: Add ReadCleanReq and ReadSharedReq packets  (Andreas Hansson)
This patch adds two new read request packets:

ReadCleanReq - For a cache to explicitly request clean data. The response is thus exclusive or shared, but not owned or modified. The read-only caches (see previous patch) use this request type to ensure they do not get dirty data.

ReadSharedReq - We add this to distinguish cache read requests from those issued by other masters, such as devices and CPUs. Thus, devices use ReadReq, and caches use ReadCleanReq, ReadExReq, or ReadSharedReq. For the latter, the response can be any state, shared, exclusive, owned or even modified.

Both ReadCleanReq and ReadSharedReq re-use the normal ReadResp. The two transactions are aligned with the emerging cache-coherent TLM standard and the AMBA nomenclature.

With this change, the normal ReadReq should never be used by a cache, and is reserved for the actual (non-caching) masters in the system. We thus have a way of identifying if a request came from a cache or not. The introduction of ReadSharedReq thus removes the need for the current isTopLevel hack, and also allows us to stop relying on checking the packet size to determine if the source is a cache or not. This is fixed in follow-on patches.