path: root/configs
Age  Commit message  Author
2016-01-22  ruby: changed all references to numCPs to num-cp  (Brad Beckmann)
2016-01-19  gpu-compute: AMD's baseline GPU model  (Tony Gutierrez)
2016-01-15  dev, arm: Add a platform with support for both aarch32 and aarch64  (Andreas Sandberg)
Add a platform with support for both aarch32 and aarch64. This platform implements a subset of the devices in a real Versatile Express and extends it with some gem5-specific functionality. It is in many ways similar to the old VExpress_EMM64 platform, but supports the following new features:

* Automatic PCI interrupt assignment
* PCI interrupts allocated in a contiguous range.
* Automatic boot loader selection (32-bit / 64-bit)
* Cleaner memory map where gem5-specific devices live in CS5 which isn't used by current Versatile Express platforms.
* No fake devices. Devices that were previously faked will be removed from the device tree instead.
* Support for 510 GiB contiguous memory
2016-01-11  configs: Fix inheritance of HMCSystem and cleanup spacing  (Andreas Hansson)
Minor fix to ensure the HMCSystem can actually be instantiated (the SimObject could not be created). Also address some spacing issues.
2016-01-07  config: Updates for distributed gem5 simulations  (Gabor Dozsa)
2015-12-17  configs: Make the default memtest behaviour more complex  (Andreas Hansson)
Add functional and uncacheable accesses by default.
2015-07-20  ruby: more flexible ruby tester support  (Brad Beckmann)
This patch allows the ruby random tester to use ruby ports that may only support instr or data requests. This patch is similar to a previous changeset (8932:1b2c17565ac8) that was unfortunately broken by subsequent changesets. This current patch implements the support in a more straightforward way.

Since retries are now tested when running the ruby random tester, this patch splits up the retry and drain check behavior so that RubyPort children, such as the GPUCoalescer, can perform those operations correctly without having to duplicate code.

Finally, the patch also includes better DPRINTFs for debugging the tester.
2015-12-07  config: Enable elastic trace capture and replay in se/fs  (Radhika Jagtap)
This patch adds changes to the configuration scripts to support elastic tracing and replay. The patch adds a command line option to enable elastic tracing in SE mode and FS mode. When enabled, the Elastic Trace CPU probe is attached to the O3CPU and a few O3 CPU parameters are tuned. The Elastic Trace probe writes out both instruction fetch and data dependency traces.

The patch also enables configuring the TraceCPU to replay traces using the SE and FS scripts. The replay run is designed to resume from a checkpoint using the atomic CPU to restore state, keeping it consistent with the FS run flow. It then switches to the TraceCPU to replay the input traces.
2015-12-05  dev: Rewrite PCI host functionality  (Andreas Sandberg)
gem5's current PCI host functionality is very ad hoc. The current implementations require PCI devices to be hooked up to the configuration space via a separate configuration port. Devices query the platform to get their config-space address range. Un-mapped parts of the config space are intercepted using the XBar's default port mechanism and a magic catch-all device (PciConfigAll).

This changeset redesigns the PCI host functionality to improve code reuse and make config-space and interrupt mapping more transparent. Existing platform code has been updated to use the new PCI host and configured to stay backwards compatible (i.e., no guest-side visible changes). The current implementation does not expose any new functionality, but it can easily be extended with features such as automatic interrupt mapping.

PCI devices now register themselves with a PCI host controller. The host controller interface is defined in the abstract base class PciHost. Registration is done by PciHost::registerDevice(), which takes the device, its bus position (bus/dev/func tuple), and its interrupt pin (INTA-INTC) as parameters. The registration interface returns a PciHost::DeviceInterface that the PCI device can use to query memory mappings and signal interrupts.

The host device manages the entire PCI configuration space. Accesses to the configuration space are decoded into the device's bus position and then forwarded to the correct device.

Basic PCI host functionality is implemented in the GenericPciHost base class. Most platforms can use this class as a basic PCI controller. It provides the following functionality:

* Configurable configuration space decoding. The number of bits dedicated to a device is a parameter, making it possible to support CAM, ECAM, and legacy mappings.
* Basic interrupt mapping using the interruptLine value from a device's configuration space. This behavior is the same as in the old implementation. More advanced controllers can override the interrupt mapping method to dynamically assign host interrupts to PCI devices.
* Simple (base + addr) remapping from the PCI bus's address space to physical addresses for PIO, memory, and DMA.
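On the Python side the new host is an ordinary SimObject. A hedged sketch of how a platform config might instantiate the generic controller follows; the parameter names (conf_base, conf_size, conf_device_bits, pci_pio_base) follow GenericPciHost, but the concrete addresses and values are illustrative assumptions, not taken from the patch:

    from m5.objects import GenericPciHost

    # Illustrative values only; a real platform picks addresses matching its memory map.
    pci_host = GenericPciHost(
        conf_base=0x30000000,     # base of the PCI configuration space
        conf_size='256MB',        # size of the configuration space window
        conf_device_bits=12,      # config-space bits per device: 12 ~ ECAM-style, 8 ~ legacy CAM
        pci_pio_base=0x2f000000)  # simple base + addr remapping for PIO accesses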
2015-12-04  arm, config: Automatically discover available platforms  (Andreas Sandberg)
Add support for automatically discovering available platforms. The Python side uses functionality similar to what we use when auto-detecting available CPU models. The machine IDs have been updated to match the platform configurations. If there isn't a matching machine ID, the configuration scripts default to -1, which Linux uses for device-tree-only platforms.
2015-11-22  config: Added missing types to JSON/INI Python reader  (Andrew Bardsley)
Added the missing types EthernetAddr and Current to the JSON/INI file reader example configs/example/read_config.py. Also added __str__ to EthernetAddr to make values appear in the same form in JSON and INI files.
2015-11-22  config: Minor fixes to the DRAM utilisation sweep  (Andreas Hansson)
2015-11-06  config: Update memtest to stress test clean writebacks  (Andreas Hansson)
This patch adds yet another twist to the memtest cache hierarchy, in that the writeback_clean option is toggled at every level to match the clusivity of the downstream cache.
2015-11-06  mem: Add an option to perform clean writebacks from caches  (Andreas Hansson)
This patch adds the necessary commands and cache functionality to allow clean writebacks. This functionality is crucial, especially when having exclusive (victim) caches. For example, if read-only L1 instruction caches are not sending clean writebacks, there will never be any spills from the L1 to the L2. At the moment the cache model defaults to not sending clean writebacks, and this should possibly be re-evaluated.

The implementation of clean writebacks relies on a new packet command, WritebackClean, which acts much like a Writeback (renamed WritebackDirty), and also much like a CleanEvict. On eviction of a clean block the cache either sends a clean evict or a clean writeback, and if any copies are still cached upstream the clean evict/writeback is dropped. Similarly, if a clean evict/writeback reaches a cache where there are outstanding MSHRs for the block, the packet is dropped. In the typical case, though, the clean writeback allocates a block in the downstream cache, and marks it writable if the evicted block was writable.

The patch changes the O3_ARM_v7a L1 cache configuration and the default L1 caches in config/common/Caches.py.
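In the Python configs the new behaviour is controlled by the writeback_clean cache parameter. A minimal sketch follows (values and the era's other Cache parameters, such as hit_latency, are illustrative and not taken from the patch itself):

    from m5.objects import Cache

    class L1DCache(Cache):
        size = '64kB'
        assoc = 2
        hit_latency = 2
        response_latency = 2
        mshrs = 4
        tgts_per_mshr = 20
        # Evict clean blocks as WritebackClean packets (instead of CleanEvict), so a
        # downstream victim cache can allocate them. Defaults to False.
        writeback_clean = True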
2015-11-06  config: Update memtest to stress test cache clusivity  (Andreas Hansson)
This patch adds a new twist to the memtest cache hierarchy, in that it switches from mostly inclusive to mostly exclusive at every level in the tree. This has helped weed out plenty of issues, and serves as a good stress test.
2015-11-06  mem: Add cache clusivity  (Andreas Hansson)
This patch adds a parameter to control the cache clusivity, that is, whether the cache is mostly inclusive or exclusive. At the moment there is no intention to support strict policies, and thus the options are: 1) mostly inclusive, or 2) mostly exclusive. The choice of policy guides the behaviour on a cache fill, and a new helper function, allocOnFill, is created to encapsulate the decision-making process. For the timing mode, the decision is annotated on the MSHR on sending out the downstream packet, and in atomic mode we directly pass the decision to handleFill. We (ab)use the tempBlock in cases where we are not allocating on fill, leaving the rest of the cache unaffected. Simple and effective.

This patch also makes it more explicit that multiple caches are allowed to consider a block writable (this was the case also before this patch). That is, for a mostly inclusive cache, multiple caches upstream may also consider the block exclusive. The caches considering the block writable/exclusive all appear along the same path to memory, and from a coherency protocol point of view it works due to the fact that we always snoop upwards in zero time before querying any downstream cache.

Note that this patch does not introduce clean writebacks. Thus, for clean lines we are essentially removing a cache level if it is made mostly exclusive. For example, lines from the read-only L1 instruction cache or table-walker cache are always clean, and simply get dropped rather than being passed to the L2. If the L2 is mostly exclusive and does not allocate on fill it will thus never hold the line. A follow-on patch adds the clean writebacks.

The patch changes the L2 of the O3_ARM_v7a CPU configuration to be mostly exclusive (and stats are affected accordingly).
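In the configs this shows up as the clusivity parameter. A minimal sketch of a mostly exclusive shared L2 (class body and values are illustrative, assuming the era's Cache parameter names):

    from m5.objects import Cache

    class L2Cache(Cache):
        size = '1MB'
        assoc = 16
        hit_latency = 12
        response_latency = 12
        mshrs = 32
        tgts_per_mshr = 12
        # 'mostly_excl': do not allocate on fills that merely pass data on to an
        # upstream cache; lines are mainly installed via writebacks from the L1s.
        # The L1s keep the default 'mostly_incl'.
        clusivity = 'mostly_excl'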
2015-11-04  configs: fix bug introduced due to 276ad9121192  (Nilay Vaish)
I had made a typo in changeset 276ad9121192. This changeset fixes it.
2015-11-03  mem: hmc: top level design  (Erfan Azarkhish)
This patch enables modeling a complete Hybrid Memory Cube (HMC) device. It highly reuses the existing components in gem5's general memory system with some small modifications. This changeset requires additional patches to model a complete HMC device. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-11-03  sparc: add missing parameter to makeSparcSystem()  (Palle Lyckegaard)
makeSparcSystem() in configs/common/FSConfig.py is missing the cmdLine parameter. Without the parameter the simulation fails to start. With the parameter the simulation starts properly.
2015-10-14  ruby: profiler: provide the number of vnets through ruby system  (Nilay Vaish)
The aim is to ultimately do away with the static function Network::getNumberOfVirtualNetworks().
2015-10-01  config: Fix 'learning gem5' configs after SMT push  (Andreas Hansson)
This patch updates the 'learning gem5' example scripts to match the recent push of the SMT patches.
2015-09-30  isa,cpu: Add support for FS SMT Interrupts  (Mitch Hayenga)
Adds per-thread interrupt controllers and thread/context logic so that interrupts properly get routed in SMT systems.
2015-09-30  config,cpu: Add SMT support to Atomic and Timing CPUs  (Mitch Hayenga)
Adds SMT support to the "simple" CPU models so that they can be used with other SMT-supported CPUs. Example usage: this enables the TimingSimpleCPU to be used to warm up caches before swapping to detailed mode with the in-order or out-of-order based CPU models.
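A minimal sketch of that warm-up flow using the standard m5.switchCpus() API; the surrounding system construction is elided and the variable names are placeholders:

    import m5
    # Assumes 'system' was built with system.cpu = [TimingSimpleCPU(...)] and a matching
    # set of detailed CPUs system.switch_cpus = [DerivO3CPU(..., switched_out=True)]
    # connected to the same caches.

    m5.instantiate()
    m5.simulate(1000000000)  # warm the caches with the simple CPU for some ticks
    m5.switchCpus(system, [(system.cpu[0], system.switch_cpus[0])])
    m5.simulate()            # continue in detailed (out-of-order) mode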
2015-09-25  util: Fix minor issues in DRAM sweep scripts  (Andreas Hansson)
This patch fixes a few issues in the sweep scripts, bringing them up-to-date with the latest memory configs and options.
2015-09-16  config: Add configs scripts used in Learning gem5  (Jason Lowe-Power)
Added a new directory in configs (learning_gem5) to hold the scripts that are used in the book. See http://lowepower.com/jason/learning_gem5/ for a working copy. For now, only the scripts in Part 1: Getting started with gem5 have been added. A separate patch adds tests for these scripts. Committed by: Nilay Vaish <nilay@cs.wisc.edu>
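For orientation, the Part 1 scripts are short standalone configs along these lines. This is a condensed, hedged sketch rather than the actual file; class names such as LiveProcess and DDR3_1600_x64 match the gem5 of this era, and the hello binary path is illustrative:

    import m5
    from m5.objects import *

    system = System()
    system.clk_domain = SrcClockDomain(clock='1GHz', voltage_domain=VoltageDomain())
    system.mem_mode = 'timing'
    system.mem_ranges = [AddrRange('512MB')]

    system.cpu = TimingSimpleCPU()
    system.membus = SystemXBar()
    # No caches yet in the first script: the CPU talks straight to the memory bus.
    system.cpu.icache_port = system.membus.slave
    system.cpu.dcache_port = system.membus.slave
    system.cpu.createInterruptController()  # X86 also needs its interrupt ports wired to the membus
    system.system_port = system.membus.slave

    system.mem_ctrl = DDR3_1600_x64()
    system.mem_ctrl.range = system.mem_ranges[0]
    system.mem_ctrl.port = system.membus.master

    process = LiveProcess(cmd=['tests/test-progs/hello/bin/x86/linux/hello'])
    system.cpu.workload = process
    system.cpu.createThreads()

    root = Root(full_system=False, system=system)
    m5.instantiate()
    event = m5.simulate()
    print('Exiting @ tick %i because %s' % (m5.curTick(), event.getCause()))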
2015-09-06  config: allow ruby to be used with Minor CPU  (Nilay Vaish)
2015-09-01  ruby: remove random seed  (Nilay Vaish)
We no longer use the C library based random number generator random(). Instead we use the RNG provided by the C++ library. So setting the random seed for the RubySystem class has no effect. Hence the variable and the corresponding option are being dropped.
2015-08-30  ruby: specify number of vnets for each protocol  (Nilay Vaish)
The default value for number of virtual networks is being removed. Each protocol should now specify the value it needs.
2015-08-21  mem: Add explicit Cache subclass and make BaseCache abstract  (Andreas Hansson)
Open up for other subclasses of BaseCache and transition to using the explicit Cache subclass.

--HG--
rename : src/mem/cache/BaseCache.py => src/mem/cache/Cache.py
2015-08-21  ruby: Move Rubys cache class from Cache.py to RubyCache.py  (Andreas Hansson)
This patch serves to avoid name clashes with the classic cache. For some reason having two 'SimObject' files with the same name creates problems.

--HG--
rename : src/mem/ruby/structures/Cache.py => src/mem/ruby/structures/RubyCache.py
2015-08-19  ruby: reverts to changeset: bf82f1f7b040  (Nilay Vaish)
2015-08-14  ruby: profiler: provide the number of vnets through ruby system  (Nilay Vaish)
The aim is to ultimately do away with the static function Network::getNumberOfVirtualNetworks().
2015-08-14  ruby: remove random seed  (Nilay Vaish)
We no longer use the C library based random number generator random(). Instead we use the RNG provided by the C++ library. So setting the random seed for the RubySystem class has no effect. Hence the variable and the corresponding option are being dropped.
2015-08-14  ruby: Protocol changes for SimObject MessageBuffers  (Joel Hestness)
2015-08-14  ruby: Expose MessageBuffers as SimObjects  (Joel Hestness)
Expose MessageBuffers from SLICC controllers as SimObjects that can be manipulated in Python. This patch has numerous benefits:

1) First and foremost, it exposes MessageBuffers as SimObjects that can be manipulated in Python code. This allows parameters to be set and checked in Python code to avoid obfuscating parameters within protocol files. Further, now as SimObjects, MessageBuffer parameters are printed to config output files as a way to track parameters across simulations (e.g. buffer sizes).

2) Cleans up special-case code for responseFromMemory buffers, and aligns their instantiation and use with mandatoryQueue buffers. These two special buffers are the only MessageBuffers that are exposed to components outside of SLICC controllers, and they're both slave ends of these buffers. They should be exposed outside of SLICC in the same way, and this patch does it.

3) Distinguishes buffer-specific parameters from buffer-to-network parameters. Specifically, buffer size, randomization, ordering, recycle latency, and ports are all specific to a MessageBuffer, while the virtual network ID and type are intrinsics of how the buffer is connected to network ports. The former are specified in the Python object, while the latter are specified in the controller *.sm files. Unlike buffer-specific parameters, which may need to change depending on the simulated system structure, buffer-to-network parameters can be specified statically for most or all different simulated systems.
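Concretely, a protocol config can now do something along these lines. This is a hedged fragment: the controller object and buffer member names are protocol-specific placeholders, while ordered and buffer_size are MessageBuffer parameters:

    from m5.objects import MessageBuffer

    # l1_cntrl is a SLICC-generated controller SimObject created elsewhere in the config.
    l1_cntrl.mandatoryQueue = MessageBuffer()                  # sequencer-to-controller queue
    l1_cntrl.requestFromL1Cache = MessageBuffer(ordered=True)  # buffer-specific: ordering
    l1_cntrl.responseToL1Cache = MessageBuffer(buffer_size=8)  # buffer-specific: finite size (0 = infinite)
    # The virtual network ID and type stay in the controller's *.sm file, not here.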
2015-08-14  ruby: Remove the RubyCache/CacheMemory latency  (Joel Hestness)
The RubyCache (CacheMemory) latency parameter is only used for top-level caches instantiated for Ruby coherence protocols. However, the top-level cache hit latency is assessed by the Sequencer as accesses flow through to the cache hierarchy. Further, protocol state machines should be enforcing these cache hit latencies, but RubyCaches do not expose their latency to any existing state machines through the SLICC/C++ interface. Thus, the RubyCache latency parameter is superfluous for all caches. This is confusing for users.

As a step toward pushing L0/L1 cache hit latency into the top-level cache controllers, move their latencies out of the RubyCache declarations and over to their Sequencers. Eventually, these Sequencer parameters should be exposed as parameters to the top-level cache controllers, which should assess the latency.

NOTE: Assessing these latencies in the cache controllers will require modifying each to eliminate instantaneous Ruby hit callbacks in transitions that finish accesses, which is likely a large undertaking.
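After this change the hit latencies would be set on the Sequencer rather than on the RubyCache objects. A hedged sketch; the latency parameter names below (icache_hit_latency, dcache_hit_latency) are assumptions based on this description:

    from m5.objects import RubyCache, RubySequencer

    l1i_cache = RubyCache(size='32kB', assoc=2)   # no latency parameter any more
    l1d_cache = RubyCache(size='64kB', assoc=2)
    sequencer = RubySequencer(icache=l1i_cache,
                              dcache=l1d_cache,
                              icache_hit_latency=2,   # assumed parameter names; the hit
                              dcache_hit_latency=2)   # latency is now assessed by the Sequencer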
2015-08-03  misc: Coupling gem5 with SystemC TLM2.0  (Matthias Jung)
Transaction Level Modeling (TLM2.0) is widely used in industry for creating virtual platforms (IEEE 1666 SystemC). This patch contains a standard-compliant implementation of an external gem5 port that enables the usage of gem5 as a TLM initiator component in SystemC based virtual platforms. Both TLM coding paradigms, loosely timed (b_transport) and approximately timed (nb_transport), are supported.

Compared to the original patch, a TLM memory manager was added. Furthermore, the transaction object was removed, and for each TLM payload a PacketPointer that points to the original gem5 packet is added as a TLM extension. For event handling, single events are now created.

Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-08-03  ruby: correctly number the sequencer in MESI_Three_Level.py  (Nilay Vaish)
2015-07-20  config: add base class for ruby controllers  (David Hashe)
The CntrlBase python class handles configuration parameters such as running counts of controllers and sequencers.
2015-07-20  ruby: initialize replacement policies with their own simobjs  (David Hashe)
This is in preparation for other replacement policies that take additional parameters.
2015-07-21  configs: network test: remove redundant physical memory  (Nilay Vaish)
2015-07-10  ruby: remove extra whitespace and correct misspelled words  (Brandon Potter)
2015-07-04  config: Update location of ruby topologies in help  (David Hashe)
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
2015-07-03  mem: Remove redundant is_top_level cache parameter  (Andreas Hansson)
This patch takes the final step in removing the is_top_level parameter from the cache. With the recent changes to read requests and write invalidations, the parameter is no longer needed, and consequently removed. This also means that asymmetric cache hierarchies are now fully supported (and we are actually using them already with L1 caches, but no table-walker caches, connected to a shared L2).
2015-07-03  mem: Allow read-only caches and check compliance  (Andreas Hansson)
This patch adds a parameter to the BaseCache to enable a read-only cache, for example for the instruction cache, or table-walker cache (not for x86). A number of checks are put in place in the code to ensure a read-only cache does not end up with dirty data. A follow-on patch adds suitable read requests to allow a read-only cache to explicitly ask for clean data.
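The parameter itself is a one-liner in the Python configs. A minimal sketch of marking an L1 instruction cache read-only (values illustrative, assuming the era's Cache parameter names):

    from m5.objects import Cache

    class L1ICache(Cache):
        size = '32kB'
        assoc = 2
        hit_latency = 2
        response_latency = 2
        mshrs = 4
        tgts_per_mshr = 20
        # The cache now checks that it never ends up holding dirty data.
        is_read_only = True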
2015-06-01  kvm, arm: Add support for aarch64  (Andreas Sandberg)
This changeset adds support for aarch64 in kvm. The CPU module supports both checkpointing and online CPU model switching as long as no devices are simulated by the host kernel. It currently has the following limitations:

* The system register based generic timer can only be simulated by the host kernel. Workaround: Use a memory mapped timer instead to simulate the timer in gem5.
* Simulating devices (e.g., the generic timer) in the host kernel requires that the host kernel also simulates the GIC.
* ID registers in the host and in gem5 must match for switching between simulated CPUs and KVM. This is particularly important for ID registers describing memory system capabilities (e.g., ASID size, physical address size).
* Switching between a virtualized CPU and a simulated CPU is currently not supported if in-kernel device emulation is used. This could be worked around by adding support for switching to the gem5 (e.g., the KvmGic) side of the device models. A simpler workaround is to avoid in-kernel device models altogether.
2015-05-15  config: Use null memory for DRAM sweep script  (Andreas Hansson)
Do not waste time when we do not care about the data.
2015-05-15  config: Add new MemConfig options to DRAM sweep script  (Wendy Elsasser)
Update script to match current MemConfig options with external_memory_system option set to 0.
2015-05-05  arch, cpu: Do not forward snoops to table walker  (Andreas Hansson)
This patch simplifies the overall CPU by changing the TLB caches such that they do not forward snoops to the table walker port(s). Note that only ARM and X86 are affected. There is no reason for the ports to snoop as they do not actually take any action, and from a performance point of view we are better off not snooping more than we have to. Should it at a later point be required to snoop for a particular TLB design, it is easy enough to add it back.
2015-04-29  cpu: o3: replace issueLatency with bool pipelined  (Nilay Vaish)
Currently, each op class has a parameter issueLat that denotes the cycles after which another op of the same class can be issued. As of now, this latency can either be one cycle (fully pipelined) or the same as the execution latency of the op (not at all pipelined). The fact that issueLat is a parameter of type Cycles makes one believe that it can be set to any value. To avoid the confusion, the parameter is being renamed to 'pipelined', with type boolean. If set to true, the op would execute in a fully pipelined fashion. Otherwise, it would execute in an unpipelined fashion.
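In the functional unit configs this replaces the old per-op issue latency with a boolean. A hedged sketch in the style of configs/common/FuncUnitConfig.py; the op classes, latencies, and unit count are illustrative:

    from m5.objects import FUDesc, OpDesc

    class FP_MultDiv(FUDesc):
        opList = [
            OpDesc(opClass='FloatMult', opLat=4, pipelined=True),   # can accept a new op every cycle
            OpDesc(opClass='FloatDiv', opLat=12, pipelined=False),  # next op waits the full opLat
        ]
        count = 2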