This patch changes the CPU and cache configurations used in the ARM SE and FS
regressions to make them more representative, and to get better code
coverage by exercising different replacement policies and using an L2
prefetcher.
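As a rough sketch of the intent (class names are from the classic memory
system and purely illustrative, not the exact regression values):

    # hypothetical L2 exercising a non-LRU replacement policy and a prefetcher
    from m5.objects import BaseCache, RandomRepl, StridePrefetcher
    l2 = BaseCache(size='1MB', assoc=16)
    l2.tags = RandomRepl()              # alternative replacement policy
    l2.prefetcher = StridePrefetcher()  # L2 prefetcher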
|
|
This patch changes the name of the Bus classes to XBar to better
reflect the actual timing behaviour. The actual instances in the
config scripts are not renamed, and remain as e.g. iobus or membus.
As part of this renaming, the code has also been cleaned up slightly,
making use of range-based for loops and tidying up some comments. The
only changes outside the bus/crossbar code are due to the delay
variables in the packet.
--HG--
rename : src/mem/Bus.py => src/mem/XBar.py
rename : src/mem/coherent_bus.cc => src/mem/coherent_xbar.cc
rename : src/mem/coherent_bus.hh => src/mem/coherent_xbar.hh
rename : src/mem/noncoherent_bus.cc => src/mem/noncoherent_xbar.cc
rename : src/mem/noncoherent_bus.hh => src/mem/noncoherent_xbar.hh
rename : src/mem/bus.cc => src/mem/xbar.cc
rename : src/mem/bus.hh => src/mem/xbar.hh
|
|
This patch adds a basic regression test for the snoop filter.
--HG--
rename : tests/configs/memtest.py => tests/configs/memtest-filter.py
|
|
This patch avoids building the 'inorder' CPU model for any permutation
of ALPHA, and also removes the ALPHA regressions using the 'inorder'
CPU. The 'minor' CPU is already providing a broader test coverage.
|
|
This patch changes the CPU configuration used for the full-system ARM
regressions to increase the test coverage. Note that only the core
configuration is changed, not the caches etc.
|
|
This patch fixes scripts related to ruby by adding the ruby clock domain.
Now the L1 controllers and the Sequencer share the cpu clock domain,
while the rest of the components use the ruby clock domain.
Before this patch, running simulations with the cpu clock set at 2GHz or
1GHz would output the same time results, which could distort power measurements.
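A minimal sketch of the resulting split (attribute names assumed from the
example ruby scripts):

    # ruby components run in their own clock domain ...
    system.ruby.clk_domain = SrcClockDomain(
        clock=options.ruby_clock, voltage_domain=system.voltage_domain)
    for l1_cntrl, seq in zip(l1_cntrls, cpu_sequencers):
        # ... while L1 controllers and Sequencers track the cpu clock
        l1_cntrl.clk_domain = system.cpu_clk_domain
        seq.clk_domain = system.cpu_clk_domain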
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
This patch adds regression test results and test harnesses
for the Minor CPU on ARM and ALPHA.
|
|
This patch reflects the recent name change in the DRAM TrafficGen
tests and also tidies up the test directory.
--HG--
rename : tests/configs/tgen-simple-dram.py => tests/configs/tgen-dram-ctrl.py
rename : tests/quick/se/70.tgen/ref/null/none/tgen-simple-dram/config.ini => tests/quick/se/70.tgen/ref/null/none/tgen-dram-ctrl/config.ini
rename : tests/quick/se/70.tgen/ref/null/none/tgen-simple-dram/simerr => tests/quick/se/70.tgen/ref/null/none/tgen-dram-ctrl/simerr
rename : tests/quick/se/70.tgen/ref/null/none/tgen-simple-dram/simout => tests/quick/se/70.tgen/ref/null/none/tgen-dram-ctrl/simout
rename : tests/quick/se/70.tgen/ref/null/none/tgen-simple-dram/stats.txt => tests/quick/se/70.tgen/ref/null/none/tgen-dram-ctrl/stats.txt
rename : tests/quick/se/70.tgen/tgen-simple-dram.cfg => tests/quick/se/70.tgen/tgen-dram-ctrl.cfg
|
|
Splits the CommMonitor trace_file parameter into three parameters. Previously,
the trace was only enabled if the trace_file parameter was set, and would be
written to this file. This patch adds trace_enable and trace_compress
parameters to the CommMonitor.
No trace is generated if trace_enable is set to False. If it is set to True, the
trace is written to a file based on the name of the SimObject in the simulation
hierarchy. For example, system.cluster.il1_commmonitor.trc. This filename can be
overridden by additionally specifying a file name to the trace_file parameter
(more on this later).
The trace_compress parameter will append .gz to any filename if set to True.
This enables compression of the generated traces. If the file name already ends
in .gz, then no changes are made.
The trace_file parameter overrides the automatically generated file
name. If the specified name does not end in .gz but trace_compress is
set to True, .gz is appended to the supplied file name.
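Putting the three parameters together, a minimal (hypothetical)
configuration looks like:

    monitor = CommMonitor()
    monitor.trace_enable = True       # no trace is generated if False
    monitor.trace_compress = True     # appends .gz unless already present
    # optional: override the hierarchy-derived file name; with
    # trace_compress set, this becomes il1_trace.trc.gz
    monitor.trace_file = 'il1_trace.trc'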
|
|
|
|
The patch removes the ruby_fs.py file. The functionality is being moved to
fs.py. This brings ruby fs simulations in line with how ruby se
simulations are started (using the --ruby option). The alpha fs config functions
are being combined for the classic and ruby memory systems. This required
renaming the piobus in ruby to iobus. As a result, some stats are renamed
in the stats file for the ruby fs regression.
|
|
Piobus was recently added to se scripts for ruby so that the interrupt
controller can be connected to something (required since the interrupt
controller sends address range messages). This patch removes the piobus;
instead, the pio port of the ruby port now ignores the range change
messages in se mode.
|
|
A couple of errors were discovered in 4eec7bdde5b0 which necessitated this patch.
Firstly, we create interrupt controllers in se mode, but no piobus was
being created. RubyPort, which earlier used to ignore range changes, now
forwards those to the piobus. The lack of a piobus resulted in a segmentation
fault. This patch creates a piobus even in se mode; it is only omitted
when a tester is running. Secondly, port connections for the other
coherence protocols had not been updated.
|
|
Currently, the interrupt controller in x86 is connected to the io bus
directly. Therefore the packets between the io devices and the interrupt
controller do not go through ruby. This patch changes ruby port so that
these packets arrive at the ruby port first, which then routes them to their
destination. Note that the patch does not make these packets go through the
ruby network. That would happen in a subsequent patch.
|
|
For some reason, the default x86 kernel is specified in
tests/configs/x86_generic.py and not in configs/common/FSConfig.py,
where the kernels for all the other ISAs are. This means that
running configs/example/fs.py for x86 fails because no kernel
is specified. Moving the specification over fixes this problem.
There is another problem that this uncovers, which is that going
past the init stage (i.e., past where the regression test stops)
fails because the fsck test on the disk device fails, but that's
a separate issue.
|
|
The output from the switcheroo tests is voluminous and
(because it includes timestamps) highly sensitive to
minor changes, leading to extremely large updates to the
reference outputs. This patch addresses this problem
by suppressing output from the tests. An internal
parameter can be set to enable the output. Wiring that
up to a command-line flag (perhaps even the rudimentary
-v/-q options in m5/main.py) is left for future work.
|
|
Keep it simple and use the SimpleMemory rather than the DRAM
controller model for atomic full-system tests.
|
|
The number of transitions per cycle that a controller can carry out is
a proxy for the number of ports that a controller has. This value is
currently 32, which is far too high. The patch introduces an option
for the number of ports and uses this option in the protocol files
to set the number of transitions. The default value is set to
4. None of the se regressions change. Ruby stats for the fs regression
change and are being updated.
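A sketch of how the option could feed the protocol configuration (option
and parameter names assumed):

    parser.add_option("--ports", type="int", default=4,
                      help="transitions per cycle for each Ruby controller")
    # later, in the protocol configuration:
    l1_cntrl.transitions_per_cycle = options.ports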
|
|
This patch changes the default parameter value of conf_table_reported
to match the common case. It also simplifies the regression and config
scripts to reflect this change.
|
|
This patch adds the notion of voltage domains, and groups clock
domains that operate under the same voltage (i.e. power supply) into
such domains. Each clock domain is required to be associated with a voltage
domain, and the latter requires the voltage to be explicitly set.
A voltage domain is an independently controllable voltage supply
provided to a section of the design. Thus, if you wish to perform
dynamic voltage scaling on a CPU, its clock domain should be
associated with a separate voltage domain.
The current implementation of the voltage domain does not take into
consideration cases where there are derived voltage domains running at
a ratio of native voltage domains, as with the case where there can be
on-chip buck/boost (charge pump) voltage regulation logic.
The regression and configuration scripts are updated with a generic
voltage domain for the system, and one for the CPUs.
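A minimal sketch of the resulting setup (values illustrative):

    system.voltage_domain = VoltageDomain(voltage='1V')
    system.clk_domain = SrcClockDomain(clock='1GHz',
                                       voltage_domain=system.voltage_domain)
    # a separate voltage domain for the CPUs allows them to be
    # voltage-scaled independently of the rest of the system
    system.cpu_voltage_domain = VoltageDomain(voltage='1V')
    system.cpu_clk_domain = SrcClockDomain(
        clock='2GHz', voltage_domain=system.cpu_voltage_domain)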
|
|
This patch moves the instantiation of the memory controller outside
FSConfig and instead relies on the mem_ranges to pass the information
to the caller (e.g. fs.py or one of the regression scripts). The main
motivation for this change is to expose the structural composition of
the memory system and allow more tuning and configuration without
adding a large number of options to the makeSystem functions.
The patch updates the relevant example scripts to maintain the current
functionality. As the order in which ports are connected to the memory bus
changes (in certain regressions), some bus stats are shuffled
around. For example, what used to be layer 0 is now layer 1.
Going forward, options will be added to support the addition of
multi-channel memory controllers.
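In a caller such as fs.py the pattern is roughly (memory controller class
name illustrative):

    test_sys = makeLinuxAlphaSystem(test_mem_mode, bm[0])
    # the system only advertises its ranges; the caller builds the controller
    test_sys.mem_ctrl = SimpleDDR3(range=test_sys.mem_ranges[0])
    test_sys.mem_ctrl.port = test_sys.membus.master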
|
|
The configs for pc-simple-timing-ruby and t1000-simple-atomic had not been
updated correctly in the patch 6e6cefc1db1f.
|
|
This patch adds the notion of source- and derived-clock domains to the
ClockedObjects. As such, all clock information is moved to the clock
domain, and the ClockedObjects are grouped into domains.
The clock domains are either source domains, with a specific clock
period, or derived domains that have a parent domain and a divider
(potentially chained). For a piece of logic that runs at a derived clock
(a ratio of the clock its parent is running at), the necessary derived
clock domain is created from its corresponding parent clock
domain. For now, the derived clock domain only supports a divider,
thus ensuring a lower speed compared to its parent. Multiplier
functionality implies a PLL logic that has not been modelled yet
(create a separate clock instead).
The clock domains should be used as a mechanism to provide a
controllable clock source that affects the clock of every clocked object
lying beneath it. The clock of the domain can (in a future patch) be
controlled by a handler responsible for dynamic frequency scaling of
the respective clock domains.
All the config scripts have been retro-fitted with clock domains. For
the System a default SrcClockDomain is created. For CPUs that run at a
different speed than the system, a separate clock domain is
created. This domain incorporates the CPU and the associated
caches. As before, Ruby runs under its own clock domain.
The clock period of all domains is pre-computed, such that no virtual
functions or multiplications are needed when calling
clockPeriod. Instead, the clock period is pre-computed when any
changes occur. For this to be possible, each clock domain tracks its
children.
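A sketch of the two domain types (parameter names assumed):

    # source domains have an explicit period
    system.clk_domain = SrcClockDomain(clock='1GHz')
    system.cpu_clk_domain = SrcClockDomain(clock='2GHz')
    # derived domains only support a divider, so they can never be
    # faster than the (potentially chained) parent that feeds them
    system.l2_clk_domain = DerivedClockDomain(
        clk_domain=system.cpu_clk_domain, clk_divider=2)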
|
|
This patch extends the existing system builders to also include a
syscall-emulation builder. This builder is deployed in all
syscall-emulation regressions that do not involve Ruby,
i.e. o3-timing, simple-timing and simple-atomic, as well as the
multi-processor regressions o3-timing-mp, simple-timing-mp and
simple-atomic-mp (the latter are only used by SPARC at this point).
The values chosen for the cache sizes match those that were used in
the existing config scripts (despite being on the large
side). Similarly, a mem_class parameter is added to the builder base
class to enable simple-atomic to use SimpleMemory and o3-timing to use
the default DDR3 configuration.
Due to the different order in which the ports are connected, the bus stats get
shuffled around for the multi-processor regressions. A separate patch
bumps the port indices. Besides this, all behaviour is exactly the
same.
|
|
This patch adds a 'sys_clock' command-line option and uses it to assign
clocks to the system during instantiation.
As part of this change, the default clock in the System class is
removed and whenever a system is instantiated a system clock value
must be set. A default value is provided for the command-line option.
The configs and tests are updated accordingly.
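Usage is then along the lines of (flag spelling assumed):

    ./build/ARM/gem5.opt configs/example/fs.py --sys-clock=1GHz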
|
|
This patch removes the explicit setting of the clock period for
certain instances of CoherentBus, NonCoherentBus and IOCache where the
specified clock is the same as the default value of the system clock. As
all the values used are the defaults, there are no performance
changes. There are similar cases where the toL2Bus is set to use the
parent CPU clock which is already the default behaviour.
The main motivation for these simplifications is to ease the
introduction of clock domains.
|
|
This patch changes the class names of the various DRAM configurations
to better reflect what memory they are based on. The speed and
interface width is now part of the name, and also the alias that is
used to select them on the command line.
Some minor changes are done to the actual parameters, to better
reflect the named configurations. As a result of these changes the
regressions change slightly and the stats will be bumped in a separate
patch.
|
|
This patch adds the memory type parameter to the t1000 regression.
|
|
|
|
This patch enables selection of the memory controller class through a
mem-type command-line option. Behind the scenes, this option is
treated much like the cpu-type, and a similar framework is used to
resolve the valid options, and translate the short-hand description to
a valid class.
The regression scripts are updated with a hardcoded memory class for
the moment. The best solution going forward is probably to get the
memory out of the makeSystem functions, but Ruby complicates things as
it does not connect the memory controller to the membus.
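The resolution mirrors CpuConfig (helper and alias names assumed from the
rename):

    parser.add_option("--mem-type", type="choice", default="simple_mem",
                      choices=MemConfig.mem_names())
    # later, when building the system:
    mem_cls = MemConfig.get(options.mem_type)
    system.mem_ctrl = mem_cls(range=system.mem_ranges[0])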
--HG--
rename : configs/common/CpuConfig.py => configs/common/MemConfig.py
|
|
This changeset adds support for initializing a KVM VM in the
BaseSystem test class and adds the following methods in run.py:
require_file -- Test if a file exists and abort/skip if not.
require_kvm -- Test if KVM support has been compiled into gem5 (i.e.,
BaseKvmCPU exists) and the KVM device exists on the
host.
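A rough sketch of the file check (the skip helper is a hypothetical
placeholder, not the actual implementation):

    def require_file(path):
        # abort/skip the test if a required file is missing
        if not os.path.exists(path):
            skip_test("required file '%s' not found" % path)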
|
|
Add the options 'panic_on_panic' and 'panic_on_oops' to the
LinuxArmSystem SimObject. When these options are enabled, the simulator
panics when the guest kernel panics or oopses. Enable panic on panic
and panic on oops in ARM-based test cases.
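In a test configuration this amounts to:

    system = LinuxArmSystem()
    system.panic_on_panic = True  # gem5 panics when the guest kernel panics
    system.panic_on_oops = True   # likewise for a guest kernel oops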
|
|
This patch removes the functional copy of the memory that was maintained in
the se mode. Now ruby itself will provide the data.
|
|
CPU switching consists of the following steps:
1. Drain the system
2. Switch out old CPUs (cpu.switchOut())
3. Change the system timing mode to the mode the new CPUs require
4. Flush caches if switching to hardware virtualization
5. Inform new CPUs of the handover (cpu.takeOverFrom())
6. Resume the system
m5.switchCpus() previously only did steps 2 and 5. Since information
about the new processors' memory system requirements is now exposed,
do all of the steps above.
This patch adds automatic memory system switching and flush (if
needed) to switchCpus(). Additionally, it adds optional draining to
switchCpus(). This has the following implications:
* changeToTiming and changeToAtomic are no longer needed, so they have
been removed.
* changeMemoryMode is only used internally, so it has been renamed
to be private.
* switchCpus requires a reference to the system containing the CPUs as
its first parameter.
WARNING: This changeset breaks compatibility with existing
configuration scripts since it changes the signature of
m5.switchCpus().
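After this change a switching script looks roughly like this (the drain,
memory-mode change and cache flush now happen inside the call):

    np = len(system.cpu)
    switch_cpu_list = [(system.cpu[i], switch_cpus[i]) for i in xrange(np)]
    m5.instantiate()
    m5.simulate(1000000)  # run for a while on the original CPUs
    # note the new first parameter: the system that contains the CPUs
    m5.switchCpus(system, switch_cpu_list)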
|
|
This patch moves the default DRAM parameters from the SimpleDRAM class
to two different subclasses, one for DDR3 and one for LPDDR2. More can
be added as we go forward.
The regressions that previously used the SimpleDRAM are now using
SimpleDDR3 as this is the most similar configuration.
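The subclasses simply preset the SimpleDRAM timing parameters (values
below are illustrative, not the actual defaults):

    class SimpleDDR3(SimpleDRAM):
        # DDR3-like device timings
        tRCD = '13.75ns'
        tCL = '13.75ns'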
|
|
This changeset adds a set of tests that stress the CPU switching
code. It adds the following test configurations:
* tsunami-switcheroo-full -- Alpha system (atomic, timing, O3)
* realview-switcheroo-atomic -- ARM system (atomic<->atomic)
* realview-switcheroo-timing -- ARM system (timing<->timing)
* realview-switcheroo-o3 -- ARM system (O3<->O3)
* realview-switcheroo-full -- ARM system (atomic, timing, O3)
Reference data is provided for the 10.linux-boot test case. All of the
tests trigger a CPU switch once per millisecond during the boot
process.
The in-order CPU model was not included in any of the tests as it does
not support CPU handover.
|
|
This patch generalises the address range resolution for the I/O cache
and I/O bridge such that they do not assume a single memory. The patch
involves adding a parameter to the system which is then defined based
on the memories that are to be visible from the I/O subsystem, whether
behind a cache or a bridge.
The change is needed to allow interleaved memory controllers in the
system.
|
|
This patch adds support for reading input traces encoded using
protobuf according to what is done in the CommMonitor.
A follow-up patch adds a Python script that can be used to convert the
previously used ASCII traces to protobuf equivalents. The appropriate
regression input is updated as part of this patch.
|
|
This patch adds packet tracing to the communication monitor using a
protobuf as the mechanism for creating the trace.
If no file is specified, then the tracing is disabled. If a file is
specified, then for every packet that is successfully sent, a protobuf
message is serialized to the file.
|
|
Prior to this change, we didn't always set the memory mode, which worked as
long as we never attempted to switch CPUs or check that a CPU was in a
memory system with the correct mode. Future changes will make CPUs verify
that they're operating in the correct mode, and thus we need to always set it.
|
|
Most of the test cases currently contain a large amount of duplicated
boilerplate code. This changeset introduces a set of classes that
encapsulates most of the functionality when setting up a test
configuration.
The following base classes are introduced:
* BaseSystem - Basic system configuration that can be used for both
SE and FS simulation.
* BaseFSSystem - Basic FS configuration for uni-processor and multi-processor
configurations.
* BaseFSSystemUniprocessor - Basic FS configuration for uni-processor
configurations. This is provided as a way
to make existing test cases backwards
compatible.
Architecture specific implementations are provided for ARM, Alpha, and
X86.
|
|
This patch updates the stats for the regressions that were affected by
the typo in the simple-atomic-mp configuration.
|
|
This patch fixes a minor typo that managed to sneak into the
simple-atomic-mp regression configuration.
|
|
This patch unifies the L1 and L2 caches used throughout the
regressions instead of declaring different, but very similar,
configurations in the different scripts.
The patch also changes the default L2 configuration to match what it
used to be for the fs and se scripts (until the last patch that
updated the regressions to also make use of the cache config). The
MSHRs and targets per MSHR are now set to a more realistic default of
20 and 12, respectively.
As a result of both the aforementioned changes, many of the regression
stats are changed. A follow-on patch will bump the stats.
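The new defaults in the shared config amount to (class name as used in the
common cache configuration):

    class L2Cache(BaseCache):
        mshrs = 20          # more realistic defaults than before
        tgts_per_mshr = 12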
|
|
This patch unifies the naming of the default L1 and L2 caches in the
regression configs to be in line with what is used in the se and fs
scripts.
|
|
This patch favours using SimpleDRAM with the default timing instead of
SimpleMemory for all regressions that involve the o3 or inorder CPU,
or are full system (in other words, where the actual performance of
the memory is important for the overall performance).
Moving forward, the solution for FSConfig and the users of fs.py and
se.py is probably something similar to what we use to choose the CPU
type. I envision a few pre-set configurations SimpleLPDDR2,
SimpleDDR3, etc. that can be chosen by a dram_type option. Feedback on
this part is welcome.
This patch changes plenty of stats and adds all the DRAM controller
related stats. A follow-on patch updates the relevant statistics. The
total run-time for the entire regression goes up by ~5% with this
patch due to the added complexity of the SimpleDRAM model. This is a
conscious trade-off to ensure that the model is properly tested.
|
|
This patch uses the common L1, L2 and IOCache configuration for the
regressions that all share the same cache parameters. There are a few
regressions that use a slightly different configuration (memtest,
o3-timing-mp, simple-atomic-mp and simple-timing-mp), and the latter
are not changed in this patch. They will be updated in a future patch.
The common cache configurations are changed to match the ones used in
the regressions, and are slightly changed with respect to what they
were. Hopefully this means we can converge on a common base
configuration, used both in the normal user configurations and
regressions.
As only regressions that shared the same cache configuration are
updated, no regressions are affected.
|
|
This patch changes the cache-related latencies from an absolute time
expressed in Ticks, to a number of cycles that can be scaled with the
clock period of the caches. Ultimately this patch serves to enable
future work that involves dynamic frequency scaling. As an immediate
benefit it also makes it more convenient to specify cache performance
without implicitly assuming a specific CPU core operating frequency.
The stat blocked_cycles, which actually counted in ticks, is now updated
to count in cycles.
As the timing is now rounded to the clock edges of the cache, there
are some regressions that change. Plenty of them have very minor
changes, whereas some regressions with a short run-time are perturbed
quite significantly. A follow-on patch updates all the statistics for
the regressions.
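In the config scripts this turns absolute times into cycle counts that
scale with the cache clock (parameter names assumed):

    # previously e.g. latency='1ns'; now in cycles of the cache's own clock
    system.cpu.dcache = BaseCache(size='32kB', assoc=2,
                                  hit_latency=2, response_latency=2)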
|
|
This patch changes the memtest clock from 1THz (the default) to 2GHz,
similar to the CPUs in the other regressions. This is useful as the
caches will adopt the same clock as the CPU. The bus clock rate is
scaled accordingly, and the L1-L2 bus is kept at the CPU clock while
the memory bus is at half that frequency.
A separate patch updates the affected stats.
|
|
This patch unifies the full-system regression config scripts and uses
the BaseCPU convenience method addTwoLevelCacheHierarchy to connect up
the L1s and L2, and create the bus in between.
The patch is a step on the way to use the clock period to express the
cache latencies, as the CPU is now the parent of the L1, L2 and L1-L2
bus, and these modules thus use the CPU clock.
The patch does not change the value of any stats, but plenty of names,
and a follow-up patch contains the update to the stats, changing
system.l2c to system.cpu.l2cache.
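The convenience method replaces the hand-wired caches and bus (cache
class names illustrative):

    system.cpu.addTwoLevelCacheHierarchy(L1Cache(size='32kB'),
                                         L1Cache(size='32kB'),
                                         L2Cache(size='2MB'))
    # creates cpu.toL2Bus and cpu.l2cache under the CPU, which is why
    # system.l2c becomes system.cpu.l2cache in the stats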
|