|
This patch ensures that the CPU progress event is triggered for the new set of
switched_cpus that get scheduled (e.g. during fast-forwarding). It also avoids
printing the interval state if the CPU is currently switched out.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
When using gem5 as a slave simulator, it will not advance the
clock on its own but depends on the master simulator calling
simulate(). This new option lets us use the Python scripts
to do all the configuration while stopping short of actually
simulating anything.
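A minimal sketch of how a config script might honour such a flag; the
option name used here is only an illustration and may not match the one
this patch actually adds:

    # Hypothetical flag name, shown for illustration only.
    parser.add_option("--initialize-only", action="store_true", default=False,
                      help="Instantiate the configuration, but do not simulate")
    ...
    m5.instantiate()
    if not options.initialize_only:
        exit_event = m5.simulate()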
|
|
More documentation at http://gem5.org/Simpoints
Steps to profile, generate, and use SimPoints with gem5:
1. To profile workload and generate SimPoint BBV file, use the
following option:
--simpoint-profile --simpoint-interval <interval length>
Requires a single Atomic CPU and fastmem.
<interval length> is in number of instructions.
2. Generate SimPoint analysis using SimPoint 3.2 from UCSD.
(SimPoint 3.2 not included with this flow.)
3. To take gem5 checkpoints based on SimPoint analysis, use the
following option:
--take-simpoint-checkpoint=<simpoint file path>,<weight file
path>,<interval length>,<warmup length>
<simpoint file> and <weight file> are generated by the SimPoint analysis
tool from UCSD. SimPoint 3.2 format is expected. <interval length> and
<warmup length> are in number of instructions. (A parsing sketch follows
this list.)
4. To resume from gem5 SimPoint checkpoints, use the following option:
--restore-simpoint-checkpoint -r <N> --checkpoint-dir <simpoint
checkpoint path>
<N> is (SimPoint index + 1). E.g., "-r 1" will resume from SimPoint
#0.
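As a rough illustration only, the comma-separated argument to
--take-simpoint-checkpoint can be split into its four fields as below;
the option's destination name and the local variable names are
assumptions, not the actual Simulation.py code:

    # <simpoint file>,<weight file>,<interval length>,<warmup length>
    fields = options.take_simpoint_checkpoints.split(",")
    simpoint_file, weight_file = fields[0], fields[1]
    interval_length = int(fields[2])
    warmup_length = int(fields[3])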
|
|
Adds the parameter --num-work-ids to Options.py and reads the parameter
into the System params in Simulation.py. This parameter enables setting
the number of possible work items to a value other than 16. Support for this
parameter already exists in src/sim/System.py, so this changeset only
affects the Python config files.
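A minimal sketch of the two sides of this change; the exact wording in
Options.py and Simulation.py may differ:

    # Options.py: expose the knob on the command line
    parser.add_option("--num-work-ids", action="store", type="int",
                      default=16, help="Number of distinct work item ids")

    # Simulation.py: forward it to the existing System parameter
    if options.num_work_ids:
        system.num_work_ids = options.num_work_ids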
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
The previous changeset (9816) that fixed the use of max ticks introduced the
variable cpt_starttick, which is used for setting the relative max tick.
Unfortunately, with checkpointing at an instruction count or with simpoints,
the checkpoint tick is not stored conveniently, so to ensure that cpt_starttick
is initialized, set it to 0. Also, if --rel-max-tick is used together with
instruction counts or simpoints, warn the user that the max tick setting does
not include the checkpoint ticks.
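A sketch of the intended behaviour; apart from cpt_starttick and
--rel-max-tick, the option and variable names below are assumptions:

    cpt_starttick = 0  # default when the checkpoint tick is not known

    if options.rel_max_tick:
        maxtick = options.rel_max_tick + cpt_starttick
        if options.at_instruction or options.simpoint:
            print("Warning: the relative max tick does not include "
                  "the checkpoint start tick for instruction-count or "
                  "simpoint checkpoints")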
|
|
This patch contains three fixes to max tick options handling in Options.py and
Simulation.py:
1) Since the global simulator frequency isn't bound until m5.instantiate()
is called, the maxtick resolution needs to happen after this call, since
changes to the global frequency will cause m5.simulate() to misinterpret the
maxtick value. Shuffling this also requires tweaking the checkpoint directory
handling to signal the checkpoint restore tick back to run(). Fixing this
completely and correctly will require storing the simulation frequency into
checkpoints, which is beyond the scope of this patch.
2) The maxtick option in Options.py was defaulted to MaxTicks, so the old code
would always skip over the maxtime part of the conditionals at the beginning
of run(). Change the maxtick default to None, and set the maxtick local
variable in run() appropriately.
3) To clarify whether max ticks settings are relative or absolute, split the
maxtick option into separate options, for relative and absolute. Ensure that
these two options and the maxtime option are handled appropriately to set the
maxtick variable in Simulation.py, as sketched below.
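A rough sketch of the resolution after m5.instantiate(); --rel-max-tick is
named in the related changeset above, while --abs-max-tick, cpt_starttick and
ticks_per_second are illustrative names:

    # Only after m5.instantiate() is the global tick frequency fixed, so
    # time-based limits can be converted safely.
    maxtick = m5.MaxTick
    if options.abs_max_tick:
        maxtick = options.abs_max_tick
    elif options.rel_max_tick:
        maxtick = options.rel_max_tick + cpt_starttick
    elif options.maxtime:
        # simulated seconds -> ticks
        maxtick = int(options.maxtime * ticks_per_second)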
|
|
This patch adds the notion of source- and derived-clock domains to the
ClockedObjects. As such, all clock information is moved to the clock
domain, and the ClockedObjects are grouped into domains.
The clock domains are either source domains, with a specific clock
period, or derived domains that have a parent domain and a divider
(potentially chained). For a piece of logic that runs at a derived clock
(a ratio of the clock its parent is running at) the necessary derived
clock domain is created from its corresponding parent clock
domain. For now, the derived clock domain only supports a divider,
thus ensuring a lower speed compared to its parent. Multiplier
functionality would imply PLL logic, which has not been modelled yet
(create a separate clock instead).
The clock domains should be used as a mechanism to provide a
controllable clock source that affects the clock of every clocked object
lying beneath it. The clock of the domain can (in a future patch) be
controlled by a handler responsible for dynamic frequency scaling of
the respective clock domains.
All the config scripts have been retro-fitted with clock domains. For
the System a default SrcClockDomain is created. For CPUs that run at a
different speed than the system, a separate clock domain is created. This
domain incorporates the CPU and the associated
caches. As before, Ruby runs under its own clock domain.
The clock periods of all domains are pre-computed, such that no virtual
functions or multiplications are needed when calling
clockPeriod. Instead, the clock period is recomputed when any
changes occur. For this to be possible, each clock domain tracks its
children.
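A small sketch of how a config script might use these domains; the
DerivedClockDomain parameter names (clk_domain, clk_divider) are assumed
here rather than taken from this message:

    # source domain with an explicit period, used by the system
    system.clk_domain = SrcClockDomain(clock = '1GHz')

    # a derived domain running at half the system clock (divider only,
    # no multiplier), shared by the CPUs and their caches
    cpu_clk = DerivedClockDomain(clk_domain = system.clk_domain,
                                 clk_divider = 2)
    for cpu in system.cpu:
        cpu.clk_domain = cpu_clk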
|
|
This patch enables selection of the memory controller class through a
mem-type command-line option. Behind the scenes, this option is
treated much like the cpu-type, and a similar framework is used to
resolve the valid options, and translate the short-hand description to
a valid class.
The regression scripts are updated with a hardcoded memory class for
the moment. The best solution going forward is probably to get the
memory out of the makeSystem functions, but Ruby complicates things as
it does not connect the memory controller to the membus.
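A sketch of the intended usage, assuming a MemConfig helper module
analogous to CpuConfig; the helper names and the default value below are
illustrative:

    # Options.py: let the user pick the memory controller class by name
    parser.add_option("--mem-type", type="choice", default="simple_mem",
                      choices=MemConfig.mem_names(),
                      help="type of memory controller to use")

    # resolve the short-hand name to a SimObject class and instantiate it
    mem_cls = MemConfig.get(options.mem_type)
    system.physmem = mem_cls(range = AddrRange('512MB'))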
--HG--
rename : configs/common/CpuConfig.py => configs/common/MemConfig.py
|
|
In Simulation.py, calls to m5.simulate(num_ticks) will run the simulated system
for num_ticks after the current tick. Fix calls to m5.simulate in
scriptCheckpoints() and benchCheckpoints() to appropriately handle the maxticks
variable.
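In other words, an absolute tick limit has to be converted to a duration
before each call; a minimal sketch:

    # maxtick is an absolute tick; m5.simulate() expects a relative duration
    exit_event = m5.simulate(maxtick - m5.curTick())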
|
|
Changeset a4739b6f799d made some changes where an exit event
should have been returned in place of an exit cause. This patch corrects
that error.
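Roughly, the corrected pattern looks like this (a sketch, not the exact
Simulation.py code):

    exit_event = m5.simulate(num_ticks)
    exit_cause = exit_event.getCause()  # the cause is derived from the event
    return exit_event                   # return the event, not the cause string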
|
|
CPU switching consists of the following steps:
1. Drain the system
2. Switch out old CPUs (cpu.switchOut())
3. Change the system timing mode to the mode the new CPUs require
4. Flush caches if switching to hardware virtualization
5. Inform new CPUs of the handover (cpu.takeOverFrom())
6. Resume the system
m5.switchCpus() previously only did steps 2 and 5. Since information
about the new processors' memory system requirements is now exposed,
do all of the steps above.
This patch adds automatic memory system switching and flush (if
needed) to switchCpus(). Additionally, it adds optional draining to
switchCpus(). This has the following implications:
* changeToTiming and changeToAtomic are no longer needed, so they have
been removed.
* changeMemoryMode is only used internally, so it has been renamed
to be private.
* switchCpus requires a reference to the system containing the CPUs as
its first parameter.
WARNING: This changeset breaks compatibility with existing
configuration scripts since it changes the signature of
m5.switchCpus().
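A minimal sketch of a call under the new signature; the (old, new)
pairing of switch_cpu_list reflects common usage in the config scripts
and is assumed here:

    # drains the system, switches the memory mode, flushes caches if
    # needed, and hands over state from the old CPUs to the new ones
    switch_cpu_list = [(system.cpu[i], switch_cpus[i])
                       for i in range(options.num_cpus)]
    m5.switchCpus(system, switch_cpu_list)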
|
|
The CPUs supported by the configuration scripts used to be
hard-coded. This was not ideal for several reasons. For example, the
configuration scripts depend on all CPU models even though only a
subset might have been compiled.
This changeset adds a new module to the configuration scripts that
automatically discovers the available CPU models from the compiled
SimObjects. As a nice bonus, the use of introspection allows us to
automatically generate a list of available CPU models suitable for
printing. This list is augmented with the Python doc string from the
underlying class if available.
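A sketch of how a config script might use such a module; the module name
CpuConfig and the function names below are assumptions based on this
description:

    from common import CpuConfig

    # list the compiled-in CPU models together with their doc strings
    CpuConfig.print_cpu_list()

    # look up the class behind a short-hand name
    cpu_cls = CpuConfig.get("timing")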
|
|
The configuration scripts currently hard-code the requirements of each
CPU. This is clearly not optimal as it makes writing new configuration
scripts painful and means that adding new CPU models requires existing
scripts to be updated. This patch adds the following class methods to the
base CPU and all relevant CPUs (usage sketched after the list):
* memory_mode -- Return a string describing the current memory mode
(invalid/atomic/timing).
* require_caches -- Does the CPU model require caches?
* support_take_over -- Does the CPU support CPU handover?
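A minimal sketch of how a config script can rely on these hooks instead
of hard-coded per-CPU knowledge; options.caches and the surrounding
variables are illustrative:

    # configure the memory system from the CPU model itself
    system.mem_mode = CPUClass.memory_mode()

    if CPUClass.require_caches() and not options.caches:
        fatal("%s requires caches" % CPUClass.__name__)  # fatal() from m5.util

    if not CPUClass.support_take_over():
        fatal("%s does not support CPU handover" % CPUClass.__name__)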
|
|
The run() method in Simulation.py used to call sys.exit() when the
simulator exits. This is undesirable when the user has requested that the
simulator be run in interactive mode, since it causes the simulator
to exit rather than entering the interactive Python environment.
|
|
|
Used as a command in full-system scripts, this helps the user ensure that the benchmarks have finished successfully.
For example, one can use:
/path/to/benchmark args || /sbin/m5 fail 1
and thus ensure gem5 will exit with an error if the benchmark fails.
|
|
The defer_registration parameter is used to prevent a CPU from
initializing at startup, leaving it in the "switched out" mode. The
name of this parameter (and the help string) is confusing. This patch
renames it to switched_out, which should be more descriptive.
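For illustration, a switched-out CPU is now constructed roughly like this
(a sketch; the other constructor arguments are incidental):

    # previously: defer_registration = True
    switch_cpu = TimingSimpleCPU(switched_out = True, cpu_id = 0)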
|
|
There is no point in exporting the old drain() method in
Simulate.py. It should only be used internally by doDrain(). This
patch moves the old drain() method into doDrain() and renames
doDrain() to drain().
|
|
Changeset 4f54b0f229b5 removed the call to doDrain in changeToTiming
based on the assumption that the system does not need draining when
running in atomic mode. This is a false assumption since at least the
System class requires the system to be drained before it allows
switching of memory modes. This patch reverts that part of the
changeset.
|
|
When switching from an atomic CPU to any of the timing CPUs, a drain is
unnecessary since no events are scheduled in atomic mode. However, when
trying to switch CPUs starting with a timing CPU, there may be events
scheduled. This change ensures that all events are drained from the system
by calling m5.drain before switching CPUs.
|
|
This patch fixes a bug in scriptCheckpoints, where maxtick was used without
being defined. The bug caused checkpointing by means of --take-checkpoints
to fail.
|
|
This patch fixes checkpointing by ensuring that the directory is
passed to the scriptCheckpoints function, and that num_checkpoints
is not used before it is initialised.
|
|
This patch adds a --repeat-switch option that will enable repeated core
switching at a user-defined period (set with the --switch-freq option).
Currently, a switch can only occur between like CPU types; InOrder CPU
switching is not supported.
*note*
This patch simply allows a config that will perform repeat switching; it
does not fix drain/switchout functionality. If you run with repeat switching
you will hit assertion failures and/or your workload will hang or die.
|
|
This patch moves the code related to checkpointing from the run() function to
several different functions. The aim is to make the code more manageable. No
functionality changes are expected, but since the code is kind of unruly, it
is possible that some change might have crept in.
|
|
This changes the way in which the CPU class used when restoring from a checkpoint
is set. Earlier it was assumed that if the CPU type with which to restore is not
the same as the CPU type with which to run the simulation, then the checkpoint
should be restored with the atomic CPU. This assumption is being dropped. The
checkpoint can now be restored with any CPU type, the default being the atomic CPU.
|
|
This patch changes the se and fs scripts to use the clock option and
not simply set the CPUs clock to 2 GHz. It also makes a minor change
to the assignment of the switch_cpus clock to allow different clocks.
|
|
The function is presently defined in FSConfig.py, which does not seem to be
the correct place for it.
|
|
Enables the CheckerCPU to be selected at runtime with the --checker option
from the configs/example/fs.py and configs/example/se.py configuration
files. Also merges with the SE/FS changes.
|
|
|
Currently there is an assumption that restoration from a checkpoint will
happen by first restoring to an atomic CPU and then switching to a timing
CPU. This patch adds support for directly restoring to a timing CPU. It
adds a new option '--restore-with-cpu' which is used to specify the type
of CPU to which the checkpoint should be restored. It defaults to
'atomic', which was the case before.
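A rough sketch of the option definition; the CpuConfig helper and the
choice list are assumptions here:

    parser.add_option("--restore-with-cpu", type="choice", default="atomic",
                      choices=CpuConfig.cpu_names(),
                      help="CPU type with which to restore from a checkpoint")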
|
|
This patch adds a new option for the CPU type. This option is of type 'choice',
which is similar to a C++ enum, except that it takes string values as the
possible choices. The following options are being removed -- detailed, timing,
inorder.
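A minimal sketch of what such a 'choice' option looks like with optparse;
the default and the choice list are illustrative:

    parser.add_option("--cpu-type", type="choice", default="atomic",
                      choices=["atomic", "timing", "detailed", "inorder"],
                      help="type of cpu to run with")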
--HG--
extra : rebase_source : 58885e2e8a88b6af8e6ff884a5922059dbb1a6cb
|
|
maxinsts & max_inst are redundant.
prog_intvl and profile seem redundant, but profile looks to be unused.
Add a -p option for progress intervals.
|
|
This patch moves the assignment of testsys.switch_cpus, testsys.switch_cpus_1,
switch_cpu_list, and switch_cpu_list1 outside of the for loop so they are
assigned only once, after switch_cpus and switch_cpus_1 are constructed.
|
|
Most of the messages in the config scripts that report a time value already
print "@ tick" followed by the current tick value, but a few were printing
"@ cycle". Since this is a distinction that's frequently confusing to new
users, this changes those messages to the more accurate and consistent "@ tick".
|
|
Meant to add these with the previous batch of csets.
|
|
The separate restoreCheckpoint() call is gone; just pass
the checkpoint dir as an optional arg to instantiate().
This change is a precursor to some more extensive
reworking of the startup code.
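Roughly, old vs. new usage; the old restoreCheckpoint() signature shown in
the comment is an assumption:

    # before:  m5.instantiate();  m5.restoreCheckpoint(ckpt_dir)
    # after:   pass the checkpoint directory straight to instantiate()
    m5.instantiate(ckpt_dir)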
|
|
Small change to clean up some redundant code.
Should not have any functional impact.
|
|
Enforce that the Python Root SimObject is instantiated only
once. The C++ Root object already panics if more than one is
created. This change avoids the need to track what the root
object is, since it's available from Root.getInstance() (if it
exists). It's now redundant to have the user pass the root
object to functions like instantiate(), checkpoint(), and
restoreCheckpoint(), so that arg is gone. Users who use
configs/common/Simulate.py should not notice.
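A short sketch of the resulting pattern (Root's constructor arguments are
omitted here):

    root = Root()        # a second Root() would make the C++ side panic
    m5.instantiate()     # no longer takes the root object

    # elsewhere, the singleton can be fetched on demand
    root = Root.getInstance()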
|
|
See comments in util/checkpoint-tester.py for details.
|
|
Get rid of misc.py and just stick misc things in __init__.py
Move utility functions out of SCons files and into m5.util
Move utility type stuff from m5/__init__.py to m5/util/__init__.py
Remove buildEnv from m5 and allow access only from m5.defines
Rename AddToPath to addToPath while we're moving it to m5.util
Rename read_command to readCommand while we're moving it
Rename compare_versions to compareVersions while we're moving it.
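After this reorganization, imports in config scripts look roughly like
this (all of these names are taken from the list above):

    from m5.util import addToPath, readCommand, compareVersions
    from m5.defines import buildEnv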
--HG--
rename : src/python/m5/convert.py => src/python/m5/util/convert.py
rename : src/python/m5/smartdict.py => src/python/m5/util/smartdict.py
|
|
Add an option to allow threads to run to a max_inst_any_thread limit, which in a lot of
cases is more useful/quicker than always having to figure out what tick to run your simulation to.
|
|
This was double-scheduling itself (once in the constructor and once in the CPU code). Also add support
for stopping/starting progress events through a repeatEvent flag, as well as for changing the interval
of the progress event.
|
|
information about an error.
|
|
doesn't exist
--HG--
extra : convert_revision : c156c49668815755c4c788f807e8eba32151aa24
|