Age | Commit message | Author |
|
Event auto-serialization is no longer in use and has been broken ever
since the introduction of PDES support almost two years
ago. Additionally, serializing the individual event queues is
undesirable since it exposes the thread structure of the
simulator. What this means in practice is that the number of threads
in the simulator must be the same when taking a checkpoint and when
loading the checkpoint.
This changeset removes support for the AutoSerialize event flag and
the associated serialization code.
|
|
|
|
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
There are two problems with the existing checkpoint and restore code in ruby.
The first is that when the event queue is altered by ruby during serialization,
some events that are currently scheduled cannot be found (e.g. the event to
stop simulation that always lives on the queue), causing a panic.
The second is that ruby is sometimes serialized after the memory system,
meaning that the dirty data in its cache is flushed back to memory too late
and so isn't included in the checkpoint.
These are fixed by implementing memory writeback in ruby, using the same
technique of hijacking the event queue, but first descheduling all events that
are currently on it. They are saved, along with their scheduled time, so that
the event queue can be faithfully reconstructed after writeback has finished.
Events with the AutoDelete flag set will delete themselves when they
are descheduled, causing an error when attempting to schedule them again.
This is fixed by simply not recording them when taking them off the queue.
Writeback is still implemented using flushing, so the cache recorder
object, which is created to generate the trace and manage flushing, is
kept around and used during serialization to write the trace to disk.
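For illustration only, the descheduling step amounts to something like the
sketch below. getHead() and isAutoDelete() stand in for whatever accessors
the actual ruby code uses; the rest is plain EventQueue API.

    #include <vector>

    #include "sim/eventq.hh"

    // One saved entry per descheduled event, so the queue can be rebuilt.
    struct SavedEvent { Event *event; Tick when; };

    std::vector<SavedEvent>
    descheduleAll(EventQueue *eq)
    {
        std::vector<SavedEvent> saved;
        while (!eq->empty()) {
            Event *ev = eq->getHead();                  // assumed accessor
            const bool autoDelete = ev->isAutoDelete(); // assumed flag query
            const Tick when = ev->when();
            eq->deschedule(ev);   // AutoDelete events free themselves here
            if (!autoDelete)
                saved.push_back({ev, when});  // record everything else
        }
        return saved;
    }

    // After writeback has finished, the queue is faithfully reconstructed:
    //     for (const auto &s : saved)
    //         eq->schedule(s.event, s.when);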
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
Events were expected to be unserialized using an event-specific
unserializeEvent call. This call was never actually used, which meant
the events relying on it never got unserialized (or scheduled after
unserialization).
Instead of relying on a custom call, we now use the normal
serialization code again. In order to schedule the event correctly,
the parent object is expected to use the
EventQueue::checkpointReschedule() call. This happens automatically
for events that are serialized using the AutoSerialize mechanism.
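As a rough sketch of what a parent object's unserialize then looks like
(MyObject and tickEvent are made up, and the eventQueue() accessor and the
checkpointReschedule(Event *) signature are assumptions):

    #include "sim/eventq.hh"
    #include "sim/serialize.hh"

    void
    MyObject::unserialize(CheckpointIn &cp)
    {
        // Restore the event's own state (scheduled tick, flags, ...) from
        // the subsection it was serialized into.
        tickEvent.unserializeSection(cp, "tickEvent");

        // If the event was scheduled when the checkpoint was taken, put it
        // back on the queue at the stored tick.
        eventQueue()->checkpointReschedule(&tickEvent);
    }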
|
|
Objects that can be serialized are supposed to inherit from the
Serializable class. This class is meant to provide a unified API for
such objects. However, so far it has mainly been used by SimObjects
due to some fundamental design limitations. This changeset redesigns
the serialization interface to make it more generic and hide the
underlying checkpoint storage. Specifically:
* Add a set of APIs to serialize into a subsection of the current
object. Previously, objects that needed this functionality would
use ad-hoc solutions using nameOut() and section name
generation. In the new world, an object that implements the
interface has the methods serializeSection() and
unserializeSection() that serialize into a named /subsection/ of
the current object. Calling serialize() serializes an object into
the current section.
* Move the name() method from Serializable to SimObject as it is no
longer needed for serialization. The fully qualified section name
is generated by the main serialization code on the fly as objects
serialize sub-objects.
* Add a ScopedCheckpointSection helper class. Some objects need to
serialize data structures that do not derive from Serializable into
subsections. Previously, this was done using nameOut() and manual
section name generation. To simplify this, this changeset introduces
the ScopedCheckpointSection helper class. When this class is
instantiated, it adds a new /subsection/, and subsequent serialization
calls during the lifetime of this helper class happen inside that
section (or a subsection in the case of nested sections). A usage
sketch follows this list.
* The serialize() call is now const which prevents accidental state
manipulation during serialization. Objects that rely on modifying
state can use the serializeOld() call instead. The default
implementation simply calls serialize(). Note: The old-style calls
need to be explicitly called using the
serializeOld()/serializeSectionOld() style APIs. These are used by
default when serializing SimObjects.
* Both the input and output checkpoints now use their own named
types. This hides underlying checkpoint implementation from
objects that need checkpointing and makes it easier to change the
underlying checkpoint storage code.
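The usage sketch referred to above; MyObject, childState and the counter
members are made up, while serializeSection(), ScopedCheckpointSection and
paramOut() are the calls described in this changeset:

    #include "sim/serialize.hh"

    void
    MyObject::serialize(CheckpointOut &cp) const
    {
        // A member that is itself Serializable goes into a named
        // subsection of the current object's section.
        childState.serializeSection(cp, "childState");

        // A plain struct that does not derive from Serializable: open a
        // scoped subsection and write into it for the lifetime of 'sec'.
        {
            Serializable::ScopedCheckpointSection sec(cp, "rawCounters");
            paramOut(cp, "hits", hits);
            paramOut(cp, "misses", misses);
        }
    }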
|
|
The method Event::initialized() tests if this != NULL as a part of the
expression that tests if an event is initialized. The only case when
this check could be false is if the method is called on a null
pointer, which is illegal and leads to undefined behavior (such as
eating your pets) according to the C++ standard. Because of this,
modern compilers (specifically, recent versions of clang) warn about
this, and we treat such warnings as errors. This changeset removes the
redundant check to fix said warning.
|
|
This patch adds a 'wakeup' member function to EventQueue which should be
called on an event queue whenever an event is scheduled on it from code
outside the call tree of the gem5 event loop.
This clearly isn't necessary for normal gem5 EventQueue operation but
becomes the minimum necessary interface to allow hosting gem5's event loop
onto other schedulers where there may be calls into gem5 from external
code which schedules events onto an EventQueue between the current time and
the time of the next scheduled event.
The use case I have in mind is a SystemC hosting where the event loop is:
    while (more events) {
        wait(time_to_next_event or wakeup)
        setCurTick
        service events at this time
    }
where the 'wait' needs to be woken up if time_to_next_event becomes shorter
due to a scheduled event from SystemC arriving in a gem5 object.
Requiring 'wakeup' to be called is a more efficient interface than
requiring all gem5 event scheduling actions to affect the host scheduler.
This interface could be located elsewhere, say on another global object,
or by being passed by the host scheduler to objects which will schedule
such events, but it seems cleanest to put it on EventQueue as it is
actually a signal to the queue.
EventQueue::wakeup is called for async_event events on event queue 0 as
it's only important that *some* queue be triggered for such events.
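For illustration, external code scheduling into a gem5 queue then looks
roughly like this (assuming wakeup() takes the tick of the newly scheduled
event, with a "don't care" default):

    #include "sim/eventq.hh"

    // Called from the hosting scheduler's side (e.g. a SystemC thread),
    // i.e. from outside the call tree of the gem5 event loop.
    void
    scheduleFromOutside(EventQueue *eq, Event *ev, Tick when)
    {
        eq->schedule(ev, when);
        // Nudge the host scheduler: time_to_next_event may just have
        // become shorter, so its wait() needs to be re-evaluated.
        eq->wakeup(when);
    }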
|
|
Add some missing initialisation, and fix a handful of benign resource
leaks (including some false positives).
|
|
Static analysis unearthed a bunch of uninitialised variables and
members, and this patch addresses the problem. In all cases these
omissions seem benign in the end, but at least fixing them means fewer
false positives next time round.
|
|
Adds DVFS capabilities to gem5, by allowing users to specify lists for
frequencies and voltages in SrcClockDomains and VoltageDomains respectively.
A separate component, DVFSHandler, provides a small interface to change
operating points of the associated domains.
Clock domains are linked to voltage domains, which allows separate clocks
but shared voltage lines.
Currently all the valid performance-level updates are performed with a fixed
transition latency as specified for the domain.
Config file example:
...
vd = VoltageDomain(voltage = ['1V','0.95V','0.90V','0.85V'])
tsys.cluster1.clk_domain.clock = ['1GHz','700MHz','400MHz','230MHz']
tsys.cluster2.clk_domain.clock = ['1GHz','700MHz','400MHz','230MHz']
tsys.cluster1.clk_domain.domain_id = 0
tsys.cluster2.clk_domain.domain_id = 1
tsys.cluster1.clk_domain.voltage_domain = vd
tsys.cluster2.clk_domain.voltage_domain = vd
tsys.dvfs_handler.domains = [tsys.cluster1.clk_domain,
                             tsys.cluster2.clk_domain]
tsys.dvfs_handler.enable = True
|
|
We need the ability to lock event queues to enable device accesses
across threads. The serviceOne() method now takes a service lock prior
to handling a new event. By locking an event queue, a different
thread/eq can effectively execute in the context of the locked event
queue. To simplify temporary event queue migrations, this changeset
introduces the EventQueue::ScopedMigration class that unlocks the
current event queue, locks a new event queue, and updates the current
event queue variable.
In order to prevent deadlocks, event queues need to be released when
waiting on barriers. This is implemented using the
EventQueue::ScopedRelease class. An instance of this class is, for
example, used in the BaseGlobalEvent class to release the event queue
when waiting on the synchronization barrier.
The intended use for this functionality is when devices need to be
accessed across thread boundaries. For example, when fast-forwarding,
it might be useful to run devices and CPUs in separate threads. In
such a case, the CPU locks the device queue whenever it needs to
perform IO. This functionality is primarily intended for KVM.
Note: Migrating between event queues can lead to non-deterministic
timing. Use with extreme care!
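A sketch of the intended use; the device type and its access() call are
illustrative, while ScopedMigration, ScopedRelease and curEventQueue() are
the interfaces described above:

    #include "sim/eventq.hh"

    // CPU thread performing IO against a device that lives on another
    // event queue.
    void
    accessDevice(EventQueue *deviceQueue, SomeDevice &dev)
    {
        // Unlocks the current queue, locks deviceQueue, and makes it the
        // current queue for the duration of this scope.
        EventQueue::ScopedMigration migrate(deviceQueue);
        dev.access();
    }

    // Releasing a queue while blocking on a barrier looks similar:
    //     EventQueue::ScopedRelease release(curEventQueue());
    //     barrier.wait();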
|
|
This patch adds support for simulating with multiple threads, each of
which operates on an event queue. Each sim object specifies which eventq
it would like to be on. A custom barrier implementation is added, which
the eventqs use to synchronize.
The patch was tested in two different configurations:
1. ruby_network_test.py: in this simulation L1 cache controllers receive
requests from the cpu. The requests are replied to immediately without
any communication taking place with any other level.
2. twosys-tsunami-simple-atomic: this configuration simulates a client and a
server system connected by an ethernet link.
We still lack the ability to communicate using message buffers or ports, but
other things, such as simulation start and end and synchronizing after every
quantum, are working.
Committed by: Nilay Vaish
|
|
|
|
This patch enables warnings for missing declarations. To avoid issues
with SWIG-generated code, the warning is only applied to non-SWIG
code.
|
|
|
|
This patch adds a _curTick variable to an eventq. This variable is updated
whenever an event is serviced in serviceOne(), or when all events up to
a particular time are processed in serviceEvents(). This change
helps when there are eventqs that do not make use of curTick for scheduling
events.
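The gist of the bookkeeping, as a sketch (empty() and nextTick() are the
usual queue helpers; the real serviceEvents() differs in detail):

    void
    EventQueue::serviceEvents(Tick when)
    {
        while (!empty() && nextTick() <= when)
            serviceOne();    // serviceOne() also sets _curTick to the
                             // serviced event's when()
        _curTick = when;     // the queue's own notion of "now", independent
                             // of the global curTick
    }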
|
|
This patch tidies up the EventManager constructor and prunes a corner
case where the EventManager would initialise its eventq pointer to
NULL. This would cause segmentation faults on actual use and should
never happen.
|
|
This patch makes the Tick type unsigned and removes the UTick typedef.
Ticks should never be negative, and there was only one major issue with
the change, caused by the o3 CPU using -1 as an initial value.
The patch has no impact on any regressions.
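For reference, the type now boils down to something like this (sketch;
gem5 keeps the real definition in src/base/types.hh):

    #include <stdint.h>

    // Ticks are plain unsigned 64-bit counters; UTick is gone.
    typedef uint64_t Tick;

    // Assigning -1 to an unsigned Tick silently wraps to the largest
    // possible value, which is why the o3 CPU's -1 "not yet set" marker
    // had to be replaced (e.g. by an explicit maximum-tick constant).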
|
|
This patch renames the queue() accessor to the less ambiguous
eventQueue, and also removes the cast operator. The queue() member
function caused problems in derived classes that declare members with
the same name, e.g. a MemObject subclass that has a packet queue of
its own. The operator is not causing any harm at this point, but as it
is not used there is little point in keeping it.
|
|
While FastAlloc provides a small performance increase (~1.5%) over
regular malloc, it isn't thread safe. After removing FastAlloc and
switching to tcmalloc, I've seen a performance increase of 12% over
libc malloc when running twolf for ARM.
|
|
|
|
This patch cleans up a number of minor issues aiming to get closer to
compliance with the C++0x standard as interpreted by gcc and clang
(compile with std=c++0x and -pedantic-errors). In particular, the
patch cleans up enums where the last item was succeeded by a comma,
namespaces closed by a curly brace followed by a semi-colon, and the
use of the GNU-extension typeof (replaced by templated functions). It
does not address variable-length arrays, zero-size arrays, anonymous
structs, range expressions in switch statements, and the use of long
long. The generated CPU code also has a large number of issues that
remain to be fixed, mainly related to overflows in implicit constant
conversion (due to shifts).
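Illustrative examples of the constructs involved (generic snippets, not
code lifted from the tree):

    // Before:
    //
    //     enum Color { Red, Green, Blue, };    // trailing comma
    //     namespace foo { void bar(); };       // stray ';' after the brace
    //     typeof(x) y = x;                     // GNU typeof extension
    //
    // After:
    enum Color { Red, Green, Blue };
    namespace foo { void bar(); }

    // typeof() uses are replaced by small templated helpers, for example:
    template <typename T>
    T copyOf(const T &x) { return x; }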
|
|
This patch adds a function for replacing the event at the head of the queue
with another event. This helps in running a different set of events. Events
already scheduled can be processed later by putting the original head event
back.
This function has been specifically added to support cache warmup and
cooldown required for creating and restoring checkpoints.
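Assuming the new call is Event *EventQueue::replaceHead(Event *s) (the
commit does not name it), usage looks roughly like:

    #include "sim/eventq.hh"

    void
    runWarmup(EventQueue *eventq, Event *warmupEvents)
    {
        // Park the regular simulation events and run only the warmup set.
        Event *oldHead = eventq->replaceHead(warmupEvents);

        // ... drive the cache warmup / cooldown events to completion ...

        // Put the original events back.
        eventq->replaceHead(oldHead);
    }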
|
|
Initialize flags via the Event constructor instead of calling
setFlags() in the body of the derived class's constructor. I
forget exactly why, but this made life easier when implementing
multi-queue support.
Also rename Event::getFlags() to isFlagSet() to better match
common usage, and get rid of some unused Event methods.
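For example (sketch; the event type, priority and flag choice are
arbitrary):

    #include "sim/eventq.hh"

    class DumpStatsEvent : public Event
    {
      public:
        // Flags are passed to the Event constructor instead of calling
        // setFlags(AutoDelete) in the constructor body.
        DumpStatsEvent()
            : Event(Stat_Event_Pri, AutoDelete)
        {}

        virtual void process() { /* dump stats */ }
    };

    // Flag queries now read as a predicate:
    //     if (event->isFlagSet(Event::AutoDelete)) ...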
|
|
At the same time, rename the trace flags to debug flags since they
have broader usage than simply tracing. This means that
--trace-flags is now --debug-flags and --trace-help is now --debug-help
|
|
|
|
This step makes it easy to replace the accessor functions
(which still access a global variable) with ones that access
per-thread curTick values.
|
|
|
|
Also make copy constructor and assignment operator private.
|
|
|
|
- Make the initialized flag always available, not just in debug mode.
- Make the Initialized flag actually use several bits so it is very
unlikely that something that's uninitialized accidentally looks
initialized.
- Add an initialized() function that tells you if the current event is
indeed initialized.
- Clear the flags on delete so it can't be accidentally thought of as
initialized.
- Fix getFlags assert statement. "How did this ever work?"
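Roughly the shape of the change; the widths and bit patterns below are
placeholders, not the real values:

    // Sketch of the idea only.
    class Event
    {
        typedef unsigned short FlagsType;

        static const FlagsType InitMask    = 0xf000;  // placeholder
        static const FlagsType Initialized = 0x7000;  // placeholder magic

        FlagsType flags;

      public:
        Event() : flags(Initialized) {}

        // Available in all builds now, not just debug ones.
        bool initialized() const { return (flags & InitMask) == Initialized; }

        // Clear on delete so stale memory is unlikely to look initialized.
        ~Event() { flags = 0; }
    };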
|
|
Symbolic names should still be used, but this makes it easier to do
things like:
Event::Priority MyObject_Pri = Event::Default_Pri + 1
Remember that higher numbers are lower priority (should we fix this?)
|
|
|
|
|
|
|
|
|
|
|
|
--HG--
rename : src/sim/host.hh => src/base/types.hh
|
|
|
|
|
|
|
|
We need to add a reference when an object is put on the C++ queue, and remove
a reference when the object is removed from the queue. This was not happening
before and caused a memory problem.
|
|
Since the early days of M5, an event needed to know which event queue
it was on, and that data was required at the time of construction of
the event object. In the future parallelized M5, this sort of
requirement does not work well since the proper event queue will not
always be known at the time of construction of an event. Now, events
are created, and the EventQueue itself has the schedule function,
e.g. eventq->schedule(event, when). To simplify the syntax, I created
a class called EventManager which holds a pointer to an EventQueue and
provides the schedule interface that is a proxy for the EventQueue.
The intent is that objects that frequently schedule events can be
derived from EventManager and then they have the schedule interface.
SimObject and Port are examples of objects that will become
EventManagers. The end result is that any SimObject can just call
schedule(event, when) and it will just call that SimObject's
eventq->schedule function. Of course, some objects may have more than
one EventQueue, so this interface might not be perfect for those, but
they should be relatively few.
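The proxy amounts to roughly this (a sketch of the idea, not the exact
class in the tree; EventQueue, Event and Tick are as in sim/eventq.hh):

    class EventManager
    {
      protected:
        EventQueue *eventq;

      public:
        EventManager(EventQueue *eq) : eventq(eq) {}

        // Forward the queue's interface so users can simply write
        // schedule(event, when).
        void schedule(Event *event, Tick when) { eventq->schedule(event, when); }
        void deschedule(Event *event) { eventq->deschedule(event); }
        void reschedule(Event *event, Tick when, bool always = false)
        {
            eventq->reschedule(event, when, always);
        }
    };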
|
|
Another good reason to avoid this is that swig will try to wrap the friend,
but it won't try to wrap a private static function.
|
|
|
|
|
|
should configure their editors to not insert tabs
|
|
The status quo is preferred since it is less likely that people will
rely on LIFO than FIFO, and when we move to a parallelized M5, no
ordering between events of the same time/priority will be guaranteed.
|
|
linked list sorted by time and priority. For things of the same time
and priority, a second, circularly linked list maintains the data
structure. Events of the same time and priority are now inserted in
FIFO order instead of LIFO order. This dramatically improves the
performance of systems that schedule multiple events at the same time.
The FIFO order version is not preferred to LIFO (because it may cause
people to rely on it), but I'm going to commit it anyway and
immediately commit the preferred LIFO version on top.
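In rough outline, the structure looks like this (field names and types are
illustrative):

    typedef unsigned long long Tick;   // stand-in for gem5's Tick

    // Primary dimension: one entry per distinct (when, priority) pair,
    // kept sorted by time and then priority. Secondary dimension: the
    // events sharing that pair, linked circularly so both FIFO and LIFO
    // insertion points are O(1).
    struct Event
    {
        Tick when;
        int priority;
        Event *nextBin;     // next distinct (when, priority) bin
        Event *nextInBin;   // other events in the same bin
    };

    // Insertion walks the primary list to find (or create) the right bin
    // and then links into that bin's list, instead of walking past every
    // already-queued event with the same time and priority.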
|