|
If the CPU has been clock gated for a sufficient amount of time
(configurable via pwrGatingLatency), the CPU will go into the OFF
power state. This does not model hardware, just behaviour.
Change-Id: Ib3681d1ffa6ad25eba60f47b4020325f63472d43
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/3969
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Change-Id: I017852eac183fac3f914fdb96d7e72a56ea9d682
Reviewed-by: Nathanael Premillieu <nathanael.premillieu@arm.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/5121
Reviewed-by: Matthias Jung <jungma@eit.uni-kl.de>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Fixes a compile error for gem5.fast builds with clang due to an unused variable.
Change-Id: Iabe777a27d75ee8bfa7b214fff577aed3c7582c7
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-on: https://gem5-review.googlesource.com/4980
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
|
|
The size of the store entry in the LSQ is used to indicate a fault in
the execution of the store. At the same time, a store that is
predicated false will also have 0 size in the corresponding store
queue entry. This changeset ensures that we check if the store was
predicated false before checking the size field. This way we avoid
printing stores as faulting when they are only predicated false.
Change-Id: Ie07982197bd73d7b44d26a3257d54ecb103a952a
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/4821
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
The O3CPU allows stores to commit before they are completed and as
soon as they enter the store queue. This is the reason why stores are
verified by the checker CPU separately, once they complete
and after they have been sent to memory.
Store conditionals, on the other hand, have an additional writeback
stage in the pipeline as they return their result to a register,
similarly to loads. This is the reason why they do not commit
before they receive a response from the memory. This allows store
conditionals to be verified by the checker CPU as soon as they
commit, in the same way as all other non-store instructions.
At the same time, the presence of a checker CPU should not require
changes to the way we handle instructions. This change removes explicit
calls to:
* incorrectly set the extra data of the request to 0 (a subsequent
call to completeAcc already does this without making any ISA
assumptions about the return value of the failed store conditional)
* complete failing store conditionals
Change-Id: If21d70b21caa55b35e9fdcc50f254c590465d3c3
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/4820
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
The kernel stat mechanism should really be refactored and moved somewhere
else, but in the meantime there's some old cruft that can be cleared away.
Change-Id: I21e725de590dda0d20bf3bc675bbe976c7b1bd86
Reviewed-on: https://gem5-review.googlesource.com/4600
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
When a split load hits a memory region where IPRs are mapped, the
Writeback event scheduled for it was carrying a data packet that was
not correctly initialized, which caused an assertion to fire when the
Writeback event was processed.
Change-Id: I71a4e291f0086f7468d7e8124a0a8f098088972f
Signed-off-by: Matthias Hille <matthiashille8@gmail.com>
Reported-by: Matthias Hille <matthiashille8@gmail.com>
Reviewed-on: https://gem5-review.googlesource.com/4620
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Gabe Black <gabeblack@google.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
The introduction of a new vector register class broke rename in the O3
CPU due to an unhandled register class in
DefaultRename<Impl>::renameSrcRegs(). This patch adds the
necessary handling to avoid a panic when the vector register file is
used.
Change-Id: Ie380ab35ec4a151db15402f25b25b58931ee0581
Reviewed-by: Giacomo Gabrielli <giacomo.gabrielli@arm.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/4140
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
Checkpointing a system with out-of-order CPUs might get stuck if
one of the CPUs has been put to sleep. The quiesce instruction
cannot be drained, hence checkpointing never finishes.
This commit resolves that by activating all suspended thread
contexts when draining the system.
Change-Id: I817ab1672b4ead777bd8e12a0445829481c46fdc
Reviewed-by: Sascha Bischoff <sascha.bischoff@arm.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/3970
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
Change-Id: If765c6100d67556f157e4e61aa33c2b7eeb8d2f0
Signed-off-by: Sean Wilson <spwilson2@wisc.edu>
Reviewed-on: https://gem5-review.googlesource.com/3923
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
Reiley's update :) of the ISA parser definitions. My addition of the
vector element operand concept for the ISA parser. Nathanael's modification
creating a hierarchy between vector registers and their constituent elements
in the ISA parser.
Some fixes/updates on top to treat instructions as vector rather than
floating point when they use the VectorRF. Some counters added to all the
models to keep faithful counts.
Change-Id: Id8f162a525240dfd7ba884c5a4d9fa69f4050101
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2706
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
This patch adds some more functionality to the cpu model and the arch to
interface with the vector register file.
This change consists mainly of augmenting ThreadContexts and ExecContexts
with calls to get/set full vectors, underlying microarchitectural elements
or lanes. Those are meant to interface with the vector register file. All
classes that implement this interface also get an appropriate implementation.
This requires implementing the vector register file for the different
models using the VecRegContainer class.
This changeset also updates the Result abstraction to account for the
possibility of having a vector as a result.
The changes also affect how the remote_gdb connection works.
There are some (nasty) side effects, such as the need to define dummy
numPhysVecRegs parameter values for architectures that do not implement
vector extensions.
This is Nathanael Premillieu's work, with a number of fixes and
improvements of mine on top.
Change-Id: Iee65f4e8b03abfe1e94e6940a51b68d0977fd5bb
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
[ Fix RISCV build issues and CC reg free list initialisation ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2705
|
|
With the hierarchical RegId there are a lot of functions that are
redundant now.
The idea behind the simplification is that, instead of having the RegId
tell which kind of register to read/write/rename/lookup/etc. and then
having the function panic_if the RegId is not of the appropriate type,
we provide an interface that decides which kind of register to access
based on the register type of the given RegId, as sketched below.
Change-Id: I7d52e9e21fc01205ae365d86921a4ceb67a57178
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
[ Fix RISCV build issues ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2702
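A minimal sketch of the dispatch interface described above, using simplified stand-in types rather than gem5's actual classes (the real accessor names and register classes differ):

    #include <array>
    #include <cassert>
    #include <cstdint>

    // Simplified stand-ins; the real gem5 RegId/RegClass are richer.
    enum class RegClass { Int, Float, Vec, CC, Misc };

    struct RegId {
        RegClass cls;
        unsigned idx;
    };

    class RegFileSketch {
        std::array<uint64_t, 32> intRegs{};
        std::array<uint64_t, 32> floatRegs{};
        std::array<uint64_t, 8> ccRegs{};

      public:
        // One entry point that dispatches on the register class, instead of
        // separate per-class methods that panic_if the RegId has the wrong type.
        uint64_t readReg(const RegId &id) const
        {
            switch (id.cls) {
              case RegClass::Int:   return intRegs[id.idx];
              case RegClass::Float: return floatRegs[id.idx];
              case RegClass::CC:    return ccRegs[id.idx];
              default: assert(!"register class not handled in this sketch");
            }
            return 0;
        }
    };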
|
|
Mimic the changes done on the architectural register indexes on the
physical register indexes. This is specific to the O3 model. The
structure, called PhysRegId, contains a register class, a register
index and a flat register index. The flat register index is kept
because it is useful in some cases where the type of register is not
important (the dependency graph and the scoreboard, for example). Instead
of using the structure directly, most of the code works with
a const PhysRegId* (typedef'd to PhysRegIdPtr). The actual PhysRegId
objects are stored in the regFile (a simplified sketch follows below).
Change-Id: Ic879a3cc608aa2f34e2168280faac1846de77667
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2701
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
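A rough sketch of the structure described above, with hypothetical member names (the actual gem5 PhysRegId carries additional state and accessors):

    enum class RegClass { Int, Float, Vec, CC, Misc };  // simplified stand-in

    // A physical register identifier: class, index within that class, and a
    // flat index across all classes, useful where the class is irrelevant
    // (e.g. the dependency graph or the scoreboard).
    struct PhysRegIdSketch {
        RegClass cls;
        unsigned idx;       // index within the register class
        unsigned flatIdx;   // flat index across the whole physical register file
    };

    // Most code would hold a pointer to an entry owned by the register file.
    using PhysRegIdPtrSketch = const PhysRegIdSketch *;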
|
|
Replace the unified register mapping with a structure associating
a class and an index. It is now much easier to know which class of
register the index is referring to. Also, when adding a new class
there is no need to modify existing ones.
Change-Id: I55b3ac80763702aa2cd3ed2cbff0a75ef7620373
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
[ Fix RISCV build issues ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/2700
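In the same spirit, a minimal illustration of the architectural register identifier described above (names are illustrative, not gem5's exact definition):

    enum class RegClass { Int, Float, Vec, CC, Misc };  // simplified stand-in

    // An architectural register is identified by (class, index) instead of a
    // single unified index, so adding a new class does not disturb the others.
    struct RegIdSketch {
        RegClass cls;
        unsigned idx;
    };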
|
|
Change-Id: Idd5992463bcf9154f823b82461070d1f1842cea3
Signed-off-by: Sean Wilson <spwilson2@wisc.edu>
Reviewed-on: https://gem5-review.googlesource.com/3746
Reviewed-by: Anthony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
|
|
If a (regular) store is followed closely enough by a locked load that
overlaps, the LSQ will forward the store's data to the locked load and
never tell the cache about the locked load. As a result, the cache will
not lock the address and all future store-conditional requests on that
address will fail. This patch fixes that by preventing forwarding if
the memory request is a locked load and adding another case to the LSQ
forwarding logic that delays the locked load request if a store in the
LSQ contains all or part of the data that is requested.
[Merge second and last if blocks because their bodies are the same.]
Change-Id: I895cc2b9570035267bdf6ae3fdc8a09049969841
Reviewed-on: https://gem5-review.googlesource.com/2400
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Maintainer: Jason Lowe-Power <jason@lowepower.com>
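A heavily simplified sketch of the decision described above, with stand-in types and names (the real O3 LSQ forwarding code is considerably more involved):

    // Possible outcomes when a load probes older stores in the store queue.
    enum class ForwardAction {
        NoMatch,    // no overlapping store: send the load to the cache
        Forward,    // overlap, plain load: forward data from the SQ entry
        DelayLoad   // overlap with a locked (LL) load: replay later so the
                    // cache actually sees the load and can lock the address
    };

    struct LoadReqSketch {
        bool isLLSC;      // load-link / locked request
        bool overlapsSQ;  // an older store in the SQ covers some or all bytes
    };

    ForwardAction
    decideForwarding(const LoadReqSketch &req)
    {
        if (!req.overlapsSQ)
            return ForwardAction::NoMatch;
        // Never satisfy a locked load from the store queue: the cache must
        // see it, otherwise the address is never locked and every later
        // store-conditional on it fails.
        if (req.isLLSC)
            return ForwardAction::DelayLoad;
        return ForwardAction::Forward;
    }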
|
|
simulations
Modifies the clone system call and adds execve system call. Requires allowing
processes to steal thread contexts from other processes in the same system
object and the ability to detach pieces of process state (such as MemState)
to allow dynamic sharing.
|
|
This changeset adds functionality that allows system calls to retry without
affecting thread context state such as the program counter or register values
for the associated thread context (when system calls return with a retry
fault).
This functionality is needed to solve problems with blocking system calls
in multi-process or multi-threaded simulations where information is passed
between processes/threads. Blocking system calls can cause deadlock because
the simulator itself is single threaded. There is only a single thread
servicing the event queue which can cause deadlock if the thread hits a
blocking system call instruction.
To illustrate the problem, consider two processes using the producer/consumer
sharing model. The processes can use file descriptors and the read and write
calls to pass information to one another. If the consumer calls the blocking
read system call before the producer has produced anything, the call will
block the event queue (while executing the system call instruction) and
deadlock the simulation.
The solution implemented in this changeset is to recognize that the system
calls will block and then generate a special retry fault. The fault will
be sent back up through the function call chain until it is exposed to the
cpu model's pipeline where the fault becomes visible. The fault will trigger
the cpu model to replay the instruction at a future tick where the call has
a chance to succeed without actually going into a blocking state.
In subsequent patches, we recognize that a syscall will block by calling a
non-blocking poll (from inside the system call implementation) and checking
for events. When events show up during the poll, it signifies that the call
would not have blocked and the syscall is allowed to proceed (calling an
underlying host system call if necessary). If no events are returned from the
poll, we generate the fault and try the instruction for the thread context
at a distant tick. Note that retrying every tick is not efficient.
As an aside, the simulator has some multi-threading support for the event
queue, but it is not used by default and needs work. Even if the event queue
was completely multi-threaded, meaning that there is a hardware thread on
the host servicing a single simulator thread context with a 1:1 mapping
between them, it's still possible to run into deadlock due to the event queue
barriers on quantum boundaries. Replaying at a later tick is the simplest
solution and solves the problem generally.
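A conceptual sketch of the retry scheme described above. The names (RetryFault, wouldBlock, emulatedRead) are invented for this sketch and are not gem5's actual syscall-emulation API; only the poll-then-retry idea comes from the message:

    #include <memory>
    #include <poll.h>

    // Invented stand-ins for this sketch only.
    struct Fault {};
    struct RetryFault : Fault {};        // tells the CPU to replay the syscall
    using FaultPtr = std::shared_ptr<Fault>;

    // Non-blocking check of whether reading fd would block right now.
    static bool
    wouldBlock(int fd)
    {
        struct pollfd pfd = { fd, POLLIN, 0 };
        return ::poll(&pfd, 1, 0) == 0;  // timeout 0: just peek, never block
    }

    // Sketch of a blocking syscall (e.g. read) under emulation.
    FaultPtr
    emulatedRead(int fd /*, buffer, size, thread context ... */)
    {
        if (wouldBlock(fd)) {
            // Do not block the single simulator thread: leave the PC and
            // registers untouched and return a fault that makes the pipeline
            // retry this syscall instruction at a later tick.
            return std::make_shared<RetryFault>();
        }
        // ... perform the host read and write the result back to the thread ...
        return nullptr;  // no fault: the syscall completed
    }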
|
|
|
|
The target of taken conditional direct branches does not
need to be resolved in IEW: the target can be computed at
decode, usually using the decoded instruction word and the PC.
The higher-than-necessary penalty is taken only on conditional
branches that are predicted taken but miss in the BTB. Thus,
this is mostly inconsequential on IPC if the BTB is big/associative
enough (fewer capacity/conflict misses). Nonetheless, what gem5
simulates is not representative of how conditional branch targets
can be handled.
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
cachePorts currently constrains the number of store packets written to the
D-Cache each cycle, but loads also count against this limit. This leads
to unexpected congestion (e.g., setting cachePorts to a realistic 1 will
in fact allow a store to WB only if no loads have accessed the D-Cache
this cycle). In the absence of arbitration, this patch decouples how many
loads can be done per cycle from how many stores can be done per cycle
(see the sketch below).
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
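A schematic of the before/after accounting, with invented variable names rather than the actual O3 LSQ members:

    struct PortUsageSketch {
        int cachePorts = 1;       // configured store write-back ports per cycle
        int storesThisCycle = 0;
        int loadsThisCycle = 0;

        // Before: loads and stores shared the same budget, so a single load
        // could consume the only "port" and block a store write-back.
        bool canWritebackStoreOld() const
        { return storesThisCycle + loadsThisCycle < cachePorts; }

        // After: loads no longer count against the store write-back limit.
        bool canWritebackStoreNew() const
        { return storesThisCycle < cachePorts; }
    };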
|
|
Modify the opClass assigned to AArch64 FP instructions from SimdFloat* to
Float*. Also create the FloatMemRead and FloatMemWrite opClasses, which
distinguish writes to the INT and FP register banks.
Change the latency of (Simd)FloatMultAcc to 5, based on the Cortex-A72,
where the "latency" of FMADD is 3 if the next instruction is a FMADD and
has only the augend to destination dependency, otherwise it's 7 cycles.
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
|
|
The drain did not wait until stages were ready again. Therefore, as a
result of messages in the TimeBuffer being drained, the state after the
drain was not consistent, and asserts fired in some places when the
draining happened after a stage got blocked but before the notification
arrived at the previous stages.
Change-Id: Ib50b3b40b7f745b62c1eba2931dec76860824c71
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
|
|
The quiesce family of magic ops can be simplified by the inclusion of
quiesceTick() and quiesce() functions on ThreadContext. This patch also
gets rid of the FS guards, since suspending a CPU is also a valid
operation for SE mode.
|
|
Add functionality to the BaseCPU that will put the entire CPU
into a low-power idle state whenever all threads in it are idle.
Change-Id: I984d1656eb0a4863c87ceacd773d2d10de5cfd2b
|
|
Fix an issue with regStats not calling the parent class method
for most SimObjects in gem5. This causes issues if one adds new
stats in the base class (since they are never initialized properly!).
The expected pattern is sketched below.
Change-Id: Iebc5aa66f58816ef4295dc8e48a357558d76a77c
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
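The pattern at issue, sketched with made-up class names (the point is only the explicit call to the parent's regStats):

    // Minimal stand-ins for the sketch.
    struct BaseSketch {
        virtual void regStats() { /* register base-class stats here */ }
        virtual ~BaseSketch() = default;
    };

    struct DerivedSketch : BaseSketch {
        void regStats() override
        {
            // The fix: always chain to the parent so stats added to the base
            // class are still initialised.
            BaseSketch::regStats();
            // ... register this object's own stats ...
        }
    };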
|
|
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.
This is a re-spin of 20264eb after the revert (bd1c6789) and includes
some fixes of that commit.
|
|
The following patches had unexpected interactions with the current
upstream code and have been reverted for now:
e07fd01651f3: power: Add support for power models
831c7f2f9e39: power: Low-power idle power state for idle CPUs
4f749e00b667: power: Add power states to ClockedObject
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
--HG--
extra : amend_source : 0b6fb073c6bbc24be533ec431eb51fbf1b269508
|
|
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.
|
|
Add functionality to the BaseCPU that will put the entire CPU into a low-power
idle state whenever all threads in it are idle.
|
|
fu_pool and inst_queue were using -1 for "no such FU" and -2 for "all those
FUs are busy at the moment" when requesting an FU and replying. This
patch introduces the new constants NoCapableFU and NoFreeFU, respectively.
In addition, the condition (idx == -2 || idx != -1) is equivalent to
(idx != -1), so this patch also simplifies that (see the sketch below).
--HG--
extra : rebase_source : 4833717b9d1e09d7594d1f34f882e13fc4b86846
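A small illustration of the constants and the simplified condition; the values match the message above, but the surrounding code is invented:

    // Named constants replacing the magic values -1 and -2.
    static const int NoCapableFU = -1;  // no FU can execute this op class
    static const int NoFreeFU    = -2;  // capable FUs exist, but all are busy

    static int getUnit() { return NoFreeFU; }  // stub for the FU pool lookup

    void issueSketch()
    {
        int idx = getUnit();
        // (idx == NoFreeFU || idx != NoCapableFU) is equivalent to
        // (idx != NoCapableFU), since NoFreeFU is itself != NoCapableFU.
        if (idx != NoCapableFU) {
            // either an FU was found, or all capable FUs are busy and we wait
        }
    }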
|
|
This changeset adds support for changing the simulator output
directory. This can be useful when the simulation goes through several
stages (e.g., a warming phase, a simulation phase, and a verification
phase) since it allows the output from each stage to be located in a
different directory. Relocation is done by calling core.setOutputDir()
from Python or simout.setOutputDirectory() from C++.
This change affects several parts of the design of the gem5's output
subsystem. First, files returned by an OutputDirectory instance (e.g.,
simout) are of the type OutputStream instead of a std::ostream. This
allows us to do some more bookkeeping and control the re-opening of files
when the output directory is changed. Second, new subdirectories are
OutputDirectory instances, which should be used to create files in
that sub-directory.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
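A hedged usage sketch for the C++ side. The method name setOutputDirectory() is taken from the message above; the header path, exact signature, and directory name are assumptions:

    // Assumed usage; header and signature are guesses based on the message.
    #include "base/output.hh"

    void relocateOutput()
    {
        // Redirect subsequently created output files (stats, traces, ...) to a
        // new directory, e.g. between a warm-up phase and the measured phase.
        simout.setOutputDirectory("m5out/measured-phase");
    }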
|
|
This patch adds assertions that enforce that only invalidating snoops
will ever reach into the logic that tracks in-order load completion and
also invalidation of LL/SC (and MONITOR / MWAIT) monitors. Also adds
some comments to MSHR::replaceUpgrades().
|
|
Writes to locked memory addresses (LLSC) did not wake up the locking
CPU. This can lead to deadlocks on multi-core runs. In AtomicSimpleCPU,
recvAtomicSnoop was checking if the incoming packet was an invalidation
(isInvalidate) and only then handled a locked snoop. But writes are
seen instead of invalidates when running without caches (fast-forward
configurations). As a simple fix, handleLockedSnoop is now also called
when the incoming snoop packet is a write (see the sketch below).
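A schematic version of the fixed check, using simplified stand-in types rather than the literal AtomicSimpleCPU code:

    struct PacketSketch {
        bool invalidate = false;
        bool write = false;
        bool isInvalidate() const { return invalidate; }
        bool isWrite() const { return write; }
    };

    static void handleLockedSnoopSketch(const PacketSketch &)
    { /* wake up the LL/SC monitor for this address */ }

    void recvAtomicSnoopSketch(const PacketSketch &pkt)
    {
        // Before the fix only pkt.isInvalidate() triggered the locked-address
        // check. Without caches the snoop arrives as a write, so treat writes
        // the same way and wake up a CPU spinning on an LL/SC address.
        if (pkt.isInvalidate() || pkt.isWrite())
            handleLockedSnoopSketch(pkt);
    }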
|
|
This patch changes how the cache determines if snoops should be
forwarded from the memory side to the CPU side. Instead of having a
parameter, the cache now looks at the port connected on the CPU side,
and if it is a snooping port, then snoops are forwarded. This is less
error prone, and there are fewer parameters to worry about.
The patch also tidies up the CPU classes to ensure that their I-side
port is not snooping by removing overrides of the snoop request
handler, such that snoop requests will panic via the default
MasterPort implementation.
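Conceptually, the cache now derives the setting from the connected port rather than from a parameter; a simplified sketch with stand-in types:

    struct CpuSidePortSketch {
        bool snooping = false;
        bool isSnooping() const { return snooping; }
    };

    struct CacheSketch {
        const CpuSidePortSketch *cpuSidePort = nullptr;
        // Instead of a forward-snoops parameter, ask the connected port.
        bool forwardSnoops() const
        { return cpuSidePort && cpuSidePort->isSnooping(); }
    };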
|
|
Result of running 'hg m5style --skip-all --fix-control -a'.
|
|
Result of running 'hg m5style --skip-all --fix-white -a'.
|
|
The read() function merely initiates a memory read operation; the
data doesn't arrive until the access completes and a response packet
is received from the memory system. Thus there's no need to provide
a data pointer; its existence is historical.
Getting this pointer out of this internal o3 interface sets the
stage for similar cleanup in the ExecContext interface. Also
found that we were pointlessly setting the contents at this pointer
on a store forward (the useful memcpy happens just a few lines
below the deleted one).
|
|
This patch changes the name of a bunch of packet flags and MSHR member
functions and variables to make the coherency protocol easier to
understand. In addition the patch adds and updates lots of
descriptions, explicitly spelling out assumptions.
The following name changes are made:
* the packet memInhibit flag is renamed to cacheResponding
* the packet sharedAsserted flag is renamed to hasSharers
* the packet NeedsExclusive attribute is renamed to NeedsWritable
* the packet isSupplyExclusive is renamed responderHadWritable
* the MSHR pendingDirty is renamed to pendingModified
The cache states, Modified, Owned, Exclusive, Shared are also called
out in the cache and MSHR code to make it easier to understand.
|
|
This patch adds support to optionally capture the virtual address and asid
for load/store instructions in the elastic traces. If they are present in
the traces, Trace CPU will set those fields of the request during replay.
|
|
This patch replaces the booleans that specified the elastic trace record
type with an enum type. The source of change is the proto message for
elastic trace where the enum is introduced. The struct definitions in the
elastic trace probe listener as well as the Trace CPU replace the booleans
with the proto message enum.
The patch does not impact functionality, but traces are not compatible with
the previous version. This is preparation for adding new types of records in
subsequent patches.
|
|
The elastic trace is a type of probe listener and listens to probe points
in multiple stages of the O3CPU. The notify method is called on a probe
point typically when an instruction successfully progresses through that
stage.
As different listener methods mapped to the different probe points execute,
relevant information about the instruction, e.g. timestamps and register
accesses, are captured and stored in temporary InstExecInfo class objects.
When the instruction progresses through the commit stage, the timing and the
dependency information about the instruction is finalised and encapsulated in
a struct called TraceInfo. TraceInfo objects are collected in a list instead
of writing them out to the trace file one at a time. This is required as the
trace is processed in chunks to evaluate order dependencies and computational
delay in case an instruction does not have any register dependencies. By this
we achieve a simpler algorithm during replay because every record in the
trace can be hooked onto a record in its past. The instruction dependency
trace is written out as a protobuf format file. A second trace containing
fetch requests at absolute timestamps is written to a separate protobuf
format file.
If the instruction is not executed then it is not added to the trace.
The code checks if the instruction had a fault, if it was predicated
false and thus previous register values were restored, or if it was a
load/store that did not have a request (e.g. when the size of the
request is zero). In all these cases the instruction is set as
executed by the Execute stage and is picked up by the commit probe
listener. But a request is not issued and registers are not written.
So practically, skipping these should not hurt the dependency modelling.
If squashing results in squashing younger instructions, it may happen that
the squash probe discards the inst and removes it from the temporary
store, but the execute stage deals with the instruction in the next cycle,
which results in the execute probe seeing this inst as a 'new' inst. The
sequence number of the last processed trace record is used to trap these
cases and avoid adding them to the temporary store.
The elastic instruction trace and fetch request trace can be read in and
played back by the TraceCPU.
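A rough sketch of the two record types described above, with hypothetical fields; the real gem5 structures and the protobuf schema contain more detail:

    #include <cstdint>
    #include <list>
    #include <set>
    #include <vector>

    // Per-instruction scratch data filled in as probe notifications arrive.
    struct InstExecInfoSketch {
        uint64_t executeTick = 0;
        uint64_t toCommitTick = 0;
        std::set<unsigned> physRegDeps;  // producers of this inst's sources
    };

    // Finalised at commit and buffered before being written out (as protobuf
    // in the real implementation).
    struct TraceInfoSketch {
        uint64_t seqNum = 0;
        uint64_t commitTick = 0;
        uint64_t compDelay = 0;          // computational delay vs. parent record
        std::vector<uint64_t> robDeps;   // order (in-flight window) dependencies
        std::vector<uint64_t> regDeps;   // register dependencies
    };

    // Records are kept in a list and processed in chunks so each record can
    // be hooked onto one already in its past.
    std::list<TraceInfoSketch> traceRecords;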
|
|
This patch adds probe points in Fetch, IEW, Rename and Commit stages as follows.
A probe point is added in the Fetch stage for probing when a fetch request is
sent. Notify is fired on the probe point when a request is sent successfully in
the first attempt as well as on a retry attempt.
Probe points are added in the IEW stage when an instruction begins to execute
and when execution is complete. These points can be used for monitoring the
execution time of an instruction.
Probe points are added in the Rename stage to probe renaming of source and
destination registers and when there is squashing. These probe points can be
used to track register dependencies and remove them when there is squashing.
A probe point for squashing is added in Commit to probe squashed instructions.
|
|
The assert in lsq_unit_impl.hh line 963 needs pktPending to be initialized to
NULL (I got the assertion failure several times without the fix).
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
|
|
Note that the method is not used, and could possibly be deleted.
|
|
|
|
This patch adds explicit overrides as this is now required when using
"-Wall" with clang >= 3.5, the latter now part of the most recent
XCode. The patch consequently removes "virtual" for those methods
where "override" is added. The latter should be enough of an
indication.
As part of this patch, a few minor issues that clang >= 3.5 complains
about are also resolved (unused methods and variables).
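An illustration of the pattern applied throughout (a generic example, not a specific gem5 class):

    struct Base {
        virtual void process() {}
        virtual ~Base() = default;
    };

    struct Derived : Base {
        // Before: virtual void process();
        // After: 'override' documents the intent and lets the compiler check
        // it, so the redundant 'virtual' keyword is dropped.
        void process() override {}
    };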
|
|
This patch moves away from using M5_ATTR_OVERRIDE and the m5::hashmap
(and similar) abstractions, as these are no longer needed with gcc 4.7
and clang 3.1 as minimum compiler versions.
|
|
The decoder is responsible for splitting instructions into micro
operations (uops). Given that different microarchitectures may split
operations differently, this patch allows specifying which
microarchitecture each ISA implements, so different cores in the system
can split instructions differently, also decoupling uop splitting
(microArch) from the ISA (Arch). This is done by making the decode
calls templates that receive a type 'DecoderFlavour' which maps the
name of the operation to the class that implements it (see the sketch
below). This way there is only one selection point (converting the
command line enum to the appropriate DecodeFeatures object). In
addition, there is no explicit code replication: template instantiation
hides that, and the compiler should be able to resolve a number of
things at compile time.
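A minimal sketch of the idea; the flavour types, the operation name, and the plumbing are invented for illustration and differ from gem5's actual decoder flavours:

    // Invented flavour types; each maps a micro-op "name" to the class
    // implementing it for that micro-architecture.
    struct GenericFlavour {
        struct MemCopyUop { /* one narrow access per micro-op */ };
    };

    struct WideFlavour {
        struct MemCopyUop { /* wider accesses, fewer micro-ops */ };
    };

    // The decode path is templated on the flavour, so each core can be built
    // with the uop-splitting style of its micro-architecture while sharing
    // the same ISA-level decode code.
    template <class DecoderFlavour>
    void decodeMemCopySketch(/* instruction word, emission context ... */)
    {
        using Uop = typename DecoderFlavour::MemCopyUop;
        Uop uop;      // emit flavour-specific micro-ops here
        (void)uop;
    }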
|