Age | Commit message | Author |
|
|
|
Currently the sequencer calls the function setMRU, which updates the replacement
policy structures of the first level caches. While this is functionally correct,
it requires calling findTagInSet(), which is an expensive function. This patch
removes the calls to setMRU from the sequencer. All controllers should now update
the replacement policy on their own.
The set and way indices for a given cache entry can be found within the
AbstractCacheEntry structure. Use these indices to update the replacement
policy structures.
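A minimal standalone sketch of the idea (the types and names below are simplified stand-ins, not the actual gem5 classes): because an entry already records its own set and way, touching the replacement policy no longer needs a tag search.
#include <cstdint>
#include <vector>

struct EntryInfo { std::int64_t set; std::int64_t way; };   // stand-in for the set/way stored in the entry

struct LRUState {
    // Assumed to be sized to [numSets][numWays] before use.
    std::vector<std::vector<std::uint64_t>> lastTouch;
    void touch(std::int64_t set, std::int64_t way, std::uint64_t now) { lastTouch[set][way] = now; }
};

// Instead of findTagInSet(addr) to recover (set, way), use the indices the entry carries.
inline void setMRU(LRUState &lru, const EntryInfo &e, std::uint64_t now)
{
    lru.touch(e.set, e.way, now);
}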
|
|
|
|
These types are being replaced with uint64_t and int64_t.
|
|
Before this patch, one could declare / define a function with default argument
values, but the actual function call still required one to specify all the
arguments. This patch changes the check for function arguments.
Now a function call needs to supply at least as many arguments as there are
parameters without default values, and at most the total number of arguments
taken as input by the function.
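In other words, the rule amounts to the following check (a generic sketch; the real check lives in SLICC's Python compiler, and the names here are illustrative only).
#include <cstddef>

// Valid iff: (total params - params with defaults) <= given <= total params.
bool argCountValid(std::size_t given, std::size_t totalParams, std::size_t paramsWithDefaults)
{
    std::size_t required = totalParams - paramsWithDefaults;   // parameters without default values
    return given >= required && given <= totalParams;
}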
|
|
Both FuncCallExprAST and MethodCallExprAST had code for checking the arguments
with which a function is being called. The patch does away with this
duplication. Now the code for checking function call arguments resides in the
Func class.
|
|
This is in preparation for adding a second argument to the lookup
function for the CacheMemory class. The change to *.sm files was made using
the following sed command:
sed -i 's/\[\([0-9A-Za-z._()]*\)\]/.lookup(\1)/' src/mem/protocol/*.sm
|
|
The sequencer takes care of LL/SC accesses by calling functions provided by
CacheMemory. This is unnecessary once the required CacheEntry object is
available; with it, some of the calls to findTagInSet() are avoided.
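A standalone sketch of the shortcut (simplified stand-ins for the real gem5 classes): once the sequencer holds a pointer to the entry, the LL/SC lock state can be kept on the entry itself instead of being looked up through the cache by address.
struct CacheEntryStub {
    int lockedBy = -1;                                    // context holding the LL/SC reservation
    void setLocked(int ctx) { lockedBy = ctx; }
    bool isLocked(int ctx) const { return lockedBy == ctx; }
    void clearLocked() { lockedBy = -1; }
};

// With the entry pointer in hand there is no need for findTagInSet(addr) followed by a
// cache-level setLocked(addr, ctx); the entry is updated directly.
inline void reserveLine(CacheEntryStub *entry, int ctx) { entry->setLocked(ctx); }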
|
|
This patch eliminates the type Address defined by the ruby memory system. The
memory system now uses the type Addr that is in use by the rest of the system.
|
|
Avoid clash between type Addr and variable name Addr.
|
|
|
|
Expose MessageBuffers from SLICC controllers as SimObjects that can be
manipulated in Python. This patch has numerous benefits:
1) First and foremost, it exposes MessageBuffers as SimObjects that can be
manipulated in Python code. This allows parameters to be set and checked in
Python code to avoid obfuscating parameters within protocol files. Further, now
as SimObjects, MessageBuffer parameters are printed to config output files as a
way to track parameters across simulations (e.g. buffer sizes).
2) Cleans up special-case code for responseFromMemory buffers, and aligns their
instantiation and use with mandatoryQueue buffers. These two special buffers
are the only MessageBuffers that are exposed to components outside of SLICC
controllers, and they're both slave ends of these buffers. They should be
exposed outside of SLICC in the same way, and this patch does it.
3) Distinguishes buffer-specific parameters from buffer-to-network parameters.
Specifically, buffer size, randomization, ordering, recycle latency, and ports
are all specific to a MessageBuffer, while the virtual network ID and type are
intrinsics of how the buffer is connected to network ports. The former are
specified in the Python object, while the latter are specified in the
controller *.sm files. Unlike buffer-specific parameters, which may need to
change depending on the simulated system structure, buffer-to-network
parameters can be specified statically for most or all different simulated
systems.
|
|
CacheMemory and DirectoryMemory lookup functions return pointers to entries
stored in the memory. Bring PerfectCacheMemory in line with this convention,
and clean up SLICC code generation that was in place solely to handle
references like the one returned by PerfectCacheMemory::lookup.
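A minimal sketch of that convention (a simplified stand-in, not the real class): lookup returns a pointer, so a miss can be signalled with nullptr instead of having to produce a valid reference.
#include <cstdint>
#include <unordered_map>

template <typename ENTRY>
struct PerfectCacheStub {
    std::unordered_map<std::uint64_t, ENTRY> table;

    // Return a pointer to the stored entry, or nullptr on a miss, matching the
    // CacheMemory / DirectoryMemory convention described above.
    ENTRY *lookup(std::uint64_t addr)
    {
        auto it = table.find(addr);
        return it == table.end() ? nullptr : &it->second;
    }
};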
|
|
The RubyCache (CacheMemory) latency parameter is only used for top-level caches
instantiated for Ruby coherence protocols. However, the top-level cache hit
latency is assessed by the Sequencer as accesses flow through to the cache
hierarchy. Further, protocol state machines should be enforcing these cache hit
latencies, but RubyCaches do not expose their latency to any existing state
machines through the SLICC/C++ interface. Thus, the RubyCache latency parameter
is superfluous for all caches. This is confusing for users.
As a step toward pushing L0/L1 cache hit latency into the top-level cache
controllers, move their latencies out of the RubyCache declarations and over to
their Sequencers. Eventually, these Sequencer parameters should be exposed as
parameters to the top-level cache controllers, which should assess the latency.
NOTE: Assessing these latencies in the cache controllers will require modifying
each to eliminate instantaneous Ruby hit callbacks in transitions that finish
accesses, which is likely a large undertaking.
|
|
|
|
The Packet::get() and Packet::set() methods both have very strange
semantics. Currently, they automatically convert between the guest
system's endianness and the host system's endianness. This behavior is
usually undesired and unexpected.
This patch introduces three new method pairs to access data:
* getLE() / setLE() - Get data stored as little endian.
* getBE() / setBE() - Get data stored as big endian.
* get(ByteOrder) / set(v, ByteOrder) - Configurable endianness
For example, a little endian device that is receiving a write request
will use the getLE() method to get the data from the packet.
The old interface will be deprecated once all existing devices have
been ported to the new interface.
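An illustrative usage fragment (ExampleLEDevice and regData are made-up names; the fragment assumes gem5's PacketPtr, isRead() and isWrite(), and the accessors introduced above):
// Little endian device handling a write and a read (illustrative only).
void ExampleLEDevice::access(PacketPtr pkt)
{
    if (pkt->isWrite()) {
        regData = pkt->getLE<uint32_t>();   // interpret the payload explicitly as little endian
    } else if (pkt->isRead()) {
        pkt->setLE<uint32_t>(regData);      // store the response explicitly as little endian
    }
}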
|
|
Context IDs used to be declared ad hoc (usually as int). This
changeset introduces a typedef for ContextIDs and a constant for
invalid context IDs.
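Roughly what this boils down to (reconstructed from the description above rather than copied from the header):
typedef int ContextID;                    // one named type for all thread context IDs
const ContextID InvalidContextID = -1;    // sentinel for "no context"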
|
|
This patch removes the extraneous flags and attributes from the
request and packet, and simply leaves the new commands. The change
introduced when adding acquire/release breaks all compatibility with
existing traces, and there is really no need for any new flags and
attributes. The commands should be sufficient.
This patch fixes packet tracing (urgent), and also removes the
unnecessary complexity.
|
|
|
|
This changeset moves the access trace functionality from the
CommMonitor into a separate probe. The probe can be hooked up to any
component that exports probe points of the type ProbePoints::Packet.
This patch moves the dependency on Google's Protocol Buffers library
from the CommMonitor to the MemTraceProbe, which means that the
CommMonitor (including stack distance profiling) no longer depends on
it.
|
|
This changeset removes the stack distance calculator hooks from the
CommMonitor class and implements a stack distance calculator as a
memory system probe instead. The probe can be hooked up to any
component that exports probe points of the type ProbePoints::Packet.
|
|
This changeset adds a standardized probe point type to monitor packets
in the memory system and adds two probe points to the CommMonitor
class. These probe points enable monitoring of successfully delivered
requests and successfully delivered responses.
Memory system probe listeners should use the BaseMemProbe base class
to provide a unified configuration interface and reuse listener
registration code. Unlike the ProbeListenerObject class, the
BaseMemProbe allows objects to be wired to multiple ProbeManager
instances as long as they use the same probe point name.
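A standalone sketch of the pattern (simplified stand-ins, not gem5's actual probe classes): a probe point notifies every attached listener, so a monitor can expose one point for delivered requests and one for delivered responses and let probes subscribe to either.
#include <functional>
#include <utility>
#include <vector>

struct PacketInfoStub { unsigned long long addr; unsigned size; };   // stand-in for the probed packet data

struct PacketProbePoint {
    std::vector<std::function<void(const PacketInfoStub &)>> listeners;
    void connect(std::function<void(const PacketInfoStub &)> l) { listeners.push_back(std::move(l)); }
    void notify(const PacketInfoStub &info) { for (auto &l : listeners) l(info); }
};

// A CommMonitor-like component owns two such points and fires them when a request
// or a response is successfully delivered.
struct MonitorStub {
    PacketProbePoint ppPktReq;    // successfully delivered requests
    PacketProbePoint ppPktResp;   // successfully delivered responses
};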
|
|
There are 2 problems with the existing checkpoint and restore code in ruby.
The first is that when the event queue is altered by ruby during serialization,
some events that are currently scheduled cannot be found (e.g. the event to
stop simulation that always lives on the queue), causing a panic.
The second is that ruby is sometimes serialized after the memory system,
meaning that the dirty data in its cache is flushed back to memory too late
and so isn't included in the checkpoint.
These are fixed by implementing memory writeback in ruby, using the same
technique of hijacking the event queue, but first descheduling all events that
are currently on it. They are saved, along with their scheduled time, so that
the event queue can be faithfully reconstructed after writeback has finished.
Events with the AutoDelete flag set will delete themselves when they
are descheduled, causing an error when attempting to schedule them again.
This is fixed by simply not recording them when taking them off the queue.
Writeback is still implemented using flushing, so the cache recorder object,
that is created to generate the trace and manage flushing, is kept
around and used during serialization to write the trace to disk.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
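A standalone sketch of the descheduling step (the event and queue types are simplified stand-ins, not gem5's EventQueue API):
#include <queue>
#include <utility>
#include <vector>

struct EventStub { unsigned long long when; bool autoDelete; };   // stand-in for a scheduled event

// Pull every event off the queue, remember its scheduled time so the queue can be
// rebuilt after writeback, and skip AutoDelete events, which free themselves when
// descheduled and must not be rescheduled.
std::vector<std::pair<EventStub *, unsigned long long>>
drainForWriteback(std::queue<EventStub *> &eventq)
{
    std::vector<std::pair<EventStub *, unsigned long long>> saved;
    while (!eventq.empty()) {
        EventStub *ev = eventq.front();
        eventq.pop();                            // "deschedule"
        if (ev->autoDelete) {
            delete ev;                           // an AutoDelete event deletes itself
            continue;                            // not recorded, so it is never rescheduled
        }
        saved.emplace_back(ev, ev->when);        // remember the original time
    }
    return saved;
}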
|
|
1. Eliminate state NP in the L0 and L1 caches: The two states 'NP' and 'I' both
mean that the cache block is not present in the cache. 'I' also means that the
cache entry has been allocated. This causes problems when we do not correctly
initialize the cache entry when it is re-used. Hence, this patch eliminates
the state NP altogether. Every time a new block comes into the cache, a cache
entry is allocated. Every time a block leaves, the corresponding entry is
deallocated.
2. Separate transient state for instruction fetches: purely for accounting
purposes.
3. Drop state IS_I in the L1 cache and the message type STALE_DATA: when an
invalidation is received for a block in IS, the block used to be moved to IS_I.
This meant that the data that would arrive in the future would be used but not
stored, since the controller lost the permissions after gaining them. This
state is being dropped, and invalidation messages are now not processed until
the data has arrived. This also means that the STALE_DATA type is no longer
required.
|
|
The level 2 controller has a bug. In one particular action, the data block was
copied from a message irrespective of whether the block is dirty or not. In
cases where L1 sends no data, the copied data value was incorrect.
|
|
It is perfectly valid to compare a message with itself, and the greater-than
operator should work correctly in that case.
|
|
Added DPRINTFs and asserts for identifying stall and wait bugs.
|
|
|
|
For many years the slicc symbol table has supported overloaded functions in
external classes. This patch extends that support to functions that are not
part of any class (i.e. those with no parent). For example, this support allows
slicc to understand that mapAddressToRange is overloaded and that the NodeID is
an optional parameter.
|
|
This patch changes the router pipeline stages from 4 to 2. The canonical
4-stage router is conservative, while a lower-latency router with lookahead
routing and speculative allocation is well established.
|
|
Sets m_stage.second to the second parameter of the function.
Then, for every place where advance_stage is called, adds
a cycle to the argument being passed.
|
|
Adds features to allow protocols to reschedule controllers when conditionally
stalling within inport logic or actions. Also ensures that resource and
protocol stalls are re-evaluated the next cycle.
|
|
This patch adds support that allows the replacement policy to identify each
cache block's access permission. This information can be useful when making
replacement decisions.
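A standalone sketch of how a policy might use that information (the permission values and the policy itself are illustrative, not the actual gem5 replacement policies): a victim search can prefer blocks that hold no useful data before falling back to the least recently used way.
#include <cstddef>
#include <cstdint>
#include <vector>

enum class AccessPermission { NotPresent, Invalid, ReadOnly, ReadWrite };

struct WayState { AccessPermission perm; std::uint64_t lastTouch; };

// Prefer ways that hold no useful data; otherwise fall back to the LRU way.
int pickVictim(const std::vector<WayState> &set)
{
    int victim = 0;
    for (std::size_t w = 0; w < set.size(); ++w) {
        if (set[w].perm == AccessPermission::NotPresent ||
            set[w].perm == AccessPermission::Invalid)
            return static_cast<int>(w);
        if (set[w].lastTouch < set[victim].lastTouch)
            victim = static_cast<int>(w);
    }
    return victim;
}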
|
|
|
|
The Ruby banked array resource checks (initiated from SLICC) did a check and
allocate at the same time. If a transition needs more than one resource, then
it might check/allocate resource #1, then fail to get resource #2. Another
transition might then try to get the same resources, but in reverse order.
Deadlock.
This patch separates resource checking and resource reservation into two
steps to avoid deadlock.
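A standalone sketch of the two-step pattern (simplified stand-ins, not the real BankedArray interface): first check every resource the transition needs, and only when all checks pass reserve them, so nothing is held while waiting for something else.
#include <cstddef>
#include <utility>
#include <vector>

struct BankedArrayStub {
    std::vector<bool> busy;                                        // assumed sized to the number of banks
    bool tryAccess(std::size_t idx) const { return !busy[idx]; }   // step 1: check only
    void reserve(std::size_t idx) { busy[idx] = true; }            // step 2: reserve afterwards
};

// Check every resource a transition needs before reserving any of them, so a
// transition never holds resource #1 while failing to get resource #2.
bool checkAndReserve(std::vector<std::pair<BankedArrayStub *, std::size_t>> &needs)
{
    for (auto &n : needs)
        if (!n.first->tryAccess(n.second))
            return false;              // nothing reserved, nothing to roll back
    for (auto &n : needs)
        n.first->reserve(n.second);
    return true;
}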
|
|
It was previously possible for a stalled message to be reordered after an
incoming message. This patch ensures that any stalled message stays in its
original request order.
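A standalone sketch of the intended ordering (containers and names are simplified stand-ins for the real MessageBuffer and stall map): when stalled messages are woken up, they go ahead of anything that arrived while they were stalled, in their original order.
#include <deque>
#include <string>
#include <vector>

// Re-insert previously stalled messages at the front of the queue, keeping their
// original relative order, so they are handled before messages that arrived later.
void wakeUpStalled(std::deque<std::string> &queue, std::vector<std::string> &stalled)
{
    for (auto it = stalled.rbegin(); it != stalled.rend(); ++it)
        queue.push_front(*it);
    stalled.clear();
}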
|
|
Add support for acquire and release requests. These synchronization operations
are commonly supported by several modern instruction sets.
|
|
|
|
This patch adds a few helpful functions that allow .sm files to directly
invalidate all cache blocks using a trigger queue rather than relying on each
individual cache block being invalidated via requests from the mandatory
queue.
|
|
This patch allows DPRINTFs to be used in SLICC state machines similar to how
they are used by the rest of gem5. Previously all DPRINTFs in the .sm files
had to use the RubySlicc flag.
|
|
|
|
This is in preparation for other replacement policies that take additional
parameters.
|
|
This patch exposes the tag and data array latencies to the SLICC state machines
so that they can be used to determine the correct enqueue latency for response
messages.
|
|
To have multiple Entry types (e.g., a cache Entry type and
a directory Entry type), just declare one of them as a secondary
type by using the pair 'main="false"', e.g.:
structure(DirEntry, desc="...", interface="AbstractCacheEntry",
main="false") {
...and the primary type would be declared:
structure(Entry, desc="...", interface="AbstractCacheEntry") {
|
|
These were not generating the correct C names for types declared within a
machine scope.
|
|
|
|
This patch fixes the type handling when prefix operations are used. Previously
prefix operators would assume a void return type, which made it impossible to
combine prefix operations with other expressions. This patch allows SLICC
programmers to use prefix operations more naturally.
|
|
This patch adds support for transitions of the form:
transition(START, EVENTS, *) { ACTIONS }
This allows a machine to collapse states that differ only in the next-state
transition into one, and can help shorten/simplify some protocols
significantly.
When * is encountered as an end state of a transition, the next state is
determined by calling the machine-specific getNextState function. The next
state is determined before any actions of the transition execute, and
therefore the next state calculation cannot depend on any of the transition
actions.
|
|
This patch allows SLICC protocols to use more than one message type with a
message buffer. For example, you can declare two in ports as such:
in_port(ResponseQueue_in, ResponseMsg, responseFromDir, rank=3) { ... }
in_port(tgtResponseQueue_in, TgtResponseMsg, responseFromDir, rank=2) { ... }
|
|
|