Age | Commit message | Author |
|
|
This patch ensures that only aligned accesses are passed to Ruby and includes
a fix to the DPRINTF address print.
|
|
Separate data VCs and ctrl VCs in garnet, as ctrl VCs have 1 buffer per VC,
while data VCs have more than 1 buffer per VC. This separation is needed for
correct power estimation.
|
|
The purpose of this patch is to change the way CacheMemory interfaces with
coherence protocols. Currently, whenever a cache controller (defined in the
protocol under consideration) needs to carry out any operation on a cache
block, it looks up the tag hash map and figures out whether or not the block
exists in the cache. If it does exist, the operation is carried out (which
requires another lookup). As observed through profiling of different
protocols, multiple such lookups take place for a given cache block, and the
tag lookup accounts for anywhere from 10% to 20% of the simulation time. This
patch is aimed at reducing that time.
I have to acknowledge that many of the ideas that went into this patch
belong to Brad.
Changes to CacheMemory, TBETable and AbstractCacheEntry classes:
1. The lookup function belonging to the CacheMemory class now returns a
pointer to a cache block entry instead of a reference. The pointer is NULL
if the block being looked up is not present in the cache (a sketch follows
at the end of this message). A similar change has been made to the lookup
function of the TBETable class.
2. The functions for setting and getting the access permission of a cache
block have been moved from the CacheMemory class to the AbstractCacheEntry
class.
3. The allocate function in the CacheMemory class now returns a pointer to
the allocated cache entry.
Changes to SLICC:
1. Each action now has implicit variables - cache_entry and tbe. cache_entry,
if != NULL, must point to the cache entry for the address on which the action
is being carried out. Similarly, tbe must point to the transaction buffer
entry for that address.
2. If a cache entry or a transaction buffer entry is passed as an argument
to a function, it is presumed that a pointer is being passed.
3. The cache entry and tbe pointers received __implicitly__ by the actions
are passed __explicitly__ to the trigger function.
4. While performing an action, set/unset_cache_entry and set/unset_tbe are to
be used for setting / unsetting the cache entry and tbe pointers respectively.
5. is_valid() and is_invalid() have been made available for testing whether
a given pointer 'is not NULL' and 'is NULL' respectively.
6. Local variables are now available, but they are always assumed to be
pointers.
7. It is now possible for an object of a derived class to make calls to a
function defined in the interface.
8. An OOD token has been introduced in SLICC. It is the same as the NULL
token used in C/C++. If you are wondering, OOD stands for Out Of Domain.
9. static_cast can now take an optional parameter that asks for casting the
given variable to a pointer of the given type.
10. Functions can be annotated with 'return_by_pointer=yes' to return a
pointer.
11. StateMachine has two new variables, EntryType and TBEType. EntryType is
set to the type that inherits from 'AbstractCacheEntry'. There can be only
one such type in a machine. TBEType is set to the type for which 'TBE' is
used as the name.
All the protocols have been modified to conform to the new interface.
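Below is a minimal, self-contained sketch of the new lookup contract (toy
names, not the actual Ruby classes): a single probe answers both "is the
block present?" and "give me the entry", with NULL on a miss.

    #include <cassert>
    #include <cstdint>
    #include <unordered_map>

    struct Entry { int permission = 0; };  // stands in for AbstractCacheEntry

    struct Cache {                         // stands in for CacheMemory
        std::unordered_map<uint64_t, Entry> tags;
        Entry *lookup(uint64_t addr) {     // returns NULL on a miss
            auto it = tags.find(addr);
            return it == tags.end() ? nullptr : &it->second;
        }
        Entry *allocate(uint64_t addr) { return &tags[addr]; }
    };

    int main() {
        Cache c;
        Entry *e = c.lookup(0x40);
        if (e == nullptr)          // the SLICC-level is_invalid() test
            e = c.allocate(0x40);  // allocate() also returns a pointer now
        e->permission = 1;         // permissions now live on the entry itself
        assert(c.lookup(0x40) != nullptr);
        return 0;
    }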
|
|
The current implementation of the MESI CMP directory protocol is broken.
This patch, from Arkaprava Basu, fixes the protocol.
|
|
Get rid of the Debug class
Get rid of ASSERT and use assert
Use DPRINTFR for ProtocolTrace
|
|
This step makes it easy to replace the accessor functions
(which still access a global variable) with ones that access
per-thread curTick values.
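A minimal sketch of the accessor indirection this enables (names are
illustrative): callers go through curTick(), so the backing storage can later
become per-thread without touching call sites.

    #include <cstdint>

    typedef uint64_t Tick;

    namespace {
        Tick _curTick = 0;               // today: one global variable
        // thread_local Tick _curTick;   // later: a per-thread value
    }

    inline Tick curTick() { return _curTick; }  // call sites never change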
|
|
This patch changes the manner in which data is copied from the L1 to the L2
cache in the implementation of the Hammer cache coherence protocol. Earlier,
data was copied directly from one cache entry to another. This has been
broken into two steps. First, the data is copied from the source cache entry
to a transaction buffer entry. Then, the data is copied from the transaction
buffer entry to the destination cache entry.
This has been done to maintain the invariant that, at any given instant, the
multiple caches under a controller are exclusive with respect to each other.
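A toy model of the two-step copy (names are illustrative): the block passes
through the transaction buffer, so the source entry can be deallocated before
the destination entry is populated.

    #include <array>
    #include <cassert>

    typedef std::array<unsigned char, 64> DataBlock;

    struct TBE { DataBlock dataBlk; };   // transaction buffer entry

    int main() {
        DataBlock l1Entry = {}; l1Entry[0] = 0xab;
        TBE tbe;
        tbe.dataBlk = l1Entry;           // step 1: source entry -> TBE
        // ... the L1 entry can be deallocated here; the block lives only
        // in the TBE, so the caches stay mutually exclusive ...
        DataBlock l2Entry = tbe.dataBlk; // step 2: TBE -> destination entry
        assert(l2Entry[0] == 0xab);
        return 0;
    }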
|
|
Ran all the source files through 'perl -pi' with this script:
s|\s*(};?\s*)?/\*\s*(end\s*)?namespace\s*(\S+)\s*\*/(\s*})?|} // namespace $3|;
s|\s*};?\s*//\s*(end\s*)?namespace\s*(\S+)\s*|} // namespace $2\n|;
s|\s*};?\s*//\s*(\S+)\s*namespace\s*|} // namespace $1\n|;
Also did a little manual editing on some of the arch/*/isa_traits.hh files
and src/SConscript.
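For illustration, the script canonicalizes trailing namespace comments; the
namespace name below is just an example:

    // before (one of several variants matched by the script):
    //     };  /* end namespace TheISA */
    // after:
    //     }  // namespace TheISA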
|
|
Two functions in src/mem/ruby/system/PerfectCacheMemory.hh, tryCacheAccess()
and cacheProbe(), end with calls to panic(). Both of these functions have
non-void return types, and any file that includes this header fails to
compile because of the missing return statements. This patch adds dummy
return values to avoid the compiler warnings.
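A sketch of the pattern (the panic() stand-in below is illustrative, not
gem5's actual implementation):

    #include <cstdio>
    #include <cstdlib>

    // Stand-in for panic(): aborts, but the compiler cannot see that the
    // function never returns, so callers still need a return statement.
    static void panic(const char *msg) {
        std::fprintf(stderr, "panic: %s\n", msg);
        std::abort();
    }

    static bool tryCacheAccess() {
        panic("tryCacheAccess not implemented");
        return false;  // dummy value; never reached, silences the warning
    }

(An alternative would be declaring panic() noreturn via a compiler
attribute, which makes the dummy values unnecessary.)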
|
|
file. These statements have been replaced with warn(), panic() and fatal() defined in src/base/misc.hh
|
|
This patch changes the way ASSERT is handled in Ruby. m5.fast compiles out
assert statements by defining the macro NDEBUG, while Ruby used its own
macro, RUBY_NO_ASSERT, to do so. That macro has been removed and NDEBUG is
used in its place.
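A minimal example of the standard mechanism this switches to:

    #include <cassert>

    int divide(int a, int b) {
        assert(b != 0);  // compiled out when NDEBUG is defined (m5.fast),
                         // active in debug builds
        return a / b;
    }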
|
|
This patch developed by Nilay Vaish converts all the old GEMS-style ruby
debug calls to the appropriate M5 debug calls.
|
|
This change removes some dead code in PhysicalMemory, uses a 64-bit type for
the page pointer in System (instead of a 32-bit one), and cleans up some
style.
|
|
Physmem has a parameter for memory-mapping a file, but it isn't actually
used. This changeset wires up the parameter so a file can be mmapped.
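A minimal sketch of the underlying mechanism, mapping a file's contents into
memory with mmap(2) (error handling trimmed):

    #include <cstddef>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *map_file(const char *path, std::size_t len) {
        int fd = open(path, O_RDWR);
        if (fd < 0)
            return nullptr;
        void *mem = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        close(fd);  // the mapping remains valid after the fd is closed
        return mem == MAP_FAILED ? nullptr : mem;
    }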
|
|
Thanks to Joe Gross for finding/testing this.
|
|
These flags were being used to identify what alignment a request needed, but
the same information is available from the request size. This change also
eliminates the isMisaligned function. If more complicated alignment checks
are needed, they can be signaled using the ASI_BITS space in the flags
vector, as is currently done for ARM.
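A sketch of the size-based check (assuming, as natural alignment does, that
access sizes are powers of two):

    #include <cassert>
    #include <cstdint>

    inline bool isAligned(uint64_t vaddr, unsigned size) {
        assert(size != 0 && (size & (size - 1)) == 0);  // power of two
        return (vaddr & (size - 1)) == 0;  // no low-order bits set
    }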
|
|
CLREX is the name of an ARM instruction, not a name for this generic flag.
|
|
If we write back an exclusive copy, we now mark it
as such, so the cache receiving the writeback can
mark its copy as exclusive. This avoids some
unnecessary upgrade requests when a cache later
tries to re-acquire exclusive access to the block.
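A toy sketch of the idea (the flag and names are illustrative, not gem5's
actual packet API): the writeback carries an exclusive bit, and the receiving
cache installs the block accordingly, avoiding a later upgrade request.

    struct WritebackPkt { bool exclusive; };  // hypothetical packet

    enum CacheState { Shared, Exclusive };

    CacheState installWriteback(const WritebackPkt &pkt) {
        // Install writable if the writer held the block exclusively.
        return pkt.exclusive ? Exclusive : Shared;
    }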
|
|
Also move the "Fault" reference counted pointer type into a separate file,
sim/fault.hh. It would be better to name this less similarly to sim/faults.hh
to reduce confusion, but fault.hh matches the name of the type. We could change
Fault to FaultPtr to match other pointer types, and then changing the name of
the file would make more sense.
|
|
a newline by just doing "code()". indent() and dedent() now take a
"count" parameter to indent/dedent multiple levels.
|
|
Corrects an oversight in cset f97b62be544f. The fix there only
failed queued SCUpgradeReq packets that encountered an
invalidation, which meant that the upgrade had to reach the L2
cache. To handle pending requests in the L1 we must similarly
fail StoreCondReq packets too.
|
|
We can't just obliviously return the first valid cache block
we find any more... see comments for details.
|
|
Allow lower-level caches (e.g., L2 or L3) to pass exclusive
copies to higher levels (e.g., L1). This eliminates a lot
of unnecessary upgrade transactions on read-write sequences
to non-shared data.
Also some cleanup of MSHR coherence handling and multiple
bug fixes.
|
|
This patch moves the testers to a new subdirectory under src/cpu and includes
the necessary fixes to work with the latest m5 initialization patches.
--HG--
rename : configs/example/determ_test.py => configs/example/ruby_direct_test.py
rename : src/cpu/directedtest/DirectedGenerator.cc => src/cpu/testers/directedtest/DirectedGenerator.cc
rename : src/cpu/directedtest/DirectedGenerator.hh => src/cpu/testers/directedtest/DirectedGenerator.hh
rename : src/cpu/directedtest/InvalidateGenerator.cc => src/cpu/testers/directedtest/InvalidateGenerator.cc
rename : src/cpu/directedtest/InvalidateGenerator.hh => src/cpu/testers/directedtest/InvalidateGenerator.hh
rename : src/cpu/directedtest/RubyDirectedTester.cc => src/cpu/testers/directedtest/RubyDirectedTester.cc
rename : src/cpu/directedtest/RubyDirectedTester.hh => src/cpu/testers/directedtest/RubyDirectedTester.hh
rename : src/cpu/directedtest/RubyDirectedTester.py => src/cpu/testers/directedtest/RubyDirectedTester.py
rename : src/cpu/directedtest/SConscript => src/cpu/testers/directedtest/SConscript
rename : src/cpu/directedtest/SeriesRequestGenerator.cc => src/cpu/testers/directedtest/SeriesRequestGenerator.cc
rename : src/cpu/directedtest/SeriesRequestGenerator.hh => src/cpu/testers/directedtest/SeriesRequestGenerator.hh
rename : src/cpu/memtest/MemTest.py => src/cpu/testers/memtest/MemTest.py
rename : src/cpu/memtest/SConscript => src/cpu/testers/memtest/SConscript
rename : src/cpu/memtest/memtest.cc => src/cpu/testers/memtest/memtest.cc
rename : src/cpu/memtest/memtest.hh => src/cpu/testers/memtest/memtest.hh
rename : src/cpu/rubytest/Check.cc => src/cpu/testers/rubytest/Check.cc
rename : src/cpu/rubytest/Check.hh => src/cpu/testers/rubytest/Check.hh
rename : src/cpu/rubytest/CheckTable.cc => src/cpu/testers/rubytest/CheckTable.cc
rename : src/cpu/rubytest/CheckTable.hh => src/cpu/testers/rubytest/CheckTable.hh
rename : src/cpu/rubytest/RubyTester.cc => src/cpu/testers/rubytest/RubyTester.cc
rename : src/cpu/rubytest/RubyTester.hh => src/cpu/testers/rubytest/RubyTester.hh
rename : src/cpu/rubytest/RubyTester.py => src/cpu/testers/rubytest/RubyTester.py
rename : src/cpu/rubytest/SConscript => src/cpu/testers/rubytest/SConscript
|
|
when it is received
|
|
the TLB
|
|
Added an optimization that merges multiple pending GETS requests into a
single request to the owner node.
|
|
This patch allows messages to be stalled in their input buffers to wait
until a corresponding address changes state. In order to make this work,
all in_ports must be ranked in order of dependence, and those in_ports that
may unblock an address must wake up the stalled messages. A lot of this
complexity is handled in slicc; the specification files simply annotate the
in_ports. A toy model of the mechanism follows the rename list below.
--HG--
rename : src/mem/slicc/ast/CheckAllocateStatementAST.py => src/mem/slicc/ast/StallAndWaitStatementAST.py
rename : src/mem/slicc/ast/CheckAllocateStatementAST.py => src/mem/slicc/ast/WakeUpDependentsStatementAST.py
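As referenced above, a toy model of the stall-and-wait mechanism (plain C++,
not SLICC syntax; all names are illustrative):

    #include <cstdint>
    #include <deque>
    #include <unordered_map>

    struct Message { int payload; };

    struct StallTable {
        // Messages parked per address until that address changes state.
        std::unordered_map<uint64_t, std::deque<Message> > stalled;

        void stallAndWait(uint64_t addr, const Message &msg) {
            stalled[addr].push_back(msg);
        }

        // Called on a state change; the woken messages are re-offered to
        // the in_ports in their dependence order.
        std::deque<Message> wakeUpDependents(uint64_t addr) {
            std::deque<Message> woken;
            woken.swap(stalled[addr]);
            stalled.erase(addr);
            return woken;
        }
    };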
|
|
This patch allows each individual message buffer to have its own recycle
latency and allows the overall recycle latency to be specified on the
command line. The patch also adds profiling info to make sure no one
processor's requests are recycled too much.
|
|
This patch tracks the number of cycles a transaction is delayed at different
points of the request-forward-response loop.
|
|
This fix addresses an off-by-one bit-selection bug in the NUMA mapping.
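For context, a sketch of the kind of inclusive bit-field extraction involved
(field positions are illustrative); an off-by-one in either bound selects
the wrong NUMA node:

    #include <cstdint>

    // Extract bits [hi:lo] of an address, both bounds inclusive.
    inline uint64_t bits(uint64_t addr, int hi, int lo) {
        return (addr >> lo) & ((1ULL << (hi - lo + 1)) - 1);
    }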
|
|
The main purpose of clearing stats during unserialization is so that the
profiler can correctly set its start time to the unserialized value of
curTick.
|
|
This patch allows one to disable migratory sharing for those cache blocks
that are accessed by atomic requests. While the implementations differ
between the token and hammer protocols, the motivation is the same. For
Alpha, LLSC semantics expect that normal loads do not unlock cache blocks
that have been locked by LL accesses. Therefore, locked blocks should not
transfer write permissions when responding to these load requests. Instead,
they transfer only read permissions, so that the subsequent SC access can
possibly succeed.
|