|
For systems without caches, the LLSC code does not get the snoops
that normally provide wake-ups. We add LLSC handling to the abstract
memory to do the job for us.
|
|
This patch extends the MemoryAccess debug flag to report who sent
each request and whether it is cacheable.
|
|
This patch provides useful printouts throughout the memory system. This
includes pretty-printed cache tags and function call messages
(call-stack like).
|
|
This patch makes the start and end address private in a move to
prevent direct manipulation and matching of ranges based on these
fields. This is done so that a transition to ranges with interleaving
support is possible.
As a result of hiding the start and end, a number of member functions
are needed to perform the comparisons and manipulations that
previously took place directly on the members. An accessor function is
provided for the start address, and a function is added to test if an
address is within a range. As a result of the latter, the != and ==
operators are also removed in favour of the member functions. A member
function that returns a string representation is also created to allow
debug printing.
In general, this patch does not add any functionality, but it does
take us closer to a situation where interleaving (and more cleverness)
can be added under the bonnet without exposing it to the user. More on
that in a later patch.
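A minimal sketch of the resulting interface (names and signatures are
illustrative, not the exact gem5 declarations):

    #include <cstdint>
    #include <sstream>
    #include <string>

    typedef uint64_t Addr;

    class AddrRange
    {
      private:
        // Kept private so ranges cannot be manipulated or matched
        // directly on the raw bounds.
        Addr _start;
        Addr _end;

      public:
        AddrRange(Addr start, Addr end) : _start(start), _end(end) {}

        // Accessor for the start address.
        Addr start() const { return _start; }

        // Replaces direct bound comparisons and the removed ==/!=
        // operators.
        bool contains(Addr addr) const
        { return addr >= _start && addr <= _end; }

        // String representation for debug printing.
        std::string to_string() const
        {
            std::ostringstream os;
            os << "[0x" << std::hex << _start << ":0x" << _end << "]";
            return os.str();
        }
    };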
|
|
This patch moves all the memory backing store operations from the
independent memory controllers to the global physical memory. The main
reason for this patch is to allow address striping in a future set of
patches, but at this point it already provides some useful
functionality in that it is now possible to change the number of
memory controllers and their address mapping in combination with
checkpointing. Thus, the host and guest view of the memory backing
store are now completely separate.
With this patch, the individual memory controllers are far simpler as
all responsibility for serializing/unserializing is moved to the
physical memory. Currently, the functionality is more or less moved
from AbstractMemory to PhysicalMemory without any major
changes. However, in a future patch the physical memory will also
resolve any ranges that are interleaved and properly assign the
backing store to the memory controllers, and keep the host memory as a
single contiguous chunk per address range.
To support future extensions involving CPU virtualization, the
physical memory also enables the host to get pointers to the backing
store.
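As a rough sketch of the new split (container and method names are
assumptions for illustration): the global physical memory owns one
contiguous host allocation per range and hands out pointers, so the
controllers never touch the backing store directly.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <vector>

    typedef uint64_t Addr;

    class PhysicalMemorySketch
    {
        // One contiguous host chunk per address range, keyed by the
        // range's start address.
        std::map<Addr, std::vector<uint8_t>> backingStore;

      public:
        void addRange(Addr start, std::size_t size)
        { backingStore[start].resize(size); }

        // Controllers receive a pointer into the store they serve;
        // serialization/unserialization stays here, in one place.
        uint8_t* hostPointer(Addr start)
        { return backingStore.at(start).data(); }
    };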
|
|
This patch removes the unused file parameter from the
AbstractMemory. The patch serves to make it easier to transition to a
separation of the actual contiguous host memory backing store and the
gem5 memory controllers.
Without the file parameter it becomes easier to hide the creation of
the mmap in the PhysicalMemory, as there are no longer any reasons to
expose the actual contiguous ranges to the user.
To the best of my knowledge there is no use of the parameter, so the
change should not affect anyone.
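With the file parameter gone, the backing store can be created as an
anonymous mapping inside PhysicalMemory, roughly along these lines (a
sketch, not the actual gem5 code):

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Allocate an anonymous, private mapping as the backing store
    // instead of mmap-ing a user-supplied file.
    uint8_t* allocateBackingStore(std::size_t size)
    {
        void* p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }
        return static_cast<uint8_t*>(p);
    }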
|
|
This patch takes the final plunge and transitions from the templated
Range class to the more specific AddrRange. In doing so it changes the
obvious Range<Addr> to AddrRange, and also bumps the range_map to be
AddrRangeMap.
In addition to the obvious changes, including the removal of redundant
includes, this patch also does some housekeeping in preparation for the
introduction of address interleaving support in the ranges. The Range
class is also stripped of all the functionality that is never used.
--HG--
rename : src/base/range.hh => src/base/addr_range.hh
rename : src/base/range_map.hh => src/base/addr_range_map.hh
|
|
Despite gzwrite taking an unsigned for length, it returns an int for
bytes written; gzwrite fails if (int)len < 0. Because of this, call
gzwrite with len no larger than INT_MAX: write in blocks of INT_MAX if
data to be written is larger than INT_MAX.
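A sketch of the blocking scheme (the helper name is made up; gzwrite
is the real zlib call):

    #include <zlib.h>
    #include <climits>
    #include <cstddef>
    #include <cstdint>

    // Never hand gzwrite more than INT_MAX bytes at once, so its
    // int return value cannot be misinterpreted.
    bool writeAll(gzFile file, const uint8_t* data, std::size_t len)
    {
        while (len > 0) {
            unsigned chunk =
                len > INT_MAX ? INT_MAX : (unsigned)len;
            int written = gzwrite(file, data, chunk);
            if (written <= 0)
                return false; // zlib reports errors as 0
            data += written;
            len -= written;
        }
        return true;
    }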
|
|
This patch makes the address-range related members const. The change
is trivial and merely ensures that they can be called on a const
memory.
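For illustration, the accessors end up callable on a const memory (a
toy sketch, not the gem5 class):

    #include <cstdint>

    typedef uint64_t Addr;

    class MemorySketch
    {
        Addr _start, _size;
      public:
        MemorySketch(Addr start, Addr size)
            : _start(start), _size(size) {}
        // const-qualified: usable through a const reference.
        Addr start() const { return _start; }
        Addr size() const { return _size; }
    };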
|
|
Currently when multiple CPUs perform a load-linked/store-conditional sequence,
the loads all create a list of reservations which is then scanned when the
stores occur. A reservation matching the context and address of the store is
sought, BUT all reservations matching the address are also erased at this point.
The upshot is that a store-conditional will remove all reservations even if the
store itself does not succeed. A livelock was observed using 7-8 CPUs where a
thread would erase the reservations of other threads, not succeed,
loop, and put its own reservation in again, only to have it blown away
by another thread whose store-conditional in turn failed -- no forward
progress was made, hanging the system.
The correct way to do this is to only blow a reservation when a store
(conditional or not) actually /occurs/ to its address. One thread always wins
(the one that does the store-conditional first).
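An illustrative sketch of the fix (types and names are made up; the
point is that reservations are erased only when a store occurs):

    #include <cstdint>
    #include <list>

    typedef uint64_t Addr;

    struct Reservation { int contextId; Addr addr; };

    bool storeConditional(std::list<Reservation>& resList,
                          int contextId, Addr addr)
    {
        bool success = false;
        for (const auto& r : resList) {
            if (r.addr == addr && r.contextId == contextId) {
                success = true;
                break;
            }
        }
        if (success) {
            // The store occurs, so every reservation at this address
            // is now blown, including our own.
            for (auto it = resList.begin(); it != resList.end(); ) {
                if (it->addr == addr)
                    it = resList.erase(it);
                else
                    ++it;
            }
        }
        // On failure no store occurred, so other threads'
        // reservations survive and one of them can make progress.
        return success;
    }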
|
|
Added per-master stats (similar to cache stats) to physmem.
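A hedged sketch of the idea, using plain containers rather than the
gem5 statistics package:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One counter slot per master, indexed by master ID, mirroring
    // how per-master cache stats are kept.
    struct PerMasterStats
    {
        std::vector<uint64_t> bytesRead;
        std::vector<uint64_t> bytesWritten;

        void init(std::size_t numMasters)
        {
            bytesRead.assign(numMasters, 0);
            bytesWritten.assign(numMasters, 0);
        }

        void recordRead(std::size_t masterId, std::size_t bytes)
        { bytesRead[masterId] += bytes; }

        void recordWrite(std::size_t masterId, std::size_t bytes)
        { bytesWritten[masterId] += bytes; }
    };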
|
|
This patch removes the assumption of having one single instance of
PhysicalMemory, and enables a distributed memory where the individual
memories in the system are each responsible for a single contiguous
address range.
All memories inherit from an AbstractMemory that encompasses the basic
behaviour of a random access memory, and provides untimed access
methods. What was previously called PhysicalMemory is now
SimpleMemory, and a subclass of AbstractMemory. All future types of
memory controllers should inherit from AbstractMemory (a rough sketch
of this hierarchy follows below).
To enable e.g. the atomic CPU and RubyPort to access the now
distributed memory, the system has a wrapper class, called
PhysicalMemory that is aware of all the memories in the system and
their associated address ranges. This class thus acts as an
infinitely-fast bus and performs address decoding for these "shortcut"
accesses. Each memory can specify that it should not be part of the
global address map (used e.g. by the functional memories of some
testers). Moreover, each memory can be configured to be reported to
the OS configuration table, useful for populating ATAG structures, and
any potential ACPI tables.
Checkpointing support currently assumes that all memories have the
same size and organisation when creating and resuming from the
checkpoint. A future patch will enable a more flexible
re-organisation.
--HG--
rename : src/mem/PhysicalMemory.py => src/mem/AbstractMemory.py
rename : src/mem/PhysicalMemory.py => src/mem/SimpleMemory.py
rename : src/mem/physical.cc => src/mem/abstract_mem.cc
rename : src/mem/physical.hh => src/mem/abstract_mem.hh
rename : src/mem/physical.cc => src/mem/simple_mem.cc
rename : src/mem/physical.hh => src/mem/simple_mem.hh
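The rough shape of the hierarchy referred to above (class names match
the commit; the method signatures are assumptions for illustration):

    #include <cstddef>
    #include <cstdint>
    #include <map>

    typedef uint64_t Addr;

    class AbstractMemory
    {
      public:
        virtual ~AbstractMemory() {}
        // Untimed access shared by all random access memories.
        virtual void functionalAccess(Addr addr, uint8_t* data,
                                      std::size_t size,
                                      bool isWrite) = 0;
    };

    // What was previously PhysicalMemory: one controller serving a
    // single contiguous address range.
    class SimpleMemory : public AbstractMemory
    {
      public:
        void functionalAccess(Addr, uint8_t*, std::size_t,
                              bool) override {}
    };

    // The new PhysicalMemory wrapper acts as an infinitely fast bus:
    // it knows every memory's range and decodes addresses for the
    // "shortcut" accesses used by the atomic CPU and RubyPort.
    class PhysicalMemory
    {
        std::map<Addr, AbstractMemory*> memories; // start -> memory
    };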
|