Age | Commit message | Author |
|
|
|
|
|
By stalling and waiting on the mandatory queue instead of recycling it, one can
ensure that no incoming messages are starved when the mandatory queue puts
significant pressure on the L1 cache controller (i.e., the ruby memtester).
--HG--
rename : src/mem/slicc/ast/WakeUpDependentsStatementAST.py => src/mem/slicc/ast/WakeUpAllDependentsStatementAST.py
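Below is a minimal C++ sketch of the stall-and-wait idea, using hypothetical types and names (Message, StallBuffer, stallAndWait, wakeUpAllDependents) rather than the real SLICC/Ruby interfaces: messages that cannot be serviced yet are parked per address instead of being recycled through the mandatory queue, and waking all dependents reinserts them once the blocking condition clears.

```cpp
#include <deque>
#include <map>
#include <string>
#include <vector>

// Hypothetical message type standing in for a mandatory-queue entry.
struct Message { std::string addr; std::string req; };

class StallBuffer {
  public:
    // Park the message at the head of the mandatory queue instead of
    // recycling it (recycling can starve other incoming messages).
    void stallAndWait(std::deque<Message> &mandatoryQ, const std::string &addr) {
        stalled[addr].push_back(mandatoryQ.front());
        mandatoryQ.pop_front();
    }

    // Once the blocking condition clears, wake every message parked on the
    // address and reinsert them at the head of the mandatory queue.
    void wakeUpAllDependents(std::deque<Message> &mandatoryQ, const std::string &addr) {
        auto it = stalled.find(addr);
        if (it == stalled.end())
            return;
        for (auto rit = it->second.rbegin(); rit != it->second.rend(); ++rit)
            mandatoryQ.push_front(*rit);
        stalled.erase(it);
    }

  private:
    std::map<std::string, std::vector<Message>> stalled;
};
```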
|
|
|
|
|
|
Split out dynamic and static power numbers for printing to ruby.stats
|
|
|
|
|
|
|
|
The packet now identifies whether static or dynamic data has been allocated and
is used by Ruby to determine whether to copy the data pointer into the ruby
request. In addition, Ruby can now be told not to update phys memory when
receiving packets.
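A small illustrative sketch of that decision, with stand-in types (PacketSketch, RubyRequestSketch, updatePhysMemOnReceive) instead of gem5's real Packet/RubyRequest classes:

```cpp
#include <cstdint>

// Hypothetical stand-ins; only the decision described above is sketched here.
struct PacketSketch {
    uint8_t *data = nullptr;
    bool hasAllocatedData = false;   // true if static or dynamic data was allocated
};

struct RubyRequestSketch {
    uint8_t *dataPtr = nullptr;
};

// Only copy the data pointer into the Ruby request when the packet
// actually allocated data.
void fillRubyRequest(const PacketSketch &pkt, RubyRequestSketch &req)
{
    if (pkt.hasAllocatedData)
        req.dataPtr = pkt.data;
}

// Whether physical memory should be updated when the packet is received;
// this is the switch the message above refers to (hypothetical name).
bool updatePhysMemOnReceive(bool accessPhysMemConfig, const PacketSketch &pkt)
{
    return accessPhysMemConfig && pkt.hasAllocatedData;
}
```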
|
|
|
|
|
|
Move page table walker state to its own object type, and make the
walker instantiate state for each outstanding walk. By storing the
states in a queue, the walker is able to handle multiple outstanding
timing requests. Note that functional walks use separate state
elements.
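A rough C++ sketch of the structure described above, with hypothetical names (WalkerState, Walker, startTiming, startFunctional) rather than the actual gem5 interfaces: each timing walk gets its own state object kept in a queue, while functional walks use state local to the call.

```cpp
#include <cstdint>
#include <list>
#include <memory>

// Hypothetical per-walk state; a real walker also tracks the current level,
// intermediate PTEs, and the translation being serviced.
struct WalkerState {
    uint64_t vaddr;
    int level = 0;
    bool started = false;
};

class Walker {
  public:
    // Each new timing translation gets its own state object, so several
    // walks can be outstanding at once.
    WalkerState *startTiming(uint64_t vaddr) {
        currStates.emplace_back(new WalkerState{vaddr});
        return currStates.back().get();
    }

    // Functional walks use a separate state element local to the call and
    // never touch the timing queue.
    void startFunctional(uint64_t vaddr) {
        WalkerState state{vaddr};
        // ... walk the page table synchronously using 'state' ...
        (void)state;
    }

  private:
    std::list<std::unique_ptr<WalkerState>> currStates; // outstanding walks
};
```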
|
|
In sendSplitData, keep a pointer to the senderState that may be updated after
the call to handle*Packet. This way, even if the receiver changes the packet's
senderState, the saved senderState can still be accessed in sendSplitData.
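The pattern, reduced to a stand-alone sketch (all names here are illustrative, not the real gem5 types): grab the sender state before handing the packet to the receiver, because the receiver is free to install its own.

```cpp
// Hypothetical, heavily simplified types.
struct SenderState { SenderState *predecessor = nullptr; };
struct SplitPacket { SenderState *senderState = nullptr; };

// Stand-in for handle*Packet: a receiver chains its own state onto the packet.
void handlePacket(SplitPacket *pkt)
{
    auto *mine = new SenderState;
    mine->predecessor = pkt->senderState;
    pkt->senderState = mine;
}

void sendSplitData(SplitPacket *first, SplitPacket *second)
{
    // Remember our own sender state before the calls...
    SenderState *savedState = first->senderState;

    handlePacket(first);
    handlePacket(second);

    // ...so it is still reachable here even though the receiver replaced
    // the packet's senderState in the meantime.
    (void)savedState;
}
```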
|
|
|
|
|
|
|
|
|
|
This patch ensures only aligned accesses are passed to Ruby and includes a fix
to the DPRINTF address print.
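One way such an alignment check can look, as an illustrative helper (not the actual gem5 code): the access must stay within a single naturally aligned block before it is handed to Ruby.

```cpp
#include <cstdint>

// Returns true when [addr, addr + size) stays inside one naturally aligned
// block of blockSize bytes (blockSize is assumed to be a power of two).
// Accesses that fail this check are split or rejected before reaching Ruby.
static inline bool isAligned(uint64_t addr, unsigned size, unsigned blockSize)
{
    return (addr / blockSize) == ((addr + size - 1) / blockSize);
}
```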
|
|
|
|
|
|
Add checkpointing capability to the Intel 8254 timer, CMOS, I8042,
PS2 Keyboard and Mouse, I82094AA, I8237, I8254, I8259, and speaker
devices
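Checkpointing here means serializing each device's architectural state and restoring it later; a generic sketch with a hypothetical Checkpoint store (gem5's real serialization interface differs) is:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical checkpoint store: devices write state under named keys and
// read it back on restore.
struct Checkpoint {
    std::map<std::string, uint64_t> values;
    void write(const std::string &key, uint64_t v) { values[key] = v; }
    uint64_t read(const std::string &key) const { return values.at(key); }
};

// Sketch of an 8254-style timer counter saving and restoring its state.
struct TimerCounter {
    uint16_t count = 0;
    uint8_t mode = 0;
    bool gate = true;

    void serialize(Checkpoint &cp) const {
        cp.write("count", count);
        cp.write("mode", mode);
        cp.write("gate", gate);
    }

    void unserialize(const Checkpoint &cp) {
        count = static_cast<uint16_t>(cp.read("count"));
        mode = static_cast<uint8_t>(cp.read("mode"));
        gate = cp.read("gate") != 0;
    }
};
```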
|
|
Add checkpointing capability to the x86 interrupt device and the TLBs
|
|
Calls walker to look up virt. to phys. page mapping
|
|
The x86 local apic now includes a separate latency parameter for interrupts.
|
|
Fix a double packet delete: the problem is due to an interrupt device deleting a packet that the SimpleTimingPort also deletes. Since MessagePort descends from SimpleTimingPort, simply reimplement the failing code from SimpleTimingPort, recvTiming, in MessagePort.
|
|
|
|
Updated patches from Rick Strong's set that modify performance counters for
McPAT
|
|
|
|
Separate data VCs and ctrl VCs in garnet, as ctrl VCs have one buffer per VC
while data VCs have more than one buffer per VC. This is needed for correct power estimation.
|
|
|
|
Exclude bzip2 for now. It works, it just takes too long to run.
|
|
|
|
|
|
|
|
Maintain all information about an instruction's fault in the DynInst object rather
than any cpu-request object. Also, if there is a fault during the execution stage
then just save the fault inside the instruction and trap once the instruction
tries to graduate
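A stripped-down sketch of that flow, using hypothetical DynInst/Fault stand-ins rather than the real gem5 classes: the fault is recorded on the instruction at execute time and acted on only at graduation.

```cpp
#include <memory>
#include <string>

// Hypothetical fault/instruction types; the real interfaces are richer, but
// the flow sketched here is the same.
struct Fault { std::string name; };
using FaultPtr = std::shared_ptr<Fault>;

struct DynInst {
    FaultPtr fault;            // recorded at execute time, acted on later
};

// If execution raises a fault, just remember it on the instruction instead of
// attaching it to a CPU request object.
void execute(DynInst &inst, FaultPtr executeFault)
{
    if (executeFault)
        inst.fault = executeFault;
}

// The trap happens only when the instruction tries to graduate.
void graduate(DynInst &inst)
{
    if (inst.fault) {
        // a trap to the fault handler would be invoked here (hypothetical)
    }
}
```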
|
|
Not-taken delay slots were not being advanced correctly to PC+8, so for those ISAs
we call advance() on the PCState one more time to get the desired effect.
|
|
Give the fetch unit its own parameterizable fetch buffer to read from. It is very
inefficient (architecturally and in simulation) to continually fetch at the granularity
of the word size. As expected, the number of fetch memory requests drops dramatically.
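An illustrative sketch of such a fetch buffer (hypothetical class, not the actual implementation): memory is only re-requested when the PC falls outside the currently buffered range, so a whole buffer's worth of bytes is fetched per memory request instead of one word.

```cpp
#include <cstdint>
#include <vector>

class FetchBuffer {
  public:
    explicit FetchBuffer(unsigned sizeBytes) : data(sizeBytes) {}

    // True if a 4-byte instruction at 'pc' is already buffered.
    bool contains(uint64_t pc) const {
        return valid && pc >= base && pc + 4 <= base + data.size();
    }

    // Refill from 'mem' at the buffer-aligned address covering 'pc';
    // 'mem' stands in for a memory request/response in the real CPU.
    void fill(const std::vector<uint8_t> &mem, uint64_t pc) {
        base = pc - (pc % data.size());
        valid = base + data.size() <= mem.size();
        for (size_t i = 0; valid && i < data.size(); ++i)
            data[i] = mem[base + i];
    }

    // Read a 32-bit little-endian instruction word from the buffer.
    uint32_t read32(uint64_t pc) const {
        uint32_t inst = 0;
        for (int i = 3; i >= 0; --i)
            inst = (inst << 8) | data[pc - base + i];
        return inst;
    }

  private:
    std::vector<uint8_t> data;
    uint64_t base = 0;
    bool valid = false;
};
```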
|
|
No need to have a separate function name, findSplitRequest; just overload the existing function.
|
|
Instead of having one cache-unit class be responsible for both data and code
accesses, separate the fetch-only code into its own class derived from the
original base class. This makes the code easier to manage and makes it easier
to handle future cases of special fetch handling.
|
|
Set the request to false when the cache port blocks so we don't deadlock.
Also, comment out the outstanding address list sanity check for now.
|
|
Allow the user to specify how many instructions a pipeline stage can process
on any given cycle (stageWidth, i.e. bandwidth) by setting the parameter through
the Python interface rather than recompiling after changing the *.cc file.
(We always had the parameter there, but still used the static 'ThePipeline::StageWidth'
instead.)
-
Since StageWidth is now dynamically defined, change the interstage communication
structure to use a vector, and get rid of the fixed array and its handling index
(toNextStageIndex), since we can just query the list for the same information.
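A minimal sketch of the run-time-sized interstage structure (names here are illustrative, not the real gem5 types): the vector is sized from a stageWidth value that could come from the Python config, replacing the fixed array and its index.

```cpp
#include <vector>

struct InstSlot { int seqNum = 0; };

struct InterStageStruct {
    std::vector<InstSlot> insts;   // replaces a fixed array plus toNextStageIndex

    explicit InterStageStruct(unsigned stageWidth) { insts.reserve(stageWidth); }

    // The vector itself answers the questions the index used to answer.
    bool full(unsigned stageWidth) const { return insts.size() >= stageWidth; }
    unsigned size() const { return static_cast<unsigned>(insts.size()); }

    void insert(const InstSlot &inst) { insts.push_back(inst); }
    void clear() { insts.clear(); }
};
```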
|
|
Only execute (resolve) one branch per cycle because handling more than one is
a little more complicated
|
|
Use the skid buffer as the only location for instructions between stages. Before,
we had the insts queue from the prior stage and the skid buffer for the
current stage, but that gets confusing, and this consolidation helps
when handling squash cases.
|
|
Manage insertion and deletion like a queue, but keep access to internal
elements for future changes.
Currently, the skid buffer manages any instruction that was
in a stage but could not complete processing; however,
we will want to manage all blocked instructions (from the previous stage
and from the current stage) in just one buffer.
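A sketch of a buffer with queue-style use plus access to internal elements, e.g. for squashing (hypothetical names, deque-based, not the actual gem5 code):

```cpp
#include <deque>

struct BufInst { int seqNum = 0; };

class SkidBuffer {
  public:
    // Normal use is queue-like: insert at the back, remove from the front.
    void push(const BufInst &inst) { buf.push_back(inst); }
    BufInst &front() { return buf.front(); }
    void pop() { buf.pop_front(); }
    bool empty() const { return buf.empty(); }

    // Queue access alone is not enough for squashes: walk the internal
    // elements and drop everything younger than squashSeqNum.
    void squash(int squashSeqNum) {
        for (auto it = buf.begin(); it != buf.end(); )
            it = (it->seqNum > squashSeqNum) ? buf.erase(it) : ++it;
    }

  private:
    std::deque<BufInst> buf;
};
```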
|
|
The previous code was marking CPU activity on almost every cycle due to a bug in
tracking the status of pipeline stages. This prevented the CPU from sleeping
on long-latency stalls and increased simulation time.
|
|
--HG--
rename : src/sim/fault.hh => src/sim/fault_fwd.hh
|
|
|
|
This makes sure that the address ranges requested for caches and uncached ports
don't conflict with each other, and that accesses which are always uncached
(message signaled interrupts for instance) don't waste time passing through
caches.
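An illustrative pairwise-overlap check of the kind described (hypothetical types, not the actual gem5 range code): cached and uncached ranges are compared at configuration time, and any overlap is treated as an error, while always-uncached regions stay out of the cached list.

```cpp
#include <cstdint>
#include <vector>

struct AddrRange { uint64_t start, end; };   // inclusive bounds

static bool overlaps(const AddrRange &a, const AddrRange &b)
{
    return a.start <= b.end && b.start <= a.end;
}

// True if any cached range overlaps any uncached range; callers would treat
// this as a configuration error.
static bool rangesConflict(const std::vector<AddrRange> &cached,
                           const std::vector<AddrRange> &uncached)
{
    for (const auto &c : cached)
        for (const auto &u : uncached)
            if (overlaps(c, u))
                return true;
    return false;
}
```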
|
|
|