path: root/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual
Diffstat (limited to 'tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual')
-rw-r--r--  tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.ini                 |   4
-rw-r--r--  tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.out                 |   4
-rw-r--r--  tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/console.system.sim_console |  54
-rw-r--r--  tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/m5stats.txt                | 411
-rw-r--r--  tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stderr                     |   2
-rw-r--r--  tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stdout                     |   8
6 files changed, 242 insertions(+), 241 deletions(-)
diff --git a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.ini b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.ini
index 65401b549..8f75c9525 100644
--- a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.ini
+++ b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.ini
@@ -75,7 +75,7 @@ side_b=system.membus.port[0]
type=TimingSimpleCPU
children=dtb itb
clock=1
-cpu_id=-1
+cpu_id=0
defer_registration=false
dtb=system.cpu0.dtb
function_trace=false
@@ -104,7 +104,7 @@ size=48
type=TimingSimpleCPU
children=dtb itb
clock=1
-cpu_id=-1
+cpu_id=1
defer_registration=false
dtb=system.cpu1.dtb
function_trace=false
diff --git a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.out b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.out
index ed03e445d..9e0948f1e 100644
--- a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.out
+++ b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.out
@@ -90,9 +90,9 @@ max_loads_all_threads=0
progress_interval=0
mem=system.physmem
system=system
+cpu_id=0
itb=system.cpu0.itb
dtb=system.cpu0.dtb
-cpu_id=-1
profile=0
clock=1
defer_registration=false
@@ -118,9 +118,9 @@ max_loads_all_threads=0
progress_interval=0
mem=system.physmem
system=system
+cpu_id=1
itb=system.cpu1.itb
dtb=system.cpu1.dtb
-cpu_id=-1
profile=0
clock=1
defer_registration=false
diff --git a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/console.system.sim_console b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/console.system.sim_console
index 4a397ddbf..27adebb82 100644
--- a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/console.system.sim_console
+++ b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/console.system.sim_console
@@ -3,7 +3,7 @@ M5 console: m5AlphaAccess @ 0xFFFFFD0200000000
memsize 8000000 pages 4000
First free page after ROM 0xFFFFFC0000018000
HWRPB 0xFFFFFC0000018000 l1pt 0xFFFFFC0000040000 l2pt 0xFFFFFC0000042000 l3pt_rpb 0xFFFFFC0000044000 l3pt_kernel 0xFFFFFC0000048000 l2reserv 0xFFFFFC0000046000
- kstart = 0xFFFFFC0000310000, kend = 0xFFFFFC00008064E8, kentry = 0xFFFFFC0000310000, numCPUs = 0x2
+ kstart = 0xFFFFFC0000310000, kend = 0xFFFFFC0000855898, kentry = 0xFFFFFC0000310000, numCPUs = 0x2
CPU Clock at 2000 MHz IntrClockFrequency=1024
Booting with 2 processor(s)
KSP: 0x20043FE8 PTBR 0x20
@@ -16,29 +16,27 @@ M5 console: m5AlphaAccess @ 0xFFFFFD0200000000
Bootstraping CPU 1 with sp=0xFFFFFC0000076000
unix_boot_mem ends at FFFFFC0000078000
k_argc = 0
- jumping to kernel at 0xFFFFFC0000310000, (PCBB 0xFFFFFC0000018180 pfn 1028)
- CallbackFixup 0 18000, t7=FFFFFC0000700000
+ jumping to kernel at 0xFFFFFC0000310000, (PCBB 0xFFFFFC0000018180 pfn 1067)
+ CallbackFixup 0 18000, t7=FFFFFC000070C000
Entering slaveloop for cpu 1 my_rpb=FFFFFC0000018400
- Linux version 2.6.8.1 (binkertn@ziff.eecs.umich.edu) (gcc version 3.4.3) #36 SMP Mon May 2 19:50:53 EDT 2005
+ Linux version 2.6.13 (hsul@zed.eecs.umich.edu) (gcc version 3.4.3) #1 SMP Sun Oct 8 19:52:07 EDT 2006
Booting GENERIC on Tsunami variation DP264 using machine vector DP264 from SRM
Major Options: SMP LEGACY_START VERBOSE_MCHECK
Command line: root=/dev/hda1 console=ttyS0
memcluster 0, usage 1, start 0, end 392
memcluster 1, usage 0, start 392, end 16384
- freeing pages 1030:16384
- reserving pages 1030:1031
+ freeing pages 1069:16384
+ reserving pages 1069:1070
SMP: 2 CPUs probed -- cpu_present_mask = 3
Built 1 zonelists
Kernel command line: root=/dev/hda1 console=ttyS0
- PID hash table entries: 1024 (order 10: 16384 bytes)
+ PID hash table entries: 1024 (order: 10, 32768 bytes)
Using epoch = 1900
Console: colour dummy device 80x25
Dentry cache hash table entries: 32768 (order: 5, 262144 bytes)
Inode-cache hash table entries: 16384 (order: 4, 131072 bytes)
- Memory: 119072k/131072k available (3058k kernel code, 8680k reserved, 695k data, 480k init)
- Mount-cache hash table entries: 512 (order: 0, 8192 bytes)
- per-CPU timeslice cutoff: 374.49 usecs.
- task migration cache decay timeout: 0 msecs.
+ Memory: 118784k/131072k available (3314k kernel code, 8952k reserved, 983k data, 224k init)
+ Mount-cache hash table entries: 512
SMP starting up secondaries.
Slave CPU 1 console command START
SlaveCmd: restart FFFFFC0000310020 FFFFFC0000310020 vptb FFFFFFFE00000000 my_rpb FFFFFC0000018400 my_rpb_phys 18400
@@ -53,16 +51,21 @@ SlaveCmd: restart FFFFFC0000310020 FFFFFC0000310020 vptb FFFFFFFE00000000 my_rpb
Initializing Cryptographic API
rtc: Standard PC (1900) epoch (1900) detected
Real Time Clock Driver v1.12
- Serial: 8250/16550 driver $Revision: 1.90 $ 5 ports, IRQ sharing disabled
+ Serial: 8250/16550 driver $Revision: 1.90 $ 1 ports, IRQ sharing disabled
ttyS0 at I/O 0x3f8 (irq = 4) is a 8250
+ io scheduler noop registered
+ io scheduler anticipatory registered
+ io scheduler deadline registered
+ io scheduler cfq registered
loop: loaded (max 8 devices)
- Using anticipatory io scheduler
nbd: registered device at major 43
- sinic.c: M5 Simple Integrated NIC driver
ns83820.c: National Semiconductor DP83820 10/100/1000 driver.
eth0: ns83820.c: 0x22c: 00000000, subsystem: 0000:0000
eth0: enabling optical transceiver
- eth0: ns83820 v0.20: DP83820 v1.3: 00:90:00:00:00:01 io=0x09000000 irq=30 f=sg
+ eth0: using 64 bit addressing.
+ eth0: ns83820 v0.22: DP83820 v1.3: 00:90:00:00:00:01 io=0x09000000 irq=30 f=h,sg
+ tun: Universal TUN/TAP device driver, 1.6
+ tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
PIIX4: IDE controller at PCI slot 0000:00:00.0
@@ -75,24 +78,23 @@ SlaveCmd: restart FFFFFC0000310020 FFFFFC0000310020 vptb FFFFFFFE00000000 my_rpb
ide0 at 0x8410-0x8417,0x8422 on irq 31
hda: max request size: 128KiB
hda: 511056 sectors (261 MB), CHS=507/16/63, UDMA(33)
+ hda: cache flushes not supported
hda: hda1
hdb: max request size: 128KiB
hdb: 4177920 sectors (2139 MB), CHS=4144/16/63, UDMA(33)
+ hdb: cache flushes not supported
hdb: unknown partition table
- scsi0 : scsi_m5, version 1.73 [20040518], dev_size_mb=8, opts=0x0
- Vendor: Linux Model: scsi_m5 Li Rev: 0004
- Type: Direct-Access ANSI SCSI revision: 03
- SCSI device sda: 16384 512-byte hdwr sectors (8 MB)
- SCSI device sda: drive cache: write back
- sda: unknown partition table
- Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
mice: PS/2 mouse device common for all mice
NET: Registered protocol family 2
- IP: routing cache hash table of 1024 buckets, 16Kbytes
- TCP: Hash tables configured (established 8192 bind 8192)
- ip_conntrack version 2.1 (512 buckets, 4096 max) - 440 bytes per conntrack
+ IP route cache hash table entries: 4096 (order: 2, 32768 bytes)
+ TCP established hash table entries: 16384 (order: 5, 262144 bytes)
+ TCP bind hash table entries: 16384 (order: 5, 262144 bytes)
+ TCP: Hash tables configured (established 16384 bind 16384)
+ TCP reno registered
+ ip_conntrack version 2.1 (512 buckets, 4096 max) - 296 bytes per conntrack
ip_tables: (C) 2000-2002 Netfilter core team
arp_tables: (C) 2002 David S. Miller
+ TCP bic registered
Initializing IPsec netlink socket
NET: Registered protocol family 1
NET: Registered protocol family 17
@@ -101,7 +103,7 @@ SlaveCmd: restart FFFFFC0000310020 FFFFFC0000310020 vptb FFFFFFFE00000000 my_rpb
802.1Q VLAN Support v1.8 Ben Greear <greearb@candelatech.com>
All bugs added by David S. Miller <davem@redhat.com>
VFS: Mounted root (ext2 filesystem) readonly.
- Freeing unused kernel memory: 480k freed
+ Freeing unused kernel memory: 224k freed
init started: BusyBox v1.1.0 (2006.08.17-02:54+0000) multi-call binary
mounting filesystems...
loading script...
diff --git a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/m5stats.txt b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/m5stats.txt
index bf7320067..ff9a06cc7 100644
--- a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/m5stats.txt
+++ b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/m5stats.txt
@@ -1,232 +1,231 @@
---------- Begin Simulation Statistics ----------
-host_inst_rate 825990 # Simulator instruction rate (inst/s)
-host_mem_usage 193572 # Number of bytes of host memory used
-host_seconds 74.01 # Real time elapsed on the host
-host_tick_rate 47654938 # Simulator tick rate (ticks/s)
+host_inst_rate 719379 # Simulator instruction rate (inst/s)
+host_mem_usage 197268 # Number of bytes of host memory used
+host_seconds 92.21 # Real time elapsed on the host
+host_tick_rate 40502079 # Simulator tick rate (ticks/s)
sim_freq 2000000000 # Frequency of simulated ticks
-sim_insts 61131962 # Number of instructions simulated
-sim_seconds 1.763494 # Number of seconds simulated
-sim_ticks 3526987181 # Number of ticks simulated
-system.cpu0.dtb.accesses 1987164 # DTB accesses
-system.cpu0.dtb.acv 291 # DTB access violations
-system.cpu0.dtb.hits 10431590 # DTB hits
-system.cpu0.dtb.misses 9590 # DTB misses
-system.cpu0.dtb.read_accesses 606328 # DTB read accesses
-system.cpu0.dtb.read_acv 174 # DTB read access violations
-system.cpu0.dtb.read_hits 5831565 # DTB read hits
-system.cpu0.dtb.read_misses 7663 # DTB read misses
-system.cpu0.dtb.write_accesses 1380836 # DTB write accesses
-system.cpu0.dtb.write_acv 117 # DTB write access violations
-system.cpu0.dtb.write_hits 4600025 # DTB write hits
-system.cpu0.dtb.write_misses 1927 # DTB write misses
-system.cpu0.idle_fraction 0.984514 # Percentage of idle cycles
-system.cpu0.itb.accesses 2372045 # ITB accesses
-system.cpu0.itb.acv 143 # ITB acv
-system.cpu0.itb.hits 2368331 # ITB hits
-system.cpu0.itb.misses 3714 # ITB misses
-system.cpu0.kern.callpal 145084 # number of callpals executed
+sim_insts 66337257 # Number of instructions simulated
+sim_seconds 1.867449 # Number of seconds simulated
+sim_ticks 3734898877 # Number of ticks simulated
+system.cpu0.dtb.accesses 828318 # DTB accesses
+system.cpu0.dtb.acv 315 # DTB access violations
+system.cpu0.dtb.hits 13264910 # DTB hits
+system.cpu0.dtb.misses 7094 # DTB misses
+system.cpu0.dtb.read_accesses 572336 # DTB read accesses
+system.cpu0.dtb.read_acv 200 # DTB read access violations
+system.cpu0.dtb.read_hits 8201218 # DTB read hits
+system.cpu0.dtb.read_misses 6394 # DTB read misses
+system.cpu0.dtb.write_accesses 255982 # DTB write accesses
+system.cpu0.dtb.write_acv 115 # DTB write access violations
+system.cpu0.dtb.write_hits 5063692 # DTB write hits
+system.cpu0.dtb.write_misses 700 # DTB write misses
+system.cpu0.idle_fraction 0.982517 # Percentage of idle cycles
+system.cpu0.itb.accesses 1888651 # ITB accesses
+system.cpu0.itb.acv 166 # ITB acv
+system.cpu0.itb.hits 1885318 # ITB hits
+system.cpu0.itb.misses 3333 # ITB misses
+system.cpu0.kern.callpal 146863 # number of callpals executed
system.cpu0.kern.callpal_cserve 1 0.00% 0.00% # number of callpals executed
-system.cpu0.kern.callpal_wripir 54 0.04% 0.04% # number of callpals executed
-system.cpu0.kern.callpal_wrmces 1 0.00% 0.04% # number of callpals executed
-system.cpu0.kern.callpal_wrfen 1 0.00% 0.04% # number of callpals executed
-system.cpu0.kern.callpal_wrvptptr 1 0.00% 0.04% # number of callpals executed
-system.cpu0.kern.callpal_swpctx 1182 0.81% 0.85% # number of callpals executed
-system.cpu0.kern.callpal_tbi 42 0.03% 0.88% # number of callpals executed
-system.cpu0.kern.callpal_wrent 7 0.00% 0.89% # number of callpals executed
-system.cpu0.kern.callpal_swpipl 135050 93.08% 93.97% # number of callpals executed
-system.cpu0.kern.callpal_rdps 4795 3.30% 97.28% # number of callpals executed
-system.cpu0.kern.callpal_wrkgp 1 0.00% 97.28% # number of callpals executed
-system.cpu0.kern.callpal_wrusp 5 0.00% 97.28% # number of callpals executed
-system.cpu0.kern.callpal_rdusp 8 0.01% 97.29% # number of callpals executed
-system.cpu0.kern.callpal_whami 2 0.00% 97.29% # number of callpals executed
-system.cpu0.kern.callpal_rti 3431 2.36% 99.65% # number of callpals executed
-system.cpu0.kern.callpal_callsys 364 0.25% 99.90% # number of callpals executed
-system.cpu0.kern.callpal_imb 139 0.10% 100.00% # number of callpals executed
+system.cpu0.kern.callpal_wripir 506 0.34% 0.35% # number of callpals executed
+system.cpu0.kern.callpal_wrmces 1 0.00% 0.35% # number of callpals executed
+system.cpu0.kern.callpal_wrfen 1 0.00% 0.35% # number of callpals executed
+system.cpu0.kern.callpal_wrvptptr 1 0.00% 0.35% # number of callpals executed
+system.cpu0.kern.callpal_swpctx 2962 2.02% 2.36% # number of callpals executed
+system.cpu0.kern.callpal_tbi 47 0.03% 2.40% # number of callpals executed
+system.cpu0.kern.callpal_wrent 7 0.00% 2.40% # number of callpals executed
+system.cpu0.kern.callpal_swpipl 132443 90.18% 92.58% # number of callpals executed
+system.cpu0.kern.callpal_rdps 6236 4.25% 96.83% # number of callpals executed
+system.cpu0.kern.callpal_wrkgp 1 0.00% 96.83% # number of callpals executed
+system.cpu0.kern.callpal_wrusp 2 0.00% 96.83% # number of callpals executed
+system.cpu0.kern.callpal_rdusp 8 0.01% 96.84% # number of callpals executed
+system.cpu0.kern.callpal_whami 2 0.00% 96.84% # number of callpals executed
+system.cpu0.kern.callpal_rti 4200 2.86% 99.70% # number of callpals executed
+system.cpu0.kern.callpal_callsys 317 0.22% 99.91% # number of callpals executed
+system.cpu0.kern.callpal_imb 128 0.09% 100.00% # number of callpals executed
system.cpu0.kern.inst.arm 0 # number of arm instructions executed
-system.cpu0.kern.inst.hwrei 160926 # number of hwrei instructions executed
+system.cpu0.kern.inst.hwrei 160332 # number of hwrei instructions executed
system.cpu0.kern.inst.ivlb 0 # number of ivlb instructions executed
system.cpu0.kern.inst.ivle 0 # number of ivle instructions executed
-system.cpu0.kern.inst.quiesce 1958 # number of quiesce instructions executed
-system.cpu0.kern.ipl_count 140584 # number of times we switched to this ipl
-system.cpu0.kern.ipl_count_0 56549 40.22% 40.22% # number of times we switched to this ipl
-system.cpu0.kern.ipl_count_21 251 0.18% 40.40% # number of times we switched to this ipl
-system.cpu0.kern.ipl_count_22 5487 3.90% 44.31% # number of times we switched to this ipl
-system.cpu0.kern.ipl_count_30 51 0.04% 44.34% # number of times we switched to this ipl
-system.cpu0.kern.ipl_count_31 78246 55.66% 100.00% # number of times we switched to this ipl
-system.cpu0.kern.ipl_good 122461 # number of times we switched to this ipl from a different ipl
-system.cpu0.kern.ipl_good_0 56518 46.15% 46.15% # number of times we switched to this ipl from a different ipl
-system.cpu0.kern.ipl_good_21 251 0.20% 46.36% # number of times we switched to this ipl from a different ipl
-system.cpu0.kern.ipl_good_22 5487 4.48% 50.84% # number of times we switched to this ipl from a different ipl
-system.cpu0.kern.ipl_good_30 51 0.04% 50.88% # number of times we switched to this ipl from a different ipl
-system.cpu0.kern.ipl_good_31 60154 49.12% 100.00% # number of times we switched to this ipl from a different ipl
-system.cpu0.kern.ipl_ticks 3526986735 # number of cycles we spent at this ipl
-system.cpu0.kern.ipl_ticks_0 3501352281 99.27% 99.27% # number of cycles we spent at this ipl
-system.cpu0.kern.ipl_ticks_21 53019 0.00% 99.27% # number of cycles we spent at this ipl
-system.cpu0.kern.ipl_ticks_22 1348211 0.04% 99.31% # number of cycles we spent at this ipl
-system.cpu0.kern.ipl_ticks_30 18326 0.00% 99.31% # number of cycles we spent at this ipl
-system.cpu0.kern.ipl_ticks_31 24214898 0.69% 100.00% # number of cycles we spent at this ipl
-system.cpu0.kern.ipl_used 0.871088 # fraction of swpipl calls that actually changed the ipl
-system.cpu0.kern.ipl_used_0 0.999452 # fraction of swpipl calls that actually changed the ipl
+system.cpu0.kern.inst.quiesce 6637 # number of quiesce instructions executed
+system.cpu0.kern.ipl_count 139203 # number of times we switched to this ipl
+system.cpu0.kern.ipl_count_0 55744 40.05% 40.05% # number of times we switched to this ipl
+system.cpu0.kern.ipl_count_21 245 0.18% 40.22% # number of times we switched to this ipl
+system.cpu0.kern.ipl_count_22 1904 1.37% 41.59% # number of times we switched to this ipl
+system.cpu0.kern.ipl_count_30 410 0.29% 41.88% # number of times we switched to this ipl
+system.cpu0.kern.ipl_count_31 80900 58.12% 100.00% # number of times we switched to this ipl
+system.cpu0.kern.ipl_good 112527 # number of times we switched to this ipl from a different ipl
+system.cpu0.kern.ipl_good_0 55189 49.05% 49.05% # number of times we switched to this ipl from a different ipl
+system.cpu0.kern.ipl_good_21 245 0.22% 49.26% # number of times we switched to this ipl from a different ipl
+system.cpu0.kern.ipl_good_22 1904 1.69% 50.95% # number of times we switched to this ipl from a different ipl
+system.cpu0.kern.ipl_good_30 410 0.36% 51.32% # number of times we switched to this ipl from a different ipl
+system.cpu0.kern.ipl_good_31 54779 48.68% 100.00% # number of times we switched to this ipl from a different ipl
+system.cpu0.kern.ipl_ticks 3734378988 # number of cycles we spent at this ipl
+system.cpu0.kern.ipl_ticks_0 3696326531 98.98% 98.98% # number of cycles we spent at this ipl
+system.cpu0.kern.ipl_ticks_21 53683 0.00% 98.98% # number of cycles we spent at this ipl
+system.cpu0.kern.ipl_ticks_22 224672 0.01% 98.99% # number of cycles we spent at this ipl
+system.cpu0.kern.ipl_ticks_30 128286 0.00% 98.99% # number of cycles we spent at this ipl
+system.cpu0.kern.ipl_ticks_31 37645816 1.01% 100.00% # number of cycles we spent at this ipl
+system.cpu0.kern.ipl_used 0.808366 # fraction of swpipl calls that actually changed the ipl
+system.cpu0.kern.ipl_used_0 0.990044 # fraction of swpipl calls that actually changed the ipl
system.cpu0.kern.ipl_used_21 1 # fraction of swpipl calls that actually changed the ipl
system.cpu0.kern.ipl_used_22 1 # fraction of swpipl calls that actually changed the ipl
system.cpu0.kern.ipl_used_30 1 # fraction of swpipl calls that actually changed the ipl
-system.cpu0.kern.ipl_used_31 0.768781 # fraction of swpipl calls that actually changed the ipl
-system.cpu0.kern.mode_good_kernel 1448
-system.cpu0.kern.mode_good_user 1300
-system.cpu0.kern.mode_good_idle 148
-system.cpu0.kern.mode_switch_kernel 2490 # number of protection mode switches
-system.cpu0.kern.mode_switch_user 1300 # number of protection mode switches
-system.cpu0.kern.mode_switch_idle 2110 # number of protection mode switches
-system.cpu0.kern.mode_switch_good 0.490847 # fraction of useful protection mode switches
-system.cpu0.kern.mode_switch_good_kernel 0.581526 # fraction of useful protection mode switches
+system.cpu0.kern.ipl_used_31 0.677120 # fraction of swpipl calls that actually changed the ipl
+system.cpu0.kern.mode_good_kernel 1095
+system.cpu0.kern.mode_good_user 1095
+system.cpu0.kern.mode_good_idle 0
+system.cpu0.kern.mode_switch_kernel 6628 # number of protection mode switches
+system.cpu0.kern.mode_switch_user 1095 # number of protection mode switches
+system.cpu0.kern.mode_switch_idle 0 # number of protection mode switches
+system.cpu0.kern.mode_switch_good 0.283569 # fraction of useful protection mode switches
+system.cpu0.kern.mode_switch_good_kernel 0.165208 # fraction of useful protection mode switches
system.cpu0.kern.mode_switch_good_user 1 # fraction of useful protection mode switches
-system.cpu0.kern.mode_switch_good_idle 0.070142 # fraction of useful protection mode switches
-system.cpu0.kern.mode_ticks_kernel 23256451 0.66% 0.66% # number of ticks spent at the given mode
-system.cpu0.kern.mode_ticks_user 3397192 0.10% 0.76% # number of ticks spent at the given mode
-system.cpu0.kern.mode_ticks_idle 3500333090 99.24% 100.00% # number of ticks spent at the given mode
-system.cpu0.kern.swap_context 1183 # number of times the context was actually changed
-system.cpu0.kern.syscall 231 # number of syscalls executed
-system.cpu0.kern.syscall_fork 6 2.60% 2.60% # number of syscalls executed
-system.cpu0.kern.syscall_read 17 7.36% 9.96% # number of syscalls executed
-system.cpu0.kern.syscall_write 4 1.73% 11.69% # number of syscalls executed
-system.cpu0.kern.syscall_close 31 13.42% 25.11% # number of syscalls executed
-system.cpu0.kern.syscall_chdir 1 0.43% 25.54% # number of syscalls executed
-system.cpu0.kern.syscall_obreak 11 4.76% 30.30% # number of syscalls executed
-system.cpu0.kern.syscall_lseek 6 2.60% 32.90% # number of syscalls executed
-system.cpu0.kern.syscall_getpid 4 1.73% 34.63% # number of syscalls executed
-system.cpu0.kern.syscall_setuid 2 0.87% 35.50% # number of syscalls executed
-system.cpu0.kern.syscall_getuid 4 1.73% 37.23% # number of syscalls executed
-system.cpu0.kern.syscall_access 9 3.90% 41.13% # number of syscalls executed
-system.cpu0.kern.syscall_dup 2 0.87% 41.99% # number of syscalls executed
-system.cpu0.kern.syscall_open 42 18.18% 60.17% # number of syscalls executed
-system.cpu0.kern.syscall_getgid 4 1.73% 61.90% # number of syscalls executed
-system.cpu0.kern.syscall_sigprocmask 7 3.03% 64.94% # number of syscalls executed
-system.cpu0.kern.syscall_ioctl 9 3.90% 68.83% # number of syscalls executed
-system.cpu0.kern.syscall_readlink 1 0.43% 69.26% # number of syscalls executed
-system.cpu0.kern.syscall_execve 4 1.73% 71.00% # number of syscalls executed
-system.cpu0.kern.syscall_mmap 35 15.15% 86.15% # number of syscalls executed
-system.cpu0.kern.syscall_munmap 2 0.87% 87.01% # number of syscalls executed
-system.cpu0.kern.syscall_mprotect 10 4.33% 91.34% # number of syscalls executed
-system.cpu0.kern.syscall_gethostname 1 0.43% 91.77% # number of syscalls executed
-system.cpu0.kern.syscall_dup2 2 0.87% 92.64% # number of syscalls executed
-system.cpu0.kern.syscall_fcntl 8 3.46% 96.10% # number of syscalls executed
-system.cpu0.kern.syscall_socket 2 0.87% 96.97% # number of syscalls executed
-system.cpu0.kern.syscall_connect 2 0.87% 97.84% # number of syscalls executed
-system.cpu0.kern.syscall_setgid 2 0.87% 98.70% # number of syscalls executed
-system.cpu0.kern.syscall_getrlimit 1 0.43% 99.13% # number of syscalls executed
-system.cpu0.kern.syscall_setsid 2 0.87% 100.00% # number of syscalls executed
-system.cpu0.not_idle_fraction 0.015486 # Percentage of non-idle cycles
+system.cpu0.kern.mode_switch_good_idle <err: div-0> # fraction of useful protection mode switches
+system.cpu0.kern.mode_ticks_kernel 3730042316 99.93% 99.93% # number of ticks spent at the given mode
+system.cpu0.kern.mode_ticks_user 2718822 0.07% 100.00% # number of ticks spent at the given mode
+system.cpu0.kern.mode_ticks_idle 0 0.00% 100.00% # number of ticks spent at the given mode
+system.cpu0.kern.swap_context 2963 # number of times the context was actually changed
+system.cpu0.kern.syscall 179 # number of syscalls executed
+system.cpu0.kern.syscall_fork 7 3.91% 3.91% # number of syscalls executed
+system.cpu0.kern.syscall_read 14 7.82% 11.73% # number of syscalls executed
+system.cpu0.kern.syscall_write 4 2.23% 13.97% # number of syscalls executed
+system.cpu0.kern.syscall_close 27 15.08% 29.05% # number of syscalls executed
+system.cpu0.kern.syscall_chdir 1 0.56% 29.61% # number of syscalls executed
+system.cpu0.kern.syscall_obreak 6 3.35% 32.96% # number of syscalls executed
+system.cpu0.kern.syscall_lseek 7 3.91% 36.87% # number of syscalls executed
+system.cpu0.kern.syscall_getpid 4 2.23% 39.11% # number of syscalls executed
+system.cpu0.kern.syscall_setuid 1 0.56% 39.66% # number of syscalls executed
+system.cpu0.kern.syscall_getuid 3 1.68% 41.34% # number of syscalls executed
+system.cpu0.kern.syscall_access 6 3.35% 44.69% # number of syscalls executed
+system.cpu0.kern.syscall_dup 2 1.12% 45.81% # number of syscalls executed
+system.cpu0.kern.syscall_open 30 16.76% 62.57% # number of syscalls executed
+system.cpu0.kern.syscall_getgid 3 1.68% 64.25% # number of syscalls executed
+system.cpu0.kern.syscall_sigprocmask 8 4.47% 68.72% # number of syscalls executed
+system.cpu0.kern.syscall_ioctl 8 4.47% 73.18% # number of syscalls executed
+system.cpu0.kern.syscall_execve 5 2.79% 75.98% # number of syscalls executed
+system.cpu0.kern.syscall_mmap 17 9.50% 85.47% # number of syscalls executed
+system.cpu0.kern.syscall_munmap 3 1.68% 87.15% # number of syscalls executed
+system.cpu0.kern.syscall_mprotect 4 2.23% 89.39% # number of syscalls executed
+system.cpu0.kern.syscall_gethostname 1 0.56% 89.94% # number of syscalls executed
+system.cpu0.kern.syscall_dup2 2 1.12% 91.06% # number of syscalls executed
+system.cpu0.kern.syscall_fcntl 8 4.47% 95.53% # number of syscalls executed
+system.cpu0.kern.syscall_socket 2 1.12% 96.65% # number of syscalls executed
+system.cpu0.kern.syscall_connect 2 1.12% 97.77% # number of syscalls executed
+system.cpu0.kern.syscall_setgid 1 0.56% 98.32% # number of syscalls executed
+system.cpu0.kern.syscall_getrlimit 1 0.56% 98.88% # number of syscalls executed
+system.cpu0.kern.syscall_setsid 2 1.12% 100.00% # number of syscalls executed
+system.cpu0.not_idle_fraction 0.017483 # Percentage of non-idle cycles
system.cpu0.numCycles 0 # number of cpu cycles simulated
-system.cpu0.num_insts 44155958 # Number of instructions executed
-system.cpu0.num_refs 10463340 # Number of memory references
-system.cpu1.dtb.accesses 323344 # DTB accesses
-system.cpu1.dtb.acv 82 # DTB access violations
-system.cpu1.dtb.hits 4234985 # DTB hits
-system.cpu1.dtb.misses 2977 # DTB misses
-system.cpu1.dtb.read_accesses 222873 # DTB read accesses
-system.cpu1.dtb.read_acv 36 # DTB read access violations
-system.cpu1.dtb.read_hits 2431648 # DTB read hits
-system.cpu1.dtb.read_misses 2698 # DTB read misses
-system.cpu1.dtb.write_accesses 100471 # DTB write accesses
-system.cpu1.dtb.write_acv 46 # DTB write access violations
-system.cpu1.dtb.write_hits 1803337 # DTB write hits
-system.cpu1.dtb.write_misses 279 # DTB write misses
-system.cpu1.idle_fraction 0.993979 # Percentage of idle cycles
-system.cpu1.itb.accesses 912010 # ITB accesses
-system.cpu1.itb.acv 41 # ITB acv
-system.cpu1.itb.hits 910678 # ITB hits
-system.cpu1.itb.misses 1332 # ITB misses
-system.cpu1.kern.callpal 57529 # number of callpals executed
+system.cpu0.num_insts 51973218 # Number of instructions executed
+system.cpu0.num_refs 13496062 # Number of memory references
+system.cpu1.dtb.accesses 477041 # DTB accesses
+system.cpu1.dtb.acv 52 # DTB access violations
+system.cpu1.dtb.hits 4561390 # DTB hits
+system.cpu1.dtb.misses 4359 # DTB misses
+system.cpu1.dtb.read_accesses 328551 # DTB read accesses
+system.cpu1.dtb.read_acv 10 # DTB read access violations
+system.cpu1.dtb.read_hits 2657400 # DTB read hits
+system.cpu1.dtb.read_misses 3911 # DTB read misses
+system.cpu1.dtb.write_accesses 148490 # DTB write accesses
+system.cpu1.dtb.write_acv 42 # DTB write access violations
+system.cpu1.dtb.write_hits 1903990 # DTB write hits
+system.cpu1.dtb.write_misses 448 # DTB write misses
+system.cpu1.idle_fraction 0.994927 # Percentage of idle cycles
+system.cpu1.itb.accesses 1392687 # ITB accesses
+system.cpu1.itb.acv 18 # ITB acv
+system.cpu1.itb.hits 1391015 # ITB hits
+system.cpu1.itb.misses 1672 # ITB misses
+system.cpu1.kern.callpal 74370 # number of callpals executed
system.cpu1.kern.callpal_cserve 1 0.00% 0.00% # number of callpals executed
-system.cpu1.kern.callpal_wripir 51 0.09% 0.09% # number of callpals executed
-system.cpu1.kern.callpal_wrmces 1 0.00% 0.09% # number of callpals executed
-system.cpu1.kern.callpal_wrfen 1 0.00% 0.09% # number of callpals executed
-system.cpu1.kern.callpal_swpctx 451 0.78% 0.88% # number of callpals executed
-system.cpu1.kern.callpal_tbi 12 0.02% 0.90% # number of callpals executed
-system.cpu1.kern.callpal_wrent 7 0.01% 0.91% # number of callpals executed
-system.cpu1.kern.callpal_swpipl 54081 94.01% 94.92% # number of callpals executed
-system.cpu1.kern.callpal_rdps 368 0.64% 95.56% # number of callpals executed
-system.cpu1.kern.callpal_wrkgp 1 0.00% 95.56% # number of callpals executed
-system.cpu1.kern.callpal_wrusp 2 0.00% 95.56% # number of callpals executed
-system.cpu1.kern.callpal_rdusp 2 0.00% 95.57% # number of callpals executed
-system.cpu1.kern.callpal_whami 3 0.01% 95.57% # number of callpals executed
-system.cpu1.kern.callpal_rti 2337 4.06% 99.63% # number of callpals executed
-system.cpu1.kern.callpal_callsys 169 0.29% 99.93% # number of callpals executed
-system.cpu1.kern.callpal_imb 41 0.07% 100.00% # number of callpals executed
+system.cpu1.kern.callpal_wripir 410 0.55% 0.55% # number of callpals executed
+system.cpu1.kern.callpal_wrmces 1 0.00% 0.55% # number of callpals executed
+system.cpu1.kern.callpal_wrfen 1 0.00% 0.56% # number of callpals executed
+system.cpu1.kern.callpal_swpctx 2102 2.83% 3.38% # number of callpals executed
+system.cpu1.kern.callpal_tbi 6 0.01% 3.39% # number of callpals executed
+system.cpu1.kern.callpal_wrent 7 0.01% 3.40% # number of callpals executed
+system.cpu1.kern.callpal_swpipl 65072 87.50% 90.90% # number of callpals executed
+system.cpu1.kern.callpal_rdps 2603 3.50% 94.40% # number of callpals executed
+system.cpu1.kern.callpal_wrkgp 1 0.00% 94.40% # number of callpals executed
+system.cpu1.kern.callpal_wrusp 5 0.01% 94.41% # number of callpals executed
+system.cpu1.kern.callpal_rdusp 1 0.00% 94.41% # number of callpals executed
+system.cpu1.kern.callpal_whami 3 0.00% 94.41% # number of callpals executed
+system.cpu1.kern.callpal_rti 3890 5.23% 99.64% # number of callpals executed
+system.cpu1.kern.callpal_callsys 214 0.29% 99.93% # number of callpals executed
+system.cpu1.kern.callpal_imb 52 0.07% 100.00% # number of callpals executed
system.cpu1.kern.callpal_rdunique 1 0.00% 100.00% # number of callpals executed
system.cpu1.kern.inst.arm 0 # number of arm instructions executed
-system.cpu1.kern.inst.hwrei 63811 # number of hwrei instructions executed
+system.cpu1.kern.inst.hwrei 82881 # number of hwrei instructions executed
system.cpu1.kern.inst.ivlb 0 # number of ivlb instructions executed
system.cpu1.kern.inst.ivle 0 # number of ivle instructions executed
-system.cpu1.kern.inst.quiesce 1898 # number of quiesce instructions executed
-system.cpu1.kern.ipl_count 58267 # number of times we switched to this ipl
-system.cpu1.kern.ipl_count_0 25040 42.97% 42.97% # number of times we switched to this ipl
-system.cpu1.kern.ipl_count_22 5452 9.36% 52.33% # number of times we switched to this ipl
-system.cpu1.kern.ipl_count_30 54 0.09% 52.42% # number of times we switched to this ipl
-system.cpu1.kern.ipl_count_31 27721 47.58% 100.00% # number of times we switched to this ipl
-system.cpu1.kern.ipl_good 57331 # number of times we switched to this ipl from a different ipl
-system.cpu1.kern.ipl_good_0 25007 43.62% 43.62% # number of times we switched to this ipl from a different ipl
-system.cpu1.kern.ipl_good_22 5452 9.51% 53.13% # number of times we switched to this ipl from a different ipl
-system.cpu1.kern.ipl_good_30 54 0.09% 53.22% # number of times we switched to this ipl from a different ipl
-system.cpu1.kern.ipl_good_31 26818 46.78% 100.00% # number of times we switched to this ipl from a different ipl
-system.cpu1.kern.ipl_ticks 3526422675 # number of cycles we spent at this ipl
-system.cpu1.kern.ipl_ticks_0 3497592433 99.18% 99.18% # number of cycles we spent at this ipl
-system.cpu1.kern.ipl_ticks_22 1410084 0.04% 99.22% # number of cycles we spent at this ipl
-system.cpu1.kern.ipl_ticks_30 19740 0.00% 99.22% # number of cycles we spent at this ipl
-system.cpu1.kern.ipl_ticks_31 27400418 0.78% 100.00% # number of cycles we spent at this ipl
-system.cpu1.kern.ipl_used 0.983936 # fraction of swpipl calls that actually changed the ipl
-system.cpu1.kern.ipl_used_0 0.998682 # fraction of swpipl calls that actually changed the ipl
+system.cpu1.kern.inst.quiesce 2511 # number of quiesce instructions executed
+system.cpu1.kern.ipl_count 71371 # number of times we switched to this ipl
+system.cpu1.kern.ipl_count_0 27750 38.88% 38.88% # number of times we switched to this ipl
+system.cpu1.kern.ipl_count_22 1902 2.66% 41.55% # number of times we switched to this ipl
+system.cpu1.kern.ipl_count_30 506 0.71% 42.26% # number of times we switched to this ipl
+system.cpu1.kern.ipl_count_31 41213 57.74% 100.00% # number of times we switched to this ipl
+system.cpu1.kern.ipl_good 55758 # number of times we switched to this ipl from a different ipl
+system.cpu1.kern.ipl_good_0 26928 48.29% 48.29% # number of times we switched to this ipl from a different ipl
+system.cpu1.kern.ipl_good_22 1902 3.41% 51.71% # number of times we switched to this ipl from a different ipl
+system.cpu1.kern.ipl_good_30 506 0.91% 52.61% # number of times we switched to this ipl from a different ipl
+system.cpu1.kern.ipl_good_31 26422 47.39% 100.00% # number of times we switched to this ipl from a different ipl
+system.cpu1.kern.ipl_ticks 3734898431 # number of cycles we spent at this ipl
+system.cpu1.kern.ipl_ticks_0 3704872588 99.20% 99.20% # number of cycles we spent at this ipl
+system.cpu1.kern.ipl_ticks_22 224436 0.01% 99.20% # number of cycles we spent at this ipl
+system.cpu1.kern.ipl_ticks_30 162482 0.00% 99.21% # number of cycles we spent at this ipl
+system.cpu1.kern.ipl_ticks_31 29638925 0.79% 100.00% # number of cycles we spent at this ipl
+system.cpu1.kern.ipl_used 0.781242 # fraction of swpipl calls that actually changed the ipl
+system.cpu1.kern.ipl_used_0 0.970378 # fraction of swpipl calls that actually changed the ipl
system.cpu1.kern.ipl_used_22 1 # fraction of swpipl calls that actually changed the ipl
system.cpu1.kern.ipl_used_30 1 # fraction of swpipl calls that actually changed the ipl
-system.cpu1.kern.ipl_used_31 0.967425 # fraction of swpipl calls that actually changed the ipl
-system.cpu1.kern.mode_good_kernel 465
-system.cpu1.kern.mode_good_user 465
-system.cpu1.kern.mode_good_idle 0
-system.cpu1.kern.mode_switch_kernel 2771 # number of protection mode switches
-system.cpu1.kern.mode_switch_user 465 # number of protection mode switches
-system.cpu1.kern.mode_switch_idle 0 # number of protection mode switches
-system.cpu1.kern.mode_switch_good 0.287392 # fraction of useful protection mode switches
-system.cpu1.kern.mode_switch_good_kernel 0.167809 # fraction of useful protection mode switches
+system.cpu1.kern.ipl_used_31 0.641108 # fraction of swpipl calls that actually changed the ipl
+system.cpu1.kern.mode_good_kernel 1093
+system.cpu1.kern.mode_good_user 662
+system.cpu1.kern.mode_good_idle 431
+system.cpu1.kern.mode_switch_kernel 2354 # number of protection mode switches
+system.cpu1.kern.mode_switch_user 662 # number of protection mode switches
+system.cpu1.kern.mode_switch_idle 2830 # number of protection mode switches
+system.cpu1.kern.mode_switch_good 0.373931 # fraction of useful protection mode switches
+system.cpu1.kern.mode_switch_good_kernel 0.464316 # fraction of useful protection mode switches
system.cpu1.kern.mode_switch_good_user 1 # fraction of useful protection mode switches
-system.cpu1.kern.mode_switch_good_idle no value # fraction of useful protection mode switches
-system.cpu1.kern.mode_ticks_kernel 3525066043 99.96% 99.96% # number of ticks spent at the given mode
-system.cpu1.kern.mode_ticks_user 1294184 0.04% 100.00% # number of ticks spent at the given mode
-system.cpu1.kern.mode_ticks_idle 0 0.00% 100.00% # number of ticks spent at the given mode
-system.cpu1.kern.swap_context 452 # number of times the context was actually changed
-system.cpu1.kern.syscall 98 # number of syscalls executed
-system.cpu1.kern.syscall_fork 2 2.04% 2.04% # number of syscalls executed
-system.cpu1.kern.syscall_read 13 13.27% 15.31% # number of syscalls executed
-system.cpu1.kern.syscall_close 12 12.24% 27.55% # number of syscalls executed
-system.cpu1.kern.syscall_chmod 1 1.02% 28.57% # number of syscalls executed
-system.cpu1.kern.syscall_obreak 4 4.08% 32.65% # number of syscalls executed
-system.cpu1.kern.syscall_lseek 4 4.08% 36.73% # number of syscalls executed
-system.cpu1.kern.syscall_getpid 2 2.04% 38.78% # number of syscalls executed
-system.cpu1.kern.syscall_setuid 2 2.04% 40.82% # number of syscalls executed
-system.cpu1.kern.syscall_getuid 2 2.04% 42.86% # number of syscalls executed
-system.cpu1.kern.syscall_access 2 2.04% 44.90% # number of syscalls executed
-system.cpu1.kern.syscall_open 13 13.27% 58.16% # number of syscalls executed
-system.cpu1.kern.syscall_getgid 2 2.04% 60.20% # number of syscalls executed
-system.cpu1.kern.syscall_sigprocmask 3 3.06% 63.27% # number of syscalls executed
-system.cpu1.kern.syscall_ioctl 1 1.02% 64.29% # number of syscalls executed
-system.cpu1.kern.syscall_execve 3 3.06% 67.35% # number of syscalls executed
-system.cpu1.kern.syscall_mmap 19 19.39% 86.73% # number of syscalls executed
-system.cpu1.kern.syscall_munmap 1 1.02% 87.76% # number of syscalls executed
-system.cpu1.kern.syscall_mprotect 6 6.12% 93.88% # number of syscalls executed
-system.cpu1.kern.syscall_dup2 1 1.02% 94.90% # number of syscalls executed
-system.cpu1.kern.syscall_fcntl 2 2.04% 96.94% # number of syscalls executed
-system.cpu1.kern.syscall_setgid 2 2.04% 98.98% # number of syscalls executed
-system.cpu1.kern.syscall_getrlimit 1 1.02% 100.00% # number of syscalls executed
-system.cpu1.not_idle_fraction 0.006021 # Percentage of non-idle cycles
+system.cpu1.kern.mode_switch_good_idle 0.152297 # fraction of useful protection mode switches
+system.cpu1.kern.mode_ticks_kernel 13359666 0.36% 0.36% # number of ticks spent at the given mode
+system.cpu1.kern.mode_ticks_user 1967356 0.05% 0.41% # number of ticks spent at the given mode
+system.cpu1.kern.mode_ticks_idle 3719571407 99.59% 100.00% # number of ticks spent at the given mode
+system.cpu1.kern.swap_context 2103 # number of times the context was actually changed
+system.cpu1.kern.syscall 150 # number of syscalls executed
+system.cpu1.kern.syscall_fork 1 0.67% 0.67% # number of syscalls executed
+system.cpu1.kern.syscall_read 16 10.67% 11.33% # number of syscalls executed
+system.cpu1.kern.syscall_close 16 10.67% 22.00% # number of syscalls executed
+system.cpu1.kern.syscall_chmod 1 0.67% 22.67% # number of syscalls executed
+system.cpu1.kern.syscall_obreak 9 6.00% 28.67% # number of syscalls executed
+system.cpu1.kern.syscall_lseek 3 2.00% 30.67% # number of syscalls executed
+system.cpu1.kern.syscall_getpid 2 1.33% 32.00% # number of syscalls executed
+system.cpu1.kern.syscall_setuid 3 2.00% 34.00% # number of syscalls executed
+system.cpu1.kern.syscall_getuid 3 2.00% 36.00% # number of syscalls executed
+system.cpu1.kern.syscall_access 5 3.33% 39.33% # number of syscalls executed
+system.cpu1.kern.syscall_open 25 16.67% 56.00% # number of syscalls executed
+system.cpu1.kern.syscall_getgid 3 2.00% 58.00% # number of syscalls executed
+system.cpu1.kern.syscall_sigprocmask 2 1.33% 59.33% # number of syscalls executed
+system.cpu1.kern.syscall_ioctl 2 1.33% 60.67% # number of syscalls executed
+system.cpu1.kern.syscall_readlink 1 0.67% 61.33% # number of syscalls executed
+system.cpu1.kern.syscall_execve 2 1.33% 62.67% # number of syscalls executed
+system.cpu1.kern.syscall_mmap 37 24.67% 87.33% # number of syscalls executed
+system.cpu1.kern.syscall_mprotect 12 8.00% 95.33% # number of syscalls executed
+system.cpu1.kern.syscall_dup2 1 0.67% 96.00% # number of syscalls executed
+system.cpu1.kern.syscall_fcntl 2 1.33% 97.33% # number of syscalls executed
+system.cpu1.kern.syscall_setgid 3 2.00% 99.33% # number of syscalls executed
+system.cpu1.kern.syscall_getrlimit 1 0.67% 100.00% # number of syscalls executed
+system.cpu1.not_idle_fraction 0.005073 # Percentage of non-idle cycles
system.cpu1.numCycles 0 # number of cpu cycles simulated
-system.cpu1.num_insts 16976004 # Number of instructions executed
-system.cpu1.num_refs 4251312 # Number of memory references
+system.cpu1.num_insts 14364039 # Number of instructions executed
+system.cpu1.num_refs 4590544 # Number of memory references
system.disk0.dma_read_bytes 1024 # Number of bytes transfered via DMA reads (not PRD).
system.disk0.dma_read_full_pages 0 # Number of full page size DMA reads (not PRD).
system.disk0.dma_read_txs 1 # Number of DMA read transactions (not PRD).
-system.disk0.dma_write_bytes 2735104 # Number of bytes transfered via DMA writes.
-system.disk0.dma_write_full_pages 306 # Number of full page size DMA writes.
-system.disk0.dma_write_txs 412 # Number of DMA write transactions.
+system.disk0.dma_write_bytes 2702336 # Number of bytes transfered via DMA writes.
+system.disk0.dma_write_full_pages 302 # Number of full page size DMA writes.
+system.disk0.dma_write_txs 408 # Number of DMA write transactions.
system.disk2.dma_read_bytes 0 # Number of bytes transfered via DMA reads (not PRD).
system.disk2.dma_read_full_pages 0 # Number of full page size DMA reads (not PRD).
system.disk2.dma_read_txs 0 # Number of DMA read transactions (not PRD).
@@ -235,7 +234,7 @@ system.disk2.dma_write_full_pages 1 # Nu
system.disk2.dma_write_txs 1 # Number of DMA write transactions.
system.tsunami.ethernet.coalescedRxDesc <err: div-0> # average number of RxDesc's coalesced into each post
system.tsunami.ethernet.coalescedRxIdle <err: div-0> # average number of RxIdle's coalesced into each post
-system.tsunami.ethernet.coalescedRxOk <err: div-0> # average number of RxOk's coalesced into each post
+system.tsunami.ethernet.coalescedRxOk no value # average number of RxOk's coalesced into each post
system.tsunami.ethernet.coalescedRxOrn <err: div-0> # average number of RxOrn's coalesced into each post
system.tsunami.ethernet.coalescedSwi <err: div-0> # average number of Swi's coalesced into each post
system.tsunami.ethernet.coalescedTotal <err: div-0> # average number of interrupts coalesced into each post
diff --git a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stderr b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stderr
index 2191bd088..c8703fde1 100644
--- a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stderr
+++ b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stderr
@@ -3,4 +3,4 @@ Listening for console connection on port 3456
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
0: system.remote_gdb.listener: listening for remote gdb #1 on port 7001
warn: Entering event queue @ 0. Starting simulation...
-warn: 271342: Trying to launch CPU number 1!
+warn: 271343: Trying to launch CPU number 1!
diff --git a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stdout b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stdout
index 2c496b914..498a94b6f 100644
--- a/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stdout
+++ b/tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/stdout
@@ -5,8 +5,8 @@ The Regents of The University of Michigan
All Rights Reserved
-M5 compiled Oct 5 2006 22:13:02
-M5 started Fri Oct 6 00:26:09 2006
-M5 executing on zizzer.eecs.umich.edu
+M5 compiled Oct 8 2006 21:57:24
+M5 started Sun Oct 8 22:00:29 2006
+M5 executing on zed.eecs.umich.edu
command line: build/ALPHA_FS/m5.opt -d build/ALPHA_FS/tests/opt/quick/10.linux-boot/alpha/linux/tsunami-simple-timing-dual tests/run.py quick/10.linux-boot/alpha/linux/tsunami-simple-timing-dual
-Exiting @ tick 3526987181 because m5_exit instruction encountered
+Exiting @ tick 3734898877 because m5_exit instruction encountered