author     Laszlo Ersek <lersek@redhat.com>    2016-03-04 19:30:45 +0100
committer  Laszlo Ersek <lersek@redhat.com>    2016-03-23 17:47:27 +0100
commit     7e5b1b670c3822b54d7277a703681a59763169cb
tree       8bac060da0e3059e74d070ea15fef22903c0ab43 /OvmfPkg/OvmfPkg.dec
parent     d5371680638151ff1a4332294e60a1c12163752e
OvmfPkg: PlatformPei: determine the 64-bit PCI host aperture for X64 DXE
The main observation about the 64-bit PCI host aperture is that it is the
highest part of the useful address space. It impacts the top of the GCD
memory space map, and, consequently, our maximum address width calculation
for the CPU HOB too.

Thus, modify the GetFirstNonAddress() function to consider the following
areas above the high RAM, while calculating the first non-address (i.e.,
the highest inclusive address, plus one):

- the memory hotplug area (optional, the size comes from QEMU),

- the 64-bit PCI host aperture (we set a default size).

While computing the first non-address, capture the base and the size of
the 64-bit PCI host aperture at once in PCDs, since they are natural parts
of the calculation.

(Similarly to how PcdPciMmio32* are not rewritten on the S3 resume path
(see the InitializePlatform() -> MemMapInitialization() condition), nor
are PcdPciMmio64*. Only the core PciHostBridgeDxe driver consumes them,
through our PciHostBridgeLib instance.)

Set 32GB as the default size for the aperture. Issue #59 mentions the
NVIDIA Tesla K80 as an assignable device. According to nvidia.com, these
cards may have 24GB of memory (probably 16GB + 8GB BARs).

As a strictly experimental feature, the user can specify the size of the
aperture (in MB) as well, with the QEMU option

  -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The "X-" prefix follows the QEMU tradition (spelled "x-" there), meaning
that the property is experimental, unstable, and might go away any time.

Gerd has proposed heuristics for sizing the aperture automatically (based
on 1GB page support and PCPU address width), but such should be delayed to
a later patch (which may very well back out "X-PciMmio64Mb" then).

For "everyday" guests, the 32GB default for the aperture size shouldn't
impact the PEI memory demand (the size of the page tables that the DXE IPL
PEIM builds). Namely, we've never reported narrower than 36-bit addresses;
the DXE IPL PEIM has always built page tables for 64GB at least. For the
aperture to bump the address width above 36 bits, either the guest must
have quite a bit of memory itself (in which case the additional PEI memory
demand shouldn't matter), or the user must specify a large aperture
manually with "X-PciMmio64Mb" (and then he or she is also responsible for
giving enough RAM to the VM, to satisfy the PEI memory demand).

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ref: https://github.com/tianocore/edk2/issues/59
Ref: http://www.nvidia.com/object/tesla-servers.html
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
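For illustration only (this block is not part of the patch): a minimal
sketch of how the PEI-phase platform code could honor the experimental
fw_cfg file and fall back to the 32GB default. The helper name
GetPci64SizeFromFwCfg() and the local variables are hypothetical;
QemuFwCfgFindFile(), QemuFwCfgSelectItem(), QemuFwCfgReadBytes(),
AsciiStrDecimalToUint64(), LShiftU64() and the PcdGet64() accessor are
existing EDK II / OvmfPkg interfaces.

  //
  // Illustrative sketch, not the patch contents.
  // Uses <Library/QemuFwCfgLib.h>, <Library/BaseLib.h>, <Library/PcdLib.h>.
  //
  // Return the 64-bit PCI host aperture size in bytes, honoring the
  // experimental "opt/ovmf/X-PciMmio64Mb" fw_cfg file when present, and
  // falling back to the build-time default (32GB) otherwise.
  //
  STATIC
  UINT64
  GetPci64SizeFromFwCfg (
    VOID
    )
  {
    CHAR8                 String[21];   // decimal UINT64 plus NUL terminator
    FIRMWARE_CONFIG_ITEM  FwCfgItem;
    UINTN                 FwCfgSize;
    UINT64                Pci64SizeMb;

    if (RETURN_ERROR (QemuFwCfgFindFile ("opt/ovmf/X-PciMmio64Mb",
                        &FwCfgItem, &FwCfgSize)) ||
        FwCfgSize >= sizeof String) {
      //
      // No (usable) override from QEMU -- use the default from the DEC/DSC.
      //
      return PcdGet64 (PcdPciMmio64Size);
    }

    QemuFwCfgSelectItem (FwCfgItem);
    QemuFwCfgReadBytes (FwCfgSize, String);
    String[FwCfgSize] = '\0';

    Pci64SizeMb = AsciiStrDecimalToUint64 (String);
    return LShiftU64 (Pci64SizeMb, 20);   // MB -> bytes
  }

GetFirstNonAddress() would then place the aperture above everything else
it accounts for (high RAM plus the optional memory hotplug area), for
example with Pci64Base = ALIGN_VALUE (FirstNonAddress, Pci64Size), and
record both values in PcdPciMmio64Base / PcdPciMmio64Size outside the S3
resume path.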
Diffstat (limited to 'OvmfPkg/OvmfPkg.dec')
-rw-r--r--  OvmfPkg/OvmfPkg.dec | 5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/OvmfPkg/OvmfPkg.dec b/OvmfPkg/OvmfPkg.dec
index d4ee152b16..97ffb8749b 100644
--- a/OvmfPkg/OvmfPkg.dec
+++ b/OvmfPkg/OvmfPkg.dec
@@ -131,6 +131,11 @@
gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0|UINT64|0x24
gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0|UINT64|0x25
+ ## The 64-bit MMIO aperture shared by all PCI root bridges.
+ #
+ gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0|UINT64|0x26
+ gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0|UINT64|0x27
+
[PcdsFeatureFlag]
gUefiOvmfPkgTokenSpaceGuid.PcdSecureBootEnable|FALSE|BOOLEAN|3
gUefiOvmfPkgTokenSpaceGuid.PcdQemuBootOrderPciTranslation|TRUE|BOOLEAN|0x1c
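For illustration only (also not part of the patch): a hedged sketch of how
a PciHostBridgeLib instance could translate the two new PCDs into the
64-bit MMIO aperture handed to PciHostBridgeDxe. PCI_ROOT_BRIDGE_APERTURE
and PcdGet64() are existing interfaces; the function name
SetMemAbove4GAperture() is hypothetical, and the "Base greater than Limit
means no aperture" convention follows MdeModulePkg's PciHostBridgeLib.h.

  //
  // Illustrative sketch, not the patch contents.
  // Uses <Library/PciHostBridgeLib.h> and <Library/PcdLib.h>.
  //
  // Translate the PcdPciMmio64Base/Size pair, as set by PlatformPei, into
  // the 64-bit MMIO aperture of a root bridge description. A zero size is
  // reported as "aperture not present" by setting Base above Limit.
  //
  STATIC
  VOID
  SetMemAbove4GAperture (
    OUT PCI_ROOT_BRIDGE_APERTURE  *MemAbove4G
    )
  {
    UINT64  Base;
    UINT64  Size;

    Base = PcdGet64 (PcdPciMmio64Base);
    Size = PcdGet64 (PcdPciMmio64Size);

    if (Size == 0) {
      MemAbove4G->Base  = MAX_UINT64;   // Base > Limit: no 64-bit aperture
      MemAbove4G->Limit = 0;
      return;
    }

    MemAbove4G->Base  = Base;
    MemAbove4G->Limit = Base + Size - 1;
  }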