Section 1 – Plan, Install and Upgrade VMware ESX/ESXi
- Objective 1.1 — Install VMware ESX/ESXi on local storage
Which partition is required to store core dumps for debugging and for VMware technical support?
ESX and vCenter Server Installation Guide ESX 4.0 vCenter Server 4.0, page 62.
vmkcore – Used to store core dumps for debugging and technical support.
Which statement is true about running an ESX Server virtual machine on a CIFS share?
A. ESX Server requires a gigabit Ethernet adapter in order for CIFS to be used as a datastore.
B. ESX Server must be on the same LAN as the CIFS server.
C. ESX Server does not support datastores on CIFS.
D. ESX Server must be granted trusted membership of the CIFS server.
ESX Configuration Guide ESX 4.0 vCenter Server 4.0, page 77. Depending on the type of storage you use, datastores can be backed by the following file system formats:
1. Virtual Machine File System (VMFS) – High-performance file system optimized for storing virtual machines. Your host can deploy a VMFS datastore on any SCSI-based local or networked storage device, including Fibre Channel and iSCSI SAN equipment.
As an alternative to using the VMFS datastore, your virtual machine can have direct access to raw devices and use a mapping file (RDM) as a proxy.
2. Network File System (NFS) – File system on a NAS storage device. ESX supports NFS version 3 over TCP/IP. The host can access a designated NFS volume located on an NFS server, mount the volume, and use it for any storage needs.
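As an illustration of the NFS case, an NFS volume can be mounted as a datastore from the ESX service console with the `esxcfg-nas` command. The server name, export path, and datastore label below are hypothetical examples, not values from the guide:

```shell
# Add an NFS export as a datastore labeled "nfs-vms"
# -o = NFS server hostname, -s = exported share path (example values)
esxcfg-nas -a -o nfs01.example.com -s /export/vms nfs-vms

# List the configured NAS datastores to confirm the mount
esxcfg-nas -l
```

The same operation can also be performed from the vSphere Client (Configuration > Storage > Add Storage > Network File System).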
During ESX Server 4.0 installation, selecting “Create a default network for virtual machines” will cause virtual machines to _____.
A. share a network adapter with the service console
B. share a bond with all available network adapters
C. share a port group on VLAN 1
D. share an internal-only virtual switch
If you select Create a default network for virtual machines, your virtual machines share a network adapter with the service console.
Where is LUN masking configured? (Choose two.)
A. on the Fibre Channel switch
B. on the host
C. on the storage processor
D. on the Ethernet switch
E. on the firewall
LUN Masking in a SAN
There are three places where LUN Masking can be implemented. The first is in the storage, the second is in the servers [the host], and the third is either in a device through which all of the I/O passes or the SAN itself, [the switch]. Each of these has its benefits.
In practice, LUN Masking at a customer site is implemented in multiple ways reflecting the different methods used by each vendor.
Implementing LUN Masking in the switch would require a time-consuming table lookup that is not currently possible due to memory constraints on the Fibre Channel switch ASIC. This means that all of the data would need to be staged to a central cache before being forwarded on. This is simply not possible with today's technology without increasing latency by a factor of 10 to 100.
Therefore the only practical places to perform LUN Masking are on the host or on the storage processor (B and C).
What is a valid reason for choosing to boot from local storage rather than choosing to boot from SAN?
A. MSCS is not supported on boot from SAN.
B. RDM is not supported on boot from SAN.
C. VMotion is not supported on boot from SAN.
D. There is no way to restrict sharing of boot LUNs between ESX Servers on boot from SAN.
Fibre Channel SAN Configuration Guide ESX 4.0 ESXi 4.0 vCenter Server 4.0, page 43.
Boot from SAN Overview
Before you consider how to set up your system for boot from SAN, decide whether it makes sense for your environment.
Use boot from SAN in the following circumstances:
1. If you do not want to handle maintenance of local storage.
2. If you need easy cloning of service consoles.
3. In diskless hardware configurations, such as on some blade systems.
You should not use boot from SAN in the following situations:
1. If you are using Microsoft Cluster Service. [A above].
2. If I/O contention might occur between the service console and the VMkernel.
NOTE: With ESX Server 2.5, you could not use boot from SAN together with RDM. With ESX 3.x and later, this restriction is removed.
A company decides to replace one 8-CPU host with four dual-CPU hosts. This Virtual Infrastructure uses server-based licensing. How many new licenses will be required?
VMware Multi-Core Pricing & Licensing Policy
How does this policy affect my licensing costs on servers with less than 6 cores per processor? When upgrading your hardware to multi-core technology, you do not need to pay additional licensing fees for a processor with up to 6 cores per processor. For example, if you purchase a two-socket server with each socket populated with a 6-core processor, you need to purchase only two processor licenses of VMware vSphere or related products for that server.
Under the original configuration, an 8-CPU host requires (8 × 1) = 8 CPU licenses. Under the new configuration, four dual-CPU hosts require (4 × 2) = 8 CPU licenses, so the total license count is unchanged.
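The arithmetic, assuming one license per CPU socket as in the server-based licensing model described above, can be checked with a short sketch:

```python
def licenses_required(hosts, cpus_per_host, licenses_per_cpu=1):
    """Total CPU licenses needed under per-socket (server-based) licensing."""
    return hosts * cpus_per_host * licenses_per_cpu

old = licenses_required(hosts=1, cpus_per_host=8)   # one 8-CPU host
new = licenses_required(hosts=4, cpus_per_host=2)   # four dual-CPU hosts
print(old, new, new - old)  # 8 8 0 -> no additional licenses needed
```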
In order to upgrade to vSphere 4, an ESX host must have a /boot partition of at least:
vSphere Upgrade Guide ESX 4.0 ESXi 4.0 vCenter Server 4.0 vSphere Client 4.0 Page 73
Direct, in-place upgrade from ESX 2.5.5 to ESX 4.0 is not supported, even if you upgrade to ESX 3.x as an intermediary step. The default ESX 2.5.5 installation creates a /boot partition that is too small to enable upgrades to ESX 4.0. As an exception, if you have a non-default ESX 2.5.5 installation on which at least 100MB of space is available on the /boot partition, you can upgrade ESX 2.5.5 to ESX 3.x and then to ESX 4.0.
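The rule above reduces to a single threshold check, with the 100MB figure taken from the Upgrade Guide passage; a minimal sketch:

```python
BOOT_MIN_MB = 100  # minimum /boot space cited in the vSphere Upgrade Guide

def boot_partition_ok(free_mb):
    """True if /boot is large enough to permit the 2.5.5 -> 3.x -> 4.0 path."""
    return free_mb >= BOOT_MIN_MB

print(boot_partition_ok(250))  # an ESX 4-era default /boot -> True
print(boot_partition_ok(50))   # a default ESX 2.5.5 /boot -> False
```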
The vSphere 4 Host Update Utility upgrades the (Choose Two):
A. service console, if present
C. virtual machine kernel (vmkernel)
vSphere Upgrade Guide, ESX 4.0, ESXi 4.0, vCenter Server 4.0, vSphere Client 4.0, page 67
‘vSphere Host Update Utility Graphical utility for standalone hosts. Allows you to perform remote upgrades of ESX 3.x/ESXi 3.5 hosts to ESX 4.0/ESXi 4.0. vSphere Host Update Utility upgrades the virtual machine kernel (vmkernel) and the service console, where present. vSphere Host Update Utility does not upgrade VMFS datastores or virtual machine guest operating systems.’
Before you upgrade an ESX/ESXi host (Choose Three):
A. verify current hardware is supported per the vSphere Systems Compatibility Guide
B. run the VMware CPU Identification Utility
C. run the vSphere 4 Pre-Upgrade Script from the command line
D. schedule a maintenance window for 32-bit hardware
E. compare the md5sum of the downloaded file to the value on the VMware download website
Ensure that the hardware and/or virtual machine meets the minimum system requirements for VMware vCenter 4.0
You can use vSphere Host Update Utility to upgrade ESX 3.x to ESX 4.0 and ESXi 3.5 hosts to ESXi 4.0. You cannot use vSphere Host Update Utility to convert ESX hosts to ESXi hosts, or the reverse. When you select a host to be upgraded, the tool performs an automated host compatibility check as a preupgrade step. The check verifies that each host is compatible with ESX 4.0/ESXi 4.0, including the required CPU, and has adequate boot and root partition space. In addition to the automated preupgrade script, you can specify a postupgrade configuration script to ease deployment.
Note that the preupgrade script is automated, so it does not need to be run explicitly.
ESX and vCenter Server Installation Guide, ESX 4.0 vCenter Server 4.0, page 13.
ESX Hardware Requirements * 64-Bit Processor
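The checks described above (64-bit CPU, adequate boot and root partition space, and the md5sum comparison of option E) could be sketched as follows. The 100MB /boot figure comes from the Upgrade Guide; the root-partition threshold is illustrative only:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the md5sum of a downloaded image for comparison with the
    value published on the VMware download website."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def preupgrade_check(cpu_64bit, boot_free_mb, root_free_mb,
                     iso_md5, published_md5):
    """Illustrative pre-upgrade gate: ESX 4.0 requires a 64-bit processor,
    adequate boot/root space, and an intact installation image."""
    problems = []
    if not cpu_64bit:
        problems.append("ESX 4.0 requires a 64-bit processor")
    if boot_free_mb < 100:            # figure cited in the Upgrade Guide
        problems.append("/boot partition too small")
    if root_free_mb < 1024:           # illustrative threshold only
        problems.append("/ partition too small")
    if iso_md5 != published_md5:
        problems.append("md5sum mismatch: re-download the image")
    return problems  # empty list means the host passes the check
```

An empty return value indicates the host passes; any strings returned describe why the upgrade should not proceed.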
The following ESX versions are supported for direct upgrading to vSphere 4 (Choose Two):
D. ESX 3.0.0, 3.0.1, 3.0.2
vSphere Upgrade Guide ESX 4.0, ESXi 4.0, vCenter Server 4.0, vSphere Client 4.0, Page ‘Upgrade Support for ESX/ESXi:
ESX 3.0.0, ESX 3.0.1, ESX 3.0.2, ESX 3.0.3, ESX 3.5, ESXi 3.5 – Yes
ESX 2.5.5 Limited Support’
Which of the following are true regarding the ESX Service Console file system structure (Choose Two)?
A. separate mount points are created for /tmp, /var/log and swap by default
B. running out of space on /var/log can cause vSphere Client connectivity disruptions
C. running out of space on / can cause vSphere Client connectivity disruptions
D. separate mount points are created for /var/log and swap by default
Mastering VMware vSphere 4 Page 24.
Default VMware ESX Partition Scheme
Mount Point Name   Type      Size
/boot              ext3      250MB
/                  ext3      5,000MB (5GB)
(none)             swap      600MB
/var/log           ext3      2,000MB (2GB)
(none)             vmkcore   100MB
The / (or “root”) partition stores the ESX system and all files not stored in another custom partition. If this partition is filled to capacity, the ESX host could crash. It is imperative to prevent this.
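A simple watchdog for this condition, as a sketch only (the 90% threshold is an arbitrary example, not a VMware recommendation):

```python
def root_partition_alert(used_mb, total_mb, threshold=0.90):
    """Warn before / fills up, since a full root partition can crash the host."""
    usage = used_mb / total_mb
    return usage >= threshold

print(root_partition_alert(4700, 5000))  # 94% of a 5GB / used -> True, alert
print(root_partition_alert(2500, 5000))  # 50% used -> False
```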
Note: The explanation uses partition sizes from VI3 days, not vSphere 4.