To give the Xen entries priority in the GRUB menu, the script that generates them can be renamed so that it runs before the standard Linux entries, and the bootloader configuration regenerated:

# mv /etc/grub.d/20_linux_xen /etc/grub.d/09_linux_xen
# update-grub
Installing a domU system is made easier by the xen-create-image command, which largely automates the task. The only mandatory parameter is --hostname, which gives a name to the domU; other options are important, but they can be stored in the /etc/xen-tools/xen-tools.conf configuration file, and their absence from the command line does not trigger an error. It is therefore important either to check the contents of this file before creating images, or to use extra parameters in the xen-create-image invocation. Notable parameters include the following:
--memory, to specify the amount of RAM dedicated to the newly created system;

--size and --swap, to define the size of the “virtual disks” available to the domU;

--debootstrap, to cause the new system to be installed with debootstrap; in that case, the --dist option will also most often be used (with a distribution name such as wheezy);

--dhcp, which states that the domU's network configuration should be obtained by DHCP, while --ip allows defining a static IP address.
With the --dir option, one file is created on the dom0 for each device the domU should be provided with. For systems using LVM, the alternative is to use the --lvm option, followed by the name of a volume group; xen-create-image will then create a new logical volume inside that group, and this logical volume will be made available to the domU as a hard disk drive. Once these choices are made, we can create the image for our future Xen domU:
# xen-create-image --hostname testxen --dhcp --dir /srv/testxen --size=2G --dist=wheezy --role=udev

[...]
General Information
--------------------
Hostname       : testxen
Distribution   : wheezy
Mirror         : http://ftp.debian.org/debian/
Partitions     : swap 128Mb (swap)
                 /    2G   (ext3)
Image type     : sparse
Memory size    : 128Mb
Kernel path    : /boot/vmlinuz-3.2.0-4-686-pae
Initrd path    : /boot/initrd.img-3.2.0-4-686-pae
[...]
Logfile produced at:
         /var/log/xen-tools/testxen.log

Installation Summary
---------------------
Hostname        : testxen
Distribution    : wheezy
IP-Address(es)  : dynamic
RSA Fingerprint : 0a:6e:71:98:95:46:64:ec:80:37:63:18:73:04:dd:2b
Root Password   : 48su67EW
The standard Xen network setup involves interfaces with names such as vif*, veth*, peth* and xenbr0. The Xen hypervisor arranges them in whichever layout has been defined, under the control of the user-space tools. Since the NAT and routing models are only adapted to particular cases, we will only address the bridging model.
The xend daemon is configured to integrate virtual network interfaces into any pre-existing network bridge (with xenbr0 taking precedence if several such bridges exist). We must therefore set up a bridge in /etc/network/interfaces (which requires installing the bridge-utils package, which is why the xen-utils-4.1 package recommends it) to replace the existing eth0 entry:
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
    bridge_maxwait 0
Xen domains are then controlled with the xm command. This command allows different manipulations on the domains, including listing them and starting/stopping them:
# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   463     1     r-----      9.8
# xm create testxen.cfg
Using config file "/etc/xen/testxen.cfg".
Started domain testxen (id=1)
# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   366     1     r-----     11.4
testxen                                      1   128     1     -b----      1.1
Note that the testxen domU uses real memory taken from the RAM that would otherwise be available to the dom0, not simulated memory. Care should therefore be taken, when building a server meant to host Xen instances, to provision the physical RAM accordingly.
The domU can be accessed through its hvc0 console, with the xm console command:
# xm console testxen
[...]
Debian GNU/Linux 7.0 testxen hvc0

testxen login: 
A domU can be temporarily suspended and resumed with the xm pause and xm unpause commands. Note that even though a paused domU does not use any processor power, its allocated memory is still in use. It may be interesting to consider the xm save and xm restore commands: saving a domU frees the resources that were previously used by this domU, including RAM. When restored (or unpaused, for that matter), a domU doesn't even notice anything beyond the passage of time. If a domU was running when the dom0 is shut down, the packaged scripts automatically save the domU, and restore it on the next boot. This will of course involve the standard inconvenience incurred when hibernating a laptop computer, for instance; in particular, if the domU is suspended for too long, network connections may expire. Note also that Xen is so far incompatible with a large part of ACPI power management, which precludes suspending the host (dom0) system.
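For instance, a brief illustration with the testxen domU created above (the checkpoint file name is just an example; the general syntax is xm save <domain> <file> and xm restore <file>):

# xm pause testxen
# xm unpause testxen
# xm save testxen /var/tmp/testxen.save
# xm restore /var/tmp/testxen.save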
Halting a domU can be done either from within the domU (with its shutdown command) or from the dom0, with xm shutdown or xm reboot.
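For example, to cleanly stop the testxen domU from the dom0:

# xm shutdown testxen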
Pushed to the extreme, this isolation can go as far as booting an init process, and the resulting set then looks very much like a virtual machine. The official name for such a setup is a “container” (hence the LXC moniker: LinuX Containers), but a rather important difference from “real” virtual machines such as those provided by Xen or KVM is that there is no second kernel; the container uses the very same kernel as the host system. This has both pros and cons: advantages include excellent performance due to the total lack of overhead, and the fact that the kernel has a global vision of all the processes running on the system, so the scheduling can be more efficient than it would be if two independent kernels were to schedule different task sets. Chief among the inconveniences is the impossibility of running a different kernel in a container (whether a different Linux version or a different operating system altogether).
The control groups that LXC relies on require a virtual filesystem to be mounted on /sys/fs/cgroup. The /etc/fstab file should therefore include the following entry:
# /etc/fstab: static file system information.
[...]
cgroup            /sys/fs/cgroup           cgroup    defaults        0       0
/sys/fs/cgroup will then be mounted automatically at boot time; if no immediate reboot is planned, the filesystem should be manually mounted with mount /sys/fs/cgroup.
The host's network configuration also needs adjusting: the standard setup involves modifying /etc/network/interfaces, moving the configuration for the physical interface (for instance eth0) to a bridge interface (usually br0), and configuring the link between them. For instance, if the network interface configuration file initially contains entries such as the following:
auto eth0
iface eth0 inet dhcp

They should then be disabled and replaced with:

#auto eth0
#iface eth0 inet dhcp

auto br0
iface br0 inet dhcp
    bridge-ports eth0
This bridge configuration manages the traffic crossing the physical eth0 interface as well as the interfaces defined for the containers.
In an alternative setup, a virtual tap0 interface (here handled by vde2) replaces the physical interface in the bridge; the /etc/network/interfaces file then becomes:
# Interface eth0 is unchanged
auto eth0
iface eth0 inet dhcp

# Virtual interface
auto tap0
iface tap0 inet manual
    vde2-switch -t tap0

# Bridge for containers
auto br0
iface br0 inet static
    bridge-ports tap0
    address 10.0.0.1
    netmask 255.255.255.0
The containers then get their network connectivity through this br0 interface.
The container's filesystem can now be created with lxc-create, using the debian template:

root@mirwiz:~# lxc-create -n testlxc -t debian
Note: Usually the template option is called with a configuration
file option too, mostly to configure the network.
For more information look at lxc.conf (5)

debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-wheezy-amd64 ... 
Downloading debian minimal ...
I: Retrieving Release
I: Retrieving Release.gpg
[...]
Root password is 'root', please change !
'debian' template installed
'testlxc' created
root@mirwiz:~# 
Note that the filesystem is initially created in /var/cache/lxc, then moved to its destination directory. This allows creating identical containers much more quickly, since only copying is then required.
The debian template creation script accepts an --arch option to specify the architecture of the system to be installed and a --release option if you want to install something other than the current stable release of Debian. You can also set the MIRROR environment variable to point to a local Debian mirror.
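As a sketch, assuming the template options are passed after lxc-create's -- separator (see lxc-create(1)); the container name and mirror URL below are only placeholders:

root@mirwiz:~# MIRROR=http://localhost/debian lxc-create -n testlxc32 -t debian -- --arch i386 --release wheezy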
To give the container network access, we need to edit its configuration file (/var/lib/lxc/testlxc/config) and add a few lxc.network.* entries:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:20
These entries mean, respectively, that a virtual interface will be created in the container; that it will automatically be brought up when the container is started; that it will automatically be connected to the br0 bridge on the host; and that its MAC address will be as specified. Should this last entry be missing or disabled, a random MAC address will be generated.
Another useful entry in that file is the setting of the hostname:

lxc.utsname = testlxc
The container can then be started, and a session opened in it with lxc-console:

root@mirwiz:~# lxc-start --daemon --name=testlxc
root@mirwiz:~# lxc-console -n testlxc
Debian GNU/Linux 7 testlxc tty1

testlxc login: root
Password: 
Linux testlxc 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@testlxc:~# ps auxwf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  10644   824 ?        Ss   09:38   0:00 init [3]
root      1232  0.0  0.2   9956  2392 ?        Ss   09:39   0:00 dhclient -v -pf /run/dhclient.eth0.pid
root      1379  0.0  0.1  49848  1208 ?        Ss   09:39   0:00 /usr/sbin/sshd
root      1409  0.0  0.0  14572   892 console  Ss+  09:39   0:00 /sbin/getty 38400 console
root      1410  0.0  0.1  52368  1688 tty1     Ss   09:39   0:00 /bin/login --     
root      1414  0.0  0.1  17876  1848 tty1     S    09:42   0:00  \_ -bash
root      1418  0.0  0.1  15300  1096 tty1     R+   09:42   0:00      \_ ps auxf
root      1411  0.0  0.0  14572   892 tty2     Ss+  09:39   0:00 /sbin/getty 38400 tty2 linux
root      1412  0.0  0.0  14572   888 tty3     Ss+  09:39   0:00 /sbin/getty 38400 tty3 linux
root      1413  0.0  0.0  14572   884 tty4     Ss+  09:39   0:00 /sbin/getty 38400 tty4 linux
root@testlxc:~# 
We are now inside the container: only its own processes are visible, and access to the filesystem is restricted to the dedicated subset of the full filesystem (/var/lib/lxc/testlxc/rootfs). We can exit the console with Control+a q.
Note that we ran the container as a background process, thanks to the --daemon option of lxc-start. We can interrupt the container with a command such as lxc-kill --name=testlxc.
Containers can also be started automatically when the host boots. The configuration, kept in /etc/default/lxc, is relatively straightforward; note that the container configuration files need to be stored in /etc/lxc/auto/; many users may prefer symbolic links, such as can be created with ln -s /var/lib/lxc/testlxc/config /etc/lxc/auto/testlxc.config.
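For instance, to have testlxc start with the host (a minimal sketch; on wheezy, auto-start also needs to be enabled in /etc/default/lxc, typically via an LXC_AUTO="true" setting — check that file's comments, as the variable name is an assumption here):

root@mirwiz:~# ln -s /var/lib/lxc/testlxc/config /etc/lxc/auto/testlxc.config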
KVM relies on QEMU for its user-space parts, so whenever this section mentions qemu-* commands, it is still about KVM. KVM also requires a processor with hardware virtualization support (Intel VT or AMD-V); whether the processor provides it can be checked by looking for the relevant flags in /proc/cpuinfo.
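A quick way to check is to look for the corresponding CPU flags (vmx for Intel VT, svm for AMD-V); no output means hardware virtualization is not available (or not enabled in the BIOS):

$ grep -E '(vmx|svm)' /proc/cpuinfo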
The virt-manager tool is a graphical interface that uses libvirt to create and manage virtual machines.
We first install the required packages, with apt-get install qemu-kvm libvirt-bin virtinst virt-manager virt-viewer. libvirt-bin provides the libvirtd daemon, which allows (potentially remote) management of the virtual machines running on the host, and starts the required VMs when the host boots. In addition, this package provides the virsh command-line tool, which allows controlling the libvirtd-managed machines.
The virtinst package provides virt-install, which allows creating virtual machines from the command line. Finally, virt-viewer allows accessing a VM's graphical console.
As in the Xen and LXC cases, the most convenient network setup involves a bridge. We will therefore assume that the host has an eth0 physical interface and a br0 bridge, and that the former is connected to the latter.
We also need to decide where the disk images will be stored; the default location (/var/lib/libvirt/images/) is fine. In this example, however, we will use /srv/kvm, declared as a libvirt storage pool:
root@mirwiz:~# mkdir /srv/kvm
root@mirwiz:~# virsh pool-create-as srv-kvm dir --target /srv/kvm
Pool srv-kvm created

root@mirwiz:~# 
We will now start the installation of the virtual machine with virt-install, describing its most important options along the way. This command registers the virtual machine and its parameters in libvirtd, then starts it so that its installation can proceed.
# virt-install --connect qemu:///system  \
               --virt-type kvm           \
               --name testkvm            \
               --ram 1024                \
               --disk /srv/kvm/testkvm.qcow,format=qcow2,size=10 \
               --cdrom /srv/isos/debian-7.2.0-amd64-netinst.iso  \
               --network bridge=br0      \
               --vnc                     \
               --os-type linux           \
               --os-variant debianwheezy

Starting install...
Allocating 'testkvm.qcow'             |  10 GB     00:00
Creating domain...                    |    0 B     00:00
Cannot open display: 
Run 'virt-viewer --help' to see a full list of available command line options.
Domain installation still in progress. You can reconnect
to the console to complete the installation process.
The --connect option specifies the “hypervisor” to use. Its form is that of a URL containing a virtualization system (xen://, qemu://, lxc://, openvz://, vbox://, and so on) and the machine that should host the VM (this can be left empty in the case of the local host). In addition to that, and in the QEMU/KVM case, each user can manage virtual machines running with restricted permissions, and the URL path allows differentiating “system” machines (/system) from others (/session).

Since KVM is managed the same way as QEMU, the --virt-type kvm option allows specifying the use of KVM even though the URL looks like QEMU.

The --name option defines a (unique) name for the virtual machine.

The --ram option allows specifying the amount of RAM (in MB) to allocate for the virtual machine.

The --disk option specifies the location of the image file that is to represent our virtual machine's hard disk; that file is created, unless already present, with a size (in GB) specified by the size parameter. The format parameter allows choosing among several ways of storing the image file. The default format (raw) is a single file exactly matching the disk's size and contents. We picked a more advanced format here (qcow2), which is specific to QEMU and allows starting with a small file that only grows when the virtual machine starts actually using space.

The --cdrom option is used to indicate where to find the optical disk to use for installation. The path can be either a local path for an ISO file, a URL where the file can be obtained, or the device file of a physical CD-ROM drive (e.g. /dev/cdrom).

The --network option specifies how the virtual network card integrates into the host's network configuration. The default behavior (which we explicitly forced in our example) is to integrate it into any pre-existing network bridge. If no such bridge exists, the virtual machine will only reach the physical network through NAT, so it gets an address in a private subnet range (192.168.122.0/24).

The --vnc option states that the graphical console should be made available using VNC. The default behavior for the associated VNC server is to only listen on the local interface; if the VNC client is to be run on a different host, establishing the connection will require setting up an SSH tunnel (see Section 9.2.1.3, “Creating Encrypted Tunnels with Port Forwarding”). Alternatively, --vnclisten=0.0.0.0 can be used so that the VNC server is accessible from all interfaces; note that if you do that, you really should design your firewall accordingly.

The --os-type and --os-variant options allow optimizing a few parameters of the virtual machine, based on some of the known features of the operating system mentioned there.
At this point, virt-viewer can be run from any graphical environment to open the graphical console (note that the root password of the remote host is asked for twice because the operation requires two SSH connections):

$ virt-viewer --connect qemu+ssh://root@server/system testkvm
root@server's password: 
root@server's password: 
Once the installation is done, the virtual machine is managed mostly with virsh. We first ask libvirtd for the list of the virtual machines it manages:

# virsh -c qemu:///system list --all
 Id Name                 State
----------------------------------
  - testkvm              shut off

The test virtual machine can then be started with:

# virsh -c qemu:///system start testkvm
Domain testkvm started
The VNC display in use can be queried as follows (the result can be given as a parameter to vncviewer):

# virsh -c qemu:///system vncdisplay testkvm
:0
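For instance, from the host itself (remember that by default the VNC server only listens on the local interface):

$ vncviewer localhost:0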
Other available virsh subcommands include:
reboot, to restart a virtual machine;

shutdown, to trigger a clean shutdown (as shown in the example below);

destroy, to stop it brutally;

suspend, to pause it;

resume, to unpause it;

autostart, to enable (or disable, with the --disable option) starting the virtual machine automatically when the host starts;

undefine, to remove all traces of the virtual machine from libvirtd.
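For instance, a minimal illustration reusing the testkvm machine created above (the connection URL is the same as before):

# virsh -c qemu:///system shutdown testkvm
# virsh -c qemu:///system autostart testkvm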
If the virtual machine is meant to run Debian (or one of its derivatives), the system can be initialized with debootstrap, as described above. But if it is to be installed with an RPM-based system (such as Fedora, CentOS or Scientific Linux), the setup will need to be done using the yum utility (available in the package of the same name).
The procedure requires defining a yum.conf file containing the necessary parameters, including the path to the source RPM repositories, the path to the plugin configuration, and the destination folder. For this example, we will assume that the environment will be stored in /var/tmp/yum-bootstrap. The /var/tmp/yum-bootstrap/yum.conf file should look like this:
[main]
reposdir=/var/tmp/yum-bootstrap/repos.d
pluginconfpath=/var/tmp/yum-bootstrap/pluginconf.d
cachedir=/var/cache/yum
installroot=/path/to/destination/domU/install
exclude=$exclude
keepcache=1
#debuglevel=4  
#errorlevel=4  
pkgpolicy=newest
distroverpkg=centos-release
tolerant=1
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
metadata_expire=1800
The /var/tmp/yum-bootstrap/repos.d directory should contain the descriptions of the RPM source repositories, just as in /etc/yum.repos.d on an already installed RPM-based system. Here is an example for a CentOS 6 installation:
[base]
name=CentOS-6 - Base
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

[updates]
name=CentOS-6 - Updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

[extras]
name=CentOS-6 - Extras
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

[centosplus]
name=CentOS-6 - Plus
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
Finally, the pluginconf.d/installonlyn.conf file should contain the following:
[main]
enabled=1
tokeep=5
Once all this is set up, make sure the rpm databases are correctly initialized, with a command such as rpm --rebuilddb. An installation of CentOS 6 is then a matter of running the following:
yum -c /var/tmp/yum-bootstrap/yum.conf -y install coreutils basesystem centos-release yum-basearchonly initscripts