
    bridge_maxwait 0

After rebooting to make sure the bridge is automatically created, we can now start the domU with the Xen control tools, in particular the xm command. This command allows various manipulations of the domains, including listing them, and starting or stopping them.

# xm list

Name                            ID Mem VCPUs  State   Time(s)

Domain-0                         0 940     1 r-----   3896.9

# xm create testxen.cfg

Using config file "/etc/xen/testxen.cfg".

Started domain testxen (id=1)

# xm list

Name                            ID Mem VCPUs  State   Time(s)

Domain-0                         0 873     1 r-----   3917.1

testxen                          1 128     1 -b----      3.7
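The exact contents of /etc/xen/testxen.cfg depend on how the domU was installed; a minimal sketch for a paravirtualized guest of that era might look like the following (all paths, the bridge name, and the image locations are illustrative assumptions, not values taken from this setup):

```
# /etc/xen/testxen.cfg — illustrative sketch, not a verbatim file
kernel  = "/boot/vmlinuz-2.6.32-5-xen-686"       # assumed dom0 kernel path
ramdisk = "/boot/initrd.img-2.6.32-5-xen-686"    # assumed matching initrd
memory  = 128                                    # matches the 128 MB shown by xm list
name    = "testxen"
vif     = [ 'bridge=xenbr0' ]                    # assumed bridge name
disk    = [ 'file:/srv/testxen/disk.img,xvda2,w' ]  # assumed disk image
root    = "/dev/xvda2 ro"
```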

CAUTION Only one domU per image!

While it is of course possible to have several domU systems running in parallel, they will all need to use their own image, since each domU is made to believe it runs on its own hardware (apart from the small slice of the kernel that talks to the hypervisor). In particular, it isn't possible for two domU systems running simultaneously to share storage space. If the domU systems are not run at the same time, it is however quite possible to reuse a single swap partition, or the partition hosting the /home filesystem.

Note that the testxen domU uses real memory taken from the RAM that would otherwise be available to the dom0, not simulated memory. Care should therefore be taken, when building a server meant to host Xen instances, to provision the physical RAM accordingly.

Voilà! Our virtual machine is starting up. We can access it in one of two modes. The usual way is to connect to it “remotely” through the network, as we would connect to a real machine; this will usually require setting up either a DHCP server or some DNS configuration. The other way, which may be the only way if the network configuration was incorrect, is to use the hvc0 console, with the xm console command:

# xm console testxen

[...]

Starting enhanced syslogd: rsyslogd.

Starting periodic command scheduler: cron.

Starting OpenBSD Secure Shell server: sshd.

Debian GNU/Linux 6.0 testxen hvc0

testxen login:

One can then open a session, just like one would do if sitting at the virtual machine's keyboard. Detaching from this console is achieved through the Control+] key combination.

TIP Getting the console straight away

Sometimes one wishes to start a domU system and get to its console straight away; this is why the xm create command takes a -c switch. Starting a domU with this switch will display all the messages as the system boots.
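For instance, with the configuration file used above, this amounts to:

```shell
# xm create -c testxen.cfg
```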

TOOL ConVirt

ConVirt (in the convirt package, previously XenMan) is a graphical interface for controlling Xen domains installed on a machine. It provides most of the features of the xm command.

Once the domU is up, it can be used just like any other server (since it is a GNU/Linux system after all). However, its virtual machine status allows some extra features. For instance, a domU can be temporarily paused then resumed, with the xm pause and xm unpause commands. Note that even though a paused domU does not use any processor power, its allocated memory is still in use. It may be interesting to consider the xm save and xm restore commands: saving a domU frees the resources that were previously used by this domU, including RAM. When restored (or unpaused, for that matter), a domU doesn't even notice anything beyond the passage of time. If a domU was running when the dom0 is shut down, the packaged scripts automatically save the domU, and restore it on the next boot. This will of course involve the standard inconvenience incurred when hibernating a laptop computer, for instance; in particular, if the domU is suspended for too long, network connections may expire. Note also that Xen is so far incompatible with a large part of ACPI power management, which precludes suspending the host (dom0) system.
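The lifecycle operations described above can be sketched as the following command sequence, run as root in the dom0 (the state file name is an arbitrary choice):

```shell
# xm pause testxen                  # stops consuming CPU, but keeps its RAM
# xm unpause testxen                # resumes execution where it left off
# xm save testxen testxen.state     # suspends to a file, freeing the domU's RAM
# xm restore testxen.state          # brings the domU back, unaware of the gap
```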

DOCUMENTATION xm options

Most of the xm subcommands expect one or more arguments, often a domU name. These arguments are well described in the xm(1) manual page.

Halting or rebooting a domU can be done either from within the domU (with the shutdown command) or from the dom0, with xm shutdown or xm reboot.

GOING FURTHER Advanced Xen

Xen has many more features than we can describe in these few paragraphs. In particular, the system is very dynamic, and many parameters for one domain (such as the amount of allocated memory, the visible hard drives, the behaviour of the task scheduler, and so on) can be adjusted even when that domain is running. A domU can even be migrated across servers without being shut down, and without losing its network connections! For all these advanced aspects, the primary source of information is the official Xen documentation.

→ http://www.xen.org/support/documentation.html

12.2.2. LXC

Even though it's used to build “virtual machines”, LXC is not, strictly speaking, a virtualization system, but a system to isolate groups of processes from each other even though they all run on the same host. It takes advantage of a set of recent evolutions in the Linux kernel, collectively known as control groups, by which different sets of processes called “groups” have different views of certain aspects of the overall system. Most notable among these aspects are the process identifiers, the network configuration, and the mount points. Such a group of isolated processes will not have any access to the other processes in the system, and its accesses to the filesystem can be restricted to a specific subset. It can also have its own network interface and routing table, and it may be configured to only see a subset of the available devices present on the system.

These features can be combined to isolate a whole process family starting from the init process, and the resulting set looks very much like a virtual machine. The official name for such a setup is a “container” (hence the LXC moniker: LinuX Containers), but a rather important difference from “real” virtual machines such as those provided by Xen or KVM is that there is no second kernel; the container uses the very same kernel as the host system. This has both pros and cons: advantages include the total lack of overhead, and therefore of performance costs, and the fact that the kernel has a global vision of all the processes running on the system, so the scheduling can be more efficient than it would be if two independent kernels were to schedule different task sets. Chief among the inconveniences is the impossibility of running a different kernel in a container (whether a different Linux version or a different operating system altogether).
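The kind of isolation described above can be observed directly with the unshare tool from util-linux (a minimal sketch of the underlying kernel mechanism, not LXC itself; it requires root and a reasonably recent util-linux providing the --mount-proc option):

```
# Start a shell in new PID and mount namespaces; with a freshly
# mounted /proc, ps only sees the processes inside the namespace —
# the shell itself (as PID 1) and ps.
unshare --pid --fork --mount-proc /bin/sh -c 'ps ax'
```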

NOTE LXC isolation limits

LXC containers do not provide the level of isolation achieved by heavier emulators or virtualizers. In particular: