Now that our virtual machine image is ready, let's start the container:

root@scouzmir:~# lxc-start --name=testlxc
INIT: version 2.86 booting
Activating swap...done.
Cleaning up ifupdown....
Checking file systems...fsck 1.41.3 (12-Oct-2008)
done.
Setting kernel variables (/etc/sysctl.conf)...done.
Mounting local filesystems...done.
Activating swapfile swap...done.
Setting up networking....
Configuring network interfaces...Internet Systems Consortium DHCP Client V3.1.1
Copyright 2004-2008 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/
Listening on LPF/eth0/52:54:00:99:01:01
Sending on   LPF/eth0/52:54:00:99:01:01
Sending on   Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 4
DHCPOFFER from 192.168.1.2
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 192.168.1.2
bound to 192.168.1.243 -- renewal in 1392 seconds.
done.
INIT: Entering runlevel: 3
Starting OpenBSD Secure Shell server: sshd.
Debian GNU/Linux 5.0 scouzmir console
scouzmir login: root
Password:
Linux scouzmir 2.6.32-5-686 #1 SMP Tue Mar 8 21:36:00 UTC 2011 i686
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
scouzmir:~# ps auxwf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2   1984   680 ?        Ss   08:44   0:00 init [3]
root       197  0.0  0.1   2064   364 ?        Ss   08:44   0:00 dhclient3 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp3/dhclien
root       286  0.1  0.4   2496  1256 console  Ss   08:44   0:00 /bin/login --
root       291  0.7  0.5   2768  1504 console  S    08:46   0:00  \_ -bash
root       296  0.0  0.3   2300   876 console  R+   08:46   0:00      \_ ps auxwf
root       287  0.0  0.2   1652   560 tty1     Ss+  08:44   0:00 /sbin/getty 38400 tty1 linux
root       288  0.0  0.2   1652   560 tty2     Ss+  08:44   0:00 /sbin/getty 38400 tty2 linux
root       289  0.0  0.2   1652   556 tty3     Ss+  08:44   0:00 /sbin/getty 38400 tty3 linux
root       290  0.0  0.2   1652   560 tty4     Ss+  08:44   0:00 /sbin/getty 38400 tty4 linux
scouzmir:~#

We are now in the container; our access to the processes is restricted to only those started from the container itself, and our access to the filesystem is similarly restricted to the dedicated subset of the full filesystem (/var/lib/lxc/testlxc/rootfs), in which root's password is initially set to root.

Should we want to run the container as a background process, we would invoke lxc-start with the --daemon option. We can then interrupt the container with a command such as lxc-kill --name=testlxc.
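
Put together, a background session would look like the following short sketch, using the commands just mentioned:

root@scouzmir:~# lxc-start --name=testlxc --daemon
root@scouzmir:~# lxc-kill --name=testlxc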

The lxc package contains an initialization script that can automatically start one or several containers when the host boots. Its configuration file, /etc/default/lxc, is relatively straightforward; note that the container configuration files need to be stored in /etc/lxc/. Many users may prefer symbolic links, which can be created with ln -s /var/lib/lxc/testlxc/config /etc/lxc/testlxc.config.
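
For instance, the symbolic link suggested above would be created as follows:

root@scouzmir:~# ln -s /var/lib/lxc/testlxc/config /etc/lxc/testlxc.config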

GOING FURTHER Mass virtualization

Since LXC is a very lightweight isolation system, it is particularly well-suited to massive hosting of virtual servers. The network configuration will probably be a bit more advanced than what we described above, but the “rich” configuration using tap and veth interfaces should be enough in many cases.
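
As a sketch, a veth-based setup could look like the following excerpt of a container configuration file; the bridge name (br0) and the MAC address are illustrative assumptions, not values prescribed by this chapter:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bf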

It may also make sense to share part of the filesystem, such as the /usr and /lib subtrees, so as to avoid duplicating the software that may need to be common to several containers. This will usually be achieved with lxc.mount.entry entries in each container's configuration file. An interesting side-effect is that the processes will then use less physical memory, since the kernel is able to detect that the programs are shared. The marginal cost of one extra container can then be reduced to the disk space dedicated to its specific data, and a few extra processes that the kernel must schedule and manage.
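
As an illustration, read-only bind mounts of the host's /usr and /lib could be declared with fstab-style entries such as these (a sketch; depending on the LXC version, the mount target may need to be spelled as the full path under the container's rootfs):

lxc.mount.entry = /usr usr none ro,bind 0 0
lxc.mount.entry = /lib lib none ro,bind 0 0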

We haven't described all the available options, of course; more comprehensive information can be obtained from the lxc(7) and lxc.conf(5) manual pages and the ones they reference.

12.2.3. Virtualization with KVM

KVM, which stands for Kernel-based Virtual Machine, is first and foremost a kernel module providing most of the infrastructure that can be used by a virtualizer, but it is not a virtualizer by itself. Actual control for the virtualization is handled by a QEMU-based application. Don't worry if this section mentions qemu-* commands: it's still about KVM.

Unlike other virtualization systems, KVM was merged into the Linux kernel right from the start. Its developers chose to take advantage of the processor instruction sets dedicated to virtualization (Intel-VT and AMD-V), which keeps KVM lightweight, elegant and not resource-hungry. The drawback, of course, is that KVM only works on i386 and amd64 processors, and only those recent enough to have these instruction sets.
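
A quick way to check whether a given processor provides these extensions (a standard check, not specific to this book) is to look for the vmx (Intel-VT) or svm (AMD-V) flags in /proc/cpuinfo:

$ egrep 'vmx|svm' /proc/cpuinfo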

With Red Hat actively supporting its development, KVM looks poised to become the reference for Linux virtualization.

12.2.3.1. Preliminary Steps

Unlike such tools as VirtualBox, KVM itself doesn't include any user interface for creating and managing virtual machines. The qemu-kvm package only provides an executable able to start a virtual machine, as well as an initialization script that loads the appropriate kernel modules.
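
For instance, a virtual machine can be started by hand with that executable, bypassing libvirt entirely; in the following sketch, the memory size and the disk image path are hypothetical:

$ kvm -m 512 -hda /var/lib/kvm/testkvm.img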

Fortunately, Red Hat also provides another set of tools to address that problem, by developing the libvirt library and the associated virt-manager tools. libvirt allows managing virtual machines in a uniform way, independently of the virtualization system involved behind the scenes (it currently supports QEMU, KVM, Xen, LXC, OpenVZ, VirtualBox, VMware and UML). virt-manager is a graphical interface that uses libvirt to create and manage virtual machines.

We first install the required packages, with apt-get install qemu-kvm libvirt-bin virtinst virt-manager virt-viewer. libvirt-bin provides the libvirtd daemon, which allows (potentially remote) management of the virtual machines running on the host, and starts the required VMs when the host boots. In addition, this package provides the virsh command-line tool, which allows controlling the libvirtd-managed machines.
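
As a quick sanity check, virsh can enumerate the machines known to libvirtd; on a freshly installed host, the list will simply be empty:

# virsh list --all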