The Xen of Access Grid
(A guide for AG mechanics)



Introduction

When capturing and streaming multiple video sources in an Access Grid node, the ideal node configuration consists of one computer to deal with video display (and perhaps audio) and a separate computer to deal with video capture. Distributing the workload between display and capture machines in this way enables higher peak performance and a lower average load. Another important benefit, often overlooked or ignored, is enhanced usability.

A separate instance of vic, the current AG video tool, is required for each video capture stream. These are normally instantiated via VideoProducer services on the capture machine, while a single vic is run on the display machine via a VideoConsumer service. All local and remote video streams are viewed using this single VideoConsumer instance of vic.

In a single-machine node, all the capture and display instances of vic must run on the same machine. As the number of venue participants increases, so does the number of video streams in each vic instance, raising the level of screen clutter on the console and easily confusing the operator. Adding other applications, such as a SharedPresentation or VenueVNC, increases the clutter and potential for confusion even further.

Despite the disadvantages of single-machine nodes with multiple cameras, various pressures drive sites into such a situation. Certainly, it used to be smart to have a single-machine node back in the "old days" of AG1. The lower cost of one machine, rather than two, is another obvious pressure.


Xen is a machine virtualisation system, enabling multiple operating systems to run concurrently on the same physical machine. We use Xen to run both a display and a capture operating system together on a single machine. The 4-input capture card is visible only to the capture virtual machine (VM), which runs all the required VideoProducers, while the display VM runs a single VideoConsumer as well as the AudioService. The node operator deals almost exclusively with the display VM, while the capture VM just runs invisibly in the background.


Xen Concepts for the AG

If you're already familiar with Xen concepts, you may wish to go directly to the step by step guide to configuring a Xen Access Grid node. Otherwise, we'll briefly look at some elements of Xen and how they relate to an Access Grid installation. This should be read alongside the step by step guide. Over time, links will be added at the bottom of each paragraph, pointing to more detailed explanations of that paragraph's contents.

Xen enables multiple instances of an operating system to run concurrently on a single machine. One of these instances, or domains, can be thought of as the master, or zero, domain, i.e. domain 0 (dom0 for short). There can be any number of non-primary, or user, domains. We want a total of two domains (display and capture), and since one of them is dom0, let's name the other domU. In our AG setup, we'll use dom0 for the display machine and domU for the video capture machine.

It's easiest if each domain can run in its own disk partition (don't forget a separate swap partition for each domain too). Just install Linux in each of the target partitions, but don't bother setting up lilo or grub for any except the one destined to become dom0. Kernels installed at this time will not be used (except initially while setting up dom0); the new kernels generated by the Xen build process will be run instead. So far, this is much the same as building a normal multi-boot machine (again, don't forget the separate swap partitions). Building the Xen tools and kernels is all done from dom0 (more correctly, pre-dom0, if you haven't built the Xen tools yet). Xen requires the GRUB boot loader, so it will need to be installed (if not already) on dom0 before running as a Xen system.
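As an illustration only (the actual device names and sizes depend entirely on your own disks), a layout along these lines works well, and the configuration examples later in this guide assume it:
    /dev/hda1   dom0 root (display system)
    /dev/hda2   dom0 swap
    /dev/hda3   domU root (capture system)
    /dev/hda4   domU swap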
It's recommended that a source code build and install is performed; this allows later changes to be made to the kernel build options. We used the test version (stable, test and unstable versions are available as source code releases). The Xen build tools will patch their own version of the Linux kernel source code, which will be downloaded if not found in the top level build directory xen-2.0-testing (created when xen-2.0-testing-src.tgz is unpacked). Based on this stock Linux kernel source code, Xen will build a suitably patched kernel and modules for each of the domains we intend to use, in the directories linux-2.6.12-xen0 and linux-2.6.12-xenU (where the "linux-2.6.12" part of the name may differ, depending on your kernel version).
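For the 2.0 series, the build and install typically comes down to a couple of commands run from the unpacked source directory (check the README in your release for the exact targets):
    cd xen-2.0-testing
    make world      # builds the hypervisor, the tools and the xen0/xenU kernels
    make install    # installs xen.gz, the kernels, modules and the xm tools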

When the xen0 and xenU kernels have been built, configure GRUB to load the new xen0 kernel. Part of this involves telling the new kernel how much memory it should use (the balance will be available to xenU) and which hardware to ignore. In particular, the capture card(s) should be ignored, so that they are available to domU when it runs as the capture system.
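As a rough sketch, a GRUB menu.lst entry for such a setup might look like the following; the memory figure (dom0_mem, in kB), the root partition and the PCI bus/slot/function of the capture card are all examples and must be adjusted for your machine:
    title Xen 2.0 / XenLinux 2.6.12
        kernel /boot/xen.gz dom0_mem=262144
        module /boot/vmlinuz-2.6.12-xen0 root=/dev/hda1 ro physdev_dom0_hide=(02:0d.0)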

You should now be running the dom0 system; make any adjustments you may need (X configuration etc.). Now is the time to configure the second domain, domU. This is done by creating a file of configuration statements in the /etc/xen directory. The file contains details such as which kernel to use, how much of the remaining available memory to use, which devices (capture cards) will be visible and which disk partitions will be visible. Each user domain (we're only using one in this setup) has its own configuration file.
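A minimal /etc/xen/v1video might look something like the sketch below. The kernel path, memory size and partition names are assumptions matching the example partition layout above, and the exact option for handing the capture card to domU varies between Xen versions, so check the documentation for yours:
    kernel = "/boot/vmlinuz-2.6.12-xenU"
    memory = 256
    name   = "v1video"
    nics   = 1
    disk   = [ 'phy:hda3,hda1,w', 'phy:hda4,hda2,w' ]
    root   = "/dev/hda1 ro"
    # plus an entry granting domU the capture card hidden from dom0,
    # e.g. a pci = [...] line -- the syntax depends on your Xen version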

When the configuration file is complete, the new user domain can be started with a command such as:
    /usr/sbin/xm create v1video
(where v1video is the name of the domU configuration file). This step can be automated later by creating a link to the configuration file from the /etc/xen/auto directory (see the example below); domU would then boot automatically every time dom0 is started. For now, check that domU is listed in the output of the command:
    /usr/sbin/xm list
(you should see both dom0 and domU listed)
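If you later decide on the automatic start-up mentioned above, the link is simply:
    ln -s /etc/xen/v1video /etc/xen/auto/v1video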

If domU is running, a console for it can be created with the command:
    /usr/sbin/xm console v1video

The main configuration of domU itself involves the windowing system. Since dom0 owns the physical screen, it is unavailable to domU, so domU cannot run X as a capture machine normally would. Instead of running the "normal" X server, we run a virtual frame buffer version of X, called Xvfb, on display :1. Xvfb is a standard part of the X installation on a modern Linux distribution. How the start of Xvfb is automated at each boot differs between Linux distributions. Also, /etc/profile (or some other suitable file) should be edited so that :1 is the default display e.g.
    export DISPLAY=:1
Now, when a user logs in, any application requiring a screen (e.g. vic) is able to run even though there is no physical screen to display on; it uses the virtual frame buffer instead.
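As a sketch, on many distributions a line such as the following in /etc/rc.local (or an equivalent boot script) is enough to start Xvfb; the path to Xvfb and the screen geometry are examples:
    /usr/X11R6/bin/Xvfb :1 -screen 0 1024x768x24 &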


On both systems, create the user accounts necessary to run the AG toolkit, and install the AG toolkit itself. On domU, the AGServiceManager will always need to be running; this is most easily achieved via an init.d and/or rc*.d script/link that starts it at boot time.
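A minimal rc.local-style sketch, assuming an "ag" user account and the Xvfb display configured above (the exact service manager command name depends on your AG toolkit version and packaging):
    su - ag -c "export DISPLAY=:1; AGServiceManager &"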

On the dom0 display system, run the VenueClient as a normal user. The domU system appears as a distinct machine on the network, so its video capture services can be added to the node's resources as usual, using the Preferences->Manage My Node dialogue.

Congratulations! You now have a Xenified AG node.

Don't forget to see the FAQ and the step by step guide to installing and configuring Xen.