Re: [Forum] MOSIX

From: Svend Erik Venstrup - Nielsen <venstrup@mail1.stofanet.dk>
Date: Wed Jun 28 2000 - 13:18:51 CEST

>
>
> a Literature lists

Building Linux Clusters

                          By David HM Spector
                          1st Edition July 2000 (est.)
                          ISBN 1-56592-625-0
                          368 pages (est.), $44.95 (est.), Includes CD-ROM

From scientific applications to transaction processing, clustering technology
provides an affordable, scalable
computing solution. Building Linux Clusters introduces the reader to the basics of
cluster installation and
configuration, complete with a CD including cluster installation programs and tools
for parallel programming.

Beta Sample Chapter
   Chapter 9
    Application Examples

    After all this talk about the potential of clusters, and the various ways a
cluster can be put together and managed, you probably would like to see some
    applications in action.

    The choice of applications to include here has been difficult. It would have
been very easy to include some examples of high-energy physics applications.
    Molecular modelling code would have made for some large, impressive bits of
code. These examples would not illustrate how easily clusters can be used for
    down-to-earth applications. Instead, I have chosen to document a few smaller,
more interesting applications; one that might be of use to you if you are
    running your cluster at home, and another that might come in handy in a parallel
development setting for developing applications that need to move data
    more effectively.

    The first application is called mp3pvm, written by Brian Guarraci. mp3pvm is an
audio application that is useful if you use MP3 audio players on your Linux
    systems or if you own a hand-held MP3 player such as a Diamond RIO.

    The second application is called PVMPOV, a parallel version of the popular
ray-tracing application Persistence of Vision ("POV"). With POV you can
    generate breathtakingly real images or even render frames for computer-generated
animation; with PVMPOV, that process can be sped up by orders of
    magnitude.

    The final application is called PVFS, the Parallel Virtual File System, written
by Matthew M. Cettei, Walter B. Ligon III, and Robert B. Ross at the Parallel
    Architecture Research Lab at Clemson University. PVFS allows you to construct an
extremely high-performance filesystem out of a cluster.

    mp3pvm

    mp3pvm is a tool that will allow you to use a Linux cluster to create MP3 (MPEG
Layer 3) files from music CDs that can be played on popular hand-held
    devices like the Diamond RIO.[1]

    MP3[2] is a specification for a very high-quality audio recording format that
can rival CDs in its fidelity. MP3 has become the preferred format for small
    independent artists who typically don't have large recording contracts. It
enables them to get their music in front of an increasingly techno-savvy audience.
    All they have to do is put a file up on a web site, and people download it to
play on an MP3-capable device.

    It is also possible to take off-the-shelf commercial audio CDs and pull out the
individual tracks. They can be stored on a computer hard disk to be downloaded to
    devices like the RIO or even used to make a personal "jukebox" where audio
tracks are served up on demand from a server connected to an MP3 player.

        Something to Think About

        Before we get too far into making music CDs into portable MP3 files, I must
point out a few things about this process that are important from a legal point
        of view. The very existence of devices that can play copyrighted music on
devices other than those that were originally intended is, to say the least, an
        extremely contentious issue.

        The Recording Industry Association of America (RIAA) has been fighting
increased piracy of copyrighted music. This involves people copying music off of
        audio CDs, converting the tracks to MP3 format, and then making them
available over the Internet, often violating copyright laws. RIAA feels that devices
        such as the Diamond RIO and programs that create MP3 files contribute to
this piracy problem and take money away from the artists who create the music.

        The use of this tool is meant as a demonstration of how to use a cluster to
make a library of recordings of music that you own for playback on an MP3 player.
        Using this program to illegally copy music is, obviously, against U.S. law
and would violate international agreements on copyrights and intellectual property.
        Having said all of that, U.S. law allows the purchaser of audio CDs and
other electronic media to re-record copies of copyrighted works for their own
        personal use as long as the copies are not distributed to others, whether
for free or for profit.

    Obviously, neither I nor O'Reilly and Associates condone the use of this program
for any unlawful purpose.

    Now that we have covered the required legalisms, let's take a look at how you
can use a cluster to make your CD collection a little more portable.

    How It Works

    mp3pvm uses some very simple clustering techniques to speed up the process of
MP3 conversion.

    First, a music CD is placed in the CD-ROM drive of the master node. A program is
invoked that reads the music on the CD as though each song was just a
    data track (which it is, after all) on a regular CD-ROM. The audio tracks are
then stored on a shared (cluster-wide) disk as individual files.

    Next, by using a divide-and-conquer method, the individual audio tracks are
processed by nodes in the cluster. This is done by another program, called an
    MP3 encoder, that translates the CD audio format files into the portable MP3
format files. The MP3 files can then be put into a database and played on a Linux
    workstation with an MP3 player or downloaded to MP3 players and enjoyed
on-the-go.

    Sounds simple, no? It is a pretty straightforward process, and you may be
wondering why anyone would bother to make this a cluster project. MP3
    conversion is an interesting problem for a cluster because the conversion
process is very CPU-intensive. Reading and converting a 72-minute audio CD can
    take over two hours depending on how dense the data in the individual tracks is
(tracks that have more bits of silence are easier to process than tracks that
    have continuous sound).

    Using a distributed process speeds up the conversion from audio to MP3 format
roughly in proportion to the number of compute nodes added to the process,
    once the tracks are read off of the CD.
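
    To make the division of labor concrete, here is a rough serial equivalent of what
    mp3pvm parallelizes, written as a plain shell sketch. The shared directory, the
    track count, and the exact cdparanoia and bladeenc invocations are illustrative
    assumptions, not the code mp3pvm actually runs:

    # Serial sketch of the mp3pvm pipeline (illustrative only)
    WAVDIR=/home/music            # shared (cluster-wide) directory
    TRACKS=16                     # number of tracks on the CD

    # Step 1 (master node): rip each audio track to a .wav file
    for i in `seq 1 $TRACKS`; do
        cdparanoia $i $WAVDIR/track$i.wav
    done

    # Step 2 (the part mp3pvm farms out, one track per idle node): encode to MP3
    for i in `seq 1 $TRACKS`; do
        bladeenc $WAVDIR/track$i.wav
    done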

    What's Required

    In order to use the mp3pvm program, you will need to have the following:

        A master node with a CD-ROM drive. No audio card is required since the
conversion software will be reading the audio tracks as "data" rather than
        attempting to play them through an audio card connected to the system.

        A shared disk space across the cluster. This can be accomplished via NFS or
the automounter (see the NFS sketch after this list). (Or, with modifications to the
        mp3pvm program, it should be possible to actually parcel out the data tracks
to slave nodes, but that would consume a lot of network bandwidth.)

        Slave nodes. The more, the better.

        cdparanoia--an Open Source program that can read audio CDs and rewrite the
audio tracks as data streams onto another medium, such as files on a
        hard disk.

        bladeenc--an Open Source program that can translate various audio formats
into MP3.
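
    If you use NFS for the shared disk space, a minimal sketch might look like the
    following, assuming the shared directory is /home/music, exported from the master
    to slave nodes named node2 through node5; your exports file, node names, and mount
    options will differ:

    # On the master node: an illustrative /etc/exports entry
    /home/music   node2(rw) node3(rw) node4(rw) node5(rw)

    # Re-export after editing /etc/exports (or restart the NFS service)
    [root@master /root]# exportfs -a

    # On each slave node: mount the share at the same path
    [root@node2 /root]# mkdir -p /home/music
    [root@node2 /root]# mount master:/home/music /home/music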

    Building and Installation

    Building this package is pretty easy. All of the software you need has been
included on the Linux Clusters CD-ROM.

    In order to install mp3pvm, you will need to mount the Linux Clusters CD-ROM;
this will need to be done by the superuser (root). You will need to "su" to
    root, or have the administrator of your cluster complete this step in order to
copy the software.

    Mounting the CD-ROM

    With the CD in the CD-ROM drive, execute the following command, which will mount
the CD-ROM and make it available to the system:

    [root@master /root]# mount -r /dev/cdrom /mnt/cdrom

    You can check to see if the CD-ROM is correctly mounted by using the df command.
The actual sizes of the partitions listed may be different than what is
    shown here, but they should look something like this:

    Filesystem           1k-blocks     Used Available Use% Mounted on
    /dev/sda1               248847    47711    188286  20% /
    /dev/sda5              5050844    11271   4778117   0% /home
    /dev/sda9               248847      431    235566   0% /tmp
    /dev/sda6              1018298   650548    315139  67% /usr
    /dev/sda7               995115    76970    866739   8% /usr/local
    /dev/sda8               893986    33052    814749   4% /var
    /dev/scd0               589998   589998         0 100% /mnt/cdrom

    The important part of this listing is the last line where we can see that the
CD-ROM is indeed mounted and online.

    Once the CD-ROM has been mounted, all of the required software can be copied
with a single command:

    [root@master /root]# cp -Rp /mnt/cdrom/ExampleApps/mp3pvm /tmp/mp3pvm

    The data will be copied into the temp directory. If you would like to copy the
programs to another directory, just change the target directory on the command
    line. For the purposes of this example, we will assume that the files are in
/tmp/mp3pvm.

    The directory contains three gzip'd tar files:

    -rw-rw-r--   1 root  root  137035 Aug 22 23:17 bladeenc-082-src-stable.tar.gz
    -rw-rw-r--   1 root  root   97126 Aug 22 23:18 cdparanoia-III-alpha9.6.src.tgz
    -rw-r--r--   1 root  root    4817 Aug 22 23:14 mp3pvm-0.3.tar.gz

    The first file is the bladeenc MP3 encoder that translates the audio files into
MP3 format; the second is the cdparanoia program that reads the audio tracks
    off the CD-ROM; and finally, the third is the mp3pvm package that will perform
the conversion in parallel on the cluster.

    Each package should be uncompressed and untar'd with the following command:

    [root@master /root]# tar zxvf filename

    where filename is one of the files listed above. Each file should be processed
in turn. The tar command will print out an in-depth listing of all of the files
    that are being unpacked from each archive as they are being processed.

    At the end of the process, there will be three directories created that can be
listed (along with the original gzip'd tar files) as follows:

    [root@master /root]# ls -FC
    bladeenc-082-src-stable/         cdparanoia-III-alpha9.6/          mp3pvm/
    bladeenc-082-src-stable.tar.gz   cdparanoia-III-alpha9.6.src.tgz   mp3pvm-0.3.tar.gz

    The next step is to build each piece of software and install it in an accessible
place on each node of the cluster.

    Building bladeenc

    The bladeenc encoder comes pre-configured and will compile on any Linux system.

    To build the encoder, change to the blade encoder source directory by typing:

    [root@master /root]# cd bladeenc-082-src-stable

    Then, start the build process by typing "make" at the shell prompt:

    [root@master /root]# make
    gcc -O2 -m486 -malign-jumps=2 -malign-loops=2 -funroll-all-loops -c -o bladesys.o bladesys.c
    gcc -O2 -m486 -malign-jumps=2 -malign-loops=2 -funroll-all-loops -c -o bladtab.o bladtab.c
                            :
    gcc -o bladeenc bladesys.o bladtab.o codec.o common.o encode.o formatbitstream2.o
        huffman.o l3bitstream.o l3psy.o loop.o main.o mdct.o reservoir.o samplein.o
        strupr.o subs.o tables.o -lm

    The process will complete very quickly; the resulting binary will be called
"bladeenc." This binary should be installed in a generally accessible place, such as
    /usr/local/bin on the master node as well as any cluster nodes on which you wish
to run the mp3pvm application.
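
    For example, installing the freshly built binary on the master node might look
    like this (run as root; the path follows the text above):

    [root@master bladeenc-082-src-stable]# cp bladeenc /usr/local/bin/
    [root@master bladeenc-082-src-stable]# chmod 755 /usr/local/bin/bladeenc

    Copying the binary to the other cluster nodes is covered in "Installing the Files
    Cluster-wide" later in this chapter.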

    Building cdparanoia

    To build cdparanoia, type:

    [root@master /root]# cd ../

    to move back up to the mp3pvm distribution directory which houses the
applications, and then type:

    [root@master /root]# cd cdparanoia-III-alpha9.6

    to enter the cdparanoia source directory. Unlike the blade encoder, cdparanoia
uses a GNU autoconf configure script to set up the build parameters for the
    package. Part of the configure process allows the builder to specify where
the resulting binary should be installed.

    To configure the package, type the command:

    [root@master /root]# ./configure --prefix=/usr/local

    The script will print out information about the packages and configuration of
the system it is running on, and it will create a makefile that can be used to
    compile the package:

    bash# ./configure --prefix=/usr/local
    loading cache ./config.cache
    checking host system type... i686-unknown-linux
    checking for ranlib... (cached) ranlib
    checking for ar... (cached) ar
    checking for install... (cached) install
    checking how to run the C preprocessor... (cached) gcc -E
    checking for ANSI C header files... (cached) yes
    checking size of short... (cached) 2
    checking size of int... (cached) 4
    checking size of long... (cached) 4
    checking size of long long... (cached) 8
    checking for linux/sbpcd.h... (cached) no
    checking for linux/ucdrom.h... (cached) no
    checking whether make sets ${MAKE}... (cached) yes
    checking for working const... (cached) yes
    creating ./config.status
    creating Makefile
    creating interface/Makefile
    creating paranoia/Makefile

    When the configure script completes, type make at the shell prompt to compile
the package:

    [root@master cdparanoia-III-alpha9.6]# make
    make cdda_interface.a CFLAGS="-O -Dsize16='short' -Dsize32='int' "
    make[2]: Entering directory `/tmp/mp3pvm/cdparanoia-III-alpha9.6/interface'
    gcc -O -Dsize16='short' -Dsize32='int' -c scan_devices.c
    scan_devices.c: In function `cdda_find_a_cdrom':
    scan_devices.c:69: warning: passing arg 4 of `idmessage' makes pointer from integer without a cast
            :
    make[1]: Leaving directory `/tmp/mp3pvm/cdparanoia-III-alpha9.6'
    strip cdparanoia

    Once the build process has completed, all that's left is to install the binary
in the directory that was specified in the configure command by typing make
    install at the command prompt:

    [root@master cdparanoia-III-alpha9.6]# make install

    The makefile will then install the executable and its manual page:

    install -m 0755 ./cdparanoia /usr/local/bin
    install -m 0644 ./cdparanoia.1 /usr/local/man/man1

    Depending upon what installation directory was specified to the configure
command, you may have to be root in order to actually install the files.

    Building mp3pvm

    To build the mp3pvm parallel application itself, type:

    [root@master /root]# cd ../

    to move back up to the mp3pvm distribution directory, which houses the
applications, and then:

    [root@master /root]# cd mp3pvm

    to change to the mp3pvm source directory.

    In order to correctly operate, the mp3pvm application needs to know where to put
files that it reads off of the audio CD. The information that controls this
    aspect of the program's operation is in the file mp3pvm.c in a line of code near
the top of the file that looks like this:

    /* the common directory where wav files are put and mp3 are created */
    #define WAVDIR "/home/music"

    You will need to edit this file and change /home/music to the name of the
directory where you wish to store the audio tracks and MP3 files; this directory
    must be accessible to all of the cluster nodes you plan on using.

    Providing that your cluster has been built from the CD-ROM supplied with this
book, all of the environment variables should be set and the PVM software
    installed so that mp3pvm can be compiled with a single command:

    [root@master /root]# aimk
    making in LINUX/ for LINUX
    gcc -g -I/usr/local/pvm3/include -DSYSVSIGNAL -DNOWAIT3 -DRSHCOMMAND=\"/usr/bin/rsh\"
        -DNEEDENDIAN -DFDSETNOTSTRUCT -DHASERRORVARS -DCTIMEISTIMET -DSYSERRISCONST
        -o mp3pvm /root/mp3pvm/mp3pvm.c -L/usr/local/pvm3/lib/LINUX -lpvm3 -lgpvm3
    mv mp3pvm /usr/local/pvm3/bin/LINUX

    The aimk program is a PVM utility that automatically builds and links a PVM
application. Once the application is built, aimk will attempt to install the
    binary, called mp3pvm, into the PVM binaries directory. On clusters built with
the Linux Clusters CD-ROM, the default location for PVM applications is
    /usr/local/pvm3/bin/LINUX/. If you are not the superuser, the installation
process will return an error message when an attempt is made to copy the
    executable into this PVM binaries directory. As with the other files, you will
have to "su" to root or have the cluster administrator install the mp3pvm
    executable. Alternatively, you can run the mp3pvm application in the build
directory, but you will have to ensure that all nodes in the cluster have this
    executable in the same place so that the parallel virtual machine can find it
when it needs to start the application.

    Installing the Files Cluster-wide

    Before these programs can be used to make MP3 files on a cluster, each node must
have copies of all three of these applications: bladeenc, cdparanoia, and
    mp3pvm.

    Assuming that these applications are installed in the following locations:

        /usr/local/bin/bladeenc

        /usr/local/bin/cdparanoia

        /usr/local/pvm3/bin/LINUX/mp3pvm

    the files should be copied using rcp, ftp, or any other method to each of the
cluster nodes that you wish to use for MP3 processing. All copies of these
    applications must be installed in the same place on each node. If the
applications are missing on some nodes, or installed in different places on some of
the
    nodes, the mp3pvm application will fail. This is because the mp3pvm application
will try to launch applications based on where it finds them on the master
    node of the parallel virtual machine.
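
    As a sketch of one way to push the three binaries to every node, assuming slave
    nodes named node2 through node5 and working rcp access between nodes (adjust the
    node names and copy method to your cluster):

    [root@master /root]# for node in node2 node3 node4 node5
    > do
    >   rcp /usr/local/bin/bladeenc           ${node}:/usr/local/bin/
    >   rcp /usr/local/bin/cdparanoia         ${node}:/usr/local/bin/
    >   rcp /usr/local/pvm3/bin/LINUX/mp3pvm  ${node}:/usr/local/pvm3/bin/LINUX/
    > done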

    How To Use It

    The mp3pvm application is very easy to use. The whole system can be up and
running in three easy steps:

    Step 1: Starting the virtual machine

    Start the parallel virtual machine software by typing the command pvm:

    [spector@master /home/music]# pvm

    The PVM system will respond with a PVM prompt:

    pvm>

    If your system was set up using the cluster management system from the Linux
Clusters CD-ROM, then all of the nodes in your cluster should already be
    known to PVM, and you can list them by entering the conf command:

    pvm> conf
    5 hosts, 1 data format
                        HOST DTID ARCH SPEED DSIG
    master.cluster.ny.zeitgeist.com 40000 LINUX 1000 0x00408841
                       node2 80000 LINUX 1000 0x00408841
                       node3 c0000 LINUX 1000 0x00408841
                       node4 100000 LINUX 1000 0x00408841
                       node5 140000 LINUX 1000 0x00408841
    pvm>

    In this example, there are five nodes in the parallel virtual machine.

    If PVM returns simply the name of the node you are on (presumably the
master node), such as:

    pvm> conf
    1 host, 1 data format
                        HOST DTID ARCH SPEED DSIG
    master.cluster.ny.zeitgeist.com 40000 LINUX 1000 0x00408841
    pvm>

    you will need to add the nodes that you wish to use with the mp3pvm application
by hand. To accomplish this, type the add command, along with the name of
    the node to be added to the parallel virtual machine:

    pvm> add node2.cluster.ny.zeitgeist.com
    1 successful
                        HOST DTID
    node2.cluster.ny.zeitgeist.com 80000
    pvm>

    This will need to be done for each of the slave/compute nodes that will be used.
Once all the desired nodes have been added, the conf listing will look very
    much like the first one shown above.

    Once the PVM is set up, exit from the PVM shell by typing quit. This will place
you back at the command shell.

    Step 2: Setting up the CD

    Place an audio CD that you wish to convert to MP3 format in the CD-ROM drive of
the master node of the cluster. You do not need to mount the CD (which
    isn't mountable anyway--audio CDs are not a recognized filesystem under Linux).

    For our example, we'll use the soundtrack to The Rocky Horror Picture Show, a CD
that has 16 tracks totalling 54 minutes and 45 seconds.

    Step 3: Starting the parallel application

    Start the mp3pvm parallel application by typing the full path name of the
application, plus the number of tracks that should be converted. For example:

    [spector@master /home/music]# /usr/local/pvm3/bin/LINUX/mp3pvm 16

    Of course, if your copy of mp3pvm is located someplace else, you should use the
appropriate path name for your installation. The application will start up and
    report on its progress on a track by track basis:

    Preparing to translate 16 tracks.
    Spawning 17 tasks ... SUCCESSFUL
    Broadcasting init info
    Waiting for tasks to init.
    Farming...
    Collector finished: track1.wav
    Sending WID:(track1.wav) to worker #0
    Collector finished: track2.wav
        :

    While the output is going to standard output, it might be interesting
to see what XPVM thinks is going on. XPVM (see Chapter 7) is an execution
    monitor for PVM applications. It can be started up by opening another X terminal
or other command shell and typing xpvm at the shell prompt.

    XPVM shows a very different view of the mp3pvm application in action.

    The XPVM window shown in Figure 9-1 shows the master node in communication with
four slave nodes. The task/time graph in the bottom half of the
    window shows that the first task is the copy of mp3pvm running on the master
node. The message lines on the graph show the master spawning subordinate
    tasks on other nodes in the parallel virtual machine and giving them work to do.

     Figure 9-1. XPVM view of mp3pvm
               activity

    You may notice that there is a flurry of activity right at the start, and then
all of the activity bars for the slave nodes are white while the master node is
    highlighted.

    A more detailed look at this may be seen in the "task versus time" display by using
the middle mouse button (or its equivalent if you are not using a three-button
    mouse) to select an area of the task graph. When you release the mouse button,
the display will "zoom in" on the tasks and show greater detail of the
    interaction between the PVM tasks.

    Figure 9-2 shows that the master node is starting up, spawning the worker tasks,
and then processing the audio CD itself, pulling the tracks and putting them
    in the shared disk space. While the master task is processing the CD, the slaves
are sleeping, waiting to be assigned a track to convert to MP3 format.

     Figure 9-2. A detailed look at the
      startup communications of the
          mp3pvm application

    The processing will continue for quite some time; the exact time depends upon
the speed of the nodes in your cluster and how large the input audio files are.

    It is also possible, using other display options in XPVM, to watch the output
from the individual PVM worker processes, as is shown in Figure 9-3. This
    display is accessible from the XPVM "Views" menu.

       Figure 9-3. XPVM's task
          output display

    At the end, you will see a message from the master copy of the mp3pvm
application indicating that the MP3 conversion process has completed:

        :
    Collector finished: track16.wav
    Sending WID:(track16.wav) to worker #15
    Collector is done
    Cleaning up... 18 in group
    [spector@master /home/music]#

    To check on the work, simply get a file listing of the shared directory where
mp3pvm was configured to place the MP3 output files. For example, if mp3pvm
    was configured to use the directory /home/music as in our examples, the first
set of files will be the MP3 files, followed by the original audio files (known
    canonically as "wave" files):
    [spector@master /home/music]# ls /home/music
    track1.mp3 track2.mp3 track3.mp3 track4.mp3 track5.mp3 track6.mp3
    track7.mp3 track8.mp3 track9.mp3 track10.mp3 track11.mp3 track12.mp3
    track13.mp3 track14.mp3 track15.mp3 track16.mp3 track1.wav track2.wav
    track3.wav track4.wav track5.wav track6.wav track7.wav track8.wav
    track9.wav track10.wav track11.wav track12.wav track13.wav track14.wav
    track15.wav track16.wav

    All of the files have been copied from the CD and converted to MP3 format.

    They are now ready to be played either via an MP3 player on a desktop-style
machine or to be downloaded to a hand-held MP3 player.

    Possible Extensions

    Obviously this is a minimalist application; there are a number of areas where it
could be improved to make transforming audio files into MP3 format easier. Some
    areas that might be easy targets for enhancement are:

    Automatic track counting
        This would enable the mp3pvm application to figure out on its own how many
tracks are present on the target CD (a rough sketch follows this list). Right now,
        the number of tracks to process must be specified on the command line.
Similarly, custom track selection would allow the user to select which tracks would
        be processed. This would be useful when you want to pull your favorite
tracks from a CD and leave the ones you don't like.

    Album and track name processing
        This would allow the mp3pvm application to make the resulting MP3 files more
easily identifiable. Right now the tracks are simply numbers (e.g.,
        "track1.mp3") and you have to rename them by hand if you want to know which
track is which.

        This can be accomplished by reading a binary ID number stored on every
commercially produced CD. This "disc ID" can be used to find out the title,
        artist, and other information about the CD and can be used to query any of
several databases on the Internet that store album names, track titles, and
        even lyrics of entire songs. This kind of functionality would be very useful
in a "jukebox" type of application where you might want to be able to
        automatically process CDs in your collection or to be able to display this
kind of information to the user in an MP3 player application.

    Database integration
        Hooks could be added to mp3pvm that could allow the program to automatically
store the resulting MP3 files in a database, which would come in handy
        in the previously mentioned jukebox application. As the application stands
right now, there is a lot of grunt work involved in taking the MP3 files
        generated by this process and doing something interesting with them.
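
    As a rough sketch of the automatic track counting idea mentioned above (this is
    not part of mp3pvm), a small wrapper script could ask cdparanoia for the CD's
    table of contents and count the track lines. The parsing below assumes one "N."
    line per track in the output of cdparanoia -Q, which may need adjusting for your
    version:

    #!/bin/sh
    # Count the audio tracks on the CD and hand the number to mp3pvm.
    NTRACKS=`cdparanoia -Q 2>&1 | grep -c '^ *[0-9][0-9]*\.'`
    /usr/local/pvm3/bin/LINUX/mp3pvm $NTRACKS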

    PVMPOV

    PVMPOV, or the Parallel Virtual Machine version of the Persistence of Vision ray
tracing system, is a way to use clusters to generate realistic images. The
    PVM extensions to POV were originally written by the PVM team at Oak Ridge
National Labs and extended by Harald Deischinger who is the current
    maintainer of the PVM patches to the POV package.

    PVMPOV can be used either for static images or to generate frames for computer
generated animation. The results are quite striking, as seen in Figure 9-4,
    which is an image generated from the demo files included with the standard POV
package. Although the black and white version here doesn't do justice to
    the depth and complexity of the color version, it is plain to see that the image
is quite detailed. The coolest thing about it is that it was generated completely
    inside the memory of a computer.

     Figure 9-4. A ray-traced sunset from the
     POV samples, designed by Dan Farmer

    How Does POV Work?

    Ray tracing is, literally, tracing the path of a light beam as it reflects off
of an object. The POV ray tracing system works by reading a set of textual
    instructions that describe a scene, and then applying a system of algorithms
that figure out what would happen (i.e., how something would look) if a beam of
    light were cast on objects in the scene, and then the reflected light were
captured on film.

    The description file sets up a scene and the objects in it down to the last
detail of their characteristics. The format of the description file is pretty
    straightforward; it consists of three main sections:

    Includes section
        This section is where pre-defined objects, textures, and other tools are
included from a defined set of library files and made available to subsequent
        parts of the description file.

    Camera section
        This section defines where the "camera" that records the scene is placed.
This is defined in terms of X, Y, and Z coordinates that represent vertical,
        horizontal, and depth parameters.

    Objects section
        This section defines the elements of the scene itself. It is in this section
where, as in the sunset example, the sun, the glowing halo around the sun, and
        the sea itself are defined. This section also defines where in the scene
each object is, relative to the other objects in the scene.

    The complete description file for the sunset image is defined by less than 93
lines of image definitions. The file itself is composed of vanilla text and looks
like
    the fragment shown here, which is from the sunset3.pov file:

    #include "colors.inc"
    #include "textures.inc"
    #include "skies.inc"
    #include "metals.inc"

    camera {
        location <0, 0.075, -0.45>
        up y
        right <4/3, 0, 0>
        direction z
        look_at <0, 0.075, 0>
    }

    // Dark cloudy sky
    sky_sphere {
        pigment {
            wrinkles
            turbulence 0.3
            omega 0.707
            octaves 5
            color_map {
                [0.0 color DustyRose * 2.5]
                [0.2 color Orange ]
                [0.8 color SlateBlue * 0.25]
                [1.0 color SkyBlue]
            }
            scale <0.5, 0.1, 1000>
        }
    }
    :
    :
    :
    #end of file

    This file fragment shows the three major sections of the POV scene description,
starting with the include directives, the camera settings, and then the
    beginnings of the object descriptions, here describing the parameters of the
cloudy sky shown in the final image.

    Where To Get It

    Thanks to Chris Cason, the current leader of the POV efforts, we are privileged
to be able to distribute both the complete sources to the POVRAY package and
    a set of binaries that will work on an Intel-based Linux cluster.
Updates to the POV software are freely available from the POV Ray Tracing home page
    at http://www.povray.org.

    The sources for the main package and the patches required to make POVRAY work on
a Beowulf cluster under the PVM package are included on the
    CD-ROM in the ExampleApps directory in the POVPVM subdirectory.

    How To Install It

    If you have built your cluster from the CD-ROM supplied with this book, then the
PVMPOV version of the executables will be installed by default and should
    run exactly as described in the next section.

    If you are running your cluster on a different architecture, or if you have
retrieved a newer version of the POVRAY software from the POV FTP site, you will
    need to re-apply the patches that are used to make a parallelized version of the
software for cluster use.

    The PVM patches to POVRAY are very easy to install. The entire operation should
take only a few minutes once you have the POV source code. Here's a
    step-by-step guide to applying the patches and making a parallel version of POV.
It is important to point out that these patches are not part of the POV
    distribution, and if you were to ask the POV authors about them, they would
disavow any knowledge of them--in other words, these patches are very cool,
    useful, and unsupported.

    Step 1: Getting the POV sources

    The POV sources are available from either the POV home page at
http://www.povray.org or via anonymous FTP at ftp.povray.org. The sources for the Unix
    version of POV are in the directory /pub/povray/Official/Unix; the files
required to build the package are povuni_d.tgz and povuni_s.tgz. The first file is a
    collection of data files, documentation, and examples that are part of the POV
sources, and the second file is the actual source code itself. You should put
    these files someplace easily accessible that has at least 20MB of free space;
for the purposes of the rest of these examples, we will presume the sources are in /tmp.

    The following script is an example of an ftp session to retrieve the entire POV
package:

    gatekeeper.zeitgeist.com % ftp ftp.povray.org
    Connected to ftp.povray.org.
    220 ProFTPD 1.2.0pre3 Server (ftp.povray.org) [ftp.povray.org]
    Name (ftp.povray.org:spector): ftp
    331 Anonymous login ok, send your complete e-mail address as password.
    Password: your-email@yoursite.org
    230 Anonymous access granted, restrictions apply.
    Remote system type is UNIX.
    Using binary mode to transfer files.
    ftp> cd pub/povray/Official/Unix
    250-POVUNI_S.TGZ - Official POV-Ray 3.1e C source code for Unix systems.
                    This file is not required if you have one of the complete
                    POV-Ray Unix distributions such as POVLINUX.TGZ. If
                    compiling POV-Ray separately, you MUST get POVUNI_D.TGZ.
     POVUNI_D.TGZ - POV-Ray 3.1e Documentation and Scene files for Unix. This
                    archive is not needed if you already have another POV-Ray
                    3.1e distribution such as POVLINUX.TGZ, POVMSDOS.ZIP, etc.
                    They ARE needed if you only have the POVUNI_S.TGZ archive.
    250 CWD command successful.
    ftp> get povuni_d.tgz
    local: povuni_d.tgz remote: povuni_d.tgz
    200 PORT command successful.
    150 Opening BINARY mode data connection for povuni_d.tgz (911334 bytes).
    226 Transfer complete.
    911334 bytes received in 69.3 secs (13 Kbytes/sec)
    ftp> get povuni_s.tgz
    local: povuni_s.tgz remote: povuni_s.tgz
    200 PORT command successful.
    150 Opening BINARY mode data connection for povuni_s.tgz (945669 bytes).
    226 Transfer complete.
    945669 bytes received in 71.1 secs (13 Kbytes/sec)
    ftp> bye
    221 Goodbye.

    Step 2: Unpacking the sources

    If we use the /tmp directory as our starting point, you should unpack the
sources by using the following commands:

    [spector@master]# tar zxvf povuni_s.tgz
    [spector@master]# tar zxvf povuni_d.tgz

    These commands will show you what they are doing as the files are being
unpacked. Once both tar commands have been executed, you will have a
    directory called povrayNN where the "NN" is a version number. In the case of
this example, the directory is called povray31 because the version that was
    current at the time this book was being written was POV version 3.1g; your
version may be of a newer vintage.

    If you change directory to the POV directory and get a directory listing, you
should see that the POV distribution consists of a number of documentation files,
    some initialization files, and a sources directory:

    [spector@master]# ls

    CMPL_Unix.doc   compile.doc    pngflc.ini    povwhere.get  res640.ini    slow.ini      zipfli.ini
    README.unix     gamma.gif      pngfli.ini    rerunpov.sh   res800.ini    source
    allscene        gamma.gif.txt  povlegal.doc  res.ini       revision.doc  tgaflc.ini
    allscene.ini    htm2html       povray.1      res120.ini    runpov.sh     tgafli.ini
    allscene.sh     include        povray.ini    res1k.ini     scenes        xpovicon.xpm
    betanews.txt    install        povuser.txt   res320.ini    shapes.pov    zipflc.ini

    It is in this directory that you will unpack the patch files that
will enable POV to work on a cluster.

    Step 3: Unpacking the PVMPOV patches

    The PVM patches for the POV system are available on the Linux Clusters CD-ROM.
If we presume that the CD-ROM is mounted at /mnt/cdrom
    and that your current working directory is the povray directory unpacked in the
last step, then the patches can be unpacked directly with one command:

    [spector@master]# tar zxvf /mnt/cdrom/ExampleApps/POV/pvmpov-3.1.tgz

    This will extract five files and one directory of sources into the current
directory. The files are documentation, the patch file itself, and a small script
that will
    apply the patches to the POVRAY source files.

    Step 4: Patching POV

    Apply the patch to the sources by running the inst-pvm shell script, as in:

    [spector@tmaster povray31]# ./inst-pvm
    Trying to apply the patch.

    Searching for rejected files

    If you see nothing listed between the "trying to apply..." and "searching..."
lines, the patch was successfully applied to the POV sources, and you can continue
    to Step 5 and build the modified sources.

    If there are problems with the patch (for example, some of the patches are
misaligned with regard to the current version of the source), you will get error
    messages from the patch program, as in this next listing:

    [spector@tmaster povray31]# ./inst-pvm
    Trying to apply the patch.

    2 out of 18 hunks FAILED -- saving rejects to source/povray.c.rej
    2 out of 8 hunks FAILED -- saving rejects to source/render.c.rej
    1 out of 2 hunks FAILED -- saving rejects to source/render.h.rej

    Searching for rejected files

    ./source/povray.c.rej
    ./source/render.c.rej
    ./source/render.h.rej

    If this happens, all is not lost! It's pretty easy to look at the .rej files and
then compare them to the sources and insert the patches by hand. The patch program
    just makes things a little more convenient.

    Step 5: Building the patched POV

    Building the package is pretty easy; there are two commands that need to be
executed to build the bulk of the package:

    [spector@master]# cd source/unix

    This will place you in the appropriate directory to build the binaries; then
type:

    [spector@master]# make newunix
    [spector@master]# make newxwin

    The build will continue for quite a while; POV is a very large package.
Eventually it will complete, and you will then want to build the PVM binary that can
    take advantage of the cluster.

    To install the main POV executables, type make install at the shell prompt; this
will install the files x-povray and povray in the directory /usr/local/bin. If
    you would like to install these programs somewhere else, you will need to modify
the makefile to point to the appropriate place.

    Lastly, as root, you should copy all the installed files to all the nodes of the
cluster you wish to use for ray tracing. POV will need the executables and the
    supporting files in order to operate on each of the nodes.
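
    A minimal sketch of those two steps, assuming slave nodes named node2 through
    node5 and rcp access (the directories follow the defaults described above):

    [root@master unix]# make install
    [root@master unix]# for node in node2 node3 node4 node5
    > do
    >   rcp /usr/local/bin/povray /usr/local/bin/x-povray ${node}:/usr/local/bin/
    > done

    The standard include files (colors.inc, textures.inc, and so on) also need to be
    reachable on each node, either by copying the POV include directory the same way
    or by sharing it over NFS.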

    Step 6: Building the PVM-specific component

    To start this process, change directory to the pvm directory, as in:

    [spector@master]# cd ../pvm

    Next, you will need to invoke the aimk utility, which is a PVM tool that is used
to build PVM applications:

    [spector@master]# aimk

    The aimk utility will read the makefile in this directory and build the PVMPOV
application. If you are logged in as or su'd to root, aimk will automatically install
    the PVMPOV executable in the default location for PVM binaries on your cluster.
In the case of clusters built with the Linux Clusters CD-ROM, that
    location will be /usr/local/pvm3/bin/LINUX/. If you are not root, you will need
to have root install it in a place that is globally accessible. In either case, you
    should, as with the x-povray and povray binaries, ensure that copies of PVMPOV are
on all the nodes of your cluster that you would like to use for ray tracing.

    How to Use It

    POV is a very easy program to use once you have a scene description file that
you want to render. For this example, we'll use the files that generated the pretty
    sunset image from earlier in this chapter. These files can be found in the
pov3demo directory in the showoff subdirectory.

    In order to render this image, we'll need two files: a .pov file that describes
the scene and an initialization or .ini file that can be used by the pvmpov program
    to set some basic parameters. In this case the files should be sunset3.pov and
sunset3.ini.

    The "showoff" directory in some of the standard POV distributions may not have
an initialization file for all of the demo scene descriptions. If there is not
    one for this file, just copy any of these files to a new file named sunset3.ini
and edit the first line of the file so that it uses the sunset3.pov scene
description.

    Once you have the .ini and .pov files, you are ready to start rendering.

    Actually running the application is quite easy and can be started with a single
command line.

    Before running the application, you should start up the parallel virtual machine
with the number of nodes that you wish to use; then the following steps will
    begin the rendering process.
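
    One way to bring up the virtual machine with a fixed set of nodes is to hand the
    pvm console a hostfile; the file name and node names here are illustrative:

    [spector@master]# cat > pvmhosts <<EOF
    master.cluster.ny.zeitgeist.com
    node2
    node3
    node4
    node5
    EOF
    [spector@master]# pvm pvmhosts
    pvm> conf                  # verify that all of the hosts are listed
    pvm> quit                  # leave the console; the PVM daemons keep running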

    Step 1: Copying the scene and initialization files

    The scene and initialization files will have to be available on each node where
PVMPOV runs. If home directories are shared with NFS or the automounter,
    placing a copy in your home directory will suffice. Otherwise, the files will
have to be copied to each node in the parallel virtual machine.
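
    If home directories are not shared, one rough way to copy the two files to each
    node (node names are illustrative, and the target directory must already exist on
    each node):

    [spector@master]# cd /home/spector/pov3demo/showoff
    [spector@master]# for node in node2 node3 node4 node5
    > do rcp sunset3.pov sunset3.ini ${node}:/home/spector/pov3demo/showoff/
    > done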

    Step 2: Running PVMPOV

    The PVMPOV application can be run quite simply with one command:

    [spector@master]# pvmpov -NS/usr/local/bin/pvmpov
/home/spector/pov3demo/showoff/sunset3.ini

    The -NS directive[3] tells PVMPOV where to find itself--usually this isn't
necessary, but it ensures that the PVMPOV application can be found on all of the
    compute nodes, even if there are differences in your PATH environment variables
on the different compute nodes.

    This will start as many slave tasks as you have compute nodes defined in your
parallel virtual machine.

    POV will print out a lot of information as it starts up that relates to the job
parameters and the files that will be included to generate the ray-traced
    scene.

    When the scene is complete, POV will print out some statistics about the scene
and how the slave/compute nodes performed, and the resulting image file will
    be left in the same directory.

    Step 3: Viewing the results

    The output file is in what is known as the TGA (Targa) file format. This is a
high-resolution graphics file format that can be viewed with most viewers, including
    the ee ("Electric Eyes") application that comes with Linux.

    To view the sunset file, start up ee with the filename sunset3.tga as its
argument, as in:

    [spector@master]# ee /home/spector/pov3demo/showoff/sunset3.tga

    The image that is displayed should be very close to what was shown in Figure
9-4, except with much better colors.

    POV As a Clustered Application

    PVMPOV as a clustered application is very interesting to examine. If we were to
run the XPVM execution monitor and then rerun PVMPOV, we would see
    just how much computation is done in parallel. A small snippet of this activity
can be seen in the screen shot of XPVM shown in Figure 9-5, which shows the
    intense back and forth communications between the master PVMPOV process and the
five slave processes that each render a portion of the final image.

       Figure 9-5. Communications
       activity rendering sunset3.pov

    At another level, the degree of utilization of the cluster nodes can be seen in
the graph shown in Figure 9-6.

      Figure 9-6. CPU utilization over
     time while rendering sunset3.pov

    Over 85% of the total time used in the cluster is spent actually rendering the
image; the rest is used by the master in communication with the compute-bound
    processes getting pieces of the image back so that it can be assembled into the
final image.

    Another interesting comparison is to look at the output of the POV application
as it renders the same scene file both on a single CPU and on a five-node
    parallel virtual machine:

    On the five-node cluster:

    POV-Ray statistics for 5/5 slaves:
    Done Tracing
    sunset3.pov Statistics, Resolution 360 x 400
    ----------------------------------------------------------------------------
    Pixels: 153288 Samples: 310536 Smpls/Pxl: 2.03
    Rays: 612870 Saved: 0 Max Level: 0/5
    ----------------------------------------------------------------------------
    Ray->Shape Intersection Tests Succeeded Percentage
    ----------------------------------------------------------------------------
    Plane 1124477 251054 22.33
    Sphere 132760 94792 71.40
    Bounding Box 646086 164572 25.47
    Light Buffer 257237 257237 100.00
    Vista Buffer 754668 680972 90.23
    ----------------------------------------------------------------------------
    Calls to Noise: 3104570 Calls to DNoise: 3810795
    ----------------------------------------------------------------------------
    Halo Samples: 384200 Supersamples: 0
    Shadow Ray Tests: 511686 Succeeded: 24361
    Reflected Rays: 225846
    Transmitted Rays: 76488
    ----------------------------------------------------------------------------
    Smallest Alloc: 24 bytes Largest: 40004
    Peak memory used: 74488 bytes
    ----------------------------------------------------------------------------
    Time For Trace: 0 hours 0 minutes 5.0 seconds (5 seconds)
        Total Time: 0 hours 0 minutes 5.0 seconds (5 seconds)

    On a single node:

    sunset3.pov Statistics, Resolution 360 x 400
    ----------------------------------------------------------------------------
    Pixels: 144360 Samples: 279096 Smpls/Pxl: 1.93
    Rays: 557264 Saved: 0 Max Level: 5/5
    ----------------------------------------------------------------------------
    Ray->Shape Intersection Tests Succeeded Percentage
    ----------------------------------------------------------------------------
    Plane 1045275 229499 21.96
    Sphere 126858 90738 71.53
    Bounding Box 595304 157503 26.46
    Light Buffer 234745 234745 100.00
    Vista Buffer 659743 598771 90.76
    ----------------------------------------------------------------------------
    Calls to Noise: 2790140 Calls to DNoise: 3445800
    ----------------------------------------------------------------------------
    Halo Samples: 367250 Supersamples: 0
    Shadow Ray Tests: 488093 Succeeded: 23590
    Reflected Rays: 205072
    Transmitted Rays: 73096
    ----------------------------------------------------------------------------
    Smallest Alloc: 10 bytes Largest: 12308
    Peak memory used: 178437 bytes
    ----------------------------------------------------------------------------
    Time For Trace: 0 hours 0 minutes 38.0 seconds (38 seconds)
        Total Time: 0 hours 0 minutes 38.0 seconds (38 seconds)

    Obviously ray-tracing on a cluster can save a lot of time. Of course, this is
not a very valid benchmark; it's more of a feel-good test. Clusters will be better in
    some applications and worse in others; it all depends on the image being
rendered. For example, if you are using PVMPOV to render cells for animation, you
    are probably better off dedicating a single compute node to each frame rather
than trying to have the cluster intermix slices of many cells--the
    communications overhead will be lower and the individual frames will probably
run to completion faster.

    Where to Go From Here

    This is just the tip of the iceberg in rendering. There are numerous tools and
utilities that can be used with a ray tracer such as POV (and PVMPOV isn't the
    only ray tracing package in the world--it just happens to run on a cluster and
is readily obtainable); I would recommend taking a look at the POV web site for
    some links to tools that can be used to make your own models for rendering.
There are also links there to other ray tracing packages as well as lots of good
    information on advanced computer graphics.

    PVFS

    PVFS, the Parallel Virtual File System, is a way to use a Beowulf-style cluster
for something other than strictly computational tasks.

    PVFS is a package that allows a cluster of workstations to be used as a
high-performance filesystem. PVFS can be used to implement a RAID-like Storage
    Area Network (SAN) that can deliver very high performance with very low
computational overhead. Such a file service is very useful for parallel applications
    that need high-volume data delivery, such as data mining and other
database-like applications.

    Overview

    PVFS can work on either multiple disks on a single machine, or on multiple disks
spread across multiple machines on a network. It delivers its high
    performance by a combination of data striping (splitting up data over a number
of disks or servers) and multiple network interfaces that are used to deliver
    data to applications that consume it.

    Disk striping

    Disk striping is a process by which a filesystem is spread out over more than
one physical device. The interface to the filesystem remains the same from the
    operating system perspective, but the underlying software that manages a striped
filesystem does the hard work of fetching the required bits of data, as they are
    needed, from the various devices where the data resides and recombining them so
they can be presented to a calling program. This is not a new concept; it has been
    used in filesystems for many years in the form of RAID systems.
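
    As a purely illustrative calculation (not PVFS's actual internals), a round-robin
    stripe maps a byte offset in a file to an I/O node like this:

    # Which I/O node holds a given byte offset? Stripe size and node count
    # here are example values only.
    STRIPE=65536        # 64KB stripe unit
    NODES=4             # number of I/O nodes
    OFFSET=1000000      # byte offset into the file
    echo $(( (OFFSET / STRIPE) % NODES ))    # prints 3: this offset lives on node 3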

    PVFS services

    PVFS takes this concept to a new level by changing its dynamics in several ways.
First, PVFS moves the striping out of the operating system and
    makes it a user-level process. Second, PVFS allows these virtual files to be
spread out over an arbitrary number of hosts--regular networked workstations
    that may be doing other things (i.e., the devices do not have to be dedicated to
PVFS).

    Lastly, PVFS provides a set of tuning utilities that allow the performance of
the filesystem to be tuned to meet the needs of applications, and a programmatic
    interface that allows parallel programs to bypass the regular Unix filesystem
calls and grab data using a streams interface that allows for higher levels of
    throughput than the regular Unix filesystem would allow.

    Installation

    PVFS and its supporting libraries are supplied in both source and RPM format on
the Linux Clusters CD-ROM. As with the mp3pvm example above, you
    will need to have the Linux Clusters CD-ROM mounted on your master node.

    Presuming the Linux Clusters CD-ROM (/dev/cdrom) is mounted at /mnt/cdrom, the
RPMs for PVFS should be copied to a convenient place on your master node. The
    usual place is /usr/src/redhat/RPMS, and the files can be copied with the single
command:

    [spector@master /home/music]# cp /mnt/cdrom/ExampleApps/PVFS/*.rpm /usr/src/redhat/RPMS/

    This will copy two RPM files, glibc-objs-libio-2.0.6-2.i386.rpm and
pvfs-1.2.3-1.src.rpm, into the default location for packages.
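
    Installing the binary RPM and rebuilding the PVFS source RPM might then look like
    the following sketch; the exact rpm invocation depends on your rpm version, and
    the resulting binary packages must also be installed on each node that will take
    part in the filesystem:

    [root@master /root]# cd /usr/src/redhat/RPMS
    [root@master RPMS]# rpm -ivh glibc-objs-libio-2.0.6-2.i386.rpm
    [root@master RPMS]# rpm --rebuild pvfs-1.2.3-1.src.rpm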

    glibc-objs-libio-2.0.6-2.i386.rpm is a set of extensions to the standard GNU C
libraries that support the specialized I/O capabilities that PVFS provides, while
    pvfs-1.2.3-1.src.rpm is the PVFS code itself. These RPMs have been built to
install into the /usr tree because they are so closely linked to the other
    system-level libraries that they use. Most of the packages supplied on the Linux
Clusters CD-ROM are installed into /usr/local to ensure that they aren't
    overwritten during any upgrades. If you wish to have PVFS installed in
/usr/local or some other part of the filesystem, you will have to compile the package
    from the sources, which are in the same directory as the binary RPMs. The
PVFS package is only a few megabytes, but the supporting glibc sources
    take about 40MB and will take the better part of an hour to compile, even on a
fast machine.

    Configuration

    PVFS, since it can be used to create a filesystem that spans multiple machines,
has very stringent configuration parameters that must be followed in order for
    the product to work correctly. The following steps will ensure that everything
is set up correctly for use.

    Step 1: Selecting a home

    PVFS requires two directories in which to store files and filesystem meta-data
(data about the filesystem). The place that you select should have enough disk
    space to hold whatever information you are planning to make available via PVFS.

    Since PVFS is a virtual filesystem, as discussed, it is implemented as a
distributed filesystem whose data exists across several actual filesystems on
several
    machines. If you wanted to make a 1GB filesystem in PVFS across four nodes, you
would want to reserve at least 250MB of space on each machine. You will
    actually need to have a bit more because space is needed for the meta-data
directory that will contain information about the files on the distributed
    filesystem.

    By default, PVFS looks for the directories /pvfs and /pvfs_data.

    Step 2: Creating the directories

    The directories for the PVFS data and meta-data can be located anywhere, but
it's easiest from a configuration point of view to make these directories on
    their own filesystems mounted at the root, or as links from the root filesystem
where PVFS expects to find them, so you don't have to make too many
    modifications to the package defaults.

    If you are planning on using existing partitions, for example on an extra
partition called /spare, you will want to use mkdir to make directories as follows:

    [spector@master /home/music]# mkdir /spare/pvfs
    [spector@master /home/music]# mkdir /spare/pvfs_data

    You could then use links from the root filesystem such as:

    [spector@master /home/music]# ln -s /spare/pvfs /pvfs
    [spector@master /home/music]# ln -s /spare/pvfs_data /pvfs_data

    to allow PVFS to find the directories in the default locations.

    As a last step, you should set the ownership of the /pvfs directory so that it
has no explicit privilege with:

    [spector@master /home/music]# chown nobody.nobody /spare/pvfs

    Step 3: Configuring PVFS

    PVFS has several configuration files that need to be set up before the daemons
can be started to initialize the system, both on the master node that controls
    the operation of PVFS and on each node that will be used as a slave node where
pieces of the virtual filesystems will reside.

    Fortunately, the initialization is performed with a configuration program so the
configuration files don't have to be hand-edited. To start the configuration,
    invoke the configuration program, /usr/pvfs/bin/mkiodtab, on the master node.
This will need to be done as root:

    [spector@master /home/music]# /usr/pvfs/bin/mkiodtab

    The mkiodtab configurator will ask for several key pieces of information,
including the name of the PVFS root directory, permissions for the directory, and
    the host names for the master node and the slave nodes. The process will need to
be repeated for each node that will participate in the parallel virtual
    filesystem:

    This is the iodtab setup for the Parallel Virtual File System.
    It will also make the .pvfsdir file in the root directory.

    Enter the root directory:
    /pvfs
    Enter the user id of directory:
    root
    Enter the group id of directory:
    root
    Enter the mode of the root directory:
    777
    Enter the hostname that will run the manager:
    localhost
    Searching for host...success
    Enter the port number on the host for manager:
    (Port number 3000 is the default) return

    Enter the I/O nodes: (can use form node1, node2, ... or
    nodename{#-#,#,#})
    localhost,node5
    Searching for hosts...success
    I/O nodes: localhost node5
    Enter the port number for the iods:
    (Port number 7000 is the default) return

    Done!

    In the example configuration session listed above, the user input is the line
following each prompt. The key point to note is that the /pvfs directory should be
    the place where you want to house the virtual filesystem. If you have made links
from the root filesystem to someplace where the actual PVFS directory is
    located, you can use the "/pvfs" directory as indicated in the session example.
It is also important when configuring the non-manager nodes to use the
    hostname of the actual manager node and not "localhost" as in the example.

    Finally, you will need to set an environment variable that forces a library to
be loaded, which will allow the PVFS to be tied into the regular processing of
    commands:

    [spector@master /home/music]# LD_PRELOAD=/usr/lib/libpvfs.so ; export LD_PRELOAD

    If your shell is not a bash or ksh derivative, you will have to use whatever
syntax your shell uses to define environment variables. This shell variable will
    need to be defined in every process and on each node that uses the parallel
virtual filesystem.
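
    For csh or tcsh users, the equivalent of the export line above would be (a sketch;
    put it in the shell startup file on each node so every process picks it up):

    setenv LD_PRELOAD /usr/lib/libpvfs.so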

    The LD_PRELOAD environment variable controls which libraries are loaded and
made available before the standard libc library in the chain of execution. What
    this means is that for every command that is linked with libc (such as ls, rm,
mkdir, etc.) the libraries listed in the LD_PRELOAD variable will be searched
    first for any shared library code. Since commands like ls use standard I/O
routines which are in libc, the program loader will look in the libpvfs.so file for
    routines needed by ls and will execute those routines as though they were the
ones that ls expected to see; the libpvfs.so code will then execute the "real" libc
    code once it has finished doing whatever PVFS operations were required by
the command. In this way PVFS adds support for a non-standard filesystem
    transparently.

    PVFS Uses in Parallel Programming

    One of the most interesting uses for PVFS is as an enhanced I/O system for
parallel programs on a cluster.

    With most Unix systems, including Linux, the filesystem supports only a few very
primitive operations (a minimal shell illustration follows this list). It is possible to:

        Open a file

        Close a file

        Read or write a character

        Read or write a buffer

        Seek to an arbitrary position in the file
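
    To put that list in concrete terms, here is what those primitive operations look
    like from the shell, using the standard dd utility (the file names are
    illustrative): dd opens the input file, seeks 1024 bytes into it, reads 512 bytes,
    writes them to the output file, and closes both.

    dd if=/pvfs/bigfile.dat of=/tmp/piece.dat bs=1 skip=1024 count=512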

    A full description of the capabilities of PVFS is beyond the scope of this
discussion, but it's worth noting some of the advanced features of PVFS. A
    specialized filesystem like PVFS can provide capabilities not available in
Linux's ext2 filesystem. These include I/O tuning, such as changing the default block
    size delivered by a standard write call, or filespace operations, such as directly
striping the filesystem to increase throughput for parallel processes.

    1. Diamond Multimedia is the creator of the RIO; information on the RIO
hand-held music player can be found at http://www.diamondmm.com/rio.

    2. MP3 information can be found at innumerable places on the Internet, but one
of the most popular is http://www.mp3.com where MP3 players, music, and
    other related materials can be found.

    3. POV has one of the longest lists of command-line options in the history of
computing. For the sake of brevity they will not be recounted here, but you can
    see all of them by typing man povray for the generic POV options and man pvmpov
for options specific to the parallelized version.

>
>
> b Overview of the project
>
> c Links to similar projects
>
> d Description of objectives
>
> e Sponsors
>
> f Physical facilities
> 1 Premises
> 2 Duration
> 3 Machines
> 4 Software
>
> g Technical descriptions
>
> That is what comes to mind; there is surely much more
>
> A discussion area could be set up
> on our home page.
>
> Bjarke's remarks at a meeting (I don't remember the date) could be written up
> and posted as a starting point.
>
> Regards, Svend
>
> (-;>/
>
Received on Wed, 28 Jun 2000 13:18:51 +0200

This archive was generated by hypermail 2.1.8 : Tue Jul 19 2005 - 16:01:19 CEST