[Forum] MOSIX 1

From: Svend Erik Venstrup - Nielsen <venstrup@mail1.stofanet.dk>
Date: Thu Jun 29 2000 - 13:49:47 CEST

    a Literature list


> Building Linux Clusters
> By David HM Spector
> 1st Edition July 2000 (est.)
> 1-56592-625-0,
> 368 pages (est.), $44.95 (est.), Includes CD-ROM
> From scientific applications to transaction processing, clustering technology
> provides an affordable, scalable computing solution. Building Linux Clusters
> introduces the reader to the basics of cluster installation and configuration,
> complete with a CD including cluster installation programs and tools for parallel
> programming.

                             Interested in learning more about clusters? You'll find
                             two sessions on clusters at the O'Reilly Open Source
                             Software Convention:

                                Introduction to Cluster Building--Get an
                                 overview of free and commercial solutions
                                 available in the Linux environment.

                                Installing Linux Clusters and Server Farms
                                 with VACCINE--Join in a demonstration of a
                                 highly scalable system for setting up large Linux
                                 clusters and server farms.

Barak A., Guday S. and Wheeler R., The MOSIX Distributed Operating System, Load
Balancing for UNIX. Lecture Notes in Computer Science, Vol. 672, Springer-Verlag, 1993.

Relevant Papers

Amar L., Barak A., Eizenberg A. and Shiloh A., The MOSIX Scalable Cluster File Systems
for LINUX, June 2000.

McClure S. and Wheeler R., MOSIX: How Linux Clusters Solve Real World Problems,
Proc. 2000 USENIX Annual Tech. Conf., pp. 49-56, San Diego, CA, June 2000.

Barak A., La'adan O. and Shiloh A., Scalable Cluster Computing with MOSIX for LINUX,
Proc. Linux Expo '99, pp. 95-100, Raleigh, N.C., May 1999.

Barak A., Gilderman I. and Metrik I., Performance of the Communication Layers of
TCP/IP with the Myrinet Gigabit LAN, Computer Communications, Vol. 22, No. 11,
July 1999.

Amir Y., Awerbuch B., Barak A., Borgstrom R.S. and Keren A., An Opportunity Cost
Approach for Job Assignment in a Scalable Computing Cluster, Proc. 10th Inter.
Conf. on Parallel and Distributed Computing and Systems (PDCS'98), pp. 639-645,
Las Vegas, Nevada, Oct. 1998.

Barak A. and Braverman A., Memory Ushering in a Scalable Computing Cluster,
Journal of Microprocessors and Microsystems, Vol. 22, No. 3-4, Aug. 1998.

Barak A. and La'adan O., The MOSIX Multicomputer Operating System for High Performance
Cluster Computing, Journal of Future Generation Computer Systems, Vol. 13, No. 4-5,
pp. 361-372, March 1998.

Barak A., Braverman A., Gilderman I. and La'adan O., Performance of PVM with the MOSIX
Preemptive Process Migration, Proc. 7th Israeli Conf. on Computer Systems and Software
Engineering, Herzliya, pp. 38-45, June 1996.

> b Project overview

> c Links to similar projects


    This application is called mp3pvm, written by Brian Guarraci. mp3pvm is an
    audio application that is useful if you use MP3 audio players on your Linux
    systems or if you own a hand-held MP3 player such as a Diamond RIO.

    mp3pvm is a tool that will allow you to use a Linux cluster to create MP3 (MPEG
    Layer 3) files from music CDs that can be played on popular hand-held
    devices like the Diamond RIO.[1]

    MP3[2] is a specification for a very high-quality audio recording format that
    can rival CDs in its fidelity. MP3 has become the preferred format for small
    independent artists who typically don't have large recording contracts. It
    enables them to get their music in front of an increasingly techno-savvy audience.
    All they have to do is put a file up on a web site, and people download it to
    play on an MP3-capable device.

    It is also possible to take off-the-shelf commercial audio CDs and pull out the
    individual tracks. They can be stored on a computer hard disk to be downloaded to
    devices like the RIO, or even used to make a personal "jukebox" where audio
    tracks are served up on demand from a server connected to an MP3 player.



    This application is called PVMPOV, a parallel version of the popular
    ray-tracing application Persistence of Vision ("POV"). With POV you can
    generate breathtakingly real images or even render frames for computer-generated
    animation; with PVMPOV, that process can be sped up by orders of magnitude.

    PVMPOV, or the Parallel Virtual Machine version of the Persistence of Vision ray
    tracing system, is a way to use clusters to generate realistic images. The
    PVM extensions to POV were originally written by the PVM team at Oak Ridge
    National Labs and extended by Harald Deischinger, who is the current
    maintainer of the PVM patches to the POV package.

    PVMPOV can be used either for static images or to generate frames for
    computer-generated animation. The results are quite striking, as seen in
    Figure 9-4, which is an image generated from the demo files included with the
    standard POV package. Although the black and white version here doesn't do
    justice to the depth and complexity of the color version, it is plain to see
    that the image is quite detailed. The coolest thing about it is that it was
    generated completely inside the memory of a computer.

 Where To Get It

    Thanks to Chris Cason, the current leader of the POV efforts, we are privileged
    to be able to distribute both the complete sources to the POVRAY package as
    well as a set of binaries that will work on an Intel-based Linux cluster.
    Updates to the POV software are freely available from the POV Ray Tracing home
    page at http://www.povray.org.


  This application is called PVFS, the Parallel Virtual File System, written
  by Matthew M. Cettei, Walter B. Ligon III, and Robert B. Ross at the Parallel
  Architecture Research Lab at Clemson University. PVFS allows you to construct an
  extremely high-performance filesystem out of a cluster.

PVFS, the Parallel Virtual File System, is a way to use a Beowulf-style cluster
for something other than strictly computational tasks.

PVFS is a package that allows a cluster of workstations to be used as a
high-performance filesystem. PVFS can be used to implement a RAID-like Storage
Area Network (SAN) that can deliver very high performance with very low
computational overhead. Such a file service is very useful for parallel applications
that need access to high-volume data delivery, such as data mining and other
database-like applications.


    PVFS can work on either multiple disks on a single machine, or on multiple disks
    spread across multiple machines on a network. It delivers its high
    performance by a combination of data striping (splitting up data over a number
    of disks or servers) and multiple network interfaces that are used to deliver
    data to applications that consume it.
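The data-striping idea can be illustrated with a toy calculation (this is not PVFS's actual interface; the function name and parameters are made up): with round-robin striping of a fixed stripe-unit size over N servers, the server that holds a given byte offset is (offset / unit) mod N.

```shell
#!/bin/sh
# Toy illustration of round-robin data striping (not the real PVFS API):
# with a stripe unit of $2 bytes spread over $3 I/O servers, the server
# that holds byte offset $1 is (offset / unit) mod servers.
stripe_server() {
    echo $(( ($1 / $2) % $3 ))
}

stripe_server 0      65536 4   # first stripe unit lives on server 0
stripe_server 131072 65536 4   # two units in, we are on server 2
```

Because consecutive stripe units land on different servers, large sequential reads and writes are spread over all the disks and network links at once, which is where the performance comes from.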



This page explains how MOSIX can be used to configure different cluster environments,
including time-sharing, cluster partitioning and batch. For simplicity, it is assumed
that the cluster consists of a pool of (dedicated) servers and a set of (personal)
workstations.

 Multi-user, time sharing environment

 Single-pool mode - all the computers (servers and workstations) are used as a single
MOSIX cluster:

    install the same "mosix.map" file on all computers, containing the IP addresses of
all your computers.

 Advantage and disadvantage: your workstation is part of the pool.
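For illustration, a single-pool "mosix.map" might look like the following. The three-column format assumed here (first node number, IP address, number of nodes in the range) is inferred from the fields tested by the awk script in the partitioning section below; treat the exact syntax as an assumption and check the reference manual.

```
# node-number  IP-address   number-of-nodes
1              10.0.0.1     8      # eight servers,      nodes 1-8
9              10.0.0.101   4      # four workstations,  nodes 9-12
```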

 Server-pool mode - servers are shared while the workstations are not part of the
cluster:

    install the same "mosix.map" file on all servers, containing only the IP addresses
of your servers.

 Advantage and disadvantage: remote processes will not move to your workstation. You
need to login to one of the servers to use the cluster.

 Adaptive-pool mode - servers are shared while the workstations join or leave the
cluster, e.g. from 5PM to 8AM:

    install the same "mosix.map" file on all computers, containing the IP addresses of
    all the servers and workstations, then add lines in each workstation's "crontab"
    that run "mosctl expel" and "mosctl noblock" at designated times.
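For example, a workstation crontab for the 5PM-to-8AM schedule mentioned above might look like this (the path to mosctl is an assumption; only the two commands named above are used, so verify against your installation):

```
# join the cluster at 5PM: stop blocking guest processes
0 17 * * * /sbin/mosctl noblock
# leave the cluster at 8AM: push guest processes back out
0 8 * * * /sbin/mosctl expel
```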

 Another possibility is that a workstation joins the cluster if it is inactive for some
time, e.g., 30 minutes, and leaves the cluster when the owner returns:

    use a simple script that frequently filters the output of the `w' command to
    decide whether MOSIX should be activated or deactivated.

 Advantage and disadvantage: remote processes can use your workstation when you are not
using it.
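Such an idle-detection script might be sketched as follows. Two assumptions are made here: that `w -h` prints one line per login session with the idle time in column 5, and that mosctl accepts the "noblock"/"expel" arguments named above; verify both on your system before relying on it.

```shell
#!/bin/sh
# Sketch: allow guest processes when every login session has been idle
# for at least 30 minutes; expel them as soon as the owner is back.

# Smallest per-session idle time in whole minutes (crude: an "mm:ss"
# field counts as mm minutes, anything without a colon as under a minute).
min_idle() {
    echo "$1" | awk '{ n = split($5, a, ":");
                       m = (n == 2) ? a[1] : 0;
                       if (min == "" || m + 0 < min + 0) min = m }
                     END { print min + 0 }'
}

if command -v mosctl >/dev/null 2>&1; then
    sessions=$(w -h)
    if [ -z "$sessions" ] || [ "$(min_idle "$sessions")" -ge 30 ]; then
        mosctl noblock    # nobody active: accept guest processes
    else
        mosctl expel      # owner is active: push guest processes out
    fi
fi
```

Run from cron every few minutes; the `command -v mosctl` guard just makes the sketch harmless on machines without MOSIX installed.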

 Half-duplex pool - servers are shared while workstations can send processes to the
cluster but cannot receive any:

    insert a "mosctl expel" line in the MOSIX start-up scripts
("/etc/rc.d/init.d/mosix") of your workstations.

 Advantage and disadvantage: your workstation is part of the cluster only for your
own processes.

 Dynamic cluster partitioning

 Dynamic cluster partitioning to sub-clusters - there are many ways to achieve a
dynamic configuration. Here is one suggestion:

    Assign unique MOSIX node-numbers to all your computers and save each computer's
    number in its "/etc/mospe" file.
    Plan the possible cluster-combinations in advance, where each node belongs to no
    more than one cluster.
    Prepare a directory per combination, preferably in a common (NFS) directory, say
    "/usr/clusters/combination{n}/", containing the MOSIX configuration files for
    all the clusters in that combination.
    To change to combination #n, run on all your computers "mosconf {n}", where
    "mosconf" is the following shell-script:

    #!/bin/sh
    # select the configuration file that contains this node's number
    me=`cat /etc/mospe`
    for conf in /usr/clusters/combination$1/*
    do
       case `awk '{if('$me' >= $1 && $3 != "ALIAS" && '$me' < $1 + $3)\
             print "Found" ; next}' < $conf` in
       Found)
             setpe -w -f $conf
             exit ;;
       esac
    done
    # no configuration found
    setpe -off
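A hypothetical directory layout for two combinations could look like this (node numbers and addresses are invented; each file lists, per line, the first node number, an IP address, and a range size, which are the fields the script's awk test reads):

```
/usr/clusters/combination1/cluster-a    # "1 10.0.0.1 8"   - nodes 1-8
/usr/clusters/combination1/cluster-b    # "9 10.0.0.9 8"   - nodes 9-16
/usr/clusters/combination2/cluster-all  # "1 10.0.0.1 16"  - nodes 1-16
```

Running "mosconf 1" on every computer then splits the machines into two 8-node clusters, and "mosconf 2" rejoins them into one 16-node cluster.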


 Batch mode - the cluster is configured as in the multi-user, time-sharing, server-pool
mode, except that users can access the servers only via a program that queues requests
in a common directory. A daemon program on the server(s) then dequeues and processes
those requests.
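The queue-and-daemon scheme could be sketched like this (all names, paths, and helper functions here are made up for illustration; the text does not specify the actual programs):

```shell
#!/bin/sh
# Hypothetical sketch of the batch scheme: the submit side copies a job
# script into a shared spool directory; a daemon on a server repeatedly
# dequeues the oldest job and runs it.
SPOOL=${SPOOL:-/usr/clusters/spool}

submit() {    # usage: submit job-script
    cp "$1" "$SPOOL/job.$$.$(date +%s)"
}

run_one() {   # run and remove the oldest queued job, if any
    job=$(ls -tr "$SPOOL" 2>/dev/null | head -n 1)
    [ -n "$job" ] || return 1
    sh "$SPOOL/$job"
    rm -f "$SPOOL/$job"
}
```

A server-side daemon would simply loop over run_one with a short sleep in between; because the jobs then start on a server, MOSIX can migrate their processes across the pool as usual.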

Distribution terms
 MOSIX Copyright © 1998, 1999, 2000 The Hebrew University of Jerusalem. All rights
reserved. MOSIX for Linux is subject to the GNU General Public License version 2, as
 published by the Free Software Foundation.


 Users of MOSIX are invited to send improvements or extensions to mosix@cs.huji.ac.il
and to grant the MOSIX team the rights to redistribute these changes.

 Source distributions

 To install MOSIX, download the file that corresponds to your Linux distribution. After
downloading, "gunzip" and "untar" that file, but do not unpack the resulting files.
 Follow the (automatic or manual) installation instructions.

    MOSIX 0.97.6 for Linux 2.2.16 (http, ftp) (latest)
    MOSIX 0.97.3 for Linux 2.2.14 (http, ftp)

 Each package includes installation, upgrade, and uninstallation instructions and a
reference manual.
 Note 1: older (non-latest) distributions are known to have Linux and MOSIX bugs.
 Note 2: the MOSIX package applies only to the official Linux kernel sources.

 RPM distributions

 To install MOSIX using RPMs, download 2 files:

    The generic MOSIX kernel RPM (pre-configured for both uni-processor and SMP, with
    no sound, Video for Linux, or joystick support)
    A distribution-specific MOSIX RPM

 After downloading, install the RPMs with the "rpm -Uvh" command, e.g.,
 rpm -Uvh kernel-MOSIX-0.97.5-2.2.15i686.i386.rpm
 rpm -Uvh mosix-redhat-0.97.5-3.i386.rpm
 and reboot.

 Generic MOSIX kernel 0.97.6 RPM [~3.9MB]: (http, ftp), Updated June 11, 2000

 Distribution specific MOSIX RPMs for:

    RedHat 6.0, 6.1, 6.2 [108KB] (http, ftp), Updated June 11, 2000
    SuSE 6.0, 6.1, 6.2, 6.3 [112KB] (http, ftp), Updated June 11, 2000

 For questions about RPMs please contact arielez@cs.huji.ac.il

 Other RPM distributions

 Mandrake RPMs (kernel and utils.)


> d Goal description
> e Sponsors
> f Physical facilities
> 1 Premises
> 2 Duration
> 3 Machines
> 4 Software
> g Technical descriptions

There are some things regarding the chaining of SCSI controllers that I am still
looking into; more on that later.

> That is what I have found during the morning. Where will the meeting about this
> be on Tuesday? Is it possible for Knud to borrow the usual room?

Actually, I would also like to get some comments.

> Regards, Svend
> (-;>/
Received on Thu Jun 29 13:29:17 2000

This archive was generated by hypermail 2.1.8 : Tue Jul 19 2005 - 16:01:20 CEST