
The CREATE Real-time Applications Manager (CRAM) Project

Project Description

The CREATE Real-time Applications Manager (CRAM) is a framework for developing, deploying, and managing distributed real-time software. It has evolved through three implementations over the space of five years. The background of CRAM is the work done since the early 1990s on distributed processing environments (DPEs), which started in the telecommunications industry (see the section on background below).

This document presents an overview of distributed processing and software environments to support it, and describes the CRAM architecture and implementation. The intended audience is application developers who wish to use CRAM to manage their systems.

Introductory Scenario

Distributed applications are software programs where the components (services) run on different computers. Often, this corresponds to the client/server design pattern. As an initial scenario to motivate the need for distributed real-time systems, imagine a multi-computer music performance application where one computer is reading data from one or more input devices (MIDI devices, head trackers, computer vision systems, etc.) and mapping this controller data onto concrete parameters for a given set of synthesis programs. We can call this program the input server. In our scenario, another computer is running a synthesis server, which takes commands from the input server and performs software sound synthesis, sending its output over the network to an output server. The output server is a program that reads sound sample blocks (coming in via the network) from one or more synthesis servers and mixes and spatializes them.

The problem now arises of how we are to start and stop this application in a controlled manner. One could do it manually: logging in to the various machines to start the output server, then the synthesis server, then the input server. To better manage distributed applications, one needs software tools that can remotely start, stop, and monitor software on several computers connected by a network. This is the task of distributed processing environments (DPEs).

The Parts of a DPE

A DPE generally consists of at least three components: a node manager, a service interface, and a system manager. The node manager is a simple daemon (a stand-alone program) that is assumed to be running on each computer that the DPE intends to manage. Node managers accept commands over the network from the system manager to start, stop, and monitor remote services. Their simplest role is as a "remote execution" server. A DPE service interface is a simple set of functions that applications need to implement in order to be managed by a DPE. This functionality is normally packaged as a software class that a developer includes in an application. The service interface functions generally include basic start/stop methods and some sort of status query; they are used by the node manager to control the service. The third component is the system manager; it uses node managers to start the components of a distributed application. DPE systems often use databases to describe network hardware facilities and applications.
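
As a concrete illustration, here is a minimal, declaration-only C++ sketch of what such a service interface class might look like. The names (CramService, ServiceStatus, and the four methods) are illustrative assumptions, not the actual CRAM API; a concrete service built on this sketch appears later in this document.

    // Hypothetical sketch of a DPE service interface; the real CRAM
    // class and method names may differ.
    enum class ServiceStatus { Created, Initialized, Running, Stopped, Failed };

    // A service inherits from this class and implements the control
    // hooks that the node manager invokes over the network.
    class CramService {
    public:
        virtual ~CramService() = default;
        virtual void initialize() = 0;            // set up resources
        virtual void start() = 0;                 // begin processing
        virtual void stop() = 0;                  // shut down cleanly
        virtual ServiceStatus status() const = 0; // answer status queries
    };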

In CRAM, the node manager is a small program written in C++ that uses a simple socket-based protocol to talk to its services. The system manager uses the same protocol to communicate with node managers. The service interface component that is incorporated into application programs includes code that implements this protocol, and starts a "listener" thread when the application starts. This thread waits for commands from the node manager to control the application.
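
The following is a small, self-contained C++ sketch of the listener-thread idea. Standard input stands in for the socket connection to the node manager, and dispatch() is a hypothetical helper; the real CRAM protocol and message format are not shown here.

    #include <iostream>
    #include <string>
    #include <thread>

    // Hypothetical command dispatcher: a real service would map these
    // commands onto its start/stop/status functions.
    void dispatch(const std::string& cmd) {
        std::cout << "received command: " << cmd << "\n";
    }

    int main() {
        // The service interface starts this listener when the application
        // starts; std::cin stands in for the socket over which the node
        // manager sends commands.
        std::thread listener([] {
            std::string cmd;
            while (std::getline(std::cin, cmd)) { // e.g. "start", "stop", "status"
                if (cmd == "quit") break;
                dispatch(cmd);
            }
        });
        // The application's real work would proceed here while the
        // listener waits in the background.
        listener.join();
    }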

There are two features that are often considered optional in DPEs, but are central to CRAM: fault-tolerance and load-balancing. For applications that require robust software (e.g., musical performance), the system must be able to identify and recover from a hardware or software fault within a small number of seconds. For large-scale systems that are to handle dynamic processing and I/O loads (e.g., musical performance), some manner of planning-time as well as run-time load-balancing is also necessary. We will discuss these features more below.
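
As an illustration of the fault-tolerance idea, here is a small C++ sketch of fault detection by periodic status polling. pollStatus() is a hypothetical stand-in for querying a service over the CRAM protocol, and the recovery action is only indicated by a comment; this is not CRAM's actual recovery mechanism.

    #include <chrono>
    #include <iostream>
    #include <thread>

    enum class Status { Running, Unreachable };

    // Hypothetical stand-in for sending a status query to a service
    // and waiting briefly for its reply.
    Status pollStatus() { return Status::Running; }

    int main() {
        using namespace std::chrono_literals;
        int missed = 0;
        for (int i = 0; i < 10; ++i) {           // bounded loop for the demo
            if (pollStatus() == Status::Unreachable) {
                if (++missed >= 2) {             // tolerate one missed reply
                    std::cout << "fault detected; restart the service here\n";
                    missed = 0;
                }
            } else {
                missed = 0;
            }
            std::this_thread::sleep_for(1s);     // poll once per second
        }
    }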

Using CRAM

To start and manage a distributed application using CRAM, we first assume that the network nodes (computers) are known, and that we have access to a database with information about them. We also assume that node manager daemon programs are running on the nodes of the network. Lastly, we assume that the software we want to use is installed on the computers, or on a file server to which the network nodes have access.

We describe applications as collections of services running on nodes. A service is just an application program, written in any language, that implements the CRAM service interface functions (including the command listener thread). The three services (input, synthesis, output) that we introduced in the scenario above are candidate CRAM services.
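
To make this concrete, here is a C++ sketch of what the scenario's synthesis server might look like as a CRAM service, repeating the hypothetical interface from the earlier sketch; the real CRAM service interface may differ.

    #include <atomic>
    #include <iostream>

    enum class ServiceStatus { Created, Initialized, Running, Stopped };

    class CramService {
    public:
        virtual ~CramService() = default;
        virtual void initialize() = 0;
        virtual void start() = 0;
        virtual void stop() = 0;
        virtual ServiceStatus status() const = 0;
    };

    // Skeleton of the scenario's synthesis server as a CRAM service.
    class SynthesisServer : public CramService {
        std::atomic<ServiceStatus> state{ServiceStatus::Created};
    public:
        void initialize() override {
            // Real code would load patches and open network connections.
            state = ServiceStatus::Initialized;
        }
        void start() override {
            // Real code would begin taking commands from the input server
            // and streaming sample blocks to the output server.
            state = ServiceStatus::Running;
            std::cout << "synthesis server running\n";
        }
        void stop() override { state = ServiceStatus::Stopped; }
        ServiceStatus status() const override { return state; }
    };

    int main() {
        SynthesisServer s;   // normally driven by the node manager
        s.initialize();
        s.start();
        s.stop();
    }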

To manage the example music performance application using CRAM, we need to store a description of it in the CRAM database. This simply means that we define which service is to run on which node. If we have computers named waltz, jerk, and belly (all of our computers are named after dances), then we would send the following SQL (Structured Query Language) command to the database:

insert into applications (name, services) values ('SirenCSL',
    '{ "jerk.input_server", "belly.synthesis_server", "waltz.output_server" }' );

Once this is done, we can start up the CRAM system manager. When it starts, it loads the database tables that describe nodes on the network, types of services and their options, and applications. We can use the system manager to make sure that the application will run (i.e., that the nodes are all on and have node managers running on them), and then tell it to start the application. When we do this, the system manager sends messages to each of the node managers requesting that they create and initialize the services described in the database. Once these are all ready, the application is started by sending "start" messages to the services.
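
The start-up sequence just described might look roughly like the following C++ sketch, using the node and service names from our example application. sendToNodeManager() is a hypothetical stand-in for the CRAM socket protocol.

    #include <iostream>
    #include <string>
    #include <vector>

    struct ServicePlacement { std::string node, service; };

    // Hypothetical stand-in: open a connection to the node manager on
    // `node` and send `msg` using the CRAM protocol.
    void sendToNodeManager(const std::string& node, const std::string& msg) {
        std::cout << node << " <- " << msg << "\n";
    }

    int main() {
        // Placements as loaded from the applications table.
        std::vector<ServicePlacement> app = {
            {"jerk",  "input_server"},
            {"belly", "synthesis_server"},
            {"waltz", "output_server"},
        };
        // Phase 1: ask each node manager to create and initialize its service.
        for (const auto& p : app)
            sendToNodeManager(p.node, "create " + p.service);
        // Phase 2: once all services report ready, start them.
        for (const auto& p : app)
            sendToNodeManager(p.node, "start " + p.service);
    }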

While the application is running, we can use the system manager to monitor its status.

For more details, download the other documentation and the source code.

For project information, please contact Stephen Pope, CREATE, Dept. of Music, UCSB, email stp@create.ucsb.edu.
