
Large-Scale Linux Configuration Management

Paul Anderson

Issue #72, April 2000

Mr. Anderson describes some general principles and techniques for installing and maintaining configurations on a large number of hosts, then takes a detailed look at the local configuration system used at Edinburgh University.

The difficulty of installing and setting up Linux is often mentioned as one of the reasons it is not more widely used. People usually assume that editing the traditional UNIX configuration files is more difficult than using the graphical interfaces provided by operating systems like Microsoft Windows. For a novice user with a single machine, this may be true, and most commercial UNIX vendors now supply GUI-based tools for at least some aspects of system configuration. Under Linux, projects like COAS (see Resources 1) and the Red Hat distribution are starting to cater to this need.

For a large installation with tens or hundreds of machines, the GUI approach does not work: entering individual configuration data for 200 machines is simply not practical. Besides the ability to install large numbers of machines, big sites usually need more control over the configuration; for example, they might need to install new machines with a configuration guaranteed to be identical to an existing one. Machines are also likely to need periodic reconfiguration as their use changes, or simply to keep up to date with the latest software and patches.

Doing this effectively requires a good deal of automation, and large UNIX sites have been developing their own tools for many years (see Resources 2). The flexibility and accessibility of UNIX configuration files make Linux particularly suitable for automation, and sites attempting to install and manage large numbers of NT systems are likely to find the process more difficult (see Resources 3).

The Division of Informatics at Edinburgh University has over 500 UNIX machines with a wide variety of configurations. Most of them are installed and maintained automatically using the LCFG (Local ConFiGuration) system, originally developed several years ago (see Resources 4). Both client and server configurations can easily be reproduced to replace failed machines or to create tens of identical systems for a new laboratory. Reconfiguration is a continuous process; for example, machines adjust every night to ensure they are carrying the latest versions of the required software. Linux (we use a version of the Red Hat distribution) has proven well-suited to this environment, and it has recently overtaken Solaris as the most popular desktop system, both for staff use and for student laboratories.

Make-Up of a Good Configuration System

An automatic configuration system should be able to build working machines from scratch with no manual intervention. This includes configuration of the basic operating system (disk partitions, network adaptors), loading of required software, and configuration of application-specific services such as web servers. This allows failed machines to be recreated quickly, using replacement hardware, and new machines to be installed efficiently, even by junior staff. As a side effect, it also avoids the need for backups of any system partition.

The set of configuration information that drives this build process defines the personality of an individual machine, and it is extremely useful if this specification is available in an explicit form (such as a plaintext file or a database). Machines can then be cloned simply by copying their specification and applying the automatic build. This is important for installing multiple similar machines, such as in a student laboratory. The master copy of the specification should be held remotely from the machine, so that it is available even when the machine is down. This allows programs to automatically verify individual configurations and even the relationships between machines, such as ensuring every client's specified DNS server is actually configured to run a name daemon. The specification can also be generated from higher-level descriptions of a machine's function. An inheritance model is very useful, since many machine configurations can be conveniently described as small variations of a generic configuration for a particular class.
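As a loose sketch of how such cross-machine verification might work (the repository layout and resource names here are assumptions for illustration, not taken from any real system), a few lines of shell could check the DNS relationship described above:

#!/bin/sh
# Hypothetical cross-machine check: every host's specified DNS server
# should itself be configured to run a name daemon. Assumes one
# resource file per host in $REPO; the resource names are
# illustrative only.
REPO=/var/lcfg/repository

for file in "$REPO"/*; do
    host=$(basename "$file")
    server=$(sed -n 's/^dns\.servers: *//p' "$file" | awk '{print $1}')
    [ -z "$server" ] && continue
    sfile="$REPO/${server%%.*}"   # strip the domain to find its file
    if [ ! -f "$sfile" ] || ! grep -q '^boot\.services:.*named' "$sfile"; then
        echo "WARNING: $host expects $server to run a name daemon"
    fi
done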

Traditional configuration systems are often static, in the sense that the configuration is applied only at the time the machine is installed. Most vendor-supplied installation processes fall into this category, as do systems based on cloning by copying disk images. If subsequent changes to the configuration have to be applied manually, the configuration is almost certain to “rot”, and it becomes impossible to be confident that all machines are correctly configured. Obvious misconfigurations simply leave users with malfunctioning machines; more subtle ones may go unnoticed and pose serious security problems. Even though a fully dynamic system is not practical, an ideal system would continually adjust the configuration to conform to the specification. Some parameters can be changed immediately to track a change in the specification; some, such as a network address, may be changed only when the machine reboots; and others, such as disk partitioning, may require a complete rebuild.

If a configuration system is incomplete and manual intervention is necessary, many of the benefits are lost. However, constructing a comprehensive system to cover every conceivable parameter is clearly impractical. The key problem is creating an extensible framework flexible enough to allow new parameters and components to be incorporated with little effort. An individual instance of the system can then evolve at a particular site to suit the local requirements. If it is going to be extended on demand by working administrators, the framework needs to be extremely lightweight and quick to understand. It must be easy to create components in a familiar language and to interface them to new subsystems which require configuration. Open-source software is an advantage, since it is often easy to base a new extension on one that already exists.

The LCFG Framework

Before the introduction of LCFG, we were configuring machines using a typical range of techniques, including vendor installs and disk copying (cloning). These were followed by the application of a monolithic script which applied assorted “tweaks” for all the different configuration variations. This met virtually none of the requirements listed above and was a nightmare to manage.

The available alternatives ranged from large commercial systems (too expensive and probably too inflexible) to systems developed at individual sites for their own use (often not much of an improvement over our existing process). More recently, interesting tools such as COAS and the GNU cfengine (see Resources 5) have appeared, but we are still not aware of any comparable system which addresses quite the same set of requirements as LCFG.

Given limited development resources, we designed the initial system as a number of independent subsystems, intending to use temporary implementations for those subsystems where we could leverage existing technology:

  • Resource Repository: design a standard syntax for representing resources (individual configuration parameters). These would be stored in a central place where they could be analysed and processed as well as distributed to individual machines.

  • Resource Compiler: preprocess the resources so that we could create configurations by inheritance and avoid specifying large numbers of low-level resources explicitly.

  • Distribution Mechanism: distribute the master copy of the resources to clients on demand in a robust way.

  • Component Framework: provide a framework which allows components to be easily written for configuring new subsystems and services, using the resources from the repository.

  • Core Components: implement a number of core components, including basic OS installation and the standard daemons. We wanted some of these to act as exemplars to make it as easy as possible for other people to create new components.

The Resources

Items of configuration data are represented as key/value pairs, in a way similar to X resources. The key consists of three parts: the hostname, the component and the attribute. For example, the nameserver (cul) for the host wyrgly is configured by the DNS component:

wyrgly.dns.servers: cul.dcs.ed.ac.uk

Notice that this specification is a rather abstract representation, not directly tied to the form in which the machine actually requires the configuration (in this case, a line in the resolv.conf file). This allows the same representation to be used for different platforms, and it permits high-level programs to analyse and generate the resources easily. The LCFG components on each machine are responsible for translating these resources into the appropriate form for the particular platform. COAS uses a similar representation for configuration parameters.
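As a minimal sketch of this translation step (the get_resource helper is hypothetical, standing in for whatever mechanism delivers resources to the client), a DNS component might rebuild resolv.conf like this:

#!/bin/sh
# Sketch of the translation step: turn the abstract dns.servers
# resource into the platform-specific /etc/resolv.conf.
# "get_resource" is a hypothetical helper, not a real LCFG command.
servers=$(get_resource dns servers)   # e.g. "cul.dcs.ed.ac.uk"

{
    echo "search dcs.ed.ac.uk"
    for s in $servers; do
        # resolv.conf wants an IP address, so resolve the name first
        ip=$(getent hosts "$s" | awk '{print $1}')
        echo "nameserver ${ip:-$s}"
    done
} > /etc/resolv.conf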

The resources are currently stored in simple text files, with one file per host. This collection of files forms the repository. We intend to provide a special-purpose language for specifying these resources; it would support inheritance, default configurations, validation and some concept of higher-level specifications. However, we are currently using a “temporary” solution based on the C preprocessor, followed by a short Perl script to preprocess the resources. The C preprocessor provides file inclusion and macros, which can be used for primitive inheritance. The Perl script allows inherited resources to be modified with regular expressions. Wild cards are also supported to provide default values.

In practice, most machines have very short resource files which simply inherit some standard templates. Machines can be cloned simply by copying these resource files. Often, a few resources are overridden to provide slight variations. For example:

#include <generic_client.h>
#include <linux.h>
#include <portable.h>
amd.localhome:  paul
auth.users:     paul

The name of the host is not needed in the resource keys, because it is derived from the name of the resource file.
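To give a flavour of what an included template might contain, here is a hypothetical sketch of generic_client.h; the macro and the resource values are invented for illustration and do not show real LCFG syntax:

/* generic_client.h -- hypothetical sketch of a shared template */
#define DOMAIN dcs.ed.ac.uk

boot.services:  dns amd auth update
dns.servers:    cul.DOMAIN
amd.localhome:  homedirs
auth.users:     operators

An individual host file then overrides only the resources which differ, as the example above does for amd.localhome and auth.users.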

Resources are currently distributed to clients using NIS (Sun's Network Information System). This is another “temporary” solution which is far from ideal; we hope to replace it in the near future.

The Component Framework

A number of components on each machine take the resources from the repository and implement the specified configuration in whatever way is appropriate for that particular platform. The components are currently implemented as shell scripts which take a standard set of method arguments, rather like the rc.d startup scripts under Red Hat Linux:

  • START: executed when the system boots.

  • STOP: executed when the system shuts down.

  • RUN: executed periodically (from cron).

A client-server program (om) also allows methods to be executed on demand on multiple remote machines. Components may have other arbitrary methods in addition to the standard ones.

Different types of components will perform different actions at different times. Typically, a daemon might be started at boot time, reloaded periodically and stopped at shutdown. Some components, however, might simply perform a reconfiguration at boot time, or start only in response to the RUN method (for example, a backup system).

Component scripts normally inherit a set of subroutines from a generic component, which provides default methods and various utility procedures for operations such as resource retrieval. This makes simple components easy to write, and the scripts are frequently quite short.
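A minimal component might therefore look something like the sketch below. Only the START/STOP/RUN method convention comes from LCFG itself; the generic script path, the get_resource helper and the daemon are invented for illustration:

#!/bin/sh
# Sketch of a minimal LCFG-style component for a hypothetical daemon
# "food". The generic script would supply default methods and
# utility procedures such as resource retrieval.
. /usr/lib/lcfg/generic

case "$1" in
START)
    # Translate resources into the daemon's own config, then start it.
    get_resource foo options > /etc/foo.conf
    /usr/sbin/food
    ;;
STOP)
    kill $(cat /var/run/food.pid)
    ;;
RUN)
    # Run periodically from cron: refresh the configuration and
    # signal the daemon to reload it.
    get_resource foo options > /etc/foo.conf
    kill -HUP $(cat /var/run/food.pid)
    ;;
*)
    echo "usage: $0 START|STOP|RUN" >&2
    exit 1
    ;;
esac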

Some Important Components

A typical host runs 20 to 30 components, controlling subsystems such as web servers, printers, NIS services, NFS configuration and various other daemons. Two components are worth mentioning in more detail.

The boot component is the only one run directly from the system startup files. This uses resources to determine which other components to start. The set of services running on a particular machine is therefore controlled by the boot resources.
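For example, a host's boot resources might contain a line like the following (the resource name matches the sketches above, but the component list is invented for illustration):

boot.services: dns amd auth update lpr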

The update component normally runs nightly, as well as at boot time. It uses the extremely useful updaterpms program, which compares the RPMs installed on a machine with those specified via the resources. RPMs are automatically installed or deleted to synchronise the state of the machine with the specification. This means all machines in the same class are always guaranteed to have identical sets of up-to-date packages. Changing an inherited resource file will automatically reconfigure the RPMs carried by all machines in the class.
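The idea behind this synchronisation can be sketched in a few lines of shell. This is emphatically not the real updaterpms program; the specification file and RPM directory are assumptions, and version handling is glossed over:

#!/bin/sh
# Illustrative sketch only; not the real updaterpms program.
# $SPEC lists the package names this machine should carry (one per
# line), as derived from its resources. Versions, architectures and
# the mapping to actual .rpm files are glossed over here.
SPEC=/var/lcfg/rpmlist
RPMDIR=/export/rpms

rpm -qa --qf '%{NAME}\n' | sort > /tmp/have.$$
sort "$SPEC" > /tmp/want.$$

# Remove packages that are installed but no longer specified...
comm -23 /tmp/have.$$ /tmp/want.$$ | xargs -r rpm -e
# ...and install packages that are specified but missing.
for p in $(comm -13 /tmp/have.$$ /tmp/want.$$); do
    rpm -U "$RPMDIR/$p"-*.rpm
done

rm -f /tmp/have.$$ /tmp/want.$$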

Machine Installation

As much configuration as possible is performed dynamically by the various components. However, some configuration, such as disk partitioning, must be hard-wired at installation time. New machines are booted using an installation floppy, which mounts a root file system from the network, a CD or a Zip drive. The boot process runs a special install component which determines all necessary install-time parameters by interpreting the machine's install resources. A very minimal template is installed on the new system, and the update component is used to load the initial set of RPMs.

This supports completely unattended builds of new machines, as well as rebuilds of existing machines. If there is any doubt about the integrity of a system, it is normal for us to simply rebuild it from scratch.

Problems and Future Plans

The concept of an open, lightweight framework has been very important; many people have contributed components, so that virtually everything which varies between our machines is now handled by LCFG. This has made the system very successful. However, much of the implementation is still based on technologies originally intended to be temporary. We are currently planning to expand the use of LCFG beyond our own department, and this is motivating a redesign of some of the subsystems, although the basic architecture will remain the same:

  • We hope to implement a new syntax for specifying the resources, together with a special-purpose resource compiler.

  • We hope to replace the NIS distribution with something simpler which is available earlier in the boot sequence.

  • We would like to re-implement the components in Perl, using Perl inheritance to provide generic operations.

Other items on the wish list include caching support for portables and secure signing of resources.

Resources

Acknowledgements

email: paul@dcs.ed.ac.uk

Paul Anderson is a Senior Computing Officer with the Division of Informatics at Edinburgh University. He has been involved with UNIX systems administration for 15 years. Further information is available from www.dcs.ed.ac.uk/~paul, and comments by e-mail are welcome at paul@dcs.ed.ac.uk.
