With less than a year to design and build a satellite, this team used existing sensor hardware, industry-standard parts, shell scripts and our favorite OS to make the project come together.
The Department of Defense (DoD) Office of Force Transformation (OFT) approached the Naval Research Laboratory (NRL) with an opportunity to build and launch a micro-satellite, of the 100-kilogram class, to provide a platform for a host of technology and operational experiments. A key challenge posed to the Laboratory by OFT was to build this capability in less than one year. Bringing this first TacSat vision together required the development of new partnerships and methods as well as the leveraging of existing hardware, software and facilities.
Copperfield-2, a sensor system developed by the author's team for the Navy, became the cornerstone of the TacSat-1 payload infrastructure. The Copperfield-2 sensor system (Figure 2) originally was designed for use on unmanned aerial vehicles (UAVs)—a good match for adaptation to a space mission, as many of the design requirements are similar.
A satellite bus can be thought of as the spacecraft vehicle: it provides the physical and electrical infrastructure to support the payload. The satellite payload is the sensor or experiment being carried by the bus. TacSat-1 used a bus originally designed for use in the ORBCOMM constellation of small communications satellites. If Copperfield-2 were flown on an aircraft or UAV, that platform would serve as the bus, providing infrastructure to the payload.
The first hardware version of the Copperfield payload was built from legacy hardware systems, adapted so the original hardware could operate through an Ethernet-connected TCP/IP interface. When trade studies were made before designing the second-generation experimental capability, various bus standards, emerging commercial off-the-shelf (COTS) capabilities and other factors were considered. We chose a 3U CompactPCI architecture to allow maximum flexibility in physical form factor (Figure 3). However, we used a custom PCI motherboard so the CompactPCI user-defined P2 connector pins could be put to our own purposes. The result is a motherboard with slots that support our custom-designed hardware, slots that support COTS Ethernet switch cards and slots that can accommodate cards built to the PXI standard. The resulting architecture blends standard CompactPCI with Ethernet connectivity available by way of the P2 backplane.
Few satellite programs have the latitude or the ability to take the risks that the TacSat-1 experiment has. The TacSat-1 experiment allows innovative leveraging of both government off-the-shelf (GOTS) and COTS hardware components, as well as novel approaches to creating payload software that provide maximum flexibility and standards-based operation. This risk philosophy allowed the use of modular payload hardware. Similarly, a modular software and communication system was expanded for TacSat-1, extending the role of standards-based open-source software so that it provides a reusable software infrastructure suitable for flexible command and control of the TacSat-1 payload.
The Copperfield-2 payload architecture was intended to provide as much flexibility as possible, and it is a testament to that flexibility that extending the UAV payload to a space application was possible at all. Because the payload software components are not space-flight critical, meaning the health and safety of the spacecraft do not depend on their reliability, much of the software can be leveraged across air and space platforms.
From the beginning of Copperfield-2 development, it was our desire to capitalize on the momentum, capability and availability of Linux source code. With the development of the processor card and its PowerPC PowerQUICC II, the hardware infrastructure was in place to support a robust embedded system. Access to source was paramount and allowed us to recover from various problems we encountered, including board layout errors. Although the board was designed to closely resemble the Motorola MPC8260ADS-PCI reference design, which no longer is available, some ambiguities, hardware limitations and other issues necessitated changes to the kernel.
When TacSat-1 development began, many seasoned veterans questioned the choice of Linux as host to the payload control software. Proprietary real-time operating systems typically have been used for space systems developed at NRL. During the architectural design process, no hard real-time requirements were discovered, revalidating the original choice of Linux for Copperfield-2 and, thus, also for TacSat-1.
Beyond the tweaks necessary to get Linux working correctly with our hardware, only three device drivers were written: one to support the sensor data format; one to interface with the Xilinx System ACE, a CompactFlash interface device that can be used to load FPGAs as well as for OS storage; and one, on the PowerPC 823-based HSI interface box, to communicate with the FPGA. Because a large Xilinx Virtex-II is mapped into the memory space of our PowerPC processor, some innovation was required to keep device driver development in step with changing FPGA designs. Don Kremer at Aeronix developed a series of utilities that read Verilog source files and create myriad macros, C code and even HTML documentation, allowing the Verilog hardware specification essentially to write the majority of the necessary drivers.
The core Copperfield-2 payload processor provides two key functions for the mission. First, it is a sensor system that receives sensed data, processes it and interacts with onboard communications equipment to transmit the results to other sensors and ground stations. Second, it serves as a general-purpose computer system that provides the infrastructure for storage and data handling. In fact, multiple general-purpose processors are part of the Copperfield-2 payload, each communicating by way of an Ethernet network, with a COTS Ethernet switch serving as the center of the star Ethernet architecture.
Table 1. TacSat-1 Copperfield-2 Ethernet-Connected Embedded Systems
| Component | Vendor | OS | Processor |
|---|---|---|---|
| High-Speed Interface (HSI) | Bright Star Engineering (custom adapter board) | Linux 2.4 custom distribution | PowerPC MPC823 |
| IDM UHF Modem | Innovative Concepts | Proprietary | PowerPC 860 |
| Copperfield-2 MR.DIG Card | Aeronix/NRL | Linux 2.4 custom distribution (DENX ELDK-based) | PowerPC PowerQUICC II 8260 |
| RF Front End Controller | Bright Star Engineering (custom adapter board) | Linux 2.4 custom distribution | StrongARM SA1110 |
To capitalize on the Ethernet, TCP/IP, standards-based architecture of the UAV payload while remaining compatible with the satellite bus' legacy OX.25 interfaces, which provide the means for downlinking science data and state-of-health telemetry, a separate embedded computer module was designed specifically to serve as the bridge. This module, called the high-speed interface (HSI), provides a 2Mb/s synchronous serial link to the spacecraft communication controller. The HSI hardware is implemented as a combination of FPGA logic and a BSE ipEngine general-purpose PowerPC 823 embedded processor.
In the HSI, the FPGA provides the hardware necessary to meet the timing requirements of the data link, decoupling the processor from the synchronous link. The PowerPC runs a Linux 2.4-based kernel, and the HSI FPGA interface is implemented as a standard Linux device driver. No special real-time extensions are used, and a Linux-based application provides the bridge between the TCP/IP networking stack, using standard protocols, and the device driver. The HSI design allows multiple processes and Ethernet-connected computers to access the data stream sent to the spacecraft. The PowerPC communications controller on the Copperfield-2 processor easily could have handled the HSI tasks on TacSat-1; however, due to the extremely limited availability of hardware and the desire to increase opportunities for parallel development, this interface was developed independently.
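As a rough illustration of the concept, any Ethernet-connected payload computer can hand a stream to such a bridge using nothing more than standard tools. The sketch below is conceptual only; the flight application is a compiled program, and the device node name and port number are hypothetical:

```bash
#!/bin/sh
# Conceptual sketch only: /dev/hsi and port 5000 are hypothetical names.
# Accept a TCP connection from a payload computer on the Ethernet star
# and copy the received stream into the HSI device node for downlink.
while true; do
    nc -l -p 5000 > /dev/hsi
done
```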
The most “custom” part of any satellite program often is the payload control software. Because many of the Copperfield-2 payload components with processors run Linux, interesting software options are available, and much of the payload software was implemented as bash (Bourne-again shell) scripts. During the rapid development of the payload software, the philosophy was to divide the work into two parts, custom and reused software modules, and to confine custom code to small programs with specific, well-defined purposes. Occasionally, we found that existing utilities did not quite fit the requirements, and these were modified or replacements were written.
These custom programs and drivers allowed payload elements to be controlled through small command-line utilities that could be tested completely and easily in their limited functionality. The programs were developed with the UNIX command-line model in mind, taking data in on standard input (stdin) and writing results to standard output (stdout). Building utilities around these interfaces has been standard practice since the earliest UNIX development, and we intended to continue that strategy and build on it, because it provides a remarkably flexible way of composing sophisticated capabilities from simple yet powerful utilities.
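A minimal sketch of that style is shown below. The utility names (sensor_read, frame_pack, hsi_send) and paths are placeholders of our own invention, not the actual TacSat-1 programs; the point is that each stage reads stdin, writes stdout and can be tested in isolation by substituting files for either end of the pipe:

```bash
#!/bin/bash
# Hypothetical pipeline illustrating the small-utility philosophy.
sensor_read --channel 2 |                           # pull raw samples from the sensor
    frame_pack |                                    # wrap samples in the downlink format
    tee /data/collect_$(date +%Y%m%d_%H%M%S).raw |  # keep a copy on the Flash filesystem
    hsi_send                                        # hand the stream to the bus interface
```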
The first step in designing the software architecture was to examine what tools already were available to the developers: in this case, parts of the Linux distribution and other GNU and open-source utilities with well-defined pedigrees that provided needed capabilities. Time and time again as we developed the payload control software, we were impressed by the flexibility and the sheer number of options various commands provide.
One example is the GNU compression utility gzip. During a ground contact event, the payload streams data in real time through a series of software pipes: the data originates in a file on the Flash filesystem, makes its way through various utilities, including a compression stage, and passes into the satellite bus. We found it necessary to tune gzip, selecting a point on the compression ratio/performance curve that ensures the 1Mb/s downlink is filled completely with data packets. Inserting gzip into the downlink stream was a relatively late addition, and it allows us to make maximum use of the available downlink bandwidth. Designing command-line utilities around stdin/stdout interfaces allows capabilities such as this to be integrated transparently into the data stream, within the performance limits of our computer system.
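A sketch of the idea follows; the file names, the hsi_send stage and the chosen compression level are illustrative assumptions, not flight values. A lower level trades compression ratio for throughput so the processor can keep the downlink full, and the right level can be picked by timing gzip on representative data:

```bash
# Compare compression levels on a representative capture to find the
# ratio/throughput point that keeps the downlink saturated.
for level in 1 3 6 9; do
    echo "level $level:"
    time gzip -c -$level sample.raw > /dev/null
done

# Hypothetical downlink pipeline with the chosen level inserted.
gzip -c -3 /flash/data/collect.raw | hsi_send
```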
Choosing a scripting language is a difficult task; in the Open Source community, many competent options are available. Perl may have been a good choice, but we were not comfortable with the size of its installation and memory footprint. Python also would have been a fine choice, but the development team did not have experience with it. The most powerful shell-scripting language appeared to be bash, although it also is the heaviest in terms of footprint. Our smallest embedded systems could not handle the full footprint of bash, but ash, the lightweight shell interpreter included in BusyBox, proved almost as capable for the tasks that had to be monitored and controlled on those smaller targets.
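In practice, this meant the scripts destined for the smaller targets had to stick to portable /bin/sh constructs rather than bash-only features. A trivial, hypothetical example (the state file path is an assumption used only for illustration):

```bash
#!/bin/sh
# Runs under BusyBox ash on the small targets as well as under bash:
# POSIX constructs (case, $(...), test) instead of bash-only arrays or [[ ]].
STATE=$(cat /tmp/payload_state 2>/dev/null)
case "$STATE" in
    BOOT|TRANSITION|OPS) echo "payload state: $STATE" ;;
    *)                   echo "payload state unknown" >&2; exit 1 ;;
esac
```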
Although space here does not allow a complete discussion of the payload control software architecture, at its core the software is a series of bash scripts designed to support the various functions of the payload. The system is designed to take advantage of POSIX-style filesystem security. At boot, the first processes run as root. As the payload control software comes on-line, it starts up as user BOOT. The system can stay in BOOT and provide a number of critical capabilities, including binary telemetry streams, file transfers and direct commands. When a sensor mission is about to begin, the system moves to a TRANSITION state, and all further data collections take place as the OPS user, who has a different set of permissions. At the conclusion of the data collection, OPS is commanded to shut down. Multiple redundant copies of the BOOT directories are designed into the system to provide backup capability in case of filesystem corruption or other significant error.
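A highly simplified sketch of that handover appears below, with hypothetical paths, lowercase user names and invented script names; the point is only that the transition and the collection are enforced by ordinary POSIX users and permissions:

```bash
#!/bin/bash
# Hypothetical sketch of the BOOT -> TRANSITION -> OPS sequence.
echo TRANSITION > /var/payload/state            # a sensor mission is about to begin
if su - ops -c '/home/ops/bin/start_collection.sh'; then
    echo "collection complete"
else
    echo "collection failed, reverting to BOOT" >&2
fi
echo BOOT > /var/payload/state                  # OPS shut down; BOOT resumes control
```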
bash scripts launch every payload control system function. They create the complex filenames we use to keep track of configuration, date and time and other information. They gunzip and untar commands and files uplinked to the satellite. Commands themselves also are bash scripts with simplified functionality; they call other bash scripts to do the actual data collection or set environment variables that change the behavior of other scripts.
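For example, an uplinked command bundle might be handled along these lines; the paths, file naming and env.sh helper are assumptions for illustration only, not the actual TacSat-1 scheme:

```bash
#!/bin/bash
# Hypothetical handling of an uplinked command bundle.
BUNDLE=/uplink/cmd_bundle.tar.gz
mkdir -p /var/payload/pending
tar -xzf "$BUNDLE" -C /var/payload/pending    # gunzip and untar the uplinked files
. /var/payload/env.sh                         # environment variables steer script behavior
for cmd in /var/payload/pending/*.sh; do
    bash "$cmd"                               # each command is itself a small bash script
done
```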
This combination of the bash scripting language, GNU and open-source utilities and custom command-line applications is unique among satellite programs. For TacSat-1, most of the custom code involves converting sensor data from the TCP/IP world to the proprietary OX.25 formats.
The extensive use of TCP/IP-based systems and the common Linux operating system provided unique opportunities for a distributed development environment. Early in TacSat-1, our custom PowerPC 8260 development hardware had limited availability, so the design cycle for much of the payload software began on Intel x86-based computer systems, migrated to generic PowerPC embedded processors and eventually made its way to the final target. The software design team was distributed geographically and tied together through a virtual private network (VPN). Remote power control devices allowed developers operating off-site to cycle power on hardware components. A Web-based collaboration tool allowed the posting and dissemination of critical communications and interface control documents (ICDs). Some developers also used instant messaging to stay in contact with one another. Recent additions to the collaborative working environment include the use of E-Log to maintain an on-line database of lessons learned, and we are working to integrate Bugzilla into the system to replace our relatively crude message-forum-based problem report (PR) tracking.
The TCP/IP nature of the payload data network allowed developers to test communications between payload elements at each step in the design process, from developing on a standard PC to final communications before inserting the custom hardware required to communicate with the bus. Even after complete integration of the payload into the bus, an Ethernet test port allowed network access to the satellite, which was invaluable for collaborative debugging of the system. Test ports also allow access to serial consoles for most of the payload components and, in some cases, JTAG or other hardware debugging ports.
The payload software design team consisted of experienced satellite and ground-station software experts, team members accustomed to TCP/IP data transport and Web/CGI application development, and embedded systems experts. Although quite different from the typical satellite software design team, this combination provided nearly the perfect balance of skills and innovative methods to maximize the use of existing software designed for aircraft applications. The extensive remote collaboration, interface testing and networking capability made for a smooth bus-payload integration.
The core of the payload control software, including many of the command and control scripts, was developed in a span of less than four months, from start to finish. Additional scripts were inserted into the core payload control software infrastructure to bring additional sensor capabilities on-line as those sensors became available. New capabilities and patches may be uploaded to the satellite as requirements dictate.
Few satellite programs have the sponsor-supplied latitude or the ability to take risks that the TacSat-1 initiative provides. In this context, the TacSat-1 program allows innovative leveraging of both GOTS and COTS hardware components, as well as novel approaches to creating payload software that provides maximum flexibility and standards-based operation. The modular nature of the Copperfield-2 allowed rapid hardware integration, proving the concept of a modular payload that scales from UAV applications to a spacecraft application, all using Linux and GNU software as a foundation. At the time of this writing, TacSat-1 was scheduled to launch in February 2005.
The author acknowledges the essential contributions to the TacSat-1 effort by Stuart Nicholson, consultant; by Eric Karlin, Mike Steininger and Brian Davis of SGSS, Inc., for the core payload control software; and by Brian Micek of Titan Corp., Chris Gembaroski, Don Kremer, Tim Richmeyer and the Copperfield-2 team at Aeronix, Inc., and Jeff Angielski of the PTR Group for Linux porting, device drivers and sensor support software. Thanks also to Wolfgang Denk and the Linux PowerPC community for their contributions toward making PowerPC Linux stable and robust.
Resources for this article: /article/8066.