The systems engineering stories

I have spent many years working for various companies on contracts to develop or use space instrumentation.  Over time, I was asked to do various things that fall more under the heading of systems engineering than of scientific research.  Here I am collecting some of them.  They are primarily intended to be just stories, but they may also give you some idea of what underlies the venture into space.

Reviewing a proposal


I had not been long in the US after my Oxford postdoc when I was asked if I would be available to review a proposal for a new space mission.  Although I worked at the Solar Physics Branch at NASA Goddard Space Flight Center, my employment was through one of the many small technology companies that hire out specialists, mainly to government agencies like NASA, NOAA and DOD. Commonly known as 'Beltway Bandits', since their offices usually were right next to an exit of the beltway around Washington, D.C., these companies employed most of the engineers and scientists that were not directly employed by the government.  In this case my company was supplying a specialist to review a proposal prepared by a large aerospace company, and that specialist was me.

The way it went was that they asked me something like "so-and-so cannot go to a review of a new mission tomorrow and we want to ask you to go instead".  I said yes, wanting to experience something new, and the next morning I realized I had not the faintest idea what kind of mission was going to be discussed, nor what was expected of me. I reached the gates, which were heavily guarded and reminded me a bit of Checkpoint Charlie in Berlin, with large blocks of cement and lots of guards. Security was taken seriously here, unlike at Goddard Space Flight Center, where my office was and where the guards usually waved you through with a bored look on their face.  But I was easily let through nonetheless, and someone brought me to the office where the meeting was held. Some thick documents lay on the table, and I started to leaf through them. They were for the solar instruments on one of the NOAA satellites, so that was a relief: I was on familiar ground.  I did not pay attention to the smallest document, which later turned out to be important, since it contained the government's request for proposals with all the requirements listed for the instrument they wanted.

The proposal was a phase B proposal, which meant that considerable work on the design and capabilities had already been done.  We took two days going through two documents of several hundred pages, which detailed all the systems, subsystems, and specifications, and compared them to the request for proposals the government had put out.  My background at that time was mostly in solar physics, although I had also studied a fair number of subjects dealing with instrumentation.  What I learned was that I knew more than I knew I did, but I also discovered some things that were new to me, in particular the enumeration of requirements. The customer (in this case NOAA, I guess) makes a list of requirements describing the satellite and instrument capabilities, the accuracy of pointing for example. Some of these requirements were set in stone, others were more fluid or even very loose.  That gave the designers room for creativity.  Since more than one company was proposing its own design for the mission, a good balance of cost and capabilities would make the proposed design successful, or the bid and the subsequent work could be lost.  In later years I participated in quite a few of these reviews, and it became clear that good communication with the customer (who was asking for proposals) was essential to get the balance right between cost and capabilities in a proposal. In some cases cost would be the issue, and the proposal basically had to be stripped down to essentials.  In other cases there would be more interest in extended capabilities for a small increase in cost.

In a way, I came out of it disappointed. It was clear to me that the instrument proposed was not up to snuff for doing good science, just a small improvement over previous ones that had flown. NOAA was more interested in monitoring solar activity than in helping science along with their instruments.  After all, that was their mission.

After that I somehow got a reputation and was asked a lot to review proposals. Not all proposals I reviewed were for instruments. Some were for data processing centers, science support centers, or proposals from individual scientists or groups of scientists.  In general, the focus there is more on the expected return in the form of science or of increasing the efficiency of doing science.  A recurring factor was that a substantial number of proposals were not very well prepared. A poor description of the work to be done, or a failure to place the work in context, was not unusual.  It showed me the value of having someone look over a proposal before submission, so that obvious omissions and the like can be addressed.

Reorganizing a data center

A few years later I had problems getting funding. After five research proposals in a row went unfunded, some after first being selected only to see the program defunded, I took an opportunity to do what they called science support.  After a while, I was asked if I could take over the management of the Astronomical Data Center at Goddard Space Flight Center, because NASA management wanted Wayne Warren replaced.  Wayne had nurtured the ADC from the beginning with Jaylee Mead, and told me that at least he was happy that a capable person was taking over. The ADC had been instrumental in digitizing astronomical catalogs and distributing them on tape, and lately on CD-ROM, which was a new technology then.  Soon it became clear that there were serious problems with manpower and with the process of verifying catalog data.  In a laborious process the catalogs were documented by hand in quite a lot of detail. Verifying data was difficult since the data would arrive in various formats. Sometimes errors were fixed while new ones were introduced in the 'corrected' data. The main problem was that the number of new digitally produced catalogs was doubling every two years. The only way to deal with it was to double the staff every two years, or come up with a new way of doing the ADC's business.

The ADC was not alone in its mission. There were sister centers around the world, in Moscow, Strasbourg, Japan, and China.  In Strasbourg, France, François Ochsenbein had been working on automating the documentation of their catalogs.  It was clear to me that we needed to use computers to automate the process as much as possible, from receiving catalogs to distributing them.  So I contacted François and started suggesting ways in which that might be possible, if only his documentation could be used directly to do more things, like catalog verification.  He wrote back that he was already working on that.  So started a collaboration where I tested, and came up with suggestions for improvements, and he brilliantly coded them up. We wrote a document stating how data should be formatted and delivered to the data centers.  I decided that the native data of most catalogs should remain in ASCII, but that there should be a program to convert them automatically to FITS using the machine-readable documentation. I started negotiating to get our own Unix computer, so that we could process everything on one platform and finish by distributing via FTP.  In the end, we also wrote a web server and were one of the first NASA websites offering data.  The downside was that all existing catalogs, about 900, had to be reformatted and tested with our new pipeline.  The work started while I was there, and we did about 300 catalogs the first year.  Instructions were published for astronomers donating data, and negotiations were also started with the publishers of astronomical journals about using the same or a compatible format for the data they published as tables, and about their public distribution.   The reorganization was successful and the increase in data submitted could be handled by the new system.  At the same time, we established de facto standards for astronomical catalogs which are still in use today.
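To give a feel for what the machine-readable documentation makes possible: here is a minimal sketch, assuming a catalog documented in the CDS/ADC "ReadMe" convention, of converting an ASCII catalog to FITS using today's astropy library. This is not the tool we wrote at the time, and the file names are hypothetical.

    # Read an ASCII catalog using its machine-readable ReadMe description.
    # The ReadMe gives the byte-by-byte layout, units and column labels,
    # so no hand-written parser is needed for each catalog.
    from astropy.io import ascii

    catalog = ascii.read("table1.dat", readme="ReadMe", format="cds")

    # Write the same table out as FITS for users who prefer that format.
    catalog.write("table1.fits", format="fits", overwrite=True)

The point is that once the documentation itself is machine-readable, format conversion and much of the verification no longer have to be done by hand for each catalog.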

Managing the requirements process

A few years later I started work in the remote sensing group. They tried me on this and that, and at some point I was asked to work with the systems engineer of a new project, VIIRS. He assigned me to keep track of the requirements.  It was a large project involving not just our group of about 20 scientists, but also the instrument designers of the Santa Barbara labs. The approach was a new one in the industry: we were to come up with a solution that optimized the return from remote sensing instruments by studying the combination of instrument and science algorithms for particular designs.  So we had to work with several instrument options, tailor the scientific algorithms for the retrieval of the remote sensing products, suggest changes to the instrument, and so on, until we had the best combination possible. All earlier missions had been done by making an instrument with some filters that were known to give good results, and then tailoring the scientific algorithms once the instrument was finished. The idea was that there could be cost savings by tailoring the instrument design to get specific results.

Santa Barbara had its own requirements process, mainly hardware oriented, and we found a way to interface the common requirements.  The various studies started by evaluating the Phase A work against the new requirements (we were doing a Phase B study).  The requirements covered several scientific areas, from aerosols and ocean surface temperature to surface type and imaging, so we had different specialists doing the modeling and designing the algorithms.  Each of them produced results which were used to adjust the broad requirements from the customer into more specific requirements that could be met by our design.  Several times, changes to the instrument design were made in order to reach the best overall fit to the requirements.   In practice, the scientists had varying understanding of what was needed to estimate performance, or of how to progress to better algorithms. It turned out to be essential to regularly discuss in some detail what their research was showing, so that the work in the different areas could advance in a steady way.  What helped was keeping a database of open items and updated requirements. We also regularly had reviews from an outside panel, which helped refine the direction in which the systems engineer was leading us.
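The database did not need to be anything elaborate. As a minimal sketch of the kind of table that serves the purpose (the schema, field names and the sample entry are illustrative, not the ones we actually used):

    # Minimal open-items / requirements tracking table using sqlite3.
    import sqlite3

    con = sqlite3.connect("requirements.db")
    con.execute("""
        CREATE TABLE IF NOT EXISTS requirement (
            req_id      TEXT PRIMARY KEY,  -- identifier, e.g. 'SST-012'
            area        TEXT,              -- aerosols, SST, surface type, imaging, ...
            text        TEXT,              -- the requirement as currently worded
            status      TEXT,              -- open / met / waived
            last_update TEXT               -- date of the last change
        )""")
    con.execute(
        "INSERT OR REPLACE INTO requirement VALUES (?, ?, ?, ?, ?)",
        ("SST-012", "ocean surface temperature",
         "retrieve sea surface temperature to the agreed accuracy",
         "open", "1998-01-01"))
    con.commit()

Whether it is a spreadsheet or a small database, the value is in having one agreed place where the current wording and status of every requirement can be looked up.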

Towards the end we started working on our final report and design solution, with the hope that our design would be selected for further development.  To that effect, I expanded the tracking of requirements by documenting, with each developer, the errors in each of the algorithms, and by developing a software architecture for processing the data using the newly developed algorithms, to show the overall feasibility of the design from instrument to remote sensing product.  I should point out that the requirements were written in terms of the remote sensing products of the instrument.  We won.

A practical case: requirements enforcement, interface management, hardware and manpower needs

I had learned some valuable lessons working directly with the systems engineer on requirements, software architecture, and demonstrating how well we could meet the requirements, and I felt I could now handle some of the systems engineering myself in a smaller project.  The project concerned the development of the ground system for an instrument that had been added to a satellite, meaning that the other instruments already had several years of development behind them. We had some catching up to do.

The first thing was to understand the constraints. We did not have staff to develop software, but we still had to conform to the standards for the final product demanded by the earth science archive.  In practice that meant that the scientists who wanted to extract meaningful scientific data from the observations had to write their own software, which had to run in our system.   So I wrote software requirements that made it possible to move forward. There were two competing processing systems we could employ, and in the end the project manager decided on the one he was familiar with.   It turned out that about half the scientists needed a lot of help getting their algorithm software written, so I produced examples. The scientists were able to give us estimates of the data volumes for their products, and testing of existing software gave us a good idea of the size of the system we needed to buy. In consultation with our computer engineer we chose to run Red Hat Linux on a large number of PCs, tied together by a fast network, with two master machines to parcel out the work and store the input and output of each process.  After we bought about half the system, the testing began, in which we improved the processing system and worked with the scientists to get their software to run. At the same time, I worked on interface agreements and documents with the ground station that would deliver the satellite data for our instrument, and with the archive we would deliver the final results to.  Those external interfaces were also tested. It was all coming together nicely before launch, which was fortunately delayed enough for all that.
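For what it is worth, the pattern of the master machines parceling out work is simple enough to sketch. The snippet below shows the shape of it on a single machine, with a hypothetical process_granule standing in for a scientist's algorithm executable; the real system dispatched jobs over the network to many PCs, but the idea is the same.

    # Sketch of the master/worker pattern: a pool of workers, each running one
    # algorithm executable on one input granule and reporting the product back.
    from multiprocessing import Pool
    import subprocess

    def process_granule(input_file):
        # Hypothetical: run a scientist-provided executable on one granule.
        output_file = input_file.replace(".dat", ".out")
        subprocess.run(["./algorithm", input_file, output_file], check=True)
        return output_file

    if __name__ == "__main__":
        granules = ["granule_001.dat", "granule_002.dat", "granule_003.dat"]
        with Pool(processes=4) as pool:   # in practice, one worker per CPU
            products = pool.map(process_granule, granules)
        print(products)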