Lunar Farside One
1. Get the astronauts to the vicinity of the Moon alive.
2. Put them on the surface of the Moon alive.
3. Get them off the surface alive.
4. Return them to Earth alive.
By the time the Apollo program began, NASA had developed a lot of confidence that the first step was doable, and the last likewise, assuming everything went well with steps two and three. Step two was absolutely critical, with two points of potential disaster: the Lander could hit a mountain on the way down, or it could prove unable to take off from the Moon's surface. Either would make step three impossible, and step four along with it.
NASA took steps early on to avoid these potential catastrophes, with three unmanned programs to evaluate conditions on the ground before risking a manned mission. The third of these, Lunar Orbiter, was the most thorough attempt to examine the lunar environment. The photo reconnaissance subsystem would provide an understanding of conditions on and near the surface. Other sensors would measure the shape of the Moon and its gravitational field, critical to navigating the manned spacecraft in the vicinity of the Moon.
The Lunar Orbiter spacecraft were designed and built by Boeing, which also managed the Lunar Orbiter program. The overall plan of the program is described in Boeing's report to NASA, here. The Lunar Orbiter cameras were designed and built by Kodak. The ground reconstruction system, unique to the Lunar Orbiter program, converted the image signal from the photo subsystem back into photographs from which maps could be made. The rest of the project used the expertise of organizations with experience in space operations.
The pre-flight plan for surveying the proposed landing sites is described and illustrated on pages 18 and 19 of the Boeing report. The planned procedure is similar to the standard procedures used in terrestrial surveys, with overlapping coverage along and across the flight paths. The overlap makes merging the individual photos into a large, composite image mosaic practical.
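To give a feel for what overlapping coverage implies, here is a back-of-the-envelope sketch of the kind of arithmetic involved. The footprint size and overlap fraction are invented numbers for illustration; the Boeing report, not this sketch, gives the real figures.

```python
# Hypothetical survey-planning arithmetic: how many exposures cover a
# ground track when consecutive frames overlap by a given fraction.
# Footprint and overlap values below are illustrative, not mission values.
def frames_needed(track_km, footprint_km, forward_overlap):
    """Exposures needed to cover a track with the given forward overlap."""
    step = footprint_km * (1.0 - forward_overlap)   # ground gained per exposure
    extra = max(0.0, track_km - footprint_km)
    return 1 + int(-(-extra // step))                # ceiling division

# Example: a 300 km strip, 60 km footprint, 50% forward overlap -> 9 frames
print(frames_needed(300, 60, 0.5))
```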
How to put it delicately? For the most part, the plan was carried out to perfection. The first few Lunar Orbiter missions returned enough usable data that subsequent Orbiters could be redirected to photograph a number of areas of the Moon of lesser interest, including the far side, shown in the photo above. That 'usable' data was not, unfortunately, usable for making a photomosaic as intended. Years later, advanced image manipulation software was applied to the signals recorded during the Lunar Orbiter missions to produce usable images directly. At the time, it was necessary to find another way.
The images produced by the ground reconstruction system had several kinds of errors. Some are obvious in Earthrise, the iconic picture of Earth as seen from the Moon, taken by a Lunar Orbiter. Others are more subtle. Taken together, they made it impossible to use the images as-is for making contour maps.
The folks at NASA enlisted the help of the Air Force's expert cartographers at the Aeronautical Chart and Information Center in St. Louis, MO, to make maps from the 35mm film strips they had. ACIC had been making maps from aerial photographs since cameras were first carried aloft in balloons. ACIC recommended talking with the USAF photoreconnaissance people at Wright-Patterson AFB in Ohio to see if they could enhance the images. They, in turn, pointed out that Data Corporation, in nearby Beavercreek, OH, was the only place they knew of that had both the equipment needed and the people with the skill to use it.
When NASA came to Data Corporation for help, what they had was a handful of 35mm film strips and a compilation of material about the Lunar Orbiter camera and the physical properties of the Moon. Jack Finley, Data's VP of Engineering, gathered a few of us in a conference room and described the problem in broad terms. On the plus side, the Orbiter camera clamped the film in place during exposure; the glass plate that did the clamping had fiducial marks etched into it at 1 cm intervals in both directions, and the edges of each scan overlapped the next by 0.005". On the minus side were a dozen or so challenges, some of which were immediately evident, and some of which we discovered as the work went on.
Jack ended his presentation by asking if anyone had any idea how what we had could be turned into what NASA wanted. After a long silence I asked Richard Pratt if he thought it was possible to create a virtual image of the assembled frame in computer memory and a real image containing the pixels we could get from the film strips, then perform a transformation to move pixels from the real frame to the virtual frame, dragging the fiducial marks from their actual locations on the film to their ideal locations in the virtual frame and carrying the adjacent image pixels with them. Richard thought it could be done. I asked Davey Behane if he could shoehorn two six-million-pixel images into our computer's 256-kilobyte memory. Richard and Davey began an animated conversation about how both tasks could be done in a machine too small for either. As they began to sound like they were in agreement, Jack called a halt, assigned me to run the project, and we were off and running.
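To make the idea concrete, here is a minimal modern sketch of that real-to-virtual resampling, using scikit-image's piecewise-affine warp as a stand-in for whatever transformation Richard and Davey actually implemented. The grid spacing, image size, and control-point jitter are invented; only the overall approach of dragging measured fiducials to their ideal grid positions, with the neighboring pixels following along, comes from the text.

```python
# A sketch of the "real frame -> virtual frame" resampling idea.  All
# dimensions and the jitter model are illustrative assumptions.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

rng = np.random.default_rng(0)

# "Real" image: the assembled raw scan, here just synthetic data.
real = rng.random((600, 600))

# Ideal fiducial locations: a regular grid in the virtual frame, standing
# in for the 1 cm marks etched on the clamping plate.
xs, ys = np.meshgrid(np.arange(0, 601, 100), np.arange(0, 601, 100))
ideal = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)   # (x, y) pairs

# Measured fiducial locations: where the marks actually landed on the
# scanned film, displaced by distortion (simulated here as random jitter).
measured = ideal + rng.normal(scale=3.0, size=ideal.shape)

# Estimate a transform from virtual-frame coordinates to real-frame
# coordinates, then resample: each virtual pixel pulls its value from the
# corresponding distorted location in the real image, which drags the
# fiducials back to their ideal positions and the image pixels with them.
tform = PiecewiseAffineTransform()
tform.estimate(ideal, measured)
virtual = warp(real, tform, output_shape=real.shape)
```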
One of the reasons the Air Force recon group recommended Data Corp for the job was the Data Corporation Microdensitometer. As part of ongoing work on improving the quality of information that could be extracted from aerial reconnaissance imagery, Data Corporation had developed a microdensitometer with exceptional accuracy, both in positioning the film and in measuring image density. The Micro-D was capable of resolving picture elements as small as a thousandth of a millimeter, and of measuring photographic density as great as 4.0. The high spatial resolution made it necessary to house the machine in a clean room, since common dust particles are much larger than the pixels being measured. Measuring the density of pixels that size put stringent limits on the light source used to illuminate them, both in intensity and in stability. The extremely low intensity of the light reaching the detector likewise required extreme detector sensitivity and power supply stability. Without the Micro-D, the project could not have been considered.
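The density figure alone explains the light-source problem. Photographic density is the base-10 logarithm of the ratio of incident to transmitted light, so a density of 4.0 means only one part in ten thousand of the illumination makes it through a pixel a thousandth of a millimeter across:

```python
# Density D = log10(incident / transmitted); D = 4.0 passes 1/10,000 of the light.
density = 4.0
transmitted_fraction = 10 ** (-density)
print(transmitted_fraction)   # 0.0001
```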
Bob Boone, shown here with the Micro-D, and Bob Troidl, two of Data's photo scientists, used the Micro-D to scan the film strips corresponding to a high-resolution frame from the Orbiter camera, producing a set of large digital files. Davey set about writing code to handle the resulting volume of data as expeditiously as possible, while Richard worked on superimposing the overlapping pixels from adjacent strips and locating the images of the fiducial marks in the assembled raw image. Three challenges met and overcome.
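The text doesn't say how the overlapping pixels were superimposed; one plausible approach, sketched below, is to slide the overlapping edges of adjacent strips against each other and keep the offset that correlates best. The overlap width and search range here are assumptions, not measured values.

```python
# A guess at how adjacent scan strips might be registered using their
# narrow overlap; overlap_px and search are illustrative parameters.
import numpy as np

def best_vertical_offset(left_strip, right_strip, overlap_px=8, search=20):
    """Row shift of right_strip that best aligns its left edge with the
    right edge of left_strip."""
    a = left_strip[:, -overlap_px:]                   # right edge of the left strip
    best_shift, best_score = 0, -np.inf
    for shift in range(-search, search + 1):
        b = np.roll(right_strip, shift, axis=0)[:, :overlap_px]
        score = np.corrcoef(a.ravel(), b.ravel())[0, 1]   # normalized correlation
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift
```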
CRT displays were rare in those days. The one I had seen on the Illiac at school was actually a memory device. A clever student had figured out how to program the machine so the bit pattern in memory appeared on the face of the tube as a waving flag, but showing real images was not possible. The standard user input device was a teletype, and output was paper tape. Our computer had a line printer, but it printed characters, not pixels. It was common for programmers with time on their hands to use a printer to reproduce works of art by overprinting various combinations of characters. The Mona Lisa was a popular subject. Overprinting slowed the output rate considerably, but printing art was wasting time anyway. At first, we tried printing Moon map pages that way, but getting a good black was almost impossible, and even the best black dot we could print had a white border. People can't read scrunched-together print, so printers don't print it. I took up the printer challenge.
Richard and Davey had managed to create a system to move the two images in and out of memory piecemeal, and to print individual film strips. Each film strip printed as about two and two-thirds strips of 14"-wide computer paper, each 20' or so long. Three more challenges; two we expected, and one we didn't. We expected geometric distortion: the fiducial marks were displaced from their ideal locations. We expected density errors: it turned out that there was a systematic variation across the width of each strip, basically W-shaped, darker at the edges and in the middle, lighter in other areas. The unexpected problem was gel spots. The film in the Orbiter camera was developed by pressing a chemical-soaked web against it for a time and then separating the two webs. Occasionally a small blob of material from the developer web would stick to the film.
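One simple way to knock down a systematic across-strip profile like that, assuming it is stable from scan line to scan line, is to estimate the profile from many lines and remove it. This is only an illustration of the idea; whether the actual correction was additive, multiplicative, or something more elaborate isn't stated in the text.

```python
# Illustrative flattening of a W-shaped across-strip density variation.
import numpy as np

def flatten_strip(strip):
    """Remove a systematic across-strip density profile by estimating it
    as the per-column median over all scan lines."""
    profile = np.median(strip, axis=0)           # density vs. position across the strip
    correction = profile - profile.mean()        # deviation from the strip-wide average
    return strip - correction[np.newaxis, :]     # same fix applied to every scan line
```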
When the systematic errors were fixed, it was decided that the effort required to fix the gel spots would not be invested unless a spot appeared in a critical location in the assembled image. We printed a complete frame for the first time. It took forever, and it was gray-on-gray instead of black on white, but at least it looked like a moonscape. I had to come up with a better way to print.
The publishing industry struggled with printing continuous-tone images for years. Line drawings were easy; etching and engraving were well-established techniques. Sheets of Ben Day dots could be used to print areas in color, but not with any detail. Halftone screens were invented in the 1860s. The screen, placed over the image, broke it up into dots. The engraving plate, receiving more light from one dot, would be eaten away more deeply in the etch bath, and vice versa. In this way a continuous-tone image became a halftone image. I decided we needed a halftone print train for our printer. After all, we knew exactly what size dot we needed for every single pixel.
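The core of the print-train idea is just a lookup: quantize each pixel's darkness to one of a fixed set of dot sizes and let the train supply a slug carrying that dot. The sixteen-level ramp below is an assumption made for illustration; the text says only that the needed dot size was known for every pixel.

```python
# Hypothetical mapping of pixel darkness to print-train dot sizes.
import numpy as np

N_DOT_SIZES = 16   # assumed number of distinct dot slugs on the print train

def dot_size_indices(image):
    """Map darkness values in 0..1 to print-train dot-size indices (0 = no ink)."""
    return np.clip((image * N_DOT_SIZES).astype(int), 0, N_DOT_SIZES - 1)
```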
Standard computer output in those days was all characters. If you were IBM, interactive output came from a modified Selectric typewriter. Bulk output was produced on a line printer, in our case an IBM 1403N1. It printed 132-character lines at a rate of 1000 lines per minute. You could print graphics by leaving off the line feed at the end of a line and over-printing. Dark pixels were made by printing M, W, and X on top of one another, lighter ones by an asterisk or a period. It sort of worked, but over-printing once cut the speed in half, and twice cut it to a third. The other problem was that each character on the print train perforce had a white border. Without the border, reading printer output would be impossible; with it, printing any solid black area was impossible. For a long time we just had to live with it.

A problem that was obvious from the start was the systematic error in density across the image strips. Image pixels were darker near the edges and in the middle, and lighter everywhere else. Part of this came from scanning a flat surface with a beam coming from a point source: the edges get less light. Some of this can be compensated for by varying the strength of the scanning beam as it sweeps across the strip. The rest is taken care of by scanning in narrow strips and having the light source as far as possible from the film. Of course, we had no control over any of this, but knowing how it happened was helpful in figuring out how to deal with it. Later on, this had to be revisited, but at this stage it was good enough to allow us to make some progress.
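The point-source geometry behind the edge falloff is easy to see with a little arithmetic: for a source some height above the film, illuminance falls off roughly as the cube of the cosine of the off-axis angle, which is why pulling the source farther away flattens the profile. The dimensions below are invented; the text gives none for the scanner optics.

```python
# Idealized point-source falloff across a strip; all dimensions are made up.
import numpy as np

def relative_illumination(x_mm, source_height_mm):
    """Illuminance at lateral offset x on the film, relative to the
    on-axis value, for an idealized point source (cos^3 falloff)."""
    cos_theta = source_height_mm / np.hypot(source_height_mm, x_mm)
    return cos_theta ** 3

x = np.linspace(-5, 5, 5)              # positions across a 10 mm strip
print(relative_illumination(x, 25))    # nearby source: visible falloff at the edges
print(relative_illumination(x, 250))   # distant source: nearly uniform
```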
Seeing our progress took much more time and effort than we thought it would. The biggest part of the problem was, simply, the size of the image and the speed of the computer. They combined to slow progress to a crawl. Each time we made a change, it took an hour or more to create and print a test strip. Then we had to paste the sheets of printout together and hang them on the wall before we could see what we had done. Fortunately, we had a big enough workspace that we could hang a strip and stand far enough away to see our output as a picture, not as a collection of spots. Two inches on the original Orbiter image was about twenty feet on the wall.
Getting from digitized strips to printed strips on the wall took several weeks. From there to deliverable images took quite a bit longer. A substantial part of the time was spent in mathematical manipulation. The purpose of the project was to help select a landing site, and for that a simple picture was not enough: a single picture tells nothing about the contours of the area shown. In the normal course of aerial mapping, the area of interest is photographed twice, from two different positions, so a stereoscopic pair can be created from which elevation information can be derived. The Lunar Orbiter mission included attempts at stereo photography, but the combined images were not good enough to be useful.
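The reason a second photograph matters is parallax: a surface point shifts between the two exposures by an amount that depends on its distance from the camera. A textbook pinhole-camera version of the relation is sketched below; it is only meant to show the principle, not the actual Orbiter photogrammetry, and the numbers in the example are made up.

```python
def depth_from_parallax(focal_length, baseline, parallax):
    """Distance to a point from its image shift (parallax) between two
    exposures separated by `baseline`; focal_length and parallax share
    units, and the result comes out in the units of baseline."""
    return focal_length * baseline / parallax

# Made-up numbers purely to show how the units work: focal length and
# parallax in mm, baseline in metres, so the result is in metres.
print(depth_from_parallax(100.0, 1000.0, 2.0))   # -> 50000.0
```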