OT: Build Your Own Apollo 11 Landing Computer



Tuckerfan
07-14-2009, 09:42 PM
I'm in awe of the level of spacegeekery. (http://www.universetoday.com/2009/07/14/build-your-own-apollo-11-landing-computer/)
Remember the computer on the Apollo 11 Eagle lander that kept reporting "1201" and "1202" alarms as Neil Armstrong and Buzz Aldrin approached landing on the Moon? Well, now you can have one of your very own. Software engineer John Pultorak worked for 4 years to build a replica of the Apollo Guidance Computer (AGC), just so he could have one. And then he wrote a complete manual and put it online so that anyone else with similar aspirations wouldn't have to go through the same painstaking research as he did. The manual is available for free, but Pultorak says he spent about $3,000 on the hardware.

The 1,000-page documentation includes detailed descriptions and all schematics of the computer. You can find them all posted on Galaxiki (http://www.galaxiki.org/web/main/_blog/all/build-your-own-nasa-apollo-landing-computer-no-kidding.shtml), downloadable in PDF format (the files are large).

Now all he needs is an LM to put it in!

doctor demo
07-14-2009, 10:21 PM
I would rather have a replica of the makeshift air scrubber from Apollo 13.

Steve

Evan
07-15-2009, 02:08 AM
4K of RAM and 32K of ROM. I have wristwatches that are much more powerful. The real question is WHY? It would be an almost trivial exercise to build an emulator in software.
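
For what it's worth, the core of such an emulator is just a fetch/decode/execute loop. Here's a minimal sketch in C with a made-up toy instruction set -- NOT the real AGC opcodes, which used 15-bit words and a very different encoding:

/* Minimal fetch-decode-execute emulator sketch. The instruction
   set here is hypothetical, for illustration only. */
#include <stdio.h>
#include <stdint.h>

enum { OP_HALT, OP_LOAD, OP_ADD, OP_STORE };  /* toy opcodes */

int main(void)
{
    uint16_t mem[4096] = {0};   /* 4K words, like the AGC's erasable memory */
    uint16_t acc = 0, pc = 0;

    /* Tiny program: acc = mem[10] + mem[11]; mem[12] = acc; halt. */
    mem[0] = (OP_LOAD  << 12) | 10;
    mem[1] = (OP_ADD   << 12) | 11;
    mem[2] = (OP_STORE << 12) | 12;
    mem[3] = (OP_HALT  << 12);
    mem[10] = 1201;
    mem[11] = 1;

    for (;;) {
        uint16_t word = mem[pc++];
        uint16_t op   = word >> 12;      /* top 4 bits: opcode   */
        uint16_t addr = word & 0x0FFF;   /* low 12 bits: address */
        if (op == OP_HALT) break;
        switch (op) {
            case OP_LOAD:  acc = mem[addr];  break;
            case OP_ADD:   acc += mem[addr]; break;
            case OP_STORE: mem[addr] = acc;  break;
        }
    }
    printf("result: %u\n", mem[12]);   /* prints 1202, fittingly */
    return 0;
}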

macona
07-15-2009, 06:10 AM
Actually 4k words of RAM and 32k words of ROM.

RTPBurnsville
07-15-2009, 07:37 AM
I have wristwatches that are much more powerful. The real question is WHY?

Because we all have things we find interesting or are fascinated by. I am sure you have built many things about which most folks would also ask why....

bob_s
07-15-2009, 09:25 AM
Why? It is obvious!!!

Go to the Smithsonian. Go to the Cape. Go to Houston. That same computer is missing in EVERY ONE of the displays of the LM.

Dawai
07-15-2009, 11:47 AM
I remember the LEM knocking the Apollo capsule about half a mile while attempting to dock. Almost a real disaster. I had a suggestion back then that got me an application for employment.

Then, I remember buying a scientific calculator a few years later with more smarts than the landing computer.

Why have we stopped exploring and taking on challenges? We should be living on the moon as a jumping-off point for other worlds.

Bruce Griffing
07-15-2009, 11:57 AM
If you are of about my vintage, you may remember very well early minicomputers built using TTL and core memory. Though he did not use core memory, this guy designed and built an all-TTL throwback computer. Did the software for it as well. Pretty impressive in my book.

http://www.homebrewcpu.com/

Evan
07-15-2009, 12:09 PM
My internet is dead. 200 bytes a minute.

lazlo
07-15-2009, 12:56 PM
Though he did not use core memory, this guy designed and built an all-TTL throwback computer. Did the software for it as well. Pretty impressive in my book.

Agreed -- the Apollo Guidance Computer was a bit-slice design built entirely with bubble-gum logic. This is a very impressive reconstruction.

This guy's build was done on perfboard with an insane amount of wire-wrap -- I didn't read through his individual design documents, but I'd be surprised if NASA/MIT didn't use PCBs, if nothing else, for weight.

isaac338
07-15-2009, 01:23 PM
Why have we stopped exploring and taking on challenges? We should be living on the moon as a jumping-off point for other worlds.

http://www.nasa.gov/mission_pages/constellation/main/index.html

We're on our way :)

DR
07-15-2009, 01:47 PM
Speaking of the primitive (by today's standards) computers and early orbital and space missions...

Way back, one of my first out-of-college jobs was in numerical analysis with an aerospace company. I shared a cubicle with an odd guy. He came to work in the morning, didn't say much, no small talk other than once in a while ranting about how athletes and movie stars were grossly overpaid. I never knew what he did; he was always hunched over his desk writing something. Never took breaks with the group. No programming that I knew of. I thought he was retarded or had some strange disability and was kept on as a charity case.

I started asking around among the old timers about what was with this guy. Turns out he had done programming on one of the orbital missions that had a major bug in the software. He was rushed to Houston to correct the bug to save the mission.

The guy came out of this as a hero. To me, that was weird. It was like everyone forgot he was the one who wrote the software with the bug. I came to find that was the nature of the company's system. Make a mistake and fix it, you were rewarded. Make a mistake and don't fix it, that wasn't so bad either. Kind of an old boys' network where you didn't want to punish errors because the next one might be your own.

Apparently this guy was riding on the coattails of his save of the mission.

I didn't stay with the company long enough to find out what eventually happened to him, but I imagine as time went on and new management came in, his hero status was much diminished.

Bruce Griffing
07-15-2009, 02:16 PM
Robert-
You did not check the link in my message - it was a different computer, but more impressive in my book.

Falcon67
07-15-2009, 02:35 PM
http://www.nasa.gov/mission_pages/constellation/main/index.html

We're on our way :)

But the budget will still only accommodate a 4K word guidance computer :p

lazlo
07-15-2009, 02:47 PM
You did not check the link in my message - it was a different computer, but more impressive in my book.

Bruce, I looked at his project, and it's definitely an impressive feat to build your own microprocessor out of 74-series TTLs and port LCC and Minix to it, but it's a weird architecture: he built a 1-byte opcode, so the machine is heavily microcode-based, and it's got a 16-bit virtual address mapped into a 22-bit physical address space...?? He started with a pure accumulator-based architecture (no registers), then realized that porting C was going to be a bitch, then added a couple of registers after the fact...
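
Just to illustrate the address translation, here's a guess at how a 16-bit virtual to 22-bit physical mapping might work with simple page-table registers (2 KB pages assumed; I haven't checked that this matches his actual scheme):

/* Hedged sketch of a 16-bit virtual -> 22-bit physical mapping.
   Page size and table layout are assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS   11                       /* 2 KB pages (assumed)  */
#define NUM_PAGES   (1 << (16 - PAGE_BITS))  /* 32 virtual pages      */

static uint32_t page_table[NUM_PAGES];       /* 11-bit frame numbers  */

uint32_t translate(uint16_t vaddr)
{
    uint16_t page   = vaddr >> PAGE_BITS;            /* top 5 bits  */
    uint16_t offset = vaddr & ((1 << PAGE_BITS) - 1);/* low 11 bits */
    return (page_table[page] << PAGE_BITS) | offset; /* 22-bit result */
}

int main(void)
{
    page_table[3] = 0x4AB;   /* map virtual page 3 somewhere high */
    printf("0x%04X -> 0x%06X\n", 0x1F00, translate(0x1F00));
    return 0;
}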

But most importantly, why on earth did he build it out of 30-year-old logic components when he's not faithfully reconstructing a historical design, like the Apollo AGC computer? In other words, why didn't he use an FPGA? If you look at the "Homebrew CPU" web ring, there are dozens of home-built CPUs constructed with 74-series logic, but they were from the '70s and '80s. Most guys these days are building their own microprocessor on an FPGA...

In any event, to me, the forensic reconstruction of the Apollo computer was more impressive -- it reminds me a lot of Rich building a model of the Monitor engine from partial images and vague descriptions in historical accounts...

But all are very impressive projects!

Evan
07-15-2009, 03:02 PM
I like this one particularly. This guy has built several LSI CPUs using random logic parts and EPROMs, with no FPGAs or other advanced parts. His implementation of the 6502 is so good that it will run the C-64 KERNAL and BASIC with only one byte changed. Plus, he extended the address space to 16 MB and added other enhancements, all using only around 40 ICs.

http://web.whosting.ch/dieter/m02/m02.htm

Bruce Griffing
07-15-2009, 03:07 PM
Robert-
He explains his reasoning for TTL on the website. His version 2 (or 16 - read the site) is being done with an FPGA.

lazlo
07-15-2009, 03:30 PM
I like this one particularly. This guy has built several LSI CPUs using random logic parts and EPROMs, with no FPGAs or other advanced parts. His implementation of the 6502 is so good that it will run the C-64 KERNAL and BASIC with only one byte changed.

Now that is impressive.

Alistair Hosie
07-15-2009, 03:44 PM
I have to say I know very little about rocket science or the abilities of the spaceships, but it is very obviously riveting to some and impressive to most. I wish I understood it a little more, although I could say that about lots of things I don't quite understand. Alistair

Paul Alciatore
07-15-2009, 05:09 PM
Actually 4k words of RAM and 32k words of ROM.

Sounds a lot like some modern PIC devices.

http://en.wikipedia.org/wiki/PIC_microcontroller

Basic Stamp, Atom, PicAxe, etc., etc. All are very useful devices. Prices are in the $1 to $15 range and they are single chips with a bare minimum of external components. They use various programming languages. Neat stuff.

reggie_obe
07-15-2009, 06:01 PM
If you are of about my vintage, you may remember very well early minicomputers built using TTL and core memory. Though he did not use core memory, this guy designed and built an all-TTL throwback computer. Did the software for it as well. Pretty impressive in my book.

http://www.homebrewcpu.com/

Can't cite a reference, but being interested in computers (mainframes) in the mid-'70s, I remember ferrite core memory already becoming obsolete. The first "hobbyist" micro-computer, the Altair 8800, used solid-state memory in 1976/77. I'm not sure that what were termed mini-computers (Varian V-77, IBM System/3, /34, /36, Perkin-Elmer 7/32, HP 3000) even existed at that time. When they were built, they definitely did not have "core" memory.

lazlo
07-15-2009, 06:20 PM
Can't cite a reference, but being interested in computers (mainframes) in the mid-'70s, I remember ferrite core memory already becoming obsolete. The first "hobbyist" micro-computer, the Altair 8800, used solid-state memory in 1976/77.

Yep, the Altair 8800 first sold in 1975 (IIRC) and came with 256 bytes of RAM. You could expand up to 4K through the S-100 slots (you needed 4K to run BASIC).

trap
07-15-2009, 06:28 PM
I'm in awe of the level of spacegeekery. (http://www.universetoday.com/2009/07/14/build-your-own-apollo-11-landing-computer/) Now all he needs is an LM to put it in!

Wow, reading this article, all of a sudden I feel like I'm on a space mission. My house starts shaking, the windows rattle, and along comes a thunder-type noise with a lot of popping. No, it's not an earthquake....... I live close to the Cape and they just did a launch.

tattoomike68
07-15-2009, 06:36 PM
Actually 4k words of RAM and 32k words of ROM.

Sounds a lot like my first computer, a Commodore VIC-20.

Bruce Griffing
07-15-2009, 07:11 PM
reggie_obe said:
"I'm not sure that what were termed mini-computers (Varian V-77, IBM system 3, 34, 36, Perkin-Elmer 7-32, HP 3000) even existed at that time. When they were built, they definitely did not have "core" memory."

The first minicomputer I ever used was a PDP-8. This was an early machine with discrete transistor logic on PCBs. The memory was core memory. Later, as a graduate student, I had a General Automation SPC-16 minicomputer in my lab. It was an 8-register, 16-bit machine. It had 32K 16-bit words of core memory and a 5-megabyte disk - one fixed platter and one removable. It ran at a torrid clock rate of 1.25 MHz. Many early PDP-11s and Data General Novas also had core memory, but were beginning to convert to semiconductor memory during the mid-'70s. For sure minicomputers had core memory in the early days. The power supply to run the machine was impressive. Lots of fans and heavy A/C to keep the room cool. Hard to imagine today, but they were very impressive at the time. By the late '70s, microprocessor-based PCs were becoming available, but they were no match at that time for a real minicomputer.

Evan
07-15-2009, 07:37 PM
The space shuttle used core memory up until the last 10 years or so. In fact they may still have a few frames in place, but I don't think so. It was used for several reasons. It is immune to radiation. That is a big item, since when the shuttle visits the ISS it also must pass through the South Atlantic Anomaly. That is a large zone over the South Atlantic where the radiation levels are much higher than elsewhere in low earth orbit. Semiconductor memory is subject to what are called single event upsets, as are CPUs and other logic chips. Core memory is so massive that even a barrage of intense radiation won't affect it.

There is another reason that proved valuable. Ferrite cores are nearly indestructible. Each is labeled with a location code in the matrix, and if recovered, its magnetic state can be determined and the data reconstructed. This was done with the remains of the Challenger, which did carry core memory.

loose nut
07-15-2009, 08:11 PM
http://www.nasa.gov/mission_pages/constellation/main/index.html

We're on our way :)

"One giant step backward for man, one giant leap to oblivion for NASA"

lazlo
07-15-2009, 09:21 PM
The space shuttle used core memory up until the last 10 years or so. In fact they may still have a few frames in place, but I don't think so. It was used for several reasons. It is immune to radiation.

You're talking about the 5 AP-101s -- the IBM System/360-based avionics computers the shuttle uses. The AP-101s were the core-memory-based computers from the B-52.
Radiation resistance was NASA's excuse, but I don't buy it.

I was working on electronic designs for the National Reconnaissance Office from the late 80's through the late 90's, and those devices were in highly parabolic orbits which were subject to way more radiation than the shuttle is. We used gallium arsenide, emitter-coupled logic, and twin-tub (shielded) circuit designs that were exceptionally rad-hard. So the technology was clearly available to NASA; they just chose not to use it.

NASA eventually upgraded the 5 shuttle control system computers to the AP-101S, which replaced the core memory with ordinary, off-the-shelf, bulk-silicon DRAMs. So if radiation resistance was the reason in the 80's, why wasn't it still an issue in 2000?

I have some EE friends who work at NASA Greenbelt, and they basically explain that it takes so long for NASA to get an electronic device man-certified and into their inventory database that it takes a veritable Act of God to get them to change.

Fast-forward 20 years, and the Space Shuttle's new glass cockpit and NASA's control computers for the International Space Station use Intel 80386SXs. That was Intel's current offering 24 years ago :(

Edit: Another strange NASA choice: the original (1996) Mars Sojourner used an Intel 80C85. A 3 MHz, 8-bit processor. :rolleyes:

With the Spirit and Opportunity rovers, NASA finally moved into the 21st century with the RAD6000 -- a 25 MHz rad-hard POWER-architecture (RISC) processor made by BAE. Very nice chip, and the current darling of most (all?) of the latest deep space probes.

Evan
07-15-2009, 10:07 PM
and those devices were in highly parabolic orbits

:) You mean elliptical. A parabola is an open curve.

lazlo
07-15-2009, 10:25 PM
:) You mean elliptical. A parabola is an open curve.

Yeah, one of those oval words :D A highly-eccentric Low Earth Orbit with a perigee of less than 100 miles. That brings the electronics in and out of the Van Allen belt and the magnetosphere every 90 minutes. Basically the worst-possible scenario for radiation.
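
For what it's worth, the "every 90 minutes" is just the period of any low orbit, per Kepler's third law. A quick check in C, with a 300 km circular orbit assumed purely for illustration:

/* Sanity check on the ~90 minute figure: Kepler's third law,
   T = 2*pi*sqrt(a^3/mu). The 300 km altitude is an assumption
   for illustration, not the actual orbit being discussed. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double MU = 3.986004418e14;   /* Earth's GM, m^3/s^2  */
    const double RE = 6371.0e3;         /* mean Earth radius, m */
    double a = RE + 300.0e3;            /* semi-major axis, m   */
    double T = 2.0 * PI * sqrt(a * a * a / MU);
    printf("period: %.1f minutes\n", T / 60.0);  /* ~90.4 minutes */
    return 0;
}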

Evan
07-15-2009, 11:47 PM
If you look at the CPUs NASA uses, they are all CMOS implementations. That's because of the much greater noise margin available with CMOS, plus the older chips have much larger feature sizes, which makes them less susceptible to upsets. That's why you don't see late-model CPUs in space. Of course CMOS draws less power too, but that isn't the main issue.

Another limitation is data storage. They can't use hard drives because they all, as far as I know, require a standard atmosphere to operate. So they are limited to tape and RAM or other similar methods.

Tuckerfan
07-16-2009, 12:36 AM
Wow, reading this article, all of a sudden I feel like I'm on a space mission. My house starts shaking, the windows rattle, and along comes a thunder-type noise with a lot of popping. No, it's not an earthquake....... I live close to the Cape and they just did a launch.
Reminds me of the time a friend of mine sent me an email about the space shuttle and there was this tremendous bang when I opened it. I thought something hit the house, went outside, and checked everything, but didn't find anything wrong. A couple of hours later, I was watching the local news, and they announced that the shuttle had to fly over our area because of weather conditions; the bang I heard was the sonic boom of the shuttle as it flew overhead.

Intel is starting to get worried about cosmic rays. (http://news.bbc.co.uk/1/hi/technology/7335322.stm) Seems the higher-density processors are vulnerable to the things, even here on Earth. I know that astronauts on the shuttle and ISS have taken up their personal laptops. Wonder what, if anything, they've noticed going wrong with them while they were up there?

Evan
07-16-2009, 02:26 AM
Wonder what, if anything, they've noticed going wrong with them while they were up there?


You mean something unusual? Such as an unstable operating system?

Wait. They probably are using Windows. How would you tell?

macona
07-16-2009, 03:44 AM
You don't get a blue screen of death. You get a green one. ;)

lazlo
07-16-2009, 10:34 AM
Intel is starting to get worried about cosmic rays. (http://news.bbc.co.uk/1/hi/technology/7335322.stm) Seems the higher-density processors are vulnerable to the things, even here on Earth.

Geez, BBC should be ashamed of that sensationalized article. When did BBC news become a tabloid?

The issue that all semiconductor vendors are faced with is that as feature sizes shrink, the gate oxide between the transistor gate and channel is only a couple of atoms thick. So, for example, Intel and IBM are preparing to launch their 32nm process at the end of the year, and the gate oxide on the transistors will be around 5 silicon atoms thick.

That's so thin that even a cosmic ray hit is enough to flip a bit -- what's called a "Single Event Upset." To make matters worse, many of the packaging materials are very mildly radioactive -- nothing that's even a remote health concern, but they're alpha emitters that are literally pressed against these ultra-tiny, ultra-thin transistors.

So any semiconductor company that's making products on high-tech processes -- Intel, IBM, Nvidia, ATI/AMD, ... -- has to design in features that avoid SEUs. Silicon-on-Insulator helps a lot, and so does margining the transistor threshold voltage. Also, most major RAMs are protected by ECC (error-correcting codes).
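
To show the idea behind ECC, here's a minimal Hamming(7,4) sketch in C that corrects any single flipped bit. Real DRAM ECC is wider (typically SECDED over 64-bit words), but the principle is the same:

/* Hamming(7,4) sketch: encodes 4 data bits into a 7-bit codeword
   (bit positions 1..7, parity bits at 1, 2 and 4) and corrects a
   single bit flip. Illustration only; real ECC codes are wider. */
#include <stdio.h>
#include <stdint.h>

static uint8_t encode(uint8_t d)
{
    uint8_t b[8] = {0};
    b[3] = (d >> 3) & 1; b[5] = (d >> 2) & 1;
    b[6] = (d >> 1) & 1; b[7] = d & 1;
    b[1] = b[3] ^ b[5] ^ b[7];   /* parity over positions 1,3,5,7 */
    b[2] = b[3] ^ b[6] ^ b[7];   /* parity over positions 2,3,6,7 */
    b[4] = b[5] ^ b[6] ^ b[7];   /* parity over positions 4,5,6,7 */
    uint8_t w = 0;
    for (int i = 1; i <= 7; i++) w |= b[i] << (7 - i);
    return w;
}

static uint8_t decode(uint8_t w)
{
    uint8_t b[8];
    for (int i = 1; i <= 7; i++) b[i] = (w >> (7 - i)) & 1;
    int syndrome = (b[4] ^ b[5] ^ b[6] ^ b[7]) << 2
                 | (b[2] ^ b[3] ^ b[6] ^ b[7]) << 1
                 | (b[1] ^ b[3] ^ b[5] ^ b[7]);
    if (syndrome) b[syndrome] ^= 1;  /* syndrome = position of bad bit */
    return b[3] << 3 | b[5] << 2 | b[6] << 1 | b[7];
}

int main(void)
{
    uint8_t word = 0xB;              /* 4-bit data: 1011 */
    uint8_t sent = encode(word);
    uint8_t hit  = sent ^ (1 << 4);  /* simulate an SEU: flip one bit */
    printf("sent %u, corrected back to %u\n", word, decode(hit));
    return 0;
}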

The funny thing is that in the semiconductor world, the reliability guys like the engineer featured in the BBC article are a cult. They've come to the microprocessor architects for the last 20 years with statistics showing the probability of single-event upsets, and with grandiose ideas and patents for making high-reliability CPUs, GPUs, etc. These are a lot of the same features used in rad-hard designs: spread the wire routing out, shield the circuits, redundant circuits, etc.

All of those ideas greatly increase the size of the chip, and have a performance impact too. So for 20 years we've largely been ignoring them, and so far, we haven't had complaints about cosmic rays crashing people's computers :)

Of course, the reliability guys will retort with "well, wait 'till the next process generation, then you'll have problems."

Eventually they'll be right ;)

Evan
07-16-2009, 10:53 AM
Cosmic rays almost never make it to ground level. We have the equivalent of 32 feet of water above us to shield us from space radiation. What does make it to ground level are secondary particles, in what are called showers, produced by the primary cosmic rays from the sun and the galaxy. This is really only a concern during highly active solar events. During a coronal mass ejection (CME) event that is aimed at Earth, the ground-level background radiation can increase by several times, and the exposure levels at the altitudes that passenger jets fly at can go high enough to be hazardous to the crews. It is possible for aircraft crews to receive a high enough dose to be grounded from further exposure for a period of time.

Our planet is mildly radioactive; it can't be escaped. It's a normal part of the environment, and it does have implications for the performance of sub-micron semiconductor parts.

But then, so does allowing sunshine to fall on the window of an EPROM, or even on the junction of any ordinary diode or transistor.

andy_b
07-16-2009, 10:33 PM
That's so thin that even a cosmic ray hit is enough to flip a bit -- what's called a "Single Event Upset." To make matters worse, many of the packaging materials are very mildly radioactive -- nothing that's even a remote health concern, but they're alpha emitters that are literally pressed against these ultra-tiny, ultra-thin transistors.


the radioactive mold compound in the packaging was the main cause of problems where i worked. i wasn't involved with the design team that worked on it, but as part of the new device qualification procedures they now test for it and use low-alpha mold compound. as an aside, there is no difference in the materials used for low-alpha mold compound except that the "low-alpha" version is only available from a single mine on the planet and i believe that mine is in one of the former Soviet states. the "normal-alpha" material is available from other mines in other locations. i am not sure which component in the mold compound is the alpha source.

andy b.

Evan
07-17-2009, 12:25 AM
It's alumina ceramic filler that is also used to make mil-spec ceramic chip packages. This is an old problem that showed up in the 80s. It not only affected DRAMs but put a limit on the data retention of EPROMs. The alumina is slightly contaminated with uranium and thorium.

I doubt the story of the mine though. There may be variations in the levels of isotopes found in bauxite, but nowhere will it be zero. Also, aluminum is the third most plentiful element on Earth after oxygen and silicon. A simple Gaussian distribution of the rate of contamination around the planet makes an extraordinary outlier highly improbable.

andy_b
07-18-2009, 03:12 PM
Evan,

you're correct, that's why it's "low-alpha" and not "no-alpha". :) from everything i have ever heard from any semiconductor packaging subcon the low-alpha component is only available from a single source. of course, that single mine may be the size of Australia, but i have no idea. i am referring to materials used in modern plastic packaging.

andy b.

lazlo
07-18-2009, 07:01 PM
It's alumina ceramic filler that is also used to make mil-spec ceramic chip packages. This is an old problem that showed up in the 80s.

You're talking about the big Intel debacle with their DRAMs in 1978. That was, indeed, soft errors induced by alpha-emitting ceramic packages.

The modern issue is alpha emitted by the solder, the epoxy underfill, and the thermal interface materials.