
RAM usage 50 years after Apollo


An image of Earth from the Lunar Reconnaissance Orbiter

I particularly like this image, which was taken by the Lunar Reconnaissance Orbiter from near the surface of the moon (there's an interesting blog post about how the image was made that's worth reading too). Seeing it always gives me a strong sense of perspective. Given my interest in space and computing, I've recently been noticing references to posts/Tweets that aim to draw attention to the vast increase in RAM used by some modern apps:

1969:

-what're you doing with that 2KB of RAM?

-sending people to the moon

2019:

-what're you doing with that 1.5GB of RAM?

-running Slack

One such post was actually a screenshot that contained the text quoted above. I can't help but feel this is missing a lot of perspective. Out of respect for those from half a century ago I decided NOT to include a screenshot of that post, since it would have been multiple kilobytes of data, more than that arbitrarily anchored number which was the RAM capacity of a computer used to send people to the moon. Perhaps ironically, Twitter is itself an interesting example of what happens when you constrain the amount of text people can send but not the amount of data: you get posts where people attach pictures that just contain text to up their word count, or, god forbid, heavy GIFs that only contain text. The irony of people complaining about the bloat in modern software by taking a screenshot of a post about how little RAM was available on old space systems is not lost on me (most of those screenshots were bigger than that magical 2KB).

I guess what the post is trying to play on is the thought that a chat program shouldn't use 1.5GB of RAM if we could send people to the moon with 2KB. In some senses the post is funny, even without ironies like the screenshots of text, but I think it vastly oversimplifies things and is likely to leave people with a worse understanding of how software gets made. The other thing is that this post of mine also won't send anyone to the moon, yet it too will weigh more than 2KB on page load, so please go check your web browser's dev tools and tell me how many people this page should have sent to the moon.

The thing that bothers me most is that there's no actionable insight in the tweet; it badly abuses psychological anchoring to stir the emotions, but that's about it. I guess, in some silver-lining sense, it stirred enough emotions in me to actually get around to writing something about this topic, since it's been on my mind for at least the last decade.

The sorts of engineering involved in the Apollo program, or in the creation of a commercially successful chat app such as Slack, are a direct result of thinking in depth about engineering and business problems. The lack of depth of thought that these tweets prime is disturbing because it is counterproductive to gaining real insight into what is actually involved in the success of these products. So I'd like to talk about this phenomenon in a lot more depth, to explain from a few angles how things have changed and what that has done to the economics of developing applications. I'd also like to talk about software bloat in a sensible way, especially because I do think there are a lot of issues these days due to bloat in software. This state of affairs is propped up by ever-decreasing computational costs, but we may be in for a rough patch if that trend stops, since it would require different approaches to making software and hardware that emphasize different skills. I've very deliberately tried to keep my personal website somewhat lean, but there are a lot of sites out there that for various reasons are incredibly bloated. One of my favorite talks about the increasing size of websites is titled The website obesity epidemic, which points out a rather nasty trend of page load sizes getting completely out of hand on some sites. In the years since that talk the situation hasn't improved.

If you use Firefox with an extension like uBlock Origin or NoScript you'll see that some sites pull in and execute enormous amounts of JavaScript from all sorts of sources (and really you should be using something like this, since letting every script on the internet execute in your browser brings a lot of problems). In many of these cases you could vastly reduce the memory and network requirements of the page while still delivering the content.

Software bloat vs irreducible complexity

Before we can really talk about bloat in software we need to have some rigorous definitions of what "bloat" really means.

I'm a big fan of a concept called Kolmogorov complexity, which is a formalization of the amount of complexity present in a system. Say you have a way of specifying systems with some form of description; the length of the shortest description that can fully specify a system is a good property with which to think about that system's complexity.

More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below).

In some sense there's a minimal-length implementation of any given feature or product. We may not be able to practically achieve it in many cases, but a minimum exists somewhere. Getting close to that minimum size may be optimal, but it also might not be.
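Kolmogorov complexity itself isn't computable in general, but a general-purpose compressor gives a crude upper bound on how short a description of a string can be. A minimal sketch, using Python's zlib purely as a stand-in for a fixed "description language":

```python
import random
import zlib

# Two byte strings of equal length: one highly regular, one from a PRNG.
regular = b"ab" * 5000
random.seed(42)
noisy = bytes(random.getrandbits(8) for _ in range(10000))

# The compressed size acts as a crude upper bound on the shortest
# description of each string (Kolmogorov complexity is uncomputable,
# so a real compressor only ever approximates it from above).
print(len(zlib.compress(regular, 9)))  # tiny: the pattern is easy to describe
print(len(zlib.compress(noisy, 9)))    # roughly 10,000: little structure to exploit
```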

One very powerful trend in computing is that the cost of storage, both disk and RAM, has fallen tremendously over time, and it has fallen faster than the cost of compute such as CPUs and GPUs. In many cases you end up in situations where you may wish to trade storage space for improved execution speed. This is what you are doing when you build lookup tables or use memoization: you explicitly trade a bigger memory footprint for a faster run time. It's a very pervasive technique in modern computing.
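As a small illustration of that trade-off, here's a sketch using Python's functools.lru_cache: the cache spends RAM so that repeated calls become cheap lookups instead of recomputation.

```python
from functools import lru_cache

# Trading memory for speed: previously computed results are kept in a cache,
# so repeated calls become lookups rather than recomputation.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(300))  # fast, because every intermediate result is stored in RAM
```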

But is this bloat?

Most people wouldn't call this bloat. It's seen as a trade-off that makes one thing better while another gets worse. Bloat tends to refer to situations where you are "paying a lot but not getting a lot in return". Many people I talk to have a mental model of bloat that corresponds to things which are far from the Pareto efficiency frontier. In other words, situations involving bloat are ones where you could improve certain aspects of the system without making others worse.

Adding features usually comes with downsides, including but not limited to the following:

  1. Increased complexity of the user interface
  2. Higher resource consumption and higher demands on system performance
  3. Increased complexity of the underlying system, which in turn leads to more bugs and security flaws

Indeed, a nice technique related to the last point is the practice of debloating apps, which formalizes removing features and functionality from systems when you can prove that nobody is using them. This is good from a security point of view since it reduces the attack surface, but I think the benefits go far beyond that. In any case it provides a good example of moving things closer to that Pareto efficiency frontier.

Computers: what are they really?

These days computers are ubiquitous. The cost of a fixed amount of computing capacity has fallen so tremendously over the years, in a way rarely seen in any industry. This fall in cost has had an enormous impact in many areas, especially in the way computing is taught. Because we aren't forced to care as much about computing power these days, we can defer discussing it, and some programmers, while creating positive economic output, end up never really covering these topics at all.

Once upon a time computing power was anything but ubiquitous.

If we go back to the era in which the Apollo program was starting, the early 1960s, it was a completely different landscape: computers were bulky, expensive and, generally speaking, drew a lot of power for the computational capacity they provided. Engineers simply needed to have a good understanding of how the computer was getting its results, not just what the results were. They needed a good mental model of how the computer worked internally, since many of the techniques, not to mention various tricks, for getting resource usage down were, comparatively speaking, far more worth the effort back then.

Many people's mental model of what a computer is centers on computation. This is perhaps an unfortunate consequence of the linguistic priming from the name "computer", which makes us think of calculation.

Here's what Richard Feynman had to say about this:

One of the miseries of life is that everybody names things a little bit wrong. And so it makes everything a little harder to understand in the world than it would be if it were named differently. A computer does not primarily compute in the sense of doing arithmetic. Strange. Although they call them computers that’s not what they primarily do. They primarily are filing systems.

This quote is from his talk about computers. One of the ironies about this is that the metaphor being made to filing systems is getting less accessible as time goes on because more people use computers to do what filing systems used to do.

This distinction between use cases where computers are primarily doing calculations vs retrieving information is not new. All the way back when Ada Lovelace was talking to Charles Babbage this conceptual divide was apparent, she wrote:

[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine...Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent

This might be one of the first recorded pieces of writing to directly address this concept. At some level the internal workings of a computer must involve calculations, because the ability to abstractly represent other entities requires some way of addressing them. But the complexity in accessing abstract information has a very specific feel to it; the calculations are not the sort you'd see in a differential equation or a fluid dynamics simulation. Specifically, there's no mathematical formula inherent to many of these problems. The calculations are more about indexing, and in many modern "high level programming languages" they are hidden from the user behind abstractions like variables. Take for example a struct in C++: it is effectively a way of arranging a blob of memory such that the member variables are stored at fixed offsets from the address at which the struct resides. Python achieves something similar with a different mechanism. But as a programmer in either of those languages, when you write foo.bar you don't have to think about any of the computations that make this happen 1. If, on the other hand, you try to do some statistics work in either of those languages, you do have to think directly about the computations.
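As a rough illustration (the Telemetry class and its attributes are made up for this example), here's what that hidden indexing looks like for a plain Python object:

```python
class Telemetry:
    def __init__(self):
        self.altitude = 12_500
        self.velocity = 1_083

foo = Telemetry()

# What looks like "just" foo.altitude is, for a plain Python object,
# roughly a hash-table lookup in the instance's attribute dictionary...
print(foo.altitude)
print(foo.__dict__["altitude"])   # the same lookup made explicit
print(getattr(foo, "altitude"))   # the same machinery, spelled out

# ...whereas a C++ struct member access compiles down to "base address plus
# fixed offset". Either way there is indexing work that the syntax hides.
```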

Some systems, such as many of those in the Apollo AGC, are primarily about executing complex mathematical computations, while others are more about the storage and retrieval of information. The divide centers on the following: if solving the problem inherently requires working through some mathematical formula, then you have a mathematical computation problem.

This divide still exists in modern computing and can be seen in the way some industry sub-sectors focus on only a portion of the overall capabilities of these systems. Areas such as scientific computing and simulation are mostly concerned with using computers for calculations; it may be possible to do a lot of work in those areas without ever using a database. On the other hand, a web app that stores data about a service is very much retrieval-driven and could be built with minimal complex computation. Of course many areas sit somewhere between these extremes.

Many years ago I remember reading this blog post from 2005 that talked about the rise of a specialization in software development that mostly involved using existing APIs and libraries rather than developing such APIs and libraries. It's interesting to see just how much has changed in the industry in 15 years. Take for example this part of the article, which describes how the candidate was writing code and had done a degree without covering any subjects that were heavy on data structures:

But the interview really bugged me, because the guy is obviously smart, and he's producing useful things for his company. How is it that he hadn't taken any of the fifteen-odd courses we associate with an undergrad CS degree?

The funny thing about re-reading this almost 15 years later is that overwhelmingly in the industry the people I encounter in programming roles are not strong with data structures and algorithms, and for the most part nobody cares. For example you can make a very commercially successful web app used by many users without having to really know all that much about data structures, a web framework plus an ORM that will let you store things in a database gets you far enough often enough these days for that skill set to be a professionally feasible career path as a web developer (at least for now).

It's an interesting side note to see how the industry has shifted on what's considered essential in education. You had classic posts about the perils of Java schools which stated that the main reason for getting people to program in C, and then covering material heavy on data structures and algorithms, was so that you could filter out the people who really weren't going to be that good at programming. With the shortfall in the supply of programmers that's come about in recent years, due to the massive expansion of the tech industry's demand, the economics of the situation have made it such that many employers don't care if they get a "blub programmer" who only knows more limited languages and paradigms, if the alternative is getting nobody to fill the position at all (or having to pay a lot more to match the market rate required to access the more advanced talent pool that does have those skills).

These market forces have been strong and have certainly placed pressure on the training industry to cover topics that provide a shorter-term window towards profitable application; this can be seen at a language level and also at a conceptual level. It's something I've seen firsthand lately when creating materials for and running Python training workshops: while some clients want to see in-depth implementation topics, many do not. Many clients are composing applications out of existing building blocks but are not creating those building blocks themselves. The success of libraries such as Pandas is a great example of this: people can solve so many of their data science business problems by using these libraries without needing a particularly deep understanding of how the actual computations are being performed on their machines (see the small sketch below).

Over a longer time window the engineers at many of these organizations would gain tremendously from knowing the more fundamental concepts the industry is built on top of. But on a shorter time frame it's far more obvious to someone's manager what the benefit is of learning the new API of the ERP software the company has bought, compared to, say, understanding computational complexity at a deeper level. The upsides of knowing that API are very easy to tally, while the downsides of not knowing the fundamentals of computational complexity are extremely hard to tally. As a result of this, and the multitude of other economic pressures that exist, many people are pushed into learning topics that can show quick ROI wins on a quarterly report, even at the expense of ones that could return a much larger ROI over the course of a decade.
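As a small, contrived illustration of that building-block style (the column names here are made up), a few lines of Pandas hide a lot of machinery:

```python
import pandas as pd

# A few lines of high-level "building block" code; the hashing, columnar
# storage and compiled loops underneath are entirely hidden from the user.
orders = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "revenue": [120.0, 80.0, 200.0, 50.0],
})
print(orders.groupby("region")["revenue"].sum())
```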

Due to the huge amount of abstraction presented by modern application frameworks, there's a definite rise in the percentage of professional software developers who never see the entirety of the stack they are working with. There's an increasingly huge number of jobs that don't require you to know anything about the nature of the computation occurring on the hardware. This is in very stark contrast to the Apollo systems, where knowing the hardware was absolutely imperative to the success of the mission.

Modern chat apps

A modern chat app is something that sends a message from one computer to another: a classic networked application. Many people take this part for granted, but we really shouldn't if we are making a comparison with the moon mission computing.

Specifically, if you want to reliably send a message from one computer to another across a network, there's a substantial amount of complexity involved in making sure messages are verified and acknowledged as correctly received. If you have to build your own protocol for this it's quite an annoying undertaking. Some game developers I know have to, since the latency introduced by a protocol like the Transmission Control Protocol (which, due to its current ubiquity, is usually referred to by its acronym TCP) will not meet their latency requirements; this means they have to do very clever things to reconcile the differences in game state that can occur on different clients. These days, however, most people just use TCP to deal with reliable transmission of data, but this protocol didn't exist at the time of the moon missions. Nor did the memory needed for the buffers required to support TCP at high data rates (not that you want to add too much buffering for TCP, even if the memory were free, thanks to an issue known as bufferbloat). The typical maximum transmission unit (MTU) for a packet on Ethernet is 1500 bytes, and for the stream to be reconstructed when packets arrive out of order (as per their sequence numbers) you must be able to store at least as many packets as are needed to put them back into their intended order.
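As a toy sketch of why ordered delivery needs buffering (this is nothing like real TCP, just the bookkeeping idea), consider reassembling numbered segments that arrive out of order:

```python
def reassemble(segments):
    """segments: iterable of (sequence_number, bytes) arriving in any order."""
    buffer = {}        # out-of-order segments are parked here, costing RAM
    expected = 0
    delivered = b""
    for seq, data in segments:
        buffer[seq] = data
        while expected in buffer:          # deliver any contiguous run we now have
            delivered += buffer.pop(expected)
            expected += 1
    return delivered

print(reassemble([(2, b"lo"), (0, b"he"), (1, b"l")]))  # b'hello'
```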

And that's not to mention any of the overheads: the MTU is 1500 bytes, but from the network's point of view you also have the metadata on the packets and the frames. You might have a variety of overheads in the data structures used at the application level as well; sending JSON-encoded text, for example, may not be the most efficient way, in terms of data transmitted, to convey a message from one machine to another. So much so that many protocols compress the data before sending it over the wire, which in turn means another layer of computational complexity, this time with the aim of using abundant compute resources to conserve scarce bandwidth. So even the most lean, perfectly implemented modern networked app would already have expended the entire memory allowance of the Apollo missions just on handling incoming network data.
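You can get a feel for this on your own machine: the kernel attaches send and receive buffers to every TCP socket, and even the defaults (which vary by operating system) are vastly larger than 2KB. A quick check in Python:

```python
import socket

# The kernel allocates send/receive buffers for every TCP socket; even the
# OS defaults dwarf the 2KB figure from the tweet.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"receive buffer: {rcvbuf} bytes, send buffer: {sndbuf} bytes")
s.close()
```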

But not all apps are made in the most minimal way possible. For example some chat apps go to pains to obfuscate their internal data, since prevention of interoperability and communications with 3rd party apps is unfortunately a business goal of some of these communications apps.

For a long time I've longed for the return of something like XMPP to mainstream chat, since I like the idea of being able to run my own chat client that suits my needs (group chat really does need to be handled properly though, and core XMPP doesn't have it). If someone else wants a client weighing in at 1.5GB of RAM speaking an open protocol, they can have one, and I can use something far more lightweight.

The main reason a chat client like Slack has such a heavy resource footprint is the Electron framework used to run it. What this framework effectively does is give you the ability to run a web app on your desktop by creating a web execution environment: it bundles in a whole web browser (Chromium) and Node.js so that your web code can run within a window that looks like a regular operating system window. Modern web browsers are incredibly complex and powerful platforms that support a huge, and growing, number of features. Rendering a web page is no longer just getting text from across the network and running it through an HTML parser; you need an entire execution environment (JS) along with a complex layout engine for dealing with styling (CSS) and a variety of other features (caches, certificates, websockets, WebAssembly, etc). When you package your app with Electron you pay for all of this even if your app doesn't explicitly need it, and given just how powerful a modern web browser is in 2019, that's a very, very heavy dependency to introduce.

Considering the absolutely huge amount of extra complexity that exists in the modern technical stacks we work with, I wonder what a reasonable amount of RAM usage for something like Slack would be. And I mean actual RAM usage: virtual memory didn't exist in the moon mission computers, so a good comparison should take this into account. I'd tend to agree that Electron-based apps seem to be unreasonably heavy 2, but what's a reasonable lower limit these days for a full-featured chat application? I feel that the project to make a native Windows 3.1 Slack client provides a far more interesting take on the lower bound of RAM usage that a modern chat client needs. Getting feature parity in a Windows 3.1 environment would be hard, however (think embedded GIFs and other graphics-heavy features), which again is one of the big temptations that something like Electron dangles in front of you. 3

But more importantly, while the comparison with the 2KB of RAM in some of the space applications seems interesting, I don't think it serves a deeper understanding well. The complexity of much of the space work is in the calculations and the hard real-time requirements; the difficulty of much of the Apollo mission is not storage driven, and it doesn't take many bits of data to represent extremely precise quantities like positions. A modern chat program, by contrast, has to deal with data-heavy things like internationalization/Unicode, which in version 12.1 is already 137,994 characters. That alone is far larger than the entire memory footprint of the moon mission computers, without even considering the image and font assets needed to render these characters as something other than ���'s. So instead of comparing to a space system from a completely different era, perhaps we should ask: "in the modern era, what would a good amount of RAM be for a reasonably featured chat app?"
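A quick back-of-the-envelope calculation makes the point, using the character count quoted above:

```python
# Rough arithmetic only: even one bit of bookkeeping per Unicode code point
# already needs more memory than the famous 2KB figure, before any glyph or
# font data is considered.
code_points = 137_994            # figure quoted above for Unicode 12.1
print(code_points / 8)           # ~17,250 bytes just for a presence bitmap
print(2 * 1024)                  # the 2KB anchor from the tweet
```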

The Apollo guidance computer

When you dive into the details of the Apollo Guidance Computer (AGC) 4 you'll see that the comparison with a modern chat app is rather absurd. You'll also see a whole bunch of really interesting engineering decisions that led to what was an amazing accomplishment with the technologies that were available to the engineers at the time.

I think this documentary made in the 1960s about the Apollo computer is great not only because of the informational content about the systems but also the window it gives into what the conditions were like at the time that the project was being worked on.

The Apollo Guidance Computer and the DSKY input module

I've been interested in how these teams achieved such great things for many years now. That people could get to the moon on 1960s-era technology is quite impressive when you consider how primitive computing technology was at the time. One of the most fascinating things about older computing systems is their simplicity: you have a good shot at understanding a large percentage of what's going on if you study them. Compare this to the massive rabbit hole of looking at all the moving parts involved in issuing a web request to a server from your browser; there's a huge amount of complexity there, as I encountered when writing about that process.

A huge thing that has changed since then is the rise of the transistor-based integrated circuit. This has really changed everything, and I don't think it's hyperbolic to say so: the era of low-cost computing power and low-cost application-specific hardware has had an enormous impact on modern life. Another large change is how much cheaper memory has become; the Apollo memory is an example of magnetic-core memory, whereas modern memory is built with a completely different manufacturing process.

Some of the constraints of the Apollo era that came from the harshness of operating in space are still very much relevant today. Nowadays you can get something like an ATmega328P chip for a couple of dollars, with a similar amount of SRAM (2KB) and program flash (32KB) as the Apollo guidance computer. For many microcontroller tasks this is a great chip, one I've used professionally in production environments, however I'd never want to rely on a chip like this in a space application, for a few reasons.

One reason I'd avoid that chip is the lack of radiation hardening. In space, if you don't deal with radiation properly, a single event transient could really badly ruin your day and a single event latchup might mean mission failure. Even in the factory settings (think various types of manufacturing plants) I dealt with in my microelectronics job in Canada, I remember adverse environmental conditions occasionally causing issues for the equipment, and space would be a whole lot rougher than that for a variety of reasons. If a chip got fried in Canada we'd often just shut down the machine and send out a replacement, an option that even on the ground could be quite difficult and expensive, let alone in space. As far as I understand it, the core memory and the ROM rope memory in the AGC have pretty good radiation resistance.

Another reason this chip wouldn't be appropriate is that it doesn't have a floating point instruction set. The AGC had to do things like compute the trajectories of mission aborts, among other things, and floating point arithmetic matters a lot in space guidance systems. I distinctly remember the annoyance of representing decimal and other measurement quantities on this particular Atmel instruction set on a project I worked on a few years back. Sure, you can get more powerful chips that have floating point instructions, but they mostly come with substantially more memory, which again shows how misleading the 2KB "anchor" for size is in those cases.

One thing Atmel chips do well is support for hardware interrupt handlers, which comes in very handy for letting a user input data into the system from a keyboard or something similar.
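For a flavour of what that looks like in a modern small system, here's a minimal MicroPython-style sketch (the pin number is board-specific and purely illustrative): the handler runs when the hardware signals a key press, rather than the CPU busy-polling for input.

```python
# Minimal MicroPython-style sketch; pin 2 is a hypothetical, board-specific choice.
from machine import Pin

def on_keypress(pin):
    # Runs in interrupt context: keep it short, hand real work to the main loop.
    print("key pressed on", pin)

button = Pin(2, Pin.IN, Pin.PULL_UP)
button.irq(trigger=Pin.IRQ_FALLING, handler=on_keypress)
```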

Annotated Apollo DSKY interface

The user interface of the moon mission computers is spartan to say the least: you enter one number that represents a noun and another that represents a verb. See this video containing a demo with explanations; it makes a lot more sense when you see it. In some sense I'd definitely like to have fewer distractions in some of the modern UIs I use! Again, courtesy of the massively complex stack of technologies that powers the web, you can run a simulation of this system right in your browser. You very quickly get a sense of how much training you'd need to use this interface, especially in the heat of the moment when correct use means the mission, and your life, literally depend on it.

Seeing some of this old technology is a good way to get a handle on just how far technological progress in hardware has come, and, with examples like the RAM usage, how far the economics of development has shifted towards the assumption of cheap and powerful hardware.

It's also interesting to notice that some of these old user interfaces were much better suited to the job than some modern UIs. In particular, the idea of pictures under glass has become pervasive. If I'm in the process of landing on the moon, or even doing something far more mundane like driving a car, I really want some tactile feedback when engaging with a system. A big part of why these interfaces have become commonplace is, I think, primarily economic: tablets and devices with touchscreen interfaces have become insanely cheap due to the economies of scale involved in manufacturing them over the last decade or so.

Once again, the machinery needed to display a single frame of an image at a modern computer resolution will expend the entire memory capacity of the Apollo missions. Take for example the fairly ubiquitous 1920x1080 display: a single uncompressed 24-bit frame at that resolution is over 6MB, thousands of times the 2KB figure from the tweet.
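The arithmetic is straightforward:

```python
# Rough arithmetic: one uncompressed 24-bit frame at a common desktop
# resolution versus the 2KB RAM figure from the tweet.
width, height, bytes_per_pixel = 1920, 1080, 3
frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)               # 6,220,800 bytes, about 6 MB
print(frame_bytes / (2 * 1024))  # roughly 3000x the 2KB anchor
```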

Making a better comparison

I feel the intention of the tweet is to get people to think that software is "too big" these days in terms of resource consumption. In some situations this is definitely the case, but the economics of making more performant software are rather more complex than they first seem.

Recently I was in Chicago running a Python workshop for a non-profit, and a few things really struck me during that trip. Attendees brought their own laptops, and even the least powerful of these was more powerful than the iconic original Cray-1 supercomputer, and also more powerful than the compute available to the moon missions, all while running on reduced-performance battery power no less. At the Chicago Python meetup I used all this abundance of computing power to present a talk that flipped the output of my script into upside-down text. That abundance allowed me to make the talk with fewer of my own hours spent on it. I could probably have made something better, but the constraint was mostly on my time and the talk was a one-off. The goal was to amuse my colleagues; creating great code was not the point, and the code is, as a result, most certainly not "Good Code".

So now we have all this computing power, and that opens up a whole lot of interesting decisions about what to do with it. It's a huge amount of power that we now have at our disposal: many homeless people now have "smart"phones which, on minimal battery power, have vastly more computing power than any system of the 60s. But what we have chosen to provide on those phone platforms has mostly been driven by commercial decisions, and as a result it's not always easy to use this power in settings not approved by those distributing the hardware 5. For example, the computing power of a phone would easily let you run a heavily bloated chat app, but a phone is definitely not a real-time system, and for situations that require hard real-time performance guarantees, where the consequence of too much jitter is that stuff blows up, you'd be crazy to try to run things on a current-era phone operating system.

Beyond these incidental matters of what the market has prioritized for the ways in which we can use our computing power, there's a far deeper and more lasting difference between these applications. The original comparison between the RAM usage of the Apollo Guidance Computer and Slack compares something that is primarily a computational application of computing with something that is primarily about information storage and retrieval.

Comparisons in the same era

A lot has happened between the 1960s and now, so a better comparison of the trends in computing resource heaviness would be between the AGC and a more modern space-based system. I remember Damien George telling me that ESA was funding development of MicroPython for space applications. Those systems will take a whole lot more than 2KB of RAM to run, but the flexibility this provides is well worth the extra computing complexity (more about that later).

Perhaps a nice comparison point would be to look at the hardware and software used in contemporary space equipment and see what sort of resource usage is involved there.

If you are working at SpaceX/NASA/ESA/etc and have some information about what sort of raw computing power your devices use, please let me know and I'd be more than happy to edit this post to include it. Even though comparing a contemporary chat application to a contemporary space platform is absurd, at least it would be a better comparison than one to a system from over 50 years ago.

Purpose built vs general hardware

Everything about the AGC system is built for a very specific, singular use case. The hardware was built specifically for this purpose.

The hardware that people are running Slack on runs many other things. This is especially obvious with the ubiquitous x86 architecture. Do we really need instructions like AVX to run a chat app?

The software stack that runs Slack runs many other things. Do we need to have a generalized execution environment like Electron to power a chat app?

The crucial change is that in the Apollo era, computing power and hardware availability were limitations that directly impacted feasibility. It was a matter of "can this be done at all?" rather than "can we do this cheaper or more efficiently?". To this day the dedicated hardware option remains on the table if you have complete make-or-break performance requirements; one recent example is the application-specific integrated circuits built for the hashing operations used in cryptocurrencies (which perform a lot better than running such hashing on the AGC equipment, which, no joke, people have actually attempted). These days, in the vast majority of commercial software and computing situations, we are dealing with trade-offs of a more economic nature rather than trade-offs that directly impact overall feasibility. Sure, there are still many situations where people are pushing the boundaries of what's physically possible, but Slack isn't doing that, so I think a good comparison would be with other systems that also aren't.

There's a few hypothetical situations that could be really enlightening here:

  1. If we ran the AGC inside the stack that powers Slack how much RAM would that take?
  2. If we made specialized chat hardware that only ran chat what would that look like?

The first "hypothetical" situation isn't even that hypothetical thanks to a large number of Apollo enthusiasts. A while back a group restored an old Apollo AGC and enthusiasts have gone to the effort of making hardware versions of the Apollo Guidance Computer using new hardware components hardware. Speaking directly to the situation of running Apollo on modern system, people have also made emulators for this system that will run on a commodity x86 processor. For example there's the NASSP project which is an add on for the Orbiter space flight simulator and provides simulation for the Apollo missions. And if we were interested in the more direct comparison with Electron we could use this browser based simulator as a reference point.

Thanks to many of the enthusiasts there's a variety of very interesting information about the different subsystems in the Apollo missions, take for example this page about the Apollo Abort Guidance system.

While I'm not sure of the exact footprint, it takes a hell of a lot more than 2KB to run the actual Apollo software on modern x86-based hardware. As such, I think the RAM number used in the tweet's comparison is not an entirely fair or direct one.

But what about looking at it the other way, what would a hardware heavy chat app look like? I think this is an interesting question and one that would be best suited to a future article (currently I only have a draft).

Conclusion

I am concerned that due to my schedule I won't have time to get back to this post for a while, so I'm wrapping it up; hopefully I can finish the follow-up hardware-based chat app article soon, and I'll come back and edit in more details if the opportunity arises. I'm entirely unapologetic about the length of this article, as I think some of the concepts here really do require quite a lot of detail to understand fully (more detail than is present here, even) and would be hurt by being artificially condensed.

The Apollo guidance computer and a chat app serve entirely different purposes and have very different requirements in terms of performance, memory and the instructions they need supported. As you can see, using RAM usage to compare two eras of software development is tenuous at best. Perhaps the clearest thing it shows is just how much cheaper computer memory has become over the years, and that generalized tools which take advantage of this abundance will let you get your program to market much faster. Estimates show that the computer software and hardware development for the Apollo project took 1400 total years of engineering time 6 before the first moon landing. By using a platform like Electron you will be leveraging far more than 1400 years of engineering time, but crucially your organization won't have to spend all of those years internally. Hopefully you don't waste more than 1400 years of your users' time on bloat and waste, a figure that seems high but is exceedingly easy to reach if you have a large number of users (see the rough numbers below). The fact that we can get things to market faster because we can externalize the costs to the users doesn't mean that we should. It also strongly suggests that an economic system which so strongly encourages these externalities is by no means ideal. Instead of taking aspects of the current economic system as an inviolable axiom, we might find other approaches to organizing our society, and to assigning the scarce resource of engineering expertise, that better suit everyone's needs.
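To see how easily that figure is reached, a rough back-of-the-envelope calculation (the user count is purely hypothetical):

```python
# Back-of-the-envelope: how little per-user waste it takes to burn the same
# 1400 engineering-years that the Apollo computing effort consumed.
engineer_years = 1400
hours = engineer_years * 365.25 * 24   # roughly 12.3 million hours
users = 10_000_000                     # hypothetical user base
print(hours / users)                   # ~1.2 hours wasted per user is enough
```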

How to best use your engineering time is an important question, and one that extends far beyond fleeting concerns like money and whether some company does well or badly. At one point a decision to spend engineering time on space sent people to the moon, and more recently it has led to rockets that can land and be reused. Engineering time has also been put to good use in making applications that let us communicate with people across the world. The extreme leverage that computing systems give to engineers' ability to change the world is obvious in both cases, and with such power, I think, comes substantial responsibility.

By understanding the true nature of these engineering decisions we can actually hope to make a better future for ourselves; by understanding things better we have a chance to make better decisions overall. While there's a quick emotional reward in engaging with the sort of outrage-seeking post that inspired this article, such an expenditure of energy doesn't really serve us well. The decisions made in engineering greatly affect our societies and the world, and the ability to make those decisions from a position of knowledge and consideration, rather than one of hasty, misinformed outrage, has never been more important than it is now in our highly leveraged, high-tech modern world.


  1. Since C++ aims to give you a lot of power over the memory layouts you do have the option to specify your own memory layouts and there's even metaprogramming available that touches on this, for example std::is_standard_layout ↩

  2. As of the writing of this article the official Electron web site has the following quotes on its front page: "It's easier than you think. If you can build a website, you can build a desktop app." and "The hard parts made easy". You can see that all of the focus is on the developer experience, making it as easy as possible for the developer to ship their application. That this leads to more resource-heavy applications is a trade-off that's being deliberately made to make life easier for the developer. Contrast this with other platforms and frameworks that have different goals, such as making the user experience fast and battery life long, and that achieve them by forcing developers to spend more time on development. ↩

  3. Given the constraints of the time, Windows 3.1 can't access anywhere near as much memory as a more modern operating system. The official Microsoft documentation on Windows 3.1 memory limits shows that at the very maximum 512MB of memory is accessible, which is a third of the reported footprint of a single instance of Slack running in Electron. Specifically, back then you'd need to use the HIMEM.SYS memory manager to access anything beyond the first megabyte of address space. Early versions of this had a 16MB maximum for memory allocations, and there's a good explanation in the docs about why the constraints of memory management in Windows 3.1 end up giving a maximum of 256MB of virtual memory to a process. It's interesting to see that the absolute maximum memory allocation from 25 years ago is not enough for many Electron apps, and that someone has ported one such app to that older system, showing that it is indeed possible to run such an app in a far leaner environment. ↩

  4. There's a huge amount of details about the Apollo Computers here: http://www.ibiblio.org/apollo/ ↩

  5. One of the most notable things about the Apollo computing effort is that the hardware was designed in such a way that it could be fully utilized by the mission. This was no accident: they wanted as much computing power as possible to be available for the mission. At one point in time this wouldn't have been noteworthy, but as time has gone on many vendors have very deliberately decided to restrict the ways in which their hardware and software can be used. This culture of anti-featuring has unfortunately become more common in recent times, with vendors deliberately crippling the usage of their hardware to try to extract economic rents. Perhaps one of the most egregious cases was TiVo, which used open source software extensively but put engineering effort into crippling the options for usage of those devices via hardware methods; this led to the creation of the AGPL licence and the term Tivoization to refer to such cases. I think this is highly relevant to any discussion about the real applicability of computing power: if you have a phone that theoretically has enough computing power to run a space mission, but it has been deliberately sabotaged such that in practice that power can't be utilized, then it really couldn't send people on a space mission regardless of what the spec sheet says. This is commonly an issue with GPU code that people wish to run on various platforms: if you wanted to get all the benefits of GPU compute (and some phones these days have surprisingly powerful GPUs) you still need some way of being able to run that code, and if you have no information about the APIs/ABIs because there are only binary firmware blobs, then you really aren't in a good spot to make use of any of that computing power. ↩

  6. This document about the history of the AGC project contains a whole bunch of interesting information about the project including the engineering-years effort figure. ↩

Published: Thu 19 December 2019
Modified: Sun 22 December 2019
By Janis Lesinskis
In Software-engineering
Tags: ram history apollo space space-hardware
