If you’ve ever wondered what programming is like, you might find this essay interesting.

I know at least some people view programming as a bit grey: too mathematical, or uncreative. It is sometimes these things, but it’s also a bunch of other things.

If you only have time for a short video, this is the best explanation I’ve seen yet: http://video.disney.com.au/fantasia-sorcerers-apprentice

I’ve always liked it, but it’s only since I’ve become an experienced programmer that I’ve started to get emotional about it. It’s a very good display of what it feels like to be a programmer: the hope, the creativity, the sense of power, the hubris and the inevitable downfall.

Want to hear more? Here are 7 motifs of a programmer’s life.

Poor Mickey.

Programmers jest that a working program is simply one with an even number of bugs. They say this because it is practically impossible to write code without writing bugs. Writing bugs is not something that happens occasionally, or only when you’re not a good programmer; rather, it is an inherent property of writing lines of code. If you’ve written 50 lines of code and haven’t run them yet, you have a bug somewhere in there. See how these kids find out the hard way:

https://www.youtube.com/watch?v=cDA3_5982h8
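For a flavour of how easily this happens, here is a small made-up Python example. The function is meant to sum the numbers from 1 to n, it reads perfectly fine at a glance, and it is wrong:

```python
def sum_up_to(n):
    """Sum the numbers from 1 to n (or so we hope)."""
    total = 0
    for i in range(1, n):  # bug: range(1, n) stops at n - 1, not n
        total += i
    return total

print(sum_up_to(10))  # prints 45; we expected 55
```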

Although the word “bug” was used to describe engineering errors even before computers existed, I like this realization by Maurice Wilkes:

“By June 1949 people had begun to realize that it was not so easy to get programs right as at one time appeared. I well remember when this realization first came on me with full force.

The EDSAC was on the top floor of the building and the tape-punching and editing equipment one floor below. […] It was on one of my journeys between the EDSAC room and the punching equipment that “hesitating at the angles of stairs” the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.”

It may or may not surprise you that most of the time spent programming is not spent writing the code, but finding the errors in it. It’s kinda like having a genie who is a real asshole. If you ask for a car, he drops one on your head. You have to be very specific with him to get what you want.

Mankind’s reach exceeds its grasp, but at times we do not grasp that which we reach. Or program, for that matter. Today, a program is run many times over, sometimes by thousands of computers at once. Given that errors are inevitable, catastrophe seems only a matter of time.

This is why testing is also an inevitable part of the art of programming. A test is a piece of code that runs the original code and checks that it yields what is expected. Even a single, contrived test catches most bugs. This is because a working piece of code does something very specific: although it represents just one particular behaviour, it can break in many different ways, and nearly every one of those breaks is detrimental enough to make even a simple test fail.
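To make this concrete, here is a minimal sketch of what a test can look like, using Python’s built-in unittest module. The function under test, total_price, is made up for illustration:

```python
import unittest

def total_price(prices, discount=0.0):
    """Sum a list of prices and apply a fractional discount."""
    return sum(prices) * (1.0 - discount)

class TotalPriceTest(unittest.TestCase):
    def test_sums_prices_and_applies_discount(self):
        # Run the original code and check it yields what we expect.
        self.assertAlmostEqual(total_price([10.0, 20.0], discount=0.1), 27.0)

    def test_empty_purchase_costs_nothing(self):
        self.assertEqual(total_price([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```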

But who watches the watchmen? Or tests the tests, for that matter… Do we need to test the tests, in an endless loop? Luckily, no: the original code does that job. The code and its tests basically test each other; if either one is wrong, they disagree, and the disagreement is what gets noticed.

I will mention that there is a very famous and sad proof, the halting problem, which shows it’s impossible to build a program that looks at another program and knows for sure what it will do. That’s why tests need to actually run the code to see what happens.
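Here is a rough Python sketch of the classic argument, assuming a hypothetical halts(program, data) that could always tell us whether program(data) eventually finishes. No such function can actually exist, and the sketch shows why:

```python
def halts(program, data):
    # Hypothetical oracle: pretend this could decide whether program(data)
    # ever finishes. The argument below shows it cannot be written.
    raise NotImplementedError

def paradox(program):
    if halts(program, program):
        while True:      # if the oracle says "it finishes", loop forever
            pass
    else:
        return           # if the oracle says "it loops forever", finish

# Now ask: does paradox(paradox) halt? Whatever answer halts() gives,
# paradox does the opposite, so no correct halts() can exist.
```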

It’s much easier to write code than to read it, even when you’re both the author and the reader.

Alongside tests, programmers do code reviews. Code reviews are delightful ceremonies in which Alice sends Bob her code. Bob goes over it and writes some comments, hopefully catching some bugs she missed in the process. Bob is Alice’s peer, not her manager, and much like the code and its tests, the two of them review each other: each sends the other their code.

Beyond catching bugs, a main goal of code reviews is to keep code readable. Readability is the degree to which a new programmer can read the code, understand what it is doing, and make modifications without going through the seven gates of hell.

When I took an introductory computer science course, our lecturer asked us the following question: Imagine you have an elaborate system running on thousands of computers. You are in a position to make an optimization to the system’s code that would make it run in 95% of the original run time; in other words, you can make it 5% faster. This optimization requires adding some code and hindering readability. What would you decide? Since the goal of programming is to write efficient solutions, I volunteered my opinion: “Of course, let’s optimize and worry about the quality of the code later. We’d be sparing many CPU cycles, so why shouldn’t we?” To which my professor replied: “Nope.” I don’t remember his exact words, but I guess he said something like this: “You are not optimizing only for computers. You are also optimizing for your fellow programmers, and for yourself a few days from now. Paying for programmers’ time is considerably more expensive than just buying more CPU. So the right answer is that readability is immensely important, and the benefit of an optimization must be very large before you consider sacrificing it.”
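Here is a made-up Python illustration of the kind of trade-off he meant. Both functions are correct; the second shaves a few cycles at the cost of readability:

```python
def is_power_of_two(n):
    """Readable: repeatedly divide by two and check the remainder."""
    if n < 1:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

def is_power_of_two_fast(n):
    """'Optimized': a power of two has exactly one bit set, so clearing
    its lowest set bit (n & (n - 1)) must leave zero."""
    return n >= 1 and n & (n - 1) == 0
```

The second version is marginally faster, but the next programmer to touch it (possibly you) will have to re-derive the bit trick before daring to change anything.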

So what is a big benefit? What is a small one? We’ll come back to that later. But first I want to articulate an idea that is hidden in the last couple of paragraphs: that programming is the most collaborative work known to mankind. This is staggeringly different from how I imagine most people picture programming.

At least in the case of companies, you can also forget about the stereotype of the non-communicative genius hiding behind their keyboard. The ideal company programmer is verbal and cordial. A chatty ant, basically. Because you can’t build anything non-trivial by yourself.

Well, perhaps you could. But only because every line of code ever written relies on countless others, written by countless anonymous individuals. These lines of code power complicated systems that interact in complicated ways. It is all so complex that no single human can hold the entire picture in their head, just as an ant colony is an incredibly sophisticated and adaptive system powered by many silly drones. We are not drones, but we are definitely ill-equipped to see the whole system.

Try comparing an average program to the analysis of Aristotle’s writing, for example. Even if you bring into the picture 100 interpreters, translators and philosophers and combine all their works, examining how they reference each other, it will come nowhere near the complexity, or the number of people, involved in the silliest of apps lying dormant on your phone. Because in that app, say it runs on your Android device, a single programmer has written code in Java, a programming language created by many people, using thousands of libraries written by thousands of people, all of which rely on each other just to participate in that silly program. The app’s code runs against the Android operating system, which many other people wrote in a whole different programming language, itself created by many other people, with its own myriad of libraries. And all this code translates into primitive machine code that is run by the processor, a piece of hardware designed by many other people at various companies, manufactured in many different places around the world, relying on countless scientific discoveries in physics and engineering. You get the point.

The majority of these people were likely born between 1950 and 2000. It’s all happening now; it’s not just a matter of relying on past discoveries. And this ant colony is a very strange phenomenon in the apparent age of ego, an age that celebrates the famous, the beautiful, the funny, the artists, the singular persons, the storytellers. I’m still not sure who the joke is on.

We’re going to go back to the subject of bugs for a moment 😓. We’ve already mentioned that bugs are inevitable. I will add that they appear in direct relation to the length of the code; in other words, a programmer makes a mistake every n lines. So beyond testing and code reviews, there is another technique for avoiding bugs: writing less code. But with less code, how can you achieve more functionality? In reconciling these two values lies much of the art of programming. That is why it is the greatest crime for a programmer to repeat the same logic instead of reusing it. In other words, if two pieces of code look too much alike, they need to be unified, or as we call it, refactored. It may sound weird to someone who has never programmed, but whenever a piece of functionality is duplicated n times, a programmer has to make every change n times as well, and on one of those times they will forget or make a mistake. Now I will confess that I’ve oversimplified a bit: it’s not just the sheer number of lines of code, it’s also their complexity, how they depend on each other. The more dependencies, the harder it is to understand and reason about your code, and the easier it becomes to make mistakes. So a good programmer is an expert at shifting logic around to make things as simple as possible, as isolated as possible and as short as possible, all while teaching the code new tricks.
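Here is a made-up before-and-after sketch of what such a refactor looks like, assuming a small shop’s pricing code:

```python
# Before: the same discount rule is written out in two places. A change to
# the rule has to be made twice, and one day someone will forget one of them.
def receipt_total(prices):
    total = sum(prices)
    if total > 100:
        total *= 0.9  # 10% discount on large orders
    return total

def invoice_total(prices, tax_rate):
    total = sum(prices)
    if total > 100:
        total *= 0.9  # the same rule, duplicated
    return total * (1 + tax_rate)

# After: the shared rule lives in exactly one place, and both callers reuse it.
def discounted_total(prices):
    total = sum(prices)
    return total * 0.9 if total > 100 else total

def receipt_total_refactored(prices):
    return discounted_total(prices)

def invoice_total_refactored(prices, tax_rate):
    return discounted_total(prices) * (1 + tax_rate)
```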

Confusingly, we also use the word complexity to describe the run time of a program: in other words, when it will end, and whether we’re still going to be alive when it does. This is because it’s really easy to write something correct that will solve the problem, just in more time than is left until the universe ends. So a program must not only be correct, it must also be feasible, or efficient. So how do we calculate a program’s efficiency, or complexity? Well, we don’t use calculators; we estimate it intelligently, using something called asymptotic analysis. That’s a fancy name for estimating how many operations the CPU will perform given an input of size n. We don’t care about constants, only about big differences, and we assume n is a very large number. So 5*n is better than 9*n, but the difference between them is negligible compared to n², which is in turn negligible compared to 2^n. The first two examples we call linear (because the run time grows linearly with the input size), the third is quadratic and the last is exponential. If we’ve written a program with exponential complexity, it will not end before the universe does.
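Here are three small Python functions, one per growth rate mentioned above; the examples themselves are made up:

```python
def contains(items, target):
    # Linear, about n operations: look at each element once.
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    # Quadratic, about n^2 operations: compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def all_subsets(items):
    # Exponential, 2^n results: every element is either in or out of a subset.
    if not items:
        return [[]]
    rest = all_subsets(items[1:])
    return rest + [[items[0]] + subset for subset in rest]
```

With 50 items, contains does about 50 steps, has_duplicate about 1,250, and all_subsets would have to produce roughly 10^15 subsets, which is where the universe-ending run times come from.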

It so happens that there is a family of problems in programming that only have exponential solutions (as far as we know today), and if we could solve one of them efficiently we could solve all the others, as they are equivalent. If we had a non-exponential solution to these problems we could break all codes and have artificial intelligence. So maybe it’s good that we haven’t found one yet.
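One member of that family is subset sum: given a list of numbers, is there a subset that adds up exactly to a target? A brute-force Python sketch simply tries every subset, all 2^n of them:

```python
from itertools import combinations

def subset_sum(numbers, target):
    # Try every subset, from the smallest to the largest.
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # e.g. (4, 5)
```

This is fine for a handful of numbers and hopeless for a few hundred of them; as far as anyone knows, no fundamentally faster general method exists.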
