Securing open source requires better programming languages

Uri Valevski
Jan 18, 2022


https://github.com/Marak/colors.js/commit/074a0f8ed0c31c35d13d28632bd8a049ff136fb6#diff-92bbac9a308cd5fcf9db165841f2d90ce981baddcb2b1e26cfff170929af3bd1R18

The colors.js self-sabotage exploit has reminded me of some thoughts that might be worth sharing.

Just in case you haven’t heard about it, the tl;dr is as follows: the maintainer of a popular open source library wanted to exact revenge on companies using his code without paying, so he introduced an infinite loop into his code, causing many dependent programs to become unresponsive upon update.

There are actually two interesting questions here:

  1. Is open source scalable from a security standpoint?
  2. Is open source scalable from a financial standpoint? Or: how do we make sure it is financially worthwhile for individuals and companies to maintain popular open source libraries?

The second question is definitely interesting as well, but today I will focus on the first one: is open source software scalable from a security standpoint? Can we sleep soundly in our beds knowing that our machines are running someone else’s code? Can I leave my dependencies unversioned? Is that code I copied from Stack Overflow going to steal my password?

Although contrived-sounding, even the last example is not impossible.

Several companies have emerged to tackle the challenge of securing open source software, alongside GitHub itself, which in the last few years has started scanning code for vulnerabilities as a service.

While these efforts can definitely enhance confidence in foreign code, we need to acknowledge a fundamental design flaw that is hard to fix from the outside: the trust model we apply when using foreign software in our code.

When we download library code in Python and call a function in it, we essentially say: “I trust this piece of code to do anything, just as I trust the parent program calling it, which I wrote myself.” This is a binary trust model, all or nothing, and it holds for most popular programming languages (e.g. Python, JavaScript, Java, C++, …). Given how common it is to use code found online, it really makes you wonder how trusting we are, and how rare it is for someone to prove undeserving of that trust…

But this naivete can’t go on forever. What we would really like is to grant or withhold specific permissions when we include a new library in our code. This is similar to how we are asked for specific permissions when we install a new app on our phones, e.g. allow camera access but not contacts (you might remember it didn’t use to be like this when installing a program on a PC). Assuming the permission model itself is correct, it makes a hacker’s life much harder.

As noted, it is unfortunate that most programming languages cannot support such a model, leaving us a choice between trusting the author completely and inspecting the code, either ourselves or by machine, the latter often referred to as static analysis.

Static analysis can detect, for example, whether library code performs networking or accesses unexpected machine APIs. It is, however, a pretty hard problem to determine in general whether a piece of code “does what it’s supposed to do”. In its most general formulation it is undecidable (this is essentially Rice’s theorem).

If an explicit permission model were baked into the programming language itself, the compiler could prove that a given piece of code uses only the capabilities we expect and nothing else.

Interestingly, such a model almost exists in functional programming languages. Languages like Haskell revolve around pure functions: functions that can receive input from the external world only via their parameters and affect it only via their return value. “External world” here means the rest of the program, or even another computer (e.g. sending your passwords over the network). In such languages, affecting the outside world has to be declared explicitly, otherwise the code will not compile. If you download a library in Haskell, you can tell from a function’s type that, for example, it does not access the network when you don’t expect it to.
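
To make this concrete, here is a minimal Haskell sketch (the function names are mine, invented for illustration): the pure function’s type promises it can only transform its argument, while anything that talks to the outside world has to carry IO in its type.

```haskell
-- A pure function: its type guarantees it can only inspect its input and
-- produce an output. It cannot read files, open sockets, or rummage
-- through your environment variables behind your back.
colorize :: String -> String
colorize s = "\ESC[31m" ++ s ++ "\ESC[0m"   -- wrap the text in ANSI red

-- An effectful function: the IO in its type is an explicit declaration
-- that it may interact with the outside world (here, printing), and the
-- compiler will not let callers pretend otherwise.
printColored :: String -> IO ()
printColored = putStrLn . colorize

main :: IO ()
main = printColored "hello"
```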

Pure functions are a great building block from a security standpoint. Effect management, that is, explicitly declaring when a function is impure and what exactly makes it so, means the compiler can enforce these expectations. All the developer needs to do is look at the function type; the code itself can be left unchecked.
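
The blanket IO type can be refined further. Below is a rough sketch of capability-style effect tracking in Haskell (sometimes called a “tagless final” encoding); the Logs and Network classes are invented here for illustration, not a standard library. The point is that a library function’s type lists exactly the abilities it needs, and the application grants them explicitly.

```haskell
-- Each capability a library might need is a separate type class.
class Monad m => Logs m where
  logMsg :: String -> m ()

class Monad m => Network m where
  httpGet :: String -> m String

-- This library function's type says: it may log, but it has no way to
-- touch the network, because no Network constraint appears in its type.
greet :: Logs m => String -> m ()
greet name = logMsg ("hello, " ++ name)

-- The application decides how much to trust the library by choosing
-- which capabilities to provide. Here we grant only logging, in IO.
instance Logs IO where
  logMsg = putStrLn

main :: IO ()
main = greet "world"
```

A maliciously updated version of greet that tried to call httpGet would not compile without a Network constraint showing up in its type, which is exactly the kind of red flag a reviewer can spot without reading the function body.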

This addresses the majority of code exploits, but not all of them. Interestingly, in the colors.js story the foreign code did not perform any “side effects”; it simply entered an infinite loop, causing a denial of service. So how can we solve for that?

Taking inspiration from effect management in functional programming languages, we could treat computation time itself as an effect that requires explicit declaration (making the computer warm certainly has an effect, as humanity has recently learned…). If this were possible, we could be certain that the foreign code we’re using will terminate, will do so quickly enough, and won’t do any funny stuff in the middle.

A naive way to go about it is to enforce that a piece of code doesn’t use more than X CPU cycles. While this might work, it only works at runtime, which may be a bit too late to prevent a denial-of-service attack.
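
As a toy illustration of that runtime approach, here is a sketch in Haskell (the Fuel monad and tick are invented for this example): every declared step burns one unit of fuel, and the computation is cut off once the budget is exhausted. Note that the runaway loop is only caught while it is already running.

```haskell
-- A toy "fuel" monad: a computation is given a step budget and is
-- aborted (Nothing) once the budget runs out.
newtype Fuel a = Fuel { runFuel :: Int -> Maybe (a, Int) }

instance Functor Fuel where
  fmap f (Fuel g) = Fuel $ \n -> fmap (\(a, n') -> (f a, n')) (g n)

instance Applicative Fuel where
  pure x = Fuel $ \n -> Just (x, n)
  Fuel f <*> Fuel g = Fuel $ \n -> case f n of
    Nothing      -> Nothing
    Just (h, n') -> fmap (\(a, n'') -> (h a, n'')) (g n')

instance Monad Fuel where
  Fuel g >>= k = Fuel $ \n -> case g n of
    Nothing      -> Nothing
    Just (a, n') -> runFuel (k a) n'

-- Spend one unit of fuel; fail if the budget is gone.
tick :: Fuel ()
tick = Fuel $ \n -> if n <= 0 then Nothing else Just ((), n - 1)

-- A loop that would never terminate on its own, cut off by the budget.
spin :: Fuel ()
spin = tick >> spin

main :: IO ()
main = print (runFuel spin 1000000)   -- Nothing: the budget ran out
```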

A more formal approach would be to prove that a given piece of code always terminates. The Agda language, for example, uses a termination checker called foetus (hehe) for this purpose.
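
Haskell performs no such check, but the following sketch illustrates the kind of structural argument a foetus-style termination checker looks for: the first function recurses on a structurally smaller piece of its input and would be accepted, while the second recurses on a value that is merely numerically smaller and would be rejected out of the box, even though it does terminate for non-negative inputs.

```haskell
-- Accepted by a structural termination checker: the recursive call is on
-- the tail of the list, which is a strict sub-structure of the input.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- Rejected by such a checker: n `div` 2 is not a structural piece of n,
-- so termination is not evident from the shape of the recursion alone,
-- even though this function terminates for every non-negative input.
halvings :: Int -> Int
halvings 0 = 0
halvings n = 1 + halvings (n `div` 2)

main :: IO ()
main = print (len "abc", halvings 1024)   -- (3, 11)
```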

But limiting our programs this way might cost too much expressivity. So maybe we don’t have to banish possibly non-terminating code in such an absolute manner. Instead, we could distinguish code that might not terminate from code that provably does, the latter written in a smaller subset of the language that makes its termination provable. A potentially infinite amount of processing would then be treated as a side effect, requiring explicit declaration, and trust that has to be granted explicitly to external code.
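
One way such a split could look, sketched in Haskell (Partial, collatz and runFor are invented names, not an existing language feature): total code stays in ordinary pure functions, while anything built from general recursion must return a Partial value, which callers have to acknowledge and bound explicitly.

```haskell
-- A sketch of treating "might not terminate" as a declared effect,
-- using a delay-monad-style type.
data Partial a = Done a | Later (Partial a)

instance Functor Partial where
  fmap f (Done a)  = Done (f a)
  fmap f (Later p) = Later (fmap f p)

instance Applicative Partial where
  pure = Done
  Done f  <*> p = fmap f p
  Later f <*> p = Later (f <*> p)

instance Monad Partial where
  Done a  >>= k = k a
  Later p >>= k = Later (p >>= k)

-- General recursion is only available inside Partial: the type is an
-- explicit admission that this computation may spin forever.
collatz :: Integer -> Partial Integer
collatz = go 0
  where
    go steps 1 = Done steps
    go steps m = Later (go (steps + 1) next)
      where next = if even m then m `div` 2 else 3 * m + 1

-- Partiality composes like any other effect.
both :: Partial Integer
both = do
  a <- collatz 27   -- 111 steps to reach 1
  b <- collatz 7    -- 16 steps to reach 1
  pure (a + b)

-- Callers must decide how long they are willing to wait.
runFor :: Int -> Partial a -> Maybe a
runFor _ (Done a)  = Just a
runFor 0 (Later _) = Nothing
runFor k (Later p) = runFor (k - 1) p

main :: IO ()
main = print (runFor 1000 both)   -- Just 127
```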

We might also be able to extend this notion of termination checking beyond a binary answer, i.e. guarantee certain limits on runtime. If, by using only a subset of the language, the compiler could make automatic proofs about asymptotic runtime, that could make benchmarking obsolete, at least in part. You’d be able to say “library X is linear in output length whereas library Y is quadratic, therefore use library X.” A developer maliciously changing the complexity would cause the code not to compile, meaning it would never reach production.

The more fragmented (or decentralized) software development becomes, the more important the problem of verifying runtime and side effects becomes for computation theorists and language designers. Decentralization and reuse have enormous benefits, and giving up on them is out of the question. Instead, we need to treat software security as a design constraint for the next generation of languages.

Much like blockchain solved decentralization for currency, formal verification, through effect declarations (including runtime), could enable zero-trust models in open source. This would further the trend of reuse, outsourcing, and decentralization that has fueled the software industry over the last decades.


Written by Uri Valevski

Formerly: Co-founder/CTO of hyro.ai, Googler
