modified: Saturday 7 January 2023
Time safety is more important than memory safety
This is an edited excerpt from the EEVblog forum topic The Rust Megathread
Summary: “Time safety” is more important than “memory safety” and its kin. We live in a world where time destroys more things than software bugs ever will. You should choose a language based on the biggest threats to the (time-integrated) usefulness of your projects, not on the threats in your local region.
A pet peeve of mine is the claim that languages like C are a bad choice for new projects and that languages like Go or Rust are better options. This is only true if you take a subset of the lessons history has taught us and disregard the rest. I see that as irresponsible.
I think we are living in the information dark ages, where a lot of our useful and amazing creations of code are simply not going to be around ten or twenty years from now. They will be broken (or passively deleted, another problematic topic).
We seem to have come to accept that a program written today won’t be useful or usable ten years from now, and I think that is a very bad philosophy. Compare it with the many machines (cars, factories), artworks (music, books, film), architecture and systems (society, education, etc.) that show how good creations can last decades or more. Most don’t, but they at least have a chance of doing it.
A similar argument can be made about a lot of modern car designs and DRM in digital works: we dislike the idea of things having intentional or preventable use-by dates.
I see choosing “new” languages (or frameworks) as a very risky proposition for the life of a project. These languages are less likely to be around in ten or twenty years’ time (newer and better Rusts and Gos are likely to form in the years ahead), and in the meantime they require you to constantly expend effort to prevent your projects from being turned into a brick by language/environment/library changes. In contrast, my old C projects from 5-8 years ago still compile and run, with one particular example needing only a small modification. It’s not perfect, but C is much more likely to be the “safe” option if time * usefulness of your project is a goal.
Should you care about code and software longevity if you are only making personal projects?
I still say yes. I want to be able to show and demonstrate projects I made “when I was young” to my kids, just like I will be able to show them old pictures of family and physical projects I have made. I want to be able to show them the lessons of my life, not show them that the only lesson is about “oh yeah we lost all of that”.
It’s always useful to be able to use old code from old projects too. Ideally perhaps even forever, but in our current world that’s a long shot. Even code snippets that survive a few years are very useful. Contrast this with the many environment- or framework-specific code examples and tutorials from the web that you have tried to use, only to find they no longer work — you do not want your code becoming that.
Should we persue these new languages?
Yes, experimentation and development drive our society. But having ordinary people use them, IMHO, provides minimal benefit compared to the problems it causes.
This is the same reason that we don’t all drive prototype cars and wear prototype clothes. In the long term it would cause more harm than good.
As such: I don’t think it’s responsible to ask ordinary programmers to start their projects in new languages.
Thanks for this, I agree. Several years ago I started migrating many of my old code projects to C for the reasons you mentioned.
I forget where the quote is from, but there's one along the lines of, "C is rarely the best language for any given project, but it is almost always the 2nd best."
I would add to that: "Working out which language is 1st best may require a time machine" :)
I like it. :)
Being afraid of your software no longer being usable is misguided, for two reasons. First, you can always use an old version of the compiler plus a virtual machine to recompile and run your software. Second, for evolution to take place, the less fit languages need to die off. Artificially prolonging the life of a language like C because it's "stable" is hindering everybody.
ggcity looks very much like Marble Madness ...
I think as programmers learn more languages (and I don't mean syntax, a C-programmer can write C in any language...) they tend to become much better at thinking about solving problems.
There are areas where Go and Rust kick the pants off C, and in those areas the constraints the language provides make it much less likely that a large swath of security issues comes along for the ride. Also, Go 1.0 code still compiles in the latest release of Go. Similar with Java.
Finally, I was once supporting a system that required a specific sub-release (and RPM even) of gcc to be installed. It turned out that RedHat added a small patch to a single RPM and a grad student had written code that exploited the behavior of that compiler. Just 'cause it's C doesn't mean it will stand the test of time.
imoverclocked: that's a good last point. There's a lot of C code that no longer works.
Similar things can however be said about Java: I have obscure old java code that no longer works.
No language or environment is perfect, but I think C avoids some of the extra temporal pitfalls that new and experimental languages bring.
Good to hear about Go.
count: you must be looking at image filenames. Indeed it's a 4-corner-per-tile heightmap world, similar to marble madness.
I do not think the logic holds in this argument. If we are discounting programming in "new" (10 year old) languages because they may disappear, we should apply the same concept of temporality to our own projects. Recognize that our code will rot faster than the language (in all likelihood) and put an EOL plan in place for retiring the project.
I think it is much more justifiable to use whichever language provides the most engineering advantages at the time of writing, rather than writing everything in C. Machines evolve fairly quickly, which dates the software. In ten more years, our C applications that do not take advantage of 512 CPU cores, 1 TB of RAM, and the TPU will be as interesting as a DOS application is today: niche.
Good article! I agree that code should be preserved, but I also think that Rust and Go have already secured their place in the future. What I mean is that so many programs have been created in these languages that, for better or for worse, I think the languages will live on, especially Rust. Microsoft is using Rust and Mozilla has already used it heavily in Firefox.
Please stop the rust and go fanboyism.
I have thought about a similar problem with older vs newer frameworks: often, people do not even consider older, proven frameworks because they want to use the latest, shiny ones that solve x and y problems the old one had. In the meantime, they completely ignore the new problems the shiny frameworks introduce, problems often not even fully understood when they are chosen for new projects. In the end, they will have many issues that the old frameworks either did not have, or that were at least well understood and discussed by the people who used them.
I also had a similar thought about the C++ vs C# argument that I have heard multiple times: people often claim that because C# has a built-in garbage collector, it does not have the same issues with memory management that C++ has. Even setting aside the RAII pattern, which solves the problem much better (IMO) than C#'s garbage collector, there is something they are not willing to see here: as someone who has worked for years with both languages, I had far more problems because of the unpredictability of the destruction of objects in C#/.NET (and I have heard the same from multiple teams/people, so it is not just me). .NET must do very complicated things just to solve a problem that is ultimately only a problem in books about C#; again, RAII solves most of the issues much better than garbage collectors do (apart from circular dependencies, which can be dealt with on a case-by-case basis, as they do not come up nearly as often as .NET people claim).
Anyway, maybe I am just getting older, but I often think about how much in IT is actual progress and how much is just marketing BS that is done for the sake of generating more income for corporations and investors.
You misspelled ``pursue''. Your title for this article creates something of a false dichotomy, since one can easily have memory safety and this ``time safety''; as an aside, this title reads as if referring to preventing timing attacks, although I'd figure you wouldn't care to change it now.
The C language was poor at inception and is still poor today. You likely don't write properly standard C, so how do you know if your code depends on a detail of the current machine, compiler, or environment to run; it's certainly easier to write such a C program than one properly portable.
You fail to recognize this ``time safety'' is perhaps best provided by abstraction. A suitably abstract language makes it easy to write portable programs which will always behave the same, with the underlying machinery more easily transferred to new environments, etc.; I agree that these newer languages are poor, in any case.
Rather than focus on experimentation, focus on archaeology. Ada and Common Lisp are old languages which are also suitable for this purpose; Ada is updated with a new standard about once a decade, but the older standards are still in heavy use; Common Lisp hasn't changed at all since standardization. If you use Ada or Common Lisp you get memory safety, moreso with the Lisp, and this ``time safety''.
The larger code bases grow, the more convinced I am that we're missing the true driving criterion for new languages.
A language should _also_ be digestible by machines to assist us in proving properties of our code, and hence assist us in refactoring and improving and debugging our code.
I think the "worse is better" idea applies here (https://en.m.wikipedia.org/wiki/Worse_is_better). Actually, in that article, it specifically says, "Gabriel argued that early Unix and C, developed by Bell Labs, are examples of this design approach." So essentially, in any given situation, C might be worse than other languages but still be the better choice in the long run. Of course it's extremely natural to focus on the short-term "best" choice (i.e. to look at lists of features of two languages and choose the best), but sometimes it takes a lot of experience to know that the worse choice is better. That kind of experience isn't always easy to put into words, so naturally it can be hard to persuade others why you're making that choice.
...and maybe none of this applies to the C vs. Go or Rust case. Maybe Go and Rust are 100% better than C, even in the long run? I don't know. Just some thoughts...
RE Rust backwards compatibility: https://utcc.utoronto.ca/~cks/space/blog/programming/Rust1BackwardIncompatibility
I agree with the sentiment, having had my own share of programs go obsolete: 1) a Drupal website that stopped working after only 3 years when the php library dependencies were "upgraded." 2) a bunch of Java Applets that no longer work on the web.
But I think the issue is a little more nuanced than your black/white title implies. I agree with the comments about the value of abstraction and language standards. I like that Ada is a standardized language and my Ada85 programs still run fine and it's a more reliable ("safe") language than C.
I also think most of the obsolescence happens on the front end. Witness the longevity of COBOL programs for batch processing applications. My Java Applets don't work but I see plenty of old JSP applications still running. So these days I try to write "tiered" applications where the front end is separate from the back end so I'm somewhat cushioned from the impact of GUI library changes.
Sometimes I'm more worried about the digital media itself than the software, having had to migrate my archives from floppy drives to zip drives to CD-ROM to DVD and now Flash drives. How robust is a Flash drive for long-term storage? I have my doubts.
Is it my browser or did you apply a dark mode to your site? Looks great; the white background was heavy.
Just your browser, sorry :)
Go is too complicated and Rust is too opinionated. C is as good as we're gonna get for lasting code because it doesn't try to do too much and doesn't get in the programmer's way with annoying "safety" features. Abstraction of anything higher level requires more standardization between operating systems. C + OpenGL is the best bet for lasting games. Eventually the industry is gonna get tired of abstracting abstractions when they're running a game at 2 fps in a riscv emulator in a browser inside a docker container inside an electron app on windows 15 using WSL.