Should C be mandatory learning for career developers?
The year is 2025. The C programming language is something like 50 years old now - a dinosaur within the fast-moving environment of software development. Dozens of new languages have cropped up through the years, with languages like Rust and Go as prime contenders for systems-level programming. Bootstrapping a project in C these days will often raise eyebrows or encourage people to dismiss you out of hand. Personally, I've barely touched the language since I graduated.
Now, with all that said: I still consider learning and understanding C to be key for having an integrated, in-depth understanding of how computers and programming really work. When I am getting a project up and running, I frequently end up running commands like "sudo apt install libopenssl-dev" without giving much thought to what's going on there. I know that it pulls some libraries onto my computer so that another program can use them, but without the requisite experience of building and compiling a library, it's kind of difficult to understand what it's all about. I know that other languages will introduce this concept, but realistically everything is built to bind to C libraries.
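To make that concrete, here's roughly what the other side of that "apt install" looks like: a tiny sketch of a program that calls into an installed C library (OpenSSL's libcrypto and its SHA256 function, chosen as an arbitrary example; the file name and build command are assumptions, not anything from a real project).

```c
/* sha_demo.c - hash a string using the system's libcrypto (a sketch).
 * Assumes the OpenSSL dev package is installed; build with something like:
 *   gcc sha_demo.c -lcrypto -o sha_demo
 */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(void) {
    const char *msg = "hello";
    unsigned char digest[SHA256_DIGEST_LENGTH];

    /* This call resolves into code living in the shared library the package installed. */
    SHA256((const unsigned char *)msg, strlen(msg), digest);

    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}
```

Roughly speaking, the "-dev" suffix is the giveaway: that package carries the headers and the link-time pieces, while the plain runtime package only ships the .so that already-built programs load.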
System libraries are only one instance of my argument, though. To take a more general view, I would say that learning C helps you better understand computers and programming. It might be a pain to consider stuff like memory allocation and pointers on a regular basis, but not understanding these subjects can leave you with a poorly formed picture of how computers work. Adding new layers of abstraction does not make the foundation less relevant, and I think that learning C is the best avenue toward an in-depth understanding of how computers actually work. This sort of baseline understanding, even if the language isn't used on a regular basis, goes a long way toward improving one's skills as a developer. It also equips people to apply those skills in a wide variety of contexts.
I'm no expert, though: most of the programming I do is very high-level and abstracted from the machine (Python, Haskell, BASH). I'm sure there are plenty of folks here who are better qualified to chime in, so what do you think?
"Mandatory" is a strong word. I think there's a lot of contexts where a "career" developer can know little about bare metal and be more than good at their job. In many cases, knowing CAP theorem is going to be more important than knowing how heap allocations are defragmented.
For a broad education in computer science, I think it's a good idea to have bare-metal experience. And I'd go further than C - you should have experience writing in assembly. And writing code in lambda calculus.
I think it's mattered a lot less in the last decade or so, and is almost detrimental for some. I've seen a lot of devs go all in on either:
A. I want to manage every bit of memory for what should be a 5 second CRUD application
or
B. I want to build an enterprise monolith solely in base javascript.
From my point of view, in magical fairy land, we wouldn't need to ever manage memory. It would always work itself out, never expose us to vulnerabilities, and be as performant as physically possible.
We don't actually live in that world, but if you're doing small projects/apps, you kinda can by just using any of the very good and shockingly performant GC languages out there. I live in F# and have yet to run into a performance issue that couldn't be fixed in F# by writing better code, without even working around the memory guardrails.
A friend of mine worked in the kind of environment that NEEDED to care about shaving milliseconds, and so yeah, it was relevant to him, but honestly I've found people misapply knowledge of memory management more often than they've found it useful. Spending time diving into unsafe code because they don't understand that the issue isn't "this stupid language" but that their own code did something poorly (a list instead of an array, or something like that).
In all honesty, to me the way we teach coding (at uni/college) these days seems massively behind the times and an artifact of back when you NEEDED to understand this stuff because there weren't other options. The future is a lot closer to Scratch than it is to Assembly for 99% of the world.
And that's been a problem. Our code just gets more bloated, and more people become less able to fix it. Taking this example down the slippery slope, we'd eventually stop being able to write compilers. A language like C is close enough to the metal to help computer scientists / software engineers get a good feel for how their code affects the machines they make do things. For me, the assembly course was great in helping me appreciate how much heavy lifting even a language like C is doing for me. And yes, I gained a better understanding of the low level stuff, which in turn helps things like Big-Oh have more context.
And I don't know that the fact you've seen devs choose wrong tools justifies the case you're making. To me it simply says their education wasn't good enough because they are unwilling to learn the proper tools. Likely because their education wasn't principles-based (multi-purpose tools), but more rote (single purpose/use tools). I can't stand PowerShell, but I'm not forcing my whole company to use git-bash/MSYS2/etc., either. Instead, I apply the principles I know and google the syntax.
I can't tell if you're advocating more for the bootcamp approach, which is great at churning out entry-level coders (who now have to increasingly rely on vibes, making learning even more difficult) who will become competent mid-level coders. But if we're going to keep advancing, people need to know it all, including the lower-level stuff, too. I do have moments where I think the terms Computer Scientist and Software Engineer should have never been chosen. We need a "do grunt coding" title, and that seems to fit the people you're describing. Much like the difference between a drafter and a design engineer.
Having cake and eating it too, teach Zig.
And yet there's more code in production around the world than ever before. This, in my eyes, has absolutely nothing to do with the languages being used, and a lot more to do with the fact that the market just doesn't care if your code is perfect. I think AI allowing for VERY quick output of REALLY awful code could finally jackknife things into the discussion of "ok we need some fucking standards and metrics that matter", much like a bridge collapse in olden days eventually led to "who ACTUALLY knows math here?", but that doesn't mean you build bridges without modern tools just because the older ones meant you had to understand the math better. Like many industries, the separation of skills is something that's going to occur, and having a bunch of "good enough" coders with a few "knows the ins and outs" coders is mostly fine.
And for the coder handling their 50th frontend ticket today that matters not one iota. The vast majority of coding work boils down to "here's this isolated piece that does X and should do Y, figure out why and fix it". There's a larger discussion of how to handle the immense problem of these insanely large projects, but "make sure everyone knows C" isn't going to change most people's understanding because they aren't coding with anywhere near that level of freedom to begin with.
Would that everyone were perfect, but the teaching methods need to match the people who show up, not the people we always want. Lowering the barrier of entry to coding is the end goal, just like in every other discipline. You don't look for the finest craftsman anymore for construction because it's no longer required. You simplify the process. Most people taking a hammer to the build site don't know anything about what the architect went through to make the structure sound, and coding is very much already well down that path.
Eh, I think our entire approach across the industry is going to change pretty drastically in the next 10 or so years (even before AI was a thing, which is going to be its own problem). Coding has become massively more accessible and we're now hitting the point that you don't need craftsmen. I don't think bootcamps are all that useful either, but yes, I do think it moves towards a more mass-produced approach. That said, I also think that, just like in my housing metaphor, you don't let standard construction workers do the architecture. Much like the early days of electricity vs today, I do think coding will eventually settle much more into reasonable standards. It's just still in its infancy and of course mutates faster than just about every other discipline.
See, I'd probably be more in favor of that as well. Everyone likes to throw around C, but why not Rust or Zig? C is still an absurdly dangerous language to code in, with a ton of "well of course you know that you can't..." exceptions that are just NOT obvious. We've got much better tooling at the language level these days, and if you are going to expose people to the joys of memory management they might as well "wear a helmet", to keep this tortured metaphor going.
Like medicine? Or construction? That statement is overly-general. Lowering the barrier of entry is what companies want so they can pay less. It also leads to more shitty X. I’m not advocating for the finest craftsman, as you put it, but raising the bar a bit.
I have another comment in here about creating separate roles. There can still be engineers and all that, but we need 'grunt coders' as well. This exists in plenty of other disciplines. Drafters, technicians, etc.
But maybe I’m biased because C++ pays my bills.
Yes and absolutely yes? The corner store employee giving you your booster shot doesn't know how the vaccine is made in the slightest. And construction is the example I'm already using? 90% of construction work is labor intensive and SUPER narrow skill specification. The guy doing the drywall doesn't know shit about the electrical, and that guy doesn't know jack about the plumbing, and none of them know much about the physics that hold the whole thing up.
This goes to my earlier point about the separation. You don't equate Software Engineers to doctors, but to techs. I think we're approaching the question from very different places.
You're advocating for the thing AI is here to do. And I think people need to be smart enough not to blindly trust what gets spewed out from that hose, and generally approach writing code with a bit more thought.
Yes, but it's small and well established. I'm not sure if I'd start any new codebase in C (at the bare minimum I'd do C++), but for learning purposes there are fewer hoops to jump through to get C code working compared to Rust (I haven't used Zig at all, so I can't say), while still scaling up to memory management later.
Not every job requires memory management, but I do feel a CS curriculum should make students understand and master such concepts. Otherwise, why not simply go to a 1-2 year coding camp?
I mean, I've been arguing for most of my life that most 4-year degrees are a huge waste of time. I think a 1-2 year boot camp into real job experience is almost always more valuable than 4 years racking up debt, learning outdated techniques that don't hold up to reality.
I am all for not teaching outdated stuff like Oracle when there are better platforms such as PostgreSQL to demonstrate the fundamentals in action.
The point of teaching C is not to teach a tool to use for new projects, it's to demonstrate the fundamentals in a way that's transparent. Disassembled C is very close to the original. Labels are readable, statements more or less correspond to a small number of asm instructions... C is extremely transparent. Doubly so on microcontrollers without SIMD.
When you teach with C, you teach pointers, arrays, structures, memory management, algorithmic complexity, basic control flow abstractions such as functions and loops, in a form that's very close to the hardware. You deal with native data types, you can discuss registers not as some mystical beasts living deep down in the machine code dungeon, but as a simple fact of life.
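As a rough illustration of the kind of classroom exercise that covers most of that list in a dozen lines (a made-up snippet, not from any particular course):

```c
/* A toy exercise: pointers, a struct, heap allocation, and a loop, with nothing hidden. */
#include <stdio.h>
#include <stdlib.h>

struct point { int x; int y; };

int main(void) {
    size_t n = 4;
    struct point *pts = malloc(n * sizeof *pts);   /* explicit heap allocation */
    if (pts == NULL) return 1;

    for (size_t i = 0; i < n; i++) {               /* indexing is just pointer arithmetic */
        pts[i].x = (int)i;
        pts[i].y = (int)(i * i);
    }

    struct point *p = &pts[2];                     /* a pointer is just an address */
    printf("pts[2] = (%d, %d), stored at %p\n", p->x, p->y, (void *)p);

    free(pts);                                     /* and cleanup is on you */
    return 0;
}
```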
Not all computing is web apps. A lot of the students might get into embedded, high-perf or gaming "industries" where knowing the details of the platform simply is required or they will get a job where they'll have to maintain or rewrite old code. And you don't know who will end up where beforehand.
Teaching C++ would be patently stupid. Harder debugging, a much, much, much more complex language that simply cannot be quickly grasped in full, and so on. The language should be a teaching aid to demonstrate the fundamentals.
The next step after C is any language with higher order functions and probably a good asynchronous story. For this a garbage collected runtime would be a good idea. Or at least memory safety.
I don't totally disagree, but I also find it a little funny you mention PostgreSQL instead of Oracle when that can be considered the database version of teaching C# instead of C (arguably, although looked at another way it's a question of capabilities, not level of management).
When I teach SQL, I want to concentrate on stuff like joins and CTEs, not spend time explaining why empty strings are identical to NULL and that there is no boolean type. I also want for everyone to be able to start tinkering quickly. I am confident I can install PG in minutes vs. hours for Oracle.
Much like C is just a single "dnf install" or "apt install" away, PG is as well.

I'm reminded of a time in college where I was spit-balling a project idea with some other CS students. It was a simple web app. Some CRUD, some async workers - I/O bound stuff. A guy comes out of nowhere saying "Well you'll want to run as close to the metal as possible".
How do y'all even understand this shit and where did you even begin? My 3 braincells just start producing magic smoke every time I look at it :D
It looks scarier than it is. Most people first start learning about computers from a model that more closely aligns with the Turing model, but if you came from a discrete math background, lambda calculus would be a much more intuitive way to model computation.
It's absurdly easy when actually put in practical terms instead of the abstracted ones that are often used to teach it. I was looking for lambda calculus (and by extension functional programming / being able to pass functions) without knowing it at the start of my career.
it's functions all the way down bro
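To make the "practical terms" above concrete, here's a single worked reduction: the term is just an anonymous two-argument function that returns its first argument, applied to a and b.

$$(\lambda x.\, \lambda y.\, x)\; a\; b \;\to\; (\lambda y.\, a)\; b \;\to\; a$$

Rename λ to "function" and → to "evaluates to" and it reads like any everyday closure.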
It's because this already IS part of most computer science educations. This was a basic part of my curriculum.
Here's an interactive tutorial that builds it up piece by piece: https://lambdaexplorer.com/
This video might help: https://youtu.be/RcVA8Nj6HEo
Does anyone have book recommendations for learning lambda calculus?
Are you interested in it from a formal math perspective, or from an "expanding my computer science fundamentals" angle? For the former I can't help, but for the latter, I recommend the classic SICP. It only explicitly mentions lambda calculus a handful of times, but it will teach you pretty much everything you need to know about it from a programmer's perspective.
reading SICP is one of the best experiences I've had in my entire life. I can't recommend giving it a chance highly enough - maybe it'll be for you, maybe it won't, worst that happens is it isn't, and the possible upside is incredible
Yeah, SICP is great; it's one of the reasons why I think it would be interesting to learn more about lambda calculus.
I would argue that most developers don't need to know what the computer does at a deeper level.
I do know C. I know it well enough that I have made pretty significant ROM hacks and am making a game engine for myself in it. It has not had any significant changes to how I approach my day to day job because my day to day job works on far more layers of abstraction.
As a counter example, every day I work with API calls over the internet. I do not know at any kind of intimate level how networking actually works. I have a vague concept that I need to send a request to some server somewhere and a bunch of magic black boxes and wires in the middle take my request and get it there for me. Knowing how those black boxes work might be interesting for some people, but the intimate knowledge doesn't change how those people send requests at their day job.
Yeah, I'm more in the "not strictly necessary" camp.
People are okay with specialists building specific car parts without needing to deeply know how all the other parts work.
Knowing computers deeply takes about as much time as knowing how each car part is made and how to make useful incremental improvements to each car part. There's a lot going on in every level. Both in cars and in computers.
If anything, it's a problem that there are too many generalists. But this is just as much a consequence of industry practices of tech vs manufacturing as it is of individual choice.
That's not to say that specializing deeply in two different areas is not fruitful for cross-pollination... but not everyone needs to know C. It's like saying everyone needs to know how to build a combustion engine--even the air conditioner person.
I agreed with you at first, but after thinking for a little bit I think we're kind of dooming software to be inefficient, buggy messes if we don't put any stress on knowing fundamental computing basics. I'm remembering when I first got serious about writing enterprise-level code: reading a book about writing good Java code and realizing that the coding style I had developed up to that point was very inefficient because of some behind-the-scenes stuff happening.
Granted, I don't expect most programmers to need to know very low-level things like specific hardware instructions, and some of them might not even need to understand things like bitwise operations. Programmers need to be somewhere between "builds a CPU from TTL chips" and "strictly a vibe-coder", and that not only depends on the situation, but it's debatable even within specific scenarios.
My issue with the end of this line of logic is that it strikes me as humorous that the line always ends at C.
Why not assembly?
Why is C the abstraction we decide is the reasonable line between understanding and "too tedious"?
I didn't say "vibe coding"
You can know C# and algorithms just fine without delving into low level programming.
I never implied you did; I was just listing an extreme.
I do agree that people should conceptually understand memory allocation and references vs values before calling themselves skilled developers, but why C specifically? Is it the best avenue toward that understanding, or is it just the one you're familiar with? C is an abstraction over 1970s-era computers, and has held back progress in some significant ways by remaining the standard low-level language for so long. (It's why so many things are single-threaded even in these days when we regularly have tens of cores in a machine, for instance. Multithreading is just that unnatural in C and languages with similar execution models, which is most of them.)
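To illustrate the multithreading point, here's a minimal POSIX-threads sketch (nothing here is from any particular project): even the simplest shared counter means hand-managing threads and a mutex, and nothing stops you from forgetting the lock.

```c
/* A POSIX-threads sketch (build with: gcc threads.c -pthread): two threads
 * bump a shared counter, and the locking discipline is entirely on you. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* forget this and you have a silent data race */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 only if the locking is right */
    return 0;
}
```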
Separately, how many layers do you need to go down before you consider yourself to have understood a system? Does it make sense to teach C without assembly, or assembly without raw machine code? Does this keep recursing down to individual logic gates and beyond?
(I hope not too far beyond; transistors are the level where my understanding becomes "idk, cursed dark magicks probably, just trust that they behave as advertised".)
Ah, you're describing my university degree. I like that I have a decent understanding of all that stuff, but it was very intensive and took me a long time to get through.
It is the one I'm most familiar with, and the one that my computer's OS is built on.
Coincidentally, I don't understand multi-threading particularly well. I'm pretty sure we went over it in my course on Operating Systems (specifically process forking and context switching) but it wasn't my favorite course.
I suppose I'm not sure how computers have really changed since the 1970s. I think that there might be a case for a really good lingua franca of distributed / concurrent programming, but generally speaking the way that I look at computer programming is from a single-threaded perspective. GPUs are another huge development that I don't really understand, along with M1 chips. I still think these things are derivative of the theoretical foundation of computing, though.
Regarding your second question, I will state my bias as a "computer scientist" (i.e. someone who prefers dealing with abstract concepts over concrete implementations) before saying that C reliably abstracts away hardware layers. I think that there's a difference between understanding the idea of an Instruction Set and understanding how to write an assembly program for a specific CPU. In the context of this post, I am thinking specifically about the world of "software" development. That doesn't mean that it's not helpful to understand assembly, machine code, and CPU architecture, if one wants to truly understand the computer as a system, but in this case I am essentially taking the C compiler for granted.
There are a lot of features that come to mind like branch prediction/speculative execution, out of order execution, multi level caching, virtual memory, and SIMD instructions.
The compiler and CPU do all the heavy lifting for us so we can pretend that we are coding on a PDP-11, but the mechanics of what actually happens in your CPU are very different from what is written in your C program.
I think this article makes a good argument how C is no longer a low level language in the sense of being close to the hardware: https://queue.acm.org/detail.cfm?id=3212479
That said, it’s still a useful model and one that is helpful for most developers to know imo.
One of the biggest things is more raw power and memory. That’s not to say that a webpage should have 8 gigs of JS framework under it, but you’re insane if you spend weeks optimizing for an extra 1K.
Styles like functional programming are becoming popular again because “fuck it it’s all immutable, clone it if you need to “ isn’t a death sentence of an overhead but a rounding error in performance instead.
Further, styles like that let the compiler do the heavy lifting. If I stay in proper F# code practice, 90% of the time if my code compiles IT RUNS. I cannot tell you how happy I am to almost never deal with runtime bugs anymore, because through proper typing and testing a single change in my code will instantly highlight every other area I need to adjust to fix it.
I'm not the most knowledgeable about GPUs specifically, but have worked with plenty of peripheral and acceleration devices. GPUs did, and continue to, significantly change computing. They're a completely different tool than a CPU and solve different classes of problems. Essentially somebody took the Arithmetic Logic Unit from the CPU, duplicated it a thousand times, and gave it some dedicated memory.
This system allows the main system CPU to move data into the dedicated memory, chunk it into a bunch of small sections, and assign each section to one of those ALUs. If each ALU is able to independently and simultaneously operate on its assigned section of data, then you've just done a math operation on the whole data set in 1/1000th the time it would have taken the system CPU.
This was relevant to computer graphics 20ish years ago because, it turns out, rendering graphics can be chunked into bite sized jobs pretty well. This is also relevant today, because training and using Machine Learning models is ultimately just a bunch of Linear Algebra math. Linear Algebra/Matrix Math can be simplified to a bunch of Multiply-Accumulate operations, which ALUs can do in bite-sized-chunks!
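To put "a bunch of Multiply-Accumulate operations" into code, here's the plain, sequential C version of a matrix multiply; on a GPU, each output element (or tile of them) is handed to its own thread, but the arithmetic is the same. (Illustrative sketch; the function name and row-major layout are assumptions.)

```c
/* c = a * b for n x n matrices stored row-major; the innermost line is
 * the multiply-accumulate that GPUs spread across thousands of ALUs at once. */
void matmul(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            float acc = 0.0f;
            for (int k = 0; k < n; k++)
                acc += a[i * n + k] * b[k * n + j];   /* multiply-accumulate */
            c[i * n + j] = acc;
        }
}
```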
If you're wondering about Apple's M-series of chips, I can talk about those too. There wasn't anything truly paradigm-shifting there, but it did create waves and leveraged some cool ideas that the industry wasn't quite expecting.
As general background, the M-series chips (and the A-series in iPhones) are Systems-on-a-Chip or SoCs. The idea here is that the entire computer is all on one single* piece of silicon (*caveat for multi-die systems on an interposer or generally sharing a package). Traditional computers have separate devices all connected together. Separate CPU, GPU, Memory, I/O, etc. Putting it all together gets you some speed and power efficiency at the cost of money and complexity.
That leads nicely into the first cool idea Apple implemented: Unified Memory Architecture. This is what you call a system that shares one pool of Memory/RAM for all devices, particularly the GPU and CPU, and integrates it into the same package. This gets you 2 benefits: the CPU and GPU can work on the same data without copying it back and forth over a bus, and keeping the memory right next to the compute helps both bandwidth and power.
The second idea Apple implemented is the ARM Instruction Set Architecture. ARM is nothing new, it's been the default ISA for smartphones since almost the beginning. But Apple was the first to deploy it in a high-power/performance setting (laptops and desktops). Some people thought it couldn't be done, since high-performance chips had almost always been x86 ISA from Intel or AMD.
To wrap it up, why was this all so exciting? Nobody expected Apple to deploy their own ARM-based, high-performance SoCs and replace Intel's traditional CPUs. Intel had been the only game in town since the early 2000s. Topping it all off, the M1 was extremely powerful AND power efficient. I'm typing this on an M1 Pro MacBook Pro, and this was the first time I had a laptop that could genuinely last a full work day. Never before.
Now why was this not surprising? The M1 is just a bigger iPhone SoC! Apple has been designing these SoCs, with Unified Memory Architecture, and relatively high-performance ARM ISA CPUs since 2010 and their A4 SoC. The A4 was in the original iPad and the iPhone 4.
This was a hugely detailed answer and I'm really grateful for it. Thanks! I didn't realize that M1 was operating on ARM but I've been working with RPis long enough to appreciate how energy-efficient they are.
C itself probably shouldn’t be mandatory, but I do believe that devs who plan to remain devs for the long haul should acquire at least a handful of languages totally different from each other. Doesn’t really matter which ones, so long as they make the individual think a bit more deeply about what’s going on under the hood and illustrate that there’s more than one way to do things.
So for example I think it’s great if someone who’s traditionally only written JavaScript also achieves practically usable skill levels with a strictly typed safe compiled language like Rust or Swift, a JVM language, and a minimal language like Go or C. Even if they only ever do little side projects in those languages, they’ll be equipped with a more robust mental model and wider variety of approaches to write JS with.
This is a great point. In high school, we started with Lisp, then C++, and finished with Java (and PHP for little side projects). When I eventually joined a small, general web software consulting company, every client was using a different language. The first project in Node.js with its event loop and async execution was a mind bender, along with a glimpse into Erlang while working with Riak. Then there was Perl's powerful regex and "There's more than one way to do it" motto, for better or for worse. Go after that was a breath of fresh air. Scala brought in functional programming and powerful types in an accessible way. No matter how much I do with Typescript, I still feel like I'm just scratching the surface of the type system. I'm hoping to add Rust in my personal time, just to see what the hype is about.
At the very least, it cemented the notion of "the right tool for the right job", having seen so many strengths and weaknesses and tradeoffs. More broadly, the exposure and familiarity with so many different approaches really helps in seeing what's possible, as well as quickly understanding other people's solutions.
I think it's good to think about programs in the way the computer actually executes them. So designing and solving a somewhat complex problem in C, even if somewhat tedious, is worth it.
I think C's level of representation of the low level of a computer is overblown these days. It doesn't represent today's modern pipelined multi threaded CPUs well, for example. Iterating linearly through a linked list and a vector is one pointer access per element for example, but the vector is going to be much faster because of cache locality and prediction.
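A rough, untimed sketch of that difference (names made up; wrap the two calls in clock() to see the gap, and note that freshly malloc'd nodes may still sit near each other, so real-world lists usually look worse than this toy):

```c
#include <stdio.h>
#include <stdlib.h>

struct node { long value; struct node *next; };

/* Both loops are "one access per element", but this one walks contiguous memory... */
long sum_array(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) s += a[i];     /* sequential, prefetch-friendly */
    return s;
}

/* ...while this one chases pointers wherever the allocator happened to put them. */
long sum_list(const struct node *head) {
    long s = 0;
    for (const struct node *p = head; p; p = p->next) s += p->value;
    return s;
}

int main(void) {
    enum { N = 1000000 };
    long *a = malloc(N * sizeof *a);
    struct node *head = NULL;
    if (a == NULL) return 1;
    for (long i = 0; i < N; i++) {
        a[i] = i;
        struct node *nd = malloc(sizeof *nd);
        if (nd == NULL) return 1;
        nd->value = i;
        nd->next = head;
        head = nd;
    }
    printf("%ld %ld\n", sum_array(a, N), sum_list(head));
    return 0;
}
```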
Agree. I think the low-level ideas in C about memory management, pointers, and hardware device programming can be taught just as well in any other systems language. I think seeing a little assembly, and seeing how languages compile to it might be good things to see (e.g. Compiler Explorer)
The point about lists and vectors is completely orthogonal to C as a language.
What I will say is that professors don't stress that linked lists are amazing for learning but terrible for production. I earn my money writing C++, and I used to earn my money teaching C++. Being able to write a templated, idiomatic, doubly-linked list in C++ was my line between beginner and intermediate. The students who got that assignment tended to have far fewer problems going through the rest of their degree.
And while I'm sure none of them will ever use a linked list or need to write a custom data structure at their jobs, they gained good fundamental knowledge (and confidence) that will allow them to think smarter and better about the problems they will encounter.
And as an aside, while most students are exposed to multiple languages, it's usually one per course. A course should use more than one language to better demonstrate and teach principles, and not just syntax.
You can issue prefetch instructions so the linked list will be about as fast. The actual benefit likely comes from automatic vectorization nowadays. There are even gather fetches that can be emitted for arrays of structs.
But guess what language makes it really simple to emit those manually and also maps close enough to asm that you can read decompiled asm with annotations and heatmap? Yeah, C.
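Something like this is what I mean (the __builtin_prefetch intrinsic is a GCC/Clang extension, not ISO C, and how far ahead to prefetch is workload-dependent):

```c
struct node { long value; struct node *next; };

/* Walk a list while hinting the next-but-one node into cache ahead of time. */
long sum_list_prefetched(const struct node *head) {
    long s = 0;
    for (const struct node *p = head; p != NULL; p = p->next) {
        if (p->next != NULL)
            __builtin_prefetch(p->next->next, 0, 1);  /* read-only hint, low temporal locality */
        s += p->value;
    }
    return s;
}
```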
Do you think it's reasonable to teach the multi-threaded approach without building up an understanding based on a single-threaded approach?
Multithreading is only one way to accomplish concurrency. For example, Node.js is extremely efficient for concurrent network I/O with only a single thread.
From a software perspective, I'd start with concurrency then discuss the different ways to parallelize tasks.
Yes. Absolutely. If anything, just to get performance-related computing back into our grasp. We may have all the cores, but we aren't using them as well as we could.
C is one of my favorite languages: it's simple, effective, and wonderful. It's pure programming - you can do anything and everything in any way possible. This is also the problem.
C isn't a good language, but it's the most universal runtime. I would even go as far as to say you don't really need to "learn" C, such as the depths of memory management, but knowing and dealing with pointers a few dozen times on smaller projects would be incredibly beneficial for any developer.
I personally use Odin as my main language these days, as it takes care of a lot of manual memory management and adds a few safeguards (and the syntax is stunningly beautiful), but C will always be The Language for me. I can always drop into it and get something going.
If someone already knew Rust specifically, then to know C they just need to imagine if Rust didn't have explicit lifetimes, generics, traits, modules, or a package manager. Also they need to learn about terrible things like header files and maybe autotools. All that's to say that I don't think there are any important computing fundamentals to be learned by such an exercise.
The real value is in knowing at least one language that compiles to native executables and uses manual memory management instead of garbage collection. This is a pretty short list including C and Rust and a few others like C++, D, and Zig.
IDK, but I really liked learning a little bit of C in my college's intro to programming. It just clicked for me in a way no other language ever did. It's so basic that it forces you to do lots of cool stuff that would be abstracted in other languages.
I no longer wish to have anything to do with programming or computer science. But I cherish my time learning C. It allowed me to have insights and do stuff just because I thought they might make sense. Unlike something like Python, which is so huge and full of (sometimes idiosyncratic) specialized solutions for every single thing that you spend more time in the documentation than you do coding.
I mean, listing prime numbers (or some other basic things) in C is actually kind of cool. In Python, there's probably some "PrimeNumbers(n)" function that you have to look for in the docs. That's not fun.

I agree. C (and, to a lesser extent, other C-like languages as long as I ignore a lot of their high-level features) really click for me in a way that more Python-like languages do not. I feel like tearing my hair out whenever I try to write anything in Python.
I think I'm just too much of a control freak; I want to understand exactly what my code is doing, to the extent that that's possible, or I feel lost. But I imagine I might feel differently if I were a "real" programmer rather than just an occasional hobbyist.
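For reference, the prime-listing exercise mentioned a couple of comments up really is just a few lines of C (trial division, nothing to look up in any docs; the helper name is made up):

```c
#include <stdio.h>
#include <stdbool.h>

/* Trial division: good enough to print the primes up to 100. */
static bool is_prime(int n) {
    if (n < 2) return false;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

int main(void) {
    for (int i = 2; i <= 100; i++)
        if (is_prime(i)) printf("%d ", i);
    printf("\n");
    return 0;
}
```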
Yes, if only to read one of the greatest programming language books ever: The C Programming Language by K&R.
In my embedded systems class, the professor had a “working” C program we were supposed to build upon. Split into teams, no one could avoid a segfault that happened after running for a “random” amount of minutes.
Me and 5 other students mapped out the stack & heap on a whiteboard, and meticulously went through every line, updating the whiteboard as we went. Eventually we found the bug (stack overflow). The “A-ha!” and dopamine when it just worked. It felt like a rite of passage, finally we knew what the computer was doing. I’ve loved debugging ever since; I can get as low level as needed.
My 2c as a software engineer: for school curriculum I think it's good to experience a breadth of tools in the context where they are most useful. I don't think mandating any one tool is the right approach. Instead we should have students learn OS design, application development, compilers, graphics, etc and let those choices dictate what languages get used. For many of them that might be C but it might instead be assembly or C++ or rust or some other language. We shouldn't hold up one language or one tool as the be all end all, instead teach students to pick the right tool for each task and build a well rounded tool box.
Yes. Right after finishing the second computer in Turing Complete. Neither is hard and both will give you insight into computer architecture. Hopefully enough to make you use & instead of Math.log2() when testing bit fields, which I've seen my younger colleagues do nowadays.
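For anyone who hasn't seen the idiom, testing a bit field is just a mask and an & (the flag names here are made up for illustration, not from any particular codebase):

```c
#include <stdio.h>

enum { FLAG_READ = 1 << 0, FLAG_WRITE = 1 << 1, FLAG_EXEC = 1 << 2 };

int main(void) {
    unsigned flags = FLAG_READ | FLAG_EXEC;

    if (flags & FLAG_EXEC)        /* is the bit set? */
        printf("executable\n");

    flags &= ~FLAG_READ;          /* clear a bit */
    flags |= FLAG_WRITE;          /* set a bit */

    printf("flags = 0x%x\n", flags);
    return 0;
}
```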
For me it's very simple - on any modern computing device, a stack trace will more likely than not cross a C ABI boundary at some point. Even in my web developer days, sometimes I had to cross that boundary to figure out things like the behavior of the webserver I was working with.
For me, learning C had the best effort to reward ratio of anything I've learned in my 20+ year career. It's probably not required, and you certainly don't have to love it (or even like it) but I'd highly recommend it to anyone who cares about their craft.
I learned it on my Amiga originally. I have to maintain some 30-odd year old lumps of it for my day job.
Quite frankly I'd rather not, these days. I'd much rather use Rust for low level stuff, Go for appropriate things maybe like web services or easier to write command line utilities, and some nice simple perl for OS scripting (j/k, I'd use some Python or Bash, or Awk or something).
That's not to say it wasn't useful from time to time between the Amiga days and now, I wrote some stuff in it that was useful to me. Too many footguns for me to keep track of these days.
I think learning C has a fairly high payoff in terms of developer learning outcomes even if you never use the language. Sure, C has some idiosyncrasies that are products of its time, and another language might be able to teach similar lessons while not being as tedious, but C has the nice benefit of being instructive while also enabling developers to read lots of historic code.
I wouldn’t be surprised if pioneering schools start teaching their systems classes with a language like Go or Rust in the next 10 years with an elective for C.
Ninja Edit: Disclaimer that I’ve not actually used Go or Rust much and I don’t know if they would serve as teaching equivalents in practice.
Might have been true over about 1970-2010, but these days Rust can and should replace it (along with C++) completely. Just like something will replace Rust in that space in the future.
On the other hand, there's always value in studying history and there's still plenty of legacy code out there.
I think that the biggest counterpoint I have to this is that open source operating systems are still written in C. If you want to dig into the internals of the machine you're working on, you'll need to understand the language it's written in. You can write a kernel in another language (see RedoxOS) but I think it's unlikely that it would have the staying power of something like Linux or OpenBSD. If we lose the capacity to maintain these projects, it is a tall mountain to climb starting from any other foundation. Maybe it will happen one day, but I think that we are collectively better off working with what is already available.
I got a lot more out of following along with NAND to Tetris than I ever have using the admittedly small amount of C that I know. I got a better understanding of logic out of LOGO than I did any C based language. We all pick up information in different ways. If C were the only way to really understand what we need to know, every CS degree would require it.
I'm not a developer, but I learned C through the CS50x course. I've lightly dabbled in several other languages — C++, C#, Lua, Python, etc. — but I found C's syntax particularly simple to understand and work with (albeit in the easy-to-learn-hard-to-master sense), and it really helped solidify a number of computer science topics in my head, particularly around how computers manage memory. After taking CS50, I used C in several Arduino projects, which further helped me grok some of the inner workings of computers.
I don't do any programming in my line of work, but I edit text and develop graphics/diagrams for mainframe-related educational material, and I suspect I owe several raises and promotions to C.