I am struggling a bit with what to do with this post. Unless people buy the book, they can't really discuss it. It also isn't clear why you made this submission, as you didn't leave a submission statement.
The only thing I can say is that the title of the book is 100% clickbait, even more so as this book seems to dedicate a whole section to LLMs, which is only relevant if you are... well, working on LLMs and such. More importantly, programmers cover a wide array of people with various types of programming/development work, and some of them rarely need to deal with more than some basic algorithms. In fact, I am willing to bet that a majority of development work does not require more than superficial knowledge of algorithms.
Edit:
To the people upvoting this post, can you tell me why? It already had two votes before I made the first comment here. I can only assume that was based on the title and I really was hoping Tildes would avoid that sort of voting behavior.
Yeah, like I said, the title is 100% clickbait and the content questionable. So it being posted here confuses me, certainly if they saw it on Hacker News and had seen the comments. What confuses me even more is that 5 people decided to upvote this post. Which sadly means that here on Tildes, too, we have people not bothering to look at content and just voting based on titles.

Edit: this originally said 4 people; I edited the post once I saw an extra vote. That this post even got 10 votes is just baffling to me.
That's fair, it just surprised me to see it at all here. Then again, it does seem to be in the handful of posts with the lowest score in the last week or so. All of them have some amount of votes.
I think this is one of those times that highlights the difference between someone who works as a programmer versus someone who works as a computer scientist. If you changed the title to algorithms every computer scientist should know, I would think it more accurate, since much of the work of a computer scientist is investigating algorithms for their own sake. However, I'm sure that title would appeal to a smaller audience.
That's not really a thing. No one is hired as a "computer scientist". The closest would be research positions, mostly in academia, but that's always specialized. If you work in graph topology research, for instance, I don't see any particular reason you'd "need" to know about transformer networks.
Well, I work in academia and in public service, so I'm used to people being hired as scientists and into various other blue-sky research positions. I would also say that there is a larger umbrella of work involving computational maths that falls under the broad term of computer scientist. E.g., where the primary thrust of your work is experimenting with modifications or transformations to algorithms. The overall evolution of neural networks, for example, and deep learning in general, has been driven forward by folks working in highly speculative or research-focused pursuits. I would very much dub these folks computer scientists more so than programmers. But I think reasonable differences in interpretation exist. Like I said, working in research I have one set of contextual experience.
That's disappointing, I was hoping to brush up my knowledge.
I haven't done any programming for a while; after I graduated I went into a factory career instead, since I was too nervous, and too sure I would end up failing the interview questions, to even try.
To be honest, it will be much more beneficial to actually work on a hobby project than to read up on theory.
If you have trouble finding a project, there are also games out there that actually require you to program to make progress; Screeps is one example. The other day someone also posted something on Tildes about a similar project, but I can't find the post anymore.
Yeah, I was hoping for some tasty coding meats too. I've done so little actual coding over my IT course that I doubt I'd be able to break out of customer service (good employer and lucky position - I just don't want to be pigeonholed), and thinking about coding sometimes causes me unreasonable frustration and panic.
Maybe we can remove the link and “(Second Edition)” and just have a fun discussion on which algos people find the most useful or even remember?
Breadth First Search is always in the back of my mind for some reason. “If I ever go blind this moment I know I’ll eventually get out of this place,” is strangely comforting.
Also, mentally revisiting all the sorting algorithms keeps my brain sharp.
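Since BFS came up: a minimal sketch of that escape-the-maze idea, on a made-up grid (my own toy example, not from any book):

```python
from collections import deque

def bfs_escape(grid, start):
    """Breadth-first search: length of the shortest path from start
    to any exit cell ('E'), or -1 if no exit is reachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == "E":
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

maze = ["#E#",
        "#.#",
        "#.#"]
print(bfs_escape(maze, (2, 1)))  # 2: two steps up to the exit
```

The queue guarantees cells are visited in order of distance, so the first time an exit is seen, the path is shortest - which is why "I'll eventually get out of this place" holds.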
Python 2 let you set True, False = False, True. Havoc is wreaked upon the interpreter. You can't do that anymore in Python 3.
Instead, in Python 3 you can dig into the implementation details with the ctypes module to change small integers and some other "immutable" values. Havoc is, once again, wreaked upon the interpreter.
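The precondition for that ctypes trick is easy to observe safely: CPython caches small integers (-5 through 256) as shared singleton objects, which is exactly what the hack mutates in place. A sketch (this is a CPython implementation detail, not a language guarantee):

```python
# CPython keeps one shared object per small integer (-5..256);
# int("...") builds the values at runtime so the compiler can't
# fold them into the same constant and spoil the demonstration.
a = int("256")
b = int("256")
print(a is b)    # True: both names point at the same cached object

c = int("257")
d = int("257")
print(c is d)    # False: outside the cache, separate objects
```

Mutating one of those cached objects through ctypes therefore changes that integer's value everywhere at once - hence the havoc.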
Since we don't have the book on hand, why don't we come up with the Tildes-approved list of 50 algorithms every programmer should know (first edition)?
I'll start with some easy ones. Please feel free to strike through them because I have very little formal computer science training.
Fizz Buzz (I know, it's so simple and overused that it's a meme, but if we're talking about algorithms everyone should know, it seems like it'd be good to have this one on hand in an interview)
Bubble sort
Quicksort (are sorting algorithms actually good to know these days??)
Prime number algorithms (is n a prime number, or calculate the nth prime number)
Fibonacci sequence (computer science 101, recursion, another meme)
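For reference, quick sketches of the ones above (plain, unoptimized versions; the iterative Fibonacci avoids the meme-grade exponential recursion):

```python
def fizzbuzz(n):
    """Classic FizzBuzz for a single number."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def is_prime(n):
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def fib(n):
    """Iterative Fibonacci: fib(0) == 0, fib(1) == 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fizzbuzz(15), is_prime(13), fib(10))  # FizzBuzz True 55
```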
To be honest I don't think any of those are worth "knowing".
I wouldn't really call Fizz Buzz an algorithm. It has memetic properties, but realistically I don't think it's useful, nor does it actually come up in interviews very much; that quality of candidate is usually filtered out with OAs (online assessments) now. I suppose you should know what the modulus operator does (and maybe the absolute butchering of modulo in different languages).
I don't think bubble sort is worth knowing. For sorting in general, you should mainly know the relevant properties, namely stability and cache coherency, of the two "main" sorting algorithms actually in use. Radix sort may also be useful.
I don't think it's actually practically useful to know any of the prime number algorithms.
The Fibonacci sequence is in the same boat as FizzBuzz. It doesn't really come up, ever.
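One concrete way to see the stability property mentioned above: Python's built-in sort is stable, so equal keys keep their original relative order, which is what lets multi-key sorts be chained (toy data, my own example):

```python
records = [("bob", 2), ("amy", 1), ("cal", 2), ("dan", 1)]

# Stable sort: ties on the score keep their original relative order,
# so amy stays before dan and bob stays before cal.
by_score = sorted(records, key=lambda r: r[1])
print(by_score)  # [('amy', 1), ('dan', 1), ('bob', 2), ('cal', 2)]
```

An unstable sort (classic quicksort, for instance) makes no such promise, which is why the property matters more than the algorithm's name.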
I tend to think of algorithms less in terms of utility and more along the lines of knowing for the sake of knowing. Kind of like in Ready Player One where it was about playing the game and exploring rather than having a set goal.
There was a sweet time not too long ago when everything was one abstraction layer away from machine language and we were inventing as we went. Discovery, revisiting, redesigning, and optimization, everywhere!
Now things are so highly abstracted that you don’t know, or even need to know, how to implement a hash map or tree, or decide which trade-off is better, i.e., speed or storage. Now you just add RAM and CPU.
Not to sound like that old guy but it used to mean something to be able to cleverly use limited resources. I don’t think that’s a valued knowledge area anymore. The tinkering aspect has been replaced by something else but I don’t quite know what it is because I’m so lost pining for the Good Old Days (TM).
The list is “algorithms every programmer should know”, though. Just because some of those are part of a classical CS education doesn’t make them “must knows” any more than knowing Scheme is a must-know just because SICP is taught in Scheme. For any one of those, in a CS education, you could find an alternative that teaches the same lessons.
Obviously sorting is critical, but performance is finicky enough that I prefer to use the built-in implementation. It's good to know the characteristics of the algorithms so you know if/when you need to reach for a custom implementation - but usually the built-in is O(n log n) in time and that's fine.
Topological sort comes up often enough for me that it's worth keeping in my pocket, especially since a built-in implementation often isn't available. But then I just use a third-party library, so it's still the same predicament.
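As an aside, in Python a built-in topological sort did eventually land in the standard library (graphlib, since 3.9):

```python
from graphlib import TopologicalSorter

# Each key maps to the set of nodes it depends on (its predecessors).
deps = {"app": {"lib", "config"}, "lib": {"config"}, "config": set()}

# static_order() yields nodes with dependencies before dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['config', 'lib', 'app']
```

For this graph the order is unique; in general, static_order() raises CycleError on cyclic input, which doubles as a cheap cycle check.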
If you do anything over the internet, it's important to know what steps occur in a TLS handshake and how each piece of data contributes to security. People certainly shouldn't be writing their own implementations, though; just understand that:
1. Key agreement happens (usually RSA or DH).
2. A symmetric cipher is used (usually AES), with a salt and an IV.
3. A MAC is used.
Offhand it's hard to think of useful algorithms in isolation. More useful in general are the common data structures, and tangentially related are the common algorithms that operate on those structures. But the utility of those algorithms heavily depends on the context of the problem, so it's hard to give general advice.
For example any developer should know how to manipulate a tree or a graph in any representation. The most important thing is to be comfortable navigating from node to node, inserting nodes, and removing nodes.
Stacks and queues are also critical. Any developer should know to use a stack to keep track of "where they are" in a problem, or use a queue to keep track of "what's left" in a problem.
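The stack/queue distinction above is essentially DFS versus BFS on the same skeleton; swapping the data structure is the only change needed to flip the visit order (toy tree, my own example):

```python
from collections import deque

tree = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}

def traverse(root, breadth_first):
    """Same skeleton for both orders: a queue (popleft) gives
    breadth-first, a stack (pop) gives depth-first."""
    pending = deque([root])   # "what's left" to visit
    visited = []
    while pending:
        node = pending.popleft() if breadth_first else pending.pop()
        visited.append(node)
        pending.extend(tree[node])
    return visited

print(traverse("a", True))   # ['a', 'b', 'c', 'd']  (level order)
print(traverse("a", False))  # ['a', 'c', 'b', 'd']  (depth first)
```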
Heaps and/or priority queues are useful - but again, there's usually a built-in implementation.
Maps are critical. Knowing which kind of map to use for a given problem is important. But in the popular interpreted languages it doesn't really matter and people just default to the built-in hashmap.
Finite state machines aren't exactly a data structure or an algorithm, but they're sometimes a nice way to organize certain algorithms or workflows. Extracting all the state into its own structure also makes things more deterministic, which helps for testing and for parallelization.
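A finite state machine reduced to a plain transition table shows why this helps testing: all the state lives in one dict, so every behavior is a pure lookup (the turnstile here is a stock textbook example, not from the thread):

```python
# Turnstile FSM: (state, event) -> next state, as a plain dict.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(events, state="locked"):
    """Feed a sequence of events through the table; deterministic
    by construction, so it's trivial to unit-test."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["coin", "push", "push"]))  # locked
```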
From Ask HN:
What are some cool but obscure data-structures you know about?
What are your favorite algorithms?
What is new in algorithms and data structures these days?
What are the most requested algorithms during technical interviews?
I just did an Algolia search for “algorithm” and “data structure”. I remember reading most of these myself and they had some good examples (ex: Bloom Filter, Union-Find).
Things every programmer should know:
How to read an error message & extract the needed information from the stack trace, and how to google it if they don't understand it
Basic control flow
How to read documentation & apply it to their current problem
Git, although this can probably be derived from #1 and #3
That's like, it, imo.
I like Ziv's algorithm. Given some number x, how can we compute sin(x), correct to k digits? We have infinite series, such as the Taylor series, that we can use to compute sine in terms of simpler operations (addition, multiplication, division) that you already know how to do, but we obviously have no way of adding up an infinite number of terms. Then again, we don't necessarily need an infinite number of terms; we only need to generate k digits, and we can do that with just a finite prefix of the infinite sum. But how do we know how many terms we need? Surprisingly, there are some values of x for which we don't need very many terms, but there are some values for which we need quite a lot, and there's no good way to know in advance how many you'll need; this is the table-maker's dilemma.
The approach of Ziv's algorithm is to use interval arithmetic. We can of course compute any finite prefix of an infinite sum exactly. While we obviously can't compute the infinite suffix exactly, what we can do is calculate exact bounds on it. That gives us an interval [lo, hi] into which sin(x) definitely falls. If the first k digits of lo are the same as the first k digits of hi, then we know those are the digits we're looking for. Otherwise, we have to add more terms.
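That loop can be sketched concretely. This toy version uses exact fractions, with the alternating-series remainder as the interval bound, so it only covers 0 < x <= 1, where each Taylor term shrinks (my own simplification, not a full Ziv implementation):

```python
from fractions import Fraction

def sin_to_k_digits(x, k):
    """Add Taylor terms for sin(x) until the interval [lo, hi]
    pins down the first k decimal digits. Assumes 0 < x <= 1."""
    x = Fraction(x)
    term, total, n = x, Fraction(0), 0
    while True:
        total += term
        n += 1
        term *= -x * x / ((2 * n) * (2 * n + 1))   # next Taylor term
        bound = abs(term)                          # alternating-series remainder
        lo, hi = total - bound, total + bound
        scale = 10 ** k
        if int(lo * scale) == int(hi * scale):     # first k digits agree
            return int(lo * scale) / scale

print(sin_to_k_digits(Fraction(1, 2), 6))  # 0.479425
```

The Fraction arithmetic keeps the partial sum and the bound exact, so the only approximation is the deliberately truncated tail - which is the whole point of the interval test.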
This explains the table-maker's dilemma: it explains why, sometimes, quite a lot of terms are needed, even when the error goes down quite quickly. Suppose we just want two correct digits, but our interval is [0.899999999, 0.90000001]. That's a really tight interval! But it doesn't tell us whether sin(x) is closer to 0.89 or 0.90.
What's really fun is this method can also be used to generate random numbers.
I am also a fan of parallelised addition (carry-lookahead), scalable wait-free hash tables (alongside a few as-yet-unpublished wait-free algorithms of my own devising), and real-time garbage collection (various; keywords: pauseless, stopless, c4). But those are harder to explain.
I wouldn't be surprised if this book was itself written by an LLM, seeing how clickbaity that sounds.
You are probably thinking of the link to Battlesnake:
https://play.battlesnake.com
Yup, that is the one! :)
Again, practicing and making use of online resources is probably a better approach than just reading offline theory.
Neat for toggling A between 0 and 1:
1 -= A (I did not read the book)
Neater still, but in Godot, GDScript flatly refused to set 1 to a new value. Would make for some interesting math if 1 suddenly was 0.
I remember trolling people by sneakily inserting lines such as
#define NULL rand() < 0.1 ? 1 : 0;
into common project header files. Fun times.

Trolling? More like mental terrorism lol