“The way we teach this relationship causes harm.”
“Well you don’t understand this relationship.”
“I do, and I’m saying: people plainly aren’t getting it, because of how we teach it.”
“Well lemme explain the relationship again–”
Nobody has to tell people not to use parallelism. They just… won’t. In part because of how people tend to think, by default, and in part because of how we teach them to think.
We would have to tell students to use parallelism, if we expect graduates to choose it freely. It’s hard and it’s weird and you can’t just slap it on at the end. It should become what they do first.
I am telling you in some detail how focusing on linear performance, using the language of the nineteen goddamn seventies, doesn’t need to say multi-threading isn’t worth it, to leave people thinking multi-threading isn’t worth it.
Jesus, even calling it “multi-threading” is an obstacle. It makes parallelism sound like some fancy added feature. It’s the version of parallelism that shows up in late-version changelogs, when for some reason performance has become an obstacle.
Multi-threading is difficult; you can’t just slap it on everything and call it a day.
There are languages where it’s easier (Go, Rust, …), but parallelism is an advanced feature. Do it wrong and you get race conditions or deadlocks. There’s a reason you learn about this later in programming, but you do learn about it (and get to use it).
When we’re being honest, most programmers work on CRUD applications, which are highly sequential and usually waiting on IO rather than CPU cycles. Saving 2ms on some operation doesn’t matter if you’re waiting 50ms on the database (and sometimes using more threads is actually slower, because of the orchestration overhead). If you’re working with highly efficient algorithms or with GPUs, then parallelism has a much higher priority. But it always depends on what you’re working with.
Depending on your tech stack you might not even have the option to properly use parallelism, for example with JavaScript (if you don’t jump through hoops).
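The hoops look something like this. A minimal sketch, not anything canonical, assuming Node and its built-in worker_threads module; the toy sum is purely illustrative:

```ts
// Real parallelism in JavaScript means spawning a whole separate worker with
// its own event loop and talking to it by message-passing. Assumes a Node
// environment running compiled CommonJS (so __filename resolves sensibly).
import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

if (isMainThread) {
  // Main thread: ship the heavy loop off to a worker and await the answer.
  const worker = new Worker(__filename, { workerData: 1e8 });
  worker.on("message", (sum) => console.log("sum:", sum));
} else {
  // Worker thread: runs on another core, reports back via a message.
  let sum = 0;
  for (let i = 0; i < (workerData as number); i++) sum += i;
  parentPort?.postMessage(sum);
}
```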
“Here’s all the ways we tell people not to use parallelism.”
I’m sorry, that’s not fair. It’s only a fraction of the ways we tell people not to use parallelism.
Multi-threading is difficult, which is why I said it’s a fucking obstacle. It’s the wrong model. The fact you’d try to “slap it on” is WHAT I AM TALKING ABOUT. You CANNOT just apply more cores to existing linear code. You MUST actively train people to write parallel-friendly code, even if it won’t necessarily run in parallel.
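A toy sketch of what I mean, with made-up numbers. The first loop is parallel-friendly even though nothing about it runs in parallel today; the second chains every iteration to the one before it, so no runtime could ever fan it out:

```ts
// Hypothetical data, purely for illustration.
const prices = [9.99, 4.5, 12.0];

// Parallel-friendly: each result depends only on its own element. A runtime
// could compute these in any order, or all at once, and nothing changes.
const withTax = prices.map((p) => p * 1.2);

// The linear habit: threading an accumulator through every iteration creates
// a dependency chain. This can only ever run one step at a time.
let runningTotal = 0;
const runningTotals = prices.map((p) => (runningTotal += p * 1.2));
```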
Javascript is a terrible language I work with regularly, and most of the things that should be parallel aren’t. And yet it has abundant features that should be parallel: it has absorbed elements of functional programming that are excellent practice, even if for some goddamn reason they’re actually executed in order.
Fetches are single-threaded, in Javascript. I don’t even know how they did that. Grabbing a webpage and then responding to an event using an inline function is somehow more rigidly linear than pre-emptive multitasking in Windows 95. But you should still write the damn things as though they’re going to happen in parallel. You have no control over the order they happen in. That and some caching get you halfway around most locks.
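Concretely, something like this, with placeholder URLs. The first shape is order-independent and joins at the end; the second bakes in a linearity the workload never asked for:

```ts
// Both functions assume the standard global fetch (browsers, Node 18+).

// Order-independent: fire everything, join at the end. Nothing here cares
// which response lands first, so the code is already parallel-shaped.
async function grabAll(urls: string[]): Promise<string[]> {
  return Promise.all(urls.map((u) => fetch(u).then((r) => r.text())));
}

// Order-dependent: each await chains on the last. Same results, but the
// requests now happen strictly one after another, for no workload reason.
async function grabOneByOne(urls: string[]): Promise<string[]> {
  const pages: string[] = [];
  for (const u of urls) {
    pages.push(await (await fetch(u)).text());
  }
  return pages;
}
```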
Javascript, loathsome relic, also has vector processing. The kind insisted upon by that pedant in the other subthread, who thinks the 512-bit vector units in a modern Intel chip don’t qualify, but the DSP on a Super Nintendo does. Array.forEach and Array.map really fucking ought to be parallelisable. Google could use its digital imperialism to force millions of devs to adopt better standards, just by following the spec and not processing keys in a rigid order. Bad code treating it like a simplified for-loop would break. Good code… wouldn’t.
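Here’s the contrast, as a made-up example. Under an engine that actually exploited the spec’s freedom to visit elements out of order, the first map breaks and the second doesn’t:

```ts
const words = ["parallel", "code", "now"];

// Bad: smuggles state across iterations, so the output silently depends on
// the engine visiting index 0, then 1, then 2, in exactly that order.
let previous = "";
const pairs = words.map((w) => {
  const pair = previous + w;
  previous = w;
  return pair;
});

// Good: each output is a pure function of (element, index). Any visiting
// order, including all at once, produces the same array.
const tagged = words.map((w, i) => `${i}:${w.toUpperCase()}`);
```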
We want people to write that kind of code.
Not necessarily code that will run in parallel. Just code that could.
Workload-centric thinking is the only thing that’s going to stop “let’s add a little parallelism, as a treat” from producing months of needless agony. Anything else has to be dissected, warped beyond recognition, and stitched back together, with each step taking more effort than starting over from scratch, and the end result still being slow and unreadable and fragile.