This is an automated archive made by the Lemmit Bot.
The original was posted on /r/programminglanguages by /u/nngnna on 2023-10-27 11:11:30.
I was teaching myself a bit of low-level logic recently, and I was thinking about the fact that checking the sign is much cheaper than checking for zero (testing one bit vs. ORing all of the bits). Today that's bits-wise bytes-foolish, but I assumed the convention that if(x) ≡ if(x != 0) is very old.
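Roughly what I mean, sketched in C (truthy_sign and truthy_zero are just made-up helper names for illustration, not real syntax from any language):

```c
#include <stdint.h>
#include <stdio.h>

/* Counterfactual rule: "negative means false" only has to look at the sign
 * bit. Conventional rule: "zero means false" has to consider every bit of
 * the word. */
static int truthy_sign(int32_t x) { return ((uint32_t)x >> 31) == 0; } /* true iff sign bit is clear */
static int truthy_zero(int32_t x) { return x != 0; }                   /* the usual convention */

int main(void) {
    int32_t samples[] = { -5, -1, 0, 1, 42 };
    for (size_t i = 0; i < sizeof samples / sizeof *samples; i++) {
        printf("%3d  sign-rule: %d  zero-rule: %d\n",
               (int)samples[i], truthy_sign(samples[i]), truthy_zero(samples[i]));
    }
    return 0;
}
```

Note that under the sign rule, zero comes out true, which is exactly the divergence I'm wondering about.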
In addition, I feel like negatives being false would be much more intuitive to a non-programmer (i.e. someone not biased by experience). I did blind-ask one friend and it checked out, but that's obviously not scientific.
With standard floats the situation is similar. But it's unsigned integers where my little counterfactual is least convincing. To have a false unsigned value at all, we must choose zero to be false (if(x) ≡ if(x > 0)), so the check becomes slightly less basic rather than a little more.
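Spelling the unsigned case out (again just a made-up helper as a sketch):

```c
#include <stdint.h>

/* With unsigned values there is no sign bit to test, so the only way to have
 * a false value at all is to single out zero -- and for unsigned x,
 * (x > 0) is the same test as (x != 0) anyway. */
static int truthy_unsigned(uint32_t x) { return x > 0; }
```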
Indeed, my guess would be that the normal convention just started from extending the bit/boolean to unsigned integers, and was then extended to signed integers in the minimal way. This happened when assembly and typelessness were the norm, and making negative numbers false in your high-level language would have been a completely pointless headache for the compiler that no one was asking for.
Thoughts?