[ACCEPTED] Has TRUE always had a non-zero value? (boolean)
The 0 / non-0 thing your coworker is confused about probably refers to when people use numeric values as return values indicating success, not truth (e.g. in bash scripts and some styles of C/C++).
Using 0 = success allows for much greater precision in specifying causes of failure (e.g. 1 = missing file, 2 = missing limb, and so on).
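A minimal C sketch of that convention; the function name and the specific error codes are hypothetical:
#include <stdio.h>

#define ERR_OK           0  /* success */
#define ERR_MISSING_FILE 1  /* hypothetical failure cause */
#define ERR_MISSING_LIMB 2  /* hypothetical failure cause */

/* Returns 0 on success, a specific non-zero code on failure. */
int load_config(const char *path)
{
    if (path == NULL)
        return ERR_MISSING_FILE;
    /* ... do the real work ... */
    return ERR_OK;
}

int main(void)
{
    int rc = load_config("app.cfg");
    if (rc != 0)            /* 0 = success, so non-zero = failure */
        printf("failed with code %d\n", rc);
    return rc;              /* the process exit code follows the same rule */
}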
As a side note: in Ruby, the only false values are nil and false. 0 is truthy, but not in contrast to other numbers; it's truthy simply because it's an object, and every object other than nil and false is truthy.
It might be in reference to a result code of 0: in most cases, after a process has run, a result code of 0 means, "Hey, everything worked fine, no problems here."
I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":
/* I like my constants better */
#undef TRUE
#define TRUE 1
#undef FALSE
#define FALSE 0
If nothing else, bash shells still use 0 for true, and 1 for false.
Several functions in the C standard library return an 'error code' integer as a result. Since noErr is defined as 0, a quick check can be 'if it's 0, it's OK'. The same convention carried over to a Unix process's 'result code'; that is, an integer that gives some indication of how a given process finished.
In Unix shell scripting, the result code of the command just executed is available, and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else a specific non-success condition.
From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.
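A small C sketch of reading such a result code back from a child command, the way a shell does; the command being run is just an illustrative test:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(void)
{
    /* Run a command and inspect its result code, as a shell would. */
    int status = system("test -e /etc/hosts");
    if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
        printf("command succeeded (result code 0 = TRUE to the shell)\n");
    else
        printf("command failed (non-zero = FALSE to the shell)\n");
    return 0;
}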
On a totally different plane, digital circuits frequently use 'negative logic'. That is, even if 0 volts is called 'binary 0' and some positive value (commonly +5V or +3.3V, but nowadays it's not rare to use +1.8V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.
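A small C sketch of testing a hypothetical active-low status bit; the register value and the mask are made up for illustration:
#include <stdio.h>

#define READY_N_MASK 0x04   /* hypothetical active-low READY pin, bit 2 */

int main(void)
{
    unsigned char port = 0xFB;   /* pretend value read from hardware */
    /* Negative logic: the device is ready when the bit reads 0. */
    if ((port & READY_N_MASK) == 0)
        printf("READY asserted (pin low)\n");
    else
        printf("READY deasserted (pin high)\n");
    return 0;
}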
Note, however, that there's nothing 'ancient' or tied to some 'switching time' about this. Everything I know about it is based on old conventions, but those conventions are totally current and relevant today.
I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error, because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).
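For example, here is one way such a trick goes wrong in C, assuming a header that defines TRUE as 1:
#include <stdio.h>
#include <string.h>

#define TRUE  1
#define FALSE 0

int main(void)
{
    /* strcmp() returns non-zero for "different", which is logically true... */
    int different = strcmp("abc", "abd");   /* a negative value here */
    if (different)
        printf("this branch runs: any non-zero is true\n");
    if (different == TRUE)
        printf("this branch does NOT run: non-zero is not necessarily 1\n");
    return 0;
}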
System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on the condition evaluating to less than, equal to, or greater than zero.
e.g.: IF (I-15) 10,20,10
would test for the condition I == 15, jumping to line 20 if true (the expression evaluates to zero) and to line 10 otherwise.
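On the C side, a minimal sketch of the -1-on-error convention for system calls, using open() and errno:
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* POSIX system calls typically return -1 on error and set errno. */
    int fd = open("/no/such/file", O_RDONLY);
    if (fd == -1)
        printf("open failed: %s\n", strerror(errno));
    else
        close(fd);
    return 0;
}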
Sam is right about the problems of relying on specific knowledge of implementation details.
General rule:
Shells (DOS included) use "0" as "No Error"... not necessarily "true".
Programming languages use non-zero to denote true.
That said, if you're in a language which lets you define TRUE and FALSE, define them and always use the constants.
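A minimal sketch of that advice in C, assuming a pre-C99 compiler without <stdbool.h>:
#include <stdio.h>

#ifndef TRUE
#define TRUE  1          /* defined once, used everywhere */
#define FALSE 0
#endif

int main(void)
{
    int done = FALSE;    /* assign only from the constants */
    done = TRUE;
    if (done)            /* test truthiness directly, not with '== TRUE' */
        printf("done\n");
    return 0;
}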
Even today, in some languages (Ruby, Lisp, ...) 0 is true, because everything except nil is true. More often, 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false, but to do an explicit test. Java requires you to do this.
Instead of this
int x;
/* ... */
x = 0;
if (x) // might be ambiguous
{
}
Make it explicit
if (0 != x)
{
}
I recall doing some VB programming in an Access form where True was -1.
I remember PL/1 had no boolean class. You could create a bit and assign it the result of a boolean expression. Then, to use it, you had to remember that 1 was false and 0 was true.
It's easy to get confused when bash's true/false return statements are the other way around:
$ false; echo $?
1
$ true; echo $?
0
For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.
Unix shells, though, use the opposite convention.
Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).
This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:
if command
then
    # successful
fi
If the command is successful (i.e., returns a zero exit code), the code within the statement is executed. Usually, the command that is used is the [ command, which is an alias for the test command.
The funny thing is that it depends on the language you are working with. In Lua, true == zero internally for performance. The same goes for many syscalls in C.
In the C language, before C++, there was no such thing as a boolean. Conditionals were done by testing ints. Zero meant false and any non-zero meant true. So you could write:
if (2) {
alwaysDoThis();
} else {
neverDoThis();
}
Fortunately, C++ added a dedicated boolean type.
I have heard of and used older compilers where true > 0 and false <= 0.
That's one reason you don't want to use if(pointer) or if(number) to check for zero; they might evaluate to false unexpectedly.
Similarly, I've worked on systems where NULL wasn't zero.
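A minimal C sketch of the explicit style that sidesteps those traps; the variable names are illustrative:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *buf = malloc(16);
    int count = 0;

    /* Spell the comparisons out instead of relying on truthiness. */
    if (buf != NULL)      /* rather than: if (buf)    */
        buf[0] = '\0';
    if (count == 0)       /* rather than: if (!count) */
        printf("empty\n");

    free(buf);
    return 0;
}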
In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.
I can't recall TRUE being 0. 0 is something a C programmer would return to indicate success, though, and that can be confused with TRUE. It's not always 1 either; it can be -1 or just non-zero.
For languages without a built-in boolean type, the only convention that I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if statement will execute the if clause if the conditional expression evaluates to anything other than 0.
I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)
If you are using a language that has a built-in boolean, like C++, then the keywords true and false are part of the language, and you should not rely on how they are actually implemented.
In languages like C there was no boolean value, so you had to define your own. Could they have been working with non-standard BOOL overrides?
DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!
DOS error codes are 0-255, and when tested using the 'errorlevel' syntax they mean anything above or including the specified value, so the following matches 2 and above to the first goto, 1 to the second, and 0 (success) to the final one!
IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR
The SQL Server Database Engine optimizes the storage of bit columns. If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, they are stored as 2 bytes, and so on. The string values TRUE and FALSE can be converted to bit values: TRUE is converted to 1 and FALSE is converted to 0. Converting to bit promotes any non-zero value to 1.
Every language can have 0 as true or false. So stop using numbers; use the word true. Lol. Or 't' and 'f' with 1 byte of storage.