So I actually had a viewer comment and ask: James, could you cover why this "not not x" thing is faster than bool(x)? When I first read this comment, I thought to myself, there's no way, right? I mean, they're computing the same thing. not not x is just going to call the boolean conversion operator for x, just like bool does. How could one be any faster or slower than the other? But I thought about it for a little while and decided I would at least test it, so I wrote up a little script, and here's what I found. Quite surprisingly, bool(x) took 1.08 seconds for 10 million trials versus only 0.68 seconds for not not x. You can see the testing code I used here: I run 10 million trials and use the built-in timeit.timeit to time each of the functions (a rough sketch of the setup follows below). Since I'm using 10 million trials, that's why these numbers are in seconds and not in tenths of a microsecond. No matter how many times I run it, and I've tried changing the order of the tests and every other variation I could think of, bool(x) always takes much longer than not not x. So how can that be, and why? The first thing to note is that bool, like any other name, is a variable, and you have to look that variable up, in this case in the global scope.
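Here is a minimal sketch of that kind of timing setup, assuming timeit and 10 million trials; the function names and the wrapping lambdas are my own reconstruction, not the exact script from the video, so the absolute numbers will differ.

```python
import timeit

def bool_convert(x):
    return bool(x)          # looks up the global name bool, then calls it

def not_not_convert(x):
    return not not x        # no name lookup, no function call

N = 10_000_000  # 10 million trials

print("bool(x):  ", timeit.timeit(lambda: bool_convert(1), number=N))
print("not not x:", timeit.timeit(lambda: not_not_convert(1), number=N))
```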

So this function involves looking up the name bool, whereas this function doesn't have to look anything up: it's always going to call the boolean conversion operator, because there's no way to override not like there is a way to override bool. For instance, I could just say bool = str, and that changes what the name bool refers to; now bool is pointing to the built-in str instead of to the built-in bool. Because the value of bool can change, it has to be looked up at runtime. So I went ahead and added this y = bool line into the conversion function. It doesn't actually do anything as far as the return value is concerned; it's just there to put the two functions on a level playing field. In this case, this function now also has to look up the name bool, but, as we see from the runtimes, not not x is still faster. So it's not the lookup of the name bool that's causing the slowdown; it's something else. For the next test I'm doing something very similar: I'm still looking up the name bool, but now I'm also actually calling the bool function. I'm just default-constructing a bool, which gives me False, storing it, not doing anything with it, and still returning not not x (both variants are sketched in the code below). In this case, we can see that the times are very closely aligned, just 0.93 and 0.94 seconds.
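To make this concrete, here is a rough sketch of the shadowing point and of the two level-playing-field variants just described; the names convert_with_lookup and convert_with_call are mine, not from the video.

```python
# Rebinding the name bool is legal, which is why it must be looked up at runtime:
bool = str           # the global name bool now refers to the built-in str
print(bool(0))       # prints '0' (a string), not False
del bool             # remove the shadowing binding; the built-in is visible again

def convert_with_lookup(x):
    y = bool         # pays for the same global name lookup that bool(x) does
    return not not x

def convert_with_call(x):
    y = bool()       # name lookup *and* an actual call; bool() is just False
    return not not x
```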

What this is telling me is essentially that the slowdown is caused by how slow it is to actually call a function in Python. not not x doesn't go through the mechanism of calling a function at all; it has a special fast path that goes directly to the boolean conversion, rather than calling the bool function, which then does the conversion. That's why adding the unnecessary name lookup and function call basically makes the times equal again. We can confirm our intuition by plugging the functions into Compiler Explorer and looking at the output (you can also reproduce this locally with the dis module, sketched below). The top set of four instructions is the bool-convert function once it's been compiled into bytecode. As you can see, there's the LOAD_GLOBAL for the name bool, so it has to load bool, then it loads the local variable x, then it calls the function it just loaded, and then it returns. In the not-not conversion, by contrast, there's a special instruction called UNARY_NOT which appears twice, instead of any function call. Just for completeness, here's what you see when you throw back in the redundant read: when I hover my mouse over this instruction, it highlights the three instructions on the right, which correspond to loading the global bool, calling that function, and storing the result in the variable y. In particular, you can see that even though the variable y isn't used later in the function, and it could theoretically just be removed and optimized away, those kinds of optimizations generally don't happen when you compile things into Python bytecode.
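For readers following along without Compiler Explorer, a rough way to see the same thing locally is the standard-library dis module. Exact opcode names vary across CPython versions (for example, CALL_FUNCTION became CALL in 3.11), so treat the comments as a sketch rather than exact output.

```python
import dis

def bool_convert(x):
    return bool(x)

def not_not_convert(x):
    return not not x

# Roughly: LOAD_GLOBAL (bool), LOAD_FAST (x), a CALL instruction, RETURN_VALUE
dis.dis(bool_convert)

# Roughly: LOAD_FAST (x), UNARY_NOT, UNARY_NOT, RETURN_VALUE -- no lookup, no call
dis.dis(not_not_convert)
```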

There are some situations where things get computed ahead of time, like if you add two constant integers together, but most of the time you will actually see redundant operations, even if there's basically no chance they could ever matter in the actual code. So that's pretty cool: we can explain exactly what the difference is between not not x and bool(x), and where the difference in time comes from. But for now let's just get rid of this redundant read and compare a few other things. Down here I also have: how long does it take to call the double-underscore __bool__ method of the variable? And I'm also comparing against just a no-op (a sketch of these variants follows below). Looking at the times, of course we find that the no-op, a function that just passes and does nothing, is the fastest, followed by not not x, and then the method call of __bool__ and the call of the global bool take about the same amount of time. I did run this a few times, and it seemed like most of the time the method call was actually even slower, but just by a little bit. Most of the time, though, you don't really care about converting x to a bool; what you really want is just to use it in an if statement. So let's also compare against doing just if x versus if not not x.
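Before moving on to the if comparison, here is a rough sketch of those extra conversion variants (the __bool__ method call and the no-op baseline); again the names and the timing harness are my own, so expect the relative ordering rather than the exact numbers to match.

```python
import timeit

def not_not_convert(x):
    return not not x

def dunder_convert(x):
    return x.__bool__()      # explicit method-call route to the conversion

def no_op(x):
    pass                     # baseline: nothing but the function-call overhead

N = 10_000_000
for func in (no_op, not_not_convert, dunder_convert):
    print(func.__name__, timeit.timeit(lambda: func(1), number=N))
```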

Thankfully, we see that in this case both if x and if not not x are on par with, just a little bit slower than, a no-op function call. So if x and if not not x are not calling the bool function as a function and then doing stuff; there's definitely a fast path that converts x to a bool in a different way, and I actually have a whole video on that if you want to check it out. Additionally, the times for if x and if not not x are very close, so I wonder if they're doing anything different at all. Interestingly, when we plug the example into Compiler Explorer, we see that an optimization is actually happening here (a local reproduction is sketched below). if x is converted into these four instructions: a load, then a POP_JUMP_IF_FALSE, then a load and a return. if not not x, comparing line for line, is doing the exact same thing. That means there is literally no difference in the bytecode between if x and if not not x. So please don't write if not not x; that's just going to confuse people. Just write if x. Speaking of not confusing people: suppose you're in a situation where you actually do want to call bool(x). You're not just going to use x in an if statement immediately; you actually want to store the value of the boolean conversion. In that case, is it worth it to use not not x and potentially confuse your readers instead of just calling bool(x)? I'm going to go ahead and say no, you should always just be using bool(x).
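Going back to the if x versus if not not x bytecode for a moment, here is a rough local reproduction of what Compiler Explorer shows; the function names are mine, and I'm assuming a recent CPython, where a not feeding a conditional jump is folded into the jump direction itself.

```python
import dis

def branch_plain(x):
    if x:
        return True
    return False

def branch_not_not(x):
    if not not x:
        return True
    return False

# Both should compile to the same conditional jump (roughly LOAD_FAST x,
# POP_JUMP_IF_FALSE, ...): the double negation is folded away when it feeds
# a branch, so no UNARY_NOT instructions are left in the bytecode.
dis.dis(branch_plain)
dis.dis(branch_not_not)
```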

Although the difference in timing between not not x and bool(x) may be significant in the statistical sense, it's definitely not significant in the real-world sense. Remember, this was for 10 million iterations, and we still only got a pretty small difference over those 10 million iterations. So unless you're counting tenths of a microsecond, you really shouldn't care about whether you use not not x or bool(x); just use the one that's more readable. And if you are counting tenths of a microsecond, then you probably shouldn't be doing whatever you're doing in Python. Well, just because the video told you to keep doing what you were probably already doing doesn't mean it wasn't interesting, right? In any case, thank you as always to my patrons and donors.

https://www.youtube.com/watch?v=9gEX7jesV34