Quote: If you optimized it, the question isn't how well the compiler optimized it further, but how much better 32 bit code is than 16 bit code. . .
Hey aga...I'm not sure if you were talking to me or not, but...
I, too, used to think, "if you write good code, there is nothing left to optimize!" However, I no longer think this way. If you talk to really sharp, really experienced C++ programmers, they (essentially) *all* say: don't bother trying to optimize your code up front. Make it work first. Then, if it is too slow, profile it and try to improve your algorithms. It is the rare case where the little tweaks make a difference... IF you have a good *optimizing* compiler.
That being said, here are some small optimizations I've found...
in QB,
if x then...
is faster than
if x > 0 then....
This same optimization exists in C++, and I use it often... for example,
for y = 1 to 100 step 1
...
next y
in c++
for (int y = 0; y < 100; ++y){
...
}
can be improved on my compiler by:
for (int y = 100; y; --y){
...
}
however, such constructs will only confuse other people who read your code, the speed boost is modest, and it could be eliminated by future compilers... as it is, you will look like a bozo down the road if you insist on writing it the "fast" way...
Anyway... after spending a bunch of time testing various things, I'm starting to understand/agree with the gurus' advice: with modern, fast, resource-rich machines and optimizing compilers, make it work first, then tweak only as needed...
However, these guys are professional programmers, not hobbyists like we are, and since they are coding from a strictly pragmatic perspective, the aesthetic of maxed-out code may not appeal to them... still, their view is valid.
Other C++ optimizations I've found...
*variable
is slightly faster than
variable[0]
but... these things are compiler-dependent. If you refuse to index arrays for speed and instead dereference pointers, a future optimizing compiler may steal your thunder (and your 4% speed gain over indexed arrays) and make you look like a luddite!!!
Similarly, choosing arrays over vectors, or char arrays over strings, may look just as dated in the not-too-distant future. In the C++ world, "make it work... then, if needed, make it fast" is sound advice if you are trying to be productive rather than reach nirvana. Really, if you shave a microsecond off of a 20-microsecond function that is only called once in a program... whoopdeedo!!! As computers get faster, your mega improvement amounts to squat... and if, for that gain, you have resorted to using a char array instead of a std::string... then the loser isn't the user, but rather you and future maintainers of your code...
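For what it's worth, here's a side-by-side sketch (function names are my own invention) of why the std::string version is the one your future maintainers will thank you for, whatever the microbenchmarks say:

```cpp
#include <cstring>
#include <string>

// Raw char buffer: the caller must size the buffer correctly and you
// must remember strncpy/strncat's truncation-and-terminator quirks.
void makeGreetingC(char* out, std::size_t outSize, const char* name) {
    std::strncpy(out, "Hello, ", outSize - 1);
    out[outSize - 1] = '\0';
    std::strncat(out, name, outSize - std::strlen(out) - 1);
}

// std::string: no manual sizing, no truncation bugs, one readable line.
std::string makeGreeting(const std::string& name) {
    return "Hello, " + name;
}
```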
Sorry for the rant... but I seem to be on a roll tonight and can't stop... Anyway, aga, optimizers *are* powerful and worthwhile, and while they won't fix sloppy code, they do make some micro-tweaks irrelevant...
Just think... with a good optimizing compiler, a good-quality QB sort routine would be just as fast as the same algorithm implemented in C. Yet we know that's not the case... You wrote some QB quicksort routines some time back, and I did the same with C++. The best I could do was about 5% *worse* than the C++ library sort()... and my method was about 20x faster than your QB method... I tweaked the algorithm as far as I could, and still the optimized-compiled version was about 4 times faster than the non-optimized one... and even my slow version is more than 10x faster than the QB-compiled version...
One more thing... if you are programming in a static universe (e.g. with QB), then it does make sense to tweak code for the compiler at hand, since you *know* that some smarter compiler/implementation is *not* in the works...
Cheers...don't know why I'm rambling so much tonight... :???: