#6 | 11-30-2007, 12:10 AM
madnak, Senior Member (Join Date: Aug 2005; Location: Brooklyn (Red Hook); Posts: 5,271)
Re: Hypothetical CS type question...

More explicitly, each program can be viewed as a sequence of instructions. From this view it's easy to see that the number of possible sequences grows as the number of instructions in the sequence grows. Once the instruction count exceeds a certain constant (say, the number of instructions that would take one year to execute), the number of possible programs with an instruction count above that constant grows geometrically relative to the number of possible programs with an instruction count at or below it. (We can view the program in terms of "ticks" or time units instead of instructions - an instruction that takes 4 time units is equivalent to any two instructions that take 2 time units each. The trend doesn't change - the number of possible sequences still grows with the number of allowed time units.)
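To put toy numbers on that (a sketch of my own; the instruction-set size of 16 and the cutoff of 20 are arbitrary stand-ins, not anything from the problem):

[code]
# With k possible instructions there are k**n distinct sequences of length n,
# so sequences longer than any fixed cutoff vastly outnumber those at or below it.
k = 16            # assumed size of the instruction set (arbitrary)
cutoff = 20       # stand-in for "one year's worth of instructions"
max_len = 25      # look a little past the cutoff

at_or_below = sum(k**n for n in range(1, cutoff + 1))
above = sum(k**n for n in range(cutoff + 1, max_len + 1))

print(f"sequences of length <= {cutoff}: {at_or_below:.2e}")
print(f"sequences of length {cutoff + 1}..{max_len}: {above:.2e}")
print(f"ratio above / at-or-below: {above / at_or_below:.0f}")
[/code]

Every extra allowed instruction (or tick) multiplies the count by k, which is the "geometric" part.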

The number of instructions executed must be finite, because the program eventually has to spit something out. But even without infinite loops, it's trivial to see that a short program can execute an arbitrarily large number of instructions. In other words, a program doesn't just execute its instructions once in order - it can repeat instructions, skip instructions, and otherwise mess up the straight counting. Still, every instruction must be executed at least once due to the constraints of the problem. So a sufficiently large program must execute a sufficiently large number of instructions, and even a much smaller program can execute that many.
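For instance (a deliberately trivial sketch of my own, not the program from the problem), a program whose static text is a couple of lines can execute on the order of a billion loop iterations and still halt:

[code]
# A few lines of static code, yet it performs about 10**9 loop iterations
# (each of which is several machine instructions) before it terminates
# and "spits something out".
total = 0
for i in range(10**9):
    total += i
print(total)
[/code]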

Actually, given that the program merely has to produce a given output, every program can be seen as a specific sequence of executed instructions with a compression algorithm potentially applied to it - the static code (loops, jumps) is the compressed form of the full execution trace. So the average number of instructions executed will be way higher than the number that fits into the static program!
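Here's one way to see the compression point (a rough sketch; counting traced events as a stand-in for executed instructions is my own approximation): the executed-instruction trace of even a tiny loop is enormously longer than the program text that produces it.

[code]
import sys

source = "for i in range(10**6):\n    n = i + 1\n"   # ~40 characters of static program

steps = 0
def tracer(frame, event, arg):
    # Count every traced event as a crude stand-in for one executed instruction.
    global steps
    steps += 1
    return tracer

sys.settrace(tracer)
exec(compile(source, "<loop>", "exec"))
sys.settrace(None)

print(len(source), "characters of static program text")
print(steps, "traced execution events")   # roughly a couple million
[/code]

The loop is effectively a compressed encoding of a million-step-plus instruction sequence.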

The time it takes to execute n instructions will always be greater than or equal to the amount of time it would take to execute the fastest instruction n times.
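Plugging in toy numbers (mine, not from the problem: assume the fastest instruction takes 1 nanosecond, i.e. roughly a 1 GHz machine), that lower bound gives a concrete one-year cutoff:

[code]
fastest_instruction_s = 1e-9              # assumed best case: 1 ns per instruction
seconds_per_year = 365 * 24 * 3600

# n instructions take at least n * fastest_instruction_s seconds, so any program
# executing more than this many instructions must run for over a year.
one_year_cutoff = seconds_per_year / fastest_instruction_s
print(f"{one_year_cutoff:.2e} instructions")   # about 3.15e16
[/code]

So under that assumption, "runs longer than a year" just means "executes more than a few times 10**16 instructions."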

Therefore, given a sufficiently large hard drive (and let's ignore that the memory management needed in such a large program will only increase the running time), most programs will run longer than 1 year straight. I can't prove that a 200GB hard disk is big enough, but we all know it is LDO.
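And as a final back-of-the-envelope (again my own numbers: calling anything under 1 MB a "small" program), counting programs as raw bit strings shows how lopsided the 200GB case is:

[code]
def log2_count_up_to(n_bytes):
    # The number of distinct bit strings of length at most 8*n_bytes is
    # 2**(8*n_bytes + 1) - 1, so its log base 2 is roughly 8*n_bytes + 1.
    return 8 * n_bytes + 1

small = log2_count_up_to(10**6)            # assumed "small program" cutoff: 1 MB
full = log2_count_up_to(200 * 10**9)       # programs that fit on the 200GB disk

print(f"log2(# programs up to 1 MB):   about {small:.2e}")
print(f"log2(# programs up to 200 GB): about {full:.2e}")
print(f"fraction that are small: about 2**(-{full - small:.2e})")
[/code]

Programs small enough to plausibly finish fast are about a 2**(-1.6 * 10**12) sliver of everything the disk can hold.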