Okay, so I'm writing a program in assembly language to calculate a simple frame delay.
It uses the real time clock, which ticks 18.2 times per second (a period of roughly 1/18 of a second).
The code simply counts how many times the CPU can execute a loop before the RTC ticks again (the loop begins immediately after a tick).
The CPU architecture is 16-bit, so let's just say variable X holds the units place and variable Y holds the 64K (65,536s) place.
Every time X reaches 65,535, it is reinitialized to 0, and Y (which begins at 0) is incremented.
The maximum count, (64K^2)-1, is a 32-bit number; this particular CPU architecture provides easy instructions to multiply and divide 32-bit numbers.
However, adding and subtracting 32-bit numbers is a little convoluted, so I'm trying to avoid doing that.
A delay of 1/18th of a second obviously locks the frame rate at 18 fps, which isn't very useful.
So the code takes a constant as input: a simple frame rate multiplier.
If passed 2, the code divides the 1/18 s delay by 2, resulting in a delay that lasts 1/36th of a second (36 fps).
This is all well and good, but the problem is that frame rates of both 36 and 54 fps (1/(18*3)) don't look very good at standard monitor refresh rates (screen tearing).
So, what I need to do is derive a 1/30th-of-a-second delay from the 1/18th-of-a-second tick, using only division and multiplication operations (factors and multiples), in the fewest operations.
The division instruction returns only a whole number and a remainder; I can't use floating-point numbers.