Fourier series f(x)=x real world issue

jpigg55 · New member · Joined: Nov 28, 2019 · Messages: 12
I have a real-world application issue involving a Fourier series that (I think) is either the cause of, or the solution to, my problem.

Background:

I have a set of absolute-position linear encoders. Through O-scope testing and data runs, I've found that the data word output from these scales consists of 3 pure-binary-encoded tracks. The readhead output is a 25-bit word that is a combination of the 3 binary track values, transmitted LSB first.
The 3 tracks, henceforth referred to as Coarse, Mid, & Fine, are 8-bit, 8-bit, & 9-bit in length. When graphed, the binary values of each track form 3 sawtooth wave patterns such that one complete cycle of the Coarse track is 16 cycles of the Mid track and 512 cycles of the Fine track.
As far as I can tell, the Mid track is offset from the other 2 and is most likely just used for noise correction of the other 2 tracks, with the final output position being a combination of the Coarse and Fine track values. In total, this equates to 131,072 possible unique position IDs over the maximum possible encoder length.
Each encoder doesn't necessarily start at the "Zero" binary position and can be of any length up to the maximum possible before a repeat happens, which is what makes the positions absolute.
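To make the word layout concrete, here is a minimal sketch of splitting such a 25-bit word into its three tracks. The field order and placement (Fine in the low 9 bits, Mid in the next 8, Coarse in the high 8) are an assumption purely for illustration; the actual layout of the readhead word may differ.

```python
def split_tracks(word25):
    """Split an assumed 25-bit position word into (coarse, mid, fine).
    Field order is hypothetical: fine = bits 0-8, mid = 9-16, coarse = 17-24."""
    fine = word25 & 0x1FF            # low 9 bits
    mid = (word25 >> 9) & 0xFF       # next 8 bits
    coarse = (word25 >> 17) & 0xFF   # high 8 bits
    return coarse, mid, fine

# Pack known track values, then recover them:
w = (0x5A << 17) | (0x3C << 9) | 0x1F2
print(split_tracks(w))  # (90, 60, 498)
```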

Issue:

My problem is trying to find the correct equation that will linearize across the Fourier series “Jump”.
If the values are treated as signed integers, there is a jump from the maximum positive value to the maximum negative value (or vice-versa when traveling in the opposite direction), causing a large jump in position at the point where the Coarse track value goes from positive to negative, i.e. from 01111111 (127) to 10000000 (−128 as a signed byte).
If treated as an unsigned integer, the same jump happens, but 180 degrees out of phase, when the Coarse track binary value goes from 11111111 (255) to 00000000 (0).

Since both of these points can and do reside on a single encoder, coding each one separately to be either Signed or Unsigned won’t work.
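The two jump points can be demonstrated in miniature on a single 8-bit track (a quick sketch of the behavior described above, not any particular decoder's code):

```python
def as_signed8(v):
    """Interpret an 8-bit value as two's-complement signed."""
    return v - 256 if v >= 128 else v

# Signed interpretation: discontinuity at the 127 -> 128 boundary.
print(as_signed8(127), as_signed8(128))  # 127 -128

# Unsigned interpretation: discontinuity at the 255 -> 0 boundary.
print((255 + 1) % 256)  # 0
```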
I apologize if this question should reside elsewhere other than Calculus, but I thought since the signals are a Fourier sawtooth series, it belonged here.
 
A friend of mine who's been helping had this to add:

We are looking to solve the equation:

A + B + C = Z
where A is unique, B and C are not unique, and Z is unique.
A and B are some multiple of 256. C is some multiple of 512.

The problem comes in when trying to calculate a unique value for Z.

It has to be a derivative problem; I just don't know how to do it.
A corresponds to the Coarse track value, B to the Mid, and C to the Fine. Z is the absolute position.
The values of A, B and C are given, it's trying to get a unique value for Z that is the problem.


I think he may have misspoken about needing to get a unique value for “Z”.
His solution to the combined track value, or "Z", outputs 131,072 unique values that, when coded as signed integers, form a linear function from -65,536 to +65,535, or -π to π if you will.
The problem rears its head when an encoder starts out with, say, a combined track value of -150, goes up through +65,535, and recycles around, ending at -60,726.
 
I figured it out.
It was just a matter of using a large enough phase shift value on the Coarse track.
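A sketch of what that "phase shift" fix might look like: add a per-scale constant to the Coarse value, modulo 256, so the range this particular scale actually uses no longer straddles either wrap boundary. The SHIFT value here is a hypothetical per-scale calibration constant, chosen purely for illustration.

```python
SHIFT = 100  # hypothetical per-scale calibration constant

def shifted_coarse(raw_coarse):
    """Offset the 8-bit Coarse value so this scale's span avoids the wrap."""
    return (raw_coarse + SHIFT) % 256

# A scale whose raw Coarse values span 240..255 then 0..55 (wrapping at 256)
# becomes the contiguous, monotonic range 84..155 after the shift:
print(shifted_coarse(240), shifted_coarse(255), shifted_coarse(0), shifted_coarse(55))
# 84 99 100 155
```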
 
I did a quick diagram of the bit ranges of the encoders...

Code:
         LSB                                                MSB

X         0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
fine     |-------------------------|
medium                     |----------------------|
coarse                                 |----------------------|

Domain of X, 0 <= X < 2^18 (262,144)

Each encoder could have an error in its reading. In the simple case this would be a simple offset. (However the offset could actually change as X increases.)

Code:
Scale    Reading   Error   Reading in terms of X (and errors)
coarse   Xc        Ec      mod(floor((X+Ec) / 1024), 256)
medium   Xm        Em      mod(floor((X+Em) / 64), 256)
fine     Xf        Ef      mod(X+Ef, 512)
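The table can be modelled directly. This "perfect device" function maps a position X (0 <= X < 2^18) to the three track readings, with the offsets Ec, Em, Ef defaulting to zero:

```python
def model(X, Ec=0, Em=0, Ef=0):
    """Ideal device model per the table: X -> (coarse, mid, fine)."""
    coarse = ((X + Ec) // 1024) % 256
    mid = ((X + Em) // 64) % 256
    fine = (X + Ef) % 512
    return coarse, mid, fine

print(model(0))     # (0, 0, 0)
print(model(1024))  # (1, 16, 0)
```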

Problem:
Given a set of readings for Xc,Xm and Xf over the full domain of X, find a robust method to find X from any of the individual readings.

--

Well done for figuring out an answer. I guess your method was this, because you mention Fourier transforms:-
  1. Determine the errors {Ec,Em,Ef} individually, assuming that all errors are caused by an offset only. To do this, perform a Fourier transform of, say, the Xm readings over the whole range. Look at the phase value of the fundamental, and take the difference of this to the value expected from an ideal saw-tooth wave. This phase difference determines the error offset.
  2. Splice together the "phase adjusted" results to form X, perhaps checking that the overlapping bit regions are as expected?
This method seems good to me.
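As a sketch of step 1 (not the author's actual code), the offset of one track can be recovered from the phase of its fundamental. This pure-Python version computes just the single DFT bin it needs; it assumes the only error is a constant offset and that the sample count is a whole number of cycles:

```python
import cmath
import math

def dft_bin(samples, k):
    """One DFT bin: sum of x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(samples)
    return sum(x * cmath.exp(-2j * math.pi * k * n / N)
               for n, x in enumerate(samples))

def estimate_offset(readings, period):
    """Estimate E, assuming readings[n] = mod(n + E, period), by comparing
    the fundamental's phase against that of an ideal sawtooth."""
    N = len(readings)
    k = N // period                        # fundamental bin: one per cycle
    ideal = [n % period for n in range(N)]
    dphi = cmath.phase(dft_bin(readings, k)) - cmath.phase(dft_bin(ideal, k))
    return round(dphi * period / (2 * math.pi)) % period

# A mid-track-like sawtooth, period 64, with an unknown offset of 5:
readings = [(n + 5) % 64 for n in range(1024)]
print(estimate_offset(readings, 64))  # 5
```

The final `% period` absorbs any 2π wrap in the phase difference, so large offsets are recovered correctly as well.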

I suspect there's a better method that just uses the (extensive) overlaps in the bit ranges. I'll have a think about that later!
 
The 3 tracks, hence referred to as Coarse, Mid, & Fine, are 8-bit, 8-bit, & 9-bit length. When graphed, the binary values of each track form 3 sawtooth wave patterns such that one complete cycle of the Coarse track is 16 cycles of the Mid track, and 512 cycles of the Fine track.

BTW: I based my maximum value of X (and my diagram) on your statement that there are 512 cycles of the fine 9 bit number... this equates to 512 * 2^9 = 262,144. This doesn't match your later statement that there are 131,072 possible positions. Which is right, or have I made a mistake in understanding the problem?
 
The reason for mentioning the Fourier transform was that via data collection from the encoders, the graph of all 3 track values and combined position formed sawtooth wave patterns. The Fourier transform was the only equation I found that explained them.

As far as the max possible positions, I probably didn't explain enough or give enough info.
It comes down to the resolution of each track. Each tick of the Coarse track was 16 Mid track cycles, and each Mid track cycle consisted of 32 Fine track cycles. So 256 * 16 * 32 = 131,072.

I realized that since all encoders didn't start at value -π, and since the max length sold was 36" of a possible 51.6" (so less than -π to π), it was just a matter of shifting the Coarse track value so that it fit within a single Fourier period rather than straddling two.
 
Each tick of the Coarse track was 16 Mid track cycles, and each Mid track cycle consisted of 32 Fine track cycles. So 256 * 16 * 32 = 131,072.

I assume that "tick of the coarse track" means a complete cycle of the coarse track.

So in a complete cycle of the coarse track there are 16 * 32 cycles of the fine track. However, the fine track is 9 bits wide, so there are 2^9 or 512 values in a single cycle of the fine track, not 256. Therefore I think your calculation should be 16 * 32 * 512?

Intuitively I think there must be a nice way to get the position without using Fourier. Also it should be possible to get an "error" indication if the tracks should slip relative to each other (or if one of the sensors malfunctions). I couldn't find anything about this on the internet, but I'm probably not using the right search terms. I'll get some time to think about this later. If I come up with anything I'll let you know!
 
Sorry, bad terminology. By "Tick", I meant each increment of the Coarse track.
So when the encoder readhead traveled from 1 to 2 on the Coarse track, the Mid track would cycle 16 times and the Fine track would cycle 32 times. Hence on a theoretical max length encoder (none were made that long that I know of), the Coarse track would increment from 0 to 255 with 16 Mid cycles and 32 Fine track cycles for each incremental change of the Coarse track.
 
I'm mindful that in post #3 you seem to have solved the problem, so please say if you'd prefer to end the discussions here.

Your last post #8 confused me; it seems to imply "option A" in the diagram below. Previously I had thought that "option B" was the situation...

interpretations.png

If option A is true, then I can't see how the decoder could possibly determine the current distance - unless there is a logic circuit that counts the number of cycles of the mid track as the sensor moves (and this wouldn't be able to start counting until the sensor had moved past a transition point on the coarse track).

Any extra background info, like the manufacturer and model number might be useful (or is it very old?)

From the maths point of view it seems quite interesting how this device works. Reminds me of the markings on a rule, coarse cm and then fine mm markings. But it seems strange that this encoder is organised in this way. One "big width" output would be much easier!
 
Your "Option A" shows how the encoder Coarse and Mid tracks relate to one another. 16 Mid track complete cycles for each increment of the Coarse track.
"Option B" shows the Coarse track recycle point on the far right which was causing the output position to "Jump".
For your option 'B', it shows how the Coarse track value would recycle as an unsigned integer. As a signed integer, the recycle jump would happen at the half-way point where the value goes from 127 to 128 since, as binary values, 127 is 01111111 and 128 is 10000000, with the MSB being the sign bit: '0' for positive and '1' for negative.
So when the Coarse track value is combined with the Fine track, the result is a 17-bit number, the first 8 bits being the Coarse value and the last 9 being the Fine. The end result was that the output value went from +65,535 to -65,536 or, if treated as unsigned, the Coarse track goes from 11111111 to 00000000, making the combined value jump from 131,071 to 0.
Same jump, just at a different position along the track. My problem was in trying to use a single Arduino to take the inputs from 3 different encoders, with one having the 255-to-0 Coarse track recycle transition, another having the 127-to-128 mid-track transition, and the limitation that the coding had to treat all of them as either signed or unsigned.
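The combine-and-wrap behavior described above can be made concrete with a small sketch (assuming, per the description, that Coarse occupies the high 8 bits of the 17-bit result and Fine the low 9):

```python
def combine(coarse, fine):
    """Pack 8-bit Coarse and 9-bit Fine into a 17-bit position value."""
    return (coarse << 9) | (fine & 0x1FF)

def as_signed17(v):
    """Interpret a 17-bit value as two's-complement signed (bit 16 = sign)."""
    return v - (1 << 17) if v & (1 << 16) else v

print(combine(255, 511))               # 131071  (unsigned maximum)
print(as_signed17(combine(127, 511)))  # 65535   (signed maximum)
print(as_signed17(combine(128, 0)))    # -65536  (signed jump point)
```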

The encoders are "iGaging Absolute Origin" linear encoder scales for a DRO. They each come with their own OEM display unit, but lack the functionality of a 3-axis Digital Readout.
Myself and, as it turned out, many others had purchased these encoder scales with the intent of using them with an App based DRO system called "Touch DRO" (https://www.yuriystoys.com/) only to find out these scales weren't compatible with the decoding methodology it was created to use with incremental type encoders. Contacting the retailer was of no help and he refused to give contact info for the manufacturer (China), so a friend and I decided to try decoding the algorithm the scales used to make them usable with the Touch DRO App.
Shortly after we embarked on this quest, iGaging quit using the coding scheme and started using another along with stopping production and sales of these encoders about 4 years ago.
These encoders use 2 processor chips, one in the readhead to scan and process the signals from the encoder tape, format it, and transmit it to the display processor in the form of a 52-bit position Word (with only 25 of the bits used for the position). The display processor took that info and used its algorithm to filter out the track "Noise" and convert it to a decimal output to the display screen.

FYI, if you're unfamiliar with Absolute vs Incremental encoders: an Incremental encoder just counts pulses from 2 tracks and uses a lead/lag methodology to determine direction of travel. The advantage is that this type is cheap to make and can be as long as you desire. The disadvantage is that if power is interrupted at all, it loses track of its position and has to be reset to some reference position.
Absolute, on the other hand, are more expensive and complicated to manufacture and the maximum length is determined by the number of tracks used along with the resolution of the tracks. The big advantage is that even with a loss of power, when re-energized it knows exactly where it's at since every encoded position has its own "Unique" value, i.e. only one spot along the length where the track values would be 5, 67, 502 for Coarse, Mid, Fine values, for example.
 
Very interesting indeed. Thanks for the back story. You've done a great job researching the data that the read head outputs. And an even better job by coming up with a solution on the Arduino.

If you have any problems with your algorithm then I would recommend using the following strategy to refine your solution:-
  1. Produce a model of the device. I'd write a function that takes a position value X as input, and turns it into C,M,F {coarse, mid, fine} values (exactly modelling a "perfect" version of the device).
  2. I think you already have an inverse function, that takes C,M,F as input and outputs position X. This inverse function can be called numerous times with adjacent CMF values, and it produces valid output after a change in the C value (due to the incremental encoding strategy). Therefore the inverse function needs to store some knowledge of previous input(s).
  3. Test that the function and inverse function work together, robustly, yielding the same X that was input, over a set of different situations:- different increments of X (different speed of movement); and different C,M, or F errors.
This would be a great way to verify, and perhaps improve, your solution.
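The three-step strategy above can be sketched as follows. This is a hedged illustration, not the poster's Arduino code: the errors are set to zero, the inverse is stateless (unlike the real decoder, which apparently needs memory of previous inputs), and it takes bit 9 of X from the Mid track, which is an assumption based on the earlier bit-range diagram.

```python
def model(X):
    """Perfect device: position X (0 <= X < 2**18) -> (coarse, mid, fine)."""
    return (X // 1024) % 256, (X // 64) % 256, X % 512

def inverse(coarse, mid, fine):
    """Recover X: Coarse supplies bits 10-17, Mid's bit 3 supplies bit 9,
    Fine supplies bits 0-8."""
    return (coarse << 10) | (((mid >> 3) & 1) << 9) | fine

def round_trip_ok(step):
    """Check inverse(model(X)) == X at the given increment of X."""
    return all(inverse(*model(X)) == X for X in range(0, 1 << 18, step))

print(all(round_trip_ok(s) for s in (7, 64, 1023)))  # True
```

Injecting nonzero Ec/Em/Ef into `model` and checking where the round trip breaks would be the natural next step for testing robustness.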
 
I thought we’d found the answer to this problem, but it didn’t completely solve the issue, so I’m back looking for an answer. I’ll state it another way to try and better clarify the problem.

As stated before, I have a set of Absolute linear encoders that use 3 tracks encoded with a binary sequence which, when combined, output a unique ID or value for every spot along their length.

The problem arises from the Coarse track of the three when the value crosses a transition point.

When graphed, the output series of all 3 tracks resembles a sawtooth Fourier series: viewed as signed integers, the values go from -π to π; graphed as unsigned integers, they go from 0 to 2π.

Our problem comes from the way the manufacturer produced the encoders: they used an encoded tape that was a repeating, cyclic roll of these 3 tracks and then cut it to the desired lengths.

The end result is, with the Coarse track being an 8-bit integer series, a scale Coarse track could start off with a value say 240, go up through 255 where it recycles back to zero, and counts up from there to say 55 at the other end.

In this example, treating the Coarse track binary value as signed solves the issue, but treating it as unsigned results in the recycle-point "Jump". On the flip side, if an encoder has the section of the encoder tape where the value goes from 127 to 128, the opposite is true: as a signed value it results in the "Jump", but it is linear across the unsigned recycle point.

The dedicated OEM encoder display units don't show this "Jump" issue, and testing by swapping display units between encoder scales doesn't produce this behavior either, meaning a single decoding algorithm is used across all OEM displays.

I thought I'd found the answer by phase-shifting the Coarse track values so they wouldn't cross the offending boundary. This works, but only for that particular scale, and could cause another scale that didn't have the "Jump" issue to now have it.

Basically, I’m looking for a method/algorithm that will output a linear function across both of the transition points.
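One candidate method (a sketch, not the OEM algorithm): work modulo the full cycle. Because a physical scale is always shorter than one complete cycle of the tape (a repeat would break absoluteness), the quantity (reading − ORIGIN) mod 2^17 increases linearly along the entire scale no matter where the signed or unsigned wrap points fall. ORIGIN is a hypothetical per-scale calibration constant: the combined reading at the scale's zero end.

```python
MOD = 1 << 17  # 131,072 unique combined positions per tape cycle

def linear_position(reading, origin):
    """Linear position along the scale, immune to both wrap points.
    origin is the combined reading at the scale's zero end (per-scale)."""
    return (reading - origin) % MOD

# A scale whose combined value starts at 131,000, wraps past 0, and ends
# at 40,000 still yields a monotonic sequence:
print(linear_position(131000, 131000))  # 0
print(linear_position(131071, 131000))  # 71
print(linear_position(0, 131000))       # 72
print(linear_position(40000, 131000))   # 40072
```

This sidesteps the signed-vs-unsigned choice entirely, since a single modular subtraction handles both the 127-to-128 and the 255-to-0 Coarse transitions.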

The JPEG tries to illustrate the possible ranges of values the encoder scales might have.
Example.jpg

The red line is an example of an ideal, maximum length scale starting with Coarse value of 0 and climbing linearly to 255.

The Green line would be an example of a short encoder scale with bounds > 0, but < π.

The Brown line represents a scale that begins at > 0, but goes up past the mid-point 127 to 128 value, yet ends at < 2π.

The Gray line represents encoder scales that could go from values > π up through the recycle point at 2π and continue.

The Blue line is one I have that starts at a value of > π, goes up through the recycle point at 2π, and ends at a point > π on the next cycle of the tape.
 
