This is an utterly brilliant hack for dimensionality reduction leading to pattern recognition. That it even beats SVMs (albeit with a single carefully chosen example ;-) is icing on the cake.
One thing I don't understand is the addition of the constant 3 to the row index (in the paper just after formula 6). Intuitively this should be only 2, because the last row vector of the local topology lags the last state captured in the distance matrix by one row, and then we want to move ahead one more row to start forecasting.
What am I missing?
Isn’t it because m = n - 2 (above equation 4) and you want to get to n + 1?
Yes, you're right. Off by one error on my part caused by concentrating on the bottom half of figure 1a while trying to visualize this and formulating my question.
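For anyone else tripped up by the same thing: topology row i summarizes distance-matrix rows i through i + 2 (the 3x3 window), so the last topology row m = n - 2 already covers rows n - 2, n - 1 and n, and the first genuinely new row is m + 3 = n + 1. (At least, that's my reading of the windowing.)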
Naive question, but can forecasting in a time series be applied backward to interpolate it? In the case of this FReT algorithm, the idea would be that the information in the FReT-interpolated series (or SETAR, NNET, etc.) would have higher fidelity to the total information in the sequence.
Interesting idea.
The FReT algorithm (if I understand correctly) works on equally spaced points in time and projects on the same grid into the future. So it would seem that interpolation is not possible with this method.
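Backcasting (extending the series into the past on the same grid) would be mechanically easy, though: reverse the series, run the forecaster forward, and reverse the result. A sketch, where forecast(series, horizon) is a hypothetical stand-in for any forward forecaster (FReT, SETAR, NNET, ...):

    def backcast(series, horizon, forecast):
        """Extrapolate `horizon` points before the start of `series`.

        forecast(series, horizon) is a hypothetical stand-in for a
        forward forecaster on an equally spaced grid. This only
        extends the series into the past; it does not interpolate
        between existing grid points.
        """
        reversed_future = forecast(series[::-1], horizon)
        return reversed_future[::-1]   # earliest backcast point first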
Sorry, am I missing something? "Topology" here just seems to mean connectivity, and I can't even tell why they have a notion of 3x3 connectivity-matrix structure. A whole lot of this seems under-explained.
There's an earlier paper [0] involving the same authors which explains this a bit better.
AIUI, they use the 3x3 neighbourhoods to capture local directional and curvature (i.e. gradient) information in the distance matrix. They then apply two heuristics (reduction to an 8-bit binary number and binning into sextiles) to reduce the floating point gradient information to coarse integers to aid pattern recognition.
The more recent paper adds another heuristic (empirically chosen similarity threshold) to aid finding starting points of recurring patterns.

[0] https://doi.org/10.1038/s41531-021-00240-4 , Equation (5) onwards.
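To make that concrete, here is a minimal sketch of how I read the encoding (my own illustration in Python, not the authors' code; in particular the order of the two heuristics is my guess):

    import numpy as np

    def topology_codes(series):
        """Sketch of my reading of the encoding -- not the authors' code."""
        x = np.asarray(series, dtype=float)
        d = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix

        # Heuristic 1: sextile binning -- map distances onto six
        # quantile bins, i.e. coarse integers 0..5.
        edges = np.quantile(d, [1/6, 2/6, 3/6, 4/6, 5/6])
        b = np.digitize(d, edges)

        # Heuristic 2: reduce each interior point's 3x3 neighbourhood
        # to an 8-bit number, one bit per neighbour (set if the
        # neighbour exceeds the centre), walking the ring clockwise.
        ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        n = len(x)
        codes = np.empty((n - 2, n - 2), dtype=np.uint8)
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                bits = 0
                for k, (di, dj) in enumerate(ring):
                    if b[i + di, j + dj] > b[i, j]:
                        bits |= 1 << k
                codes[i - 1, j - 1] = bits
        return codes

Recurring patterns can then be searched for as (approximately) matching runs of integer codes, which is much cheaper than comparing floating-point gradients directly.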
Thanks. What I don't understand is how searching for previous similar patterns helps in predicting time series that are chaotic (it seems to be quite good at that).
It only helps because the chaotic system under consideration has periodic components.
The attractor shown in figure 1e has such periodic components, and identifying these does help, but only with very near term forecasting. When the accumulated forecast error crosses a threshold, it suddenly causes a large phase error, best seen from about point 75 onwards in the x and y components. From that point onwards the forecast is useless.
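To see why the forecast can't recover once the phase slips, here is a toy demonstration (my own, using the Lorenz system as a stand-in for whatever generates figure 1e): two trajectories starting 1e-8 apart diverge to macroscopic separation, so any accumulated forecast error eventually swamps the signal.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz system (toy stand-in)."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-8, 0.0, 0.0])
    for step in range(1, 2001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 400 == 0:
            print(step, np.linalg.norm(a - b))  # separation grows roughly exponentially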
You are missing what is missing. Their source code does not fill in the missing pieces.