Blur is, perhaps surprisingly, one of the degradations we know best how to undo. It's been studied extensively because there are so many applications: microscopes, telescopes, digital cameras. The usual tricks revolve around inverting blur kernels and making educated guesses about what the blur kernel and the underlying image might look like. My advisors and I were even able to train deep neural networks using only blurry images, under a really mild assumption of approximate scale-invariance at the training-dataset level [1].
[1] https://ieeexplore.ieee.org/document/11370202
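For anyone who wants to see what "inverting the blur kernel" means concretely, here's a minimal numpy sketch of the easy case (kernel known exactly, no noise). The helper names and the toy kernel are my own, not anything from [1]:

```python
import numpy as np

def blur(img, kernel):
    # Circular convolution via the FFT.
    K = np.fft.fft2(kernel, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def deblur(blurred, kernel, eps=1e-12):
    # Divide by the kernel's frequency response; eps only guards against
    # division by zero at frequencies the kernel has wiped out completely.
    K = np.fft.fft2(kernel, s=blurred.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                  # stand-in for a real image
v = np.array([1.0, 2.0, 4.0, 2.0, 1.0])     # small symmetric kernel with no spectral zeros
kernel = np.outer(v, v)
kernel /= kernel.sum()

print(np.abs(deblur(blur(img, kernel), kernel) - img).max())  # tiny (~1e-10): essentially exact with no noise
```

With even a little noise this naive division falls apart at the frequencies the kernel nearly kills, which is what the sub-thread about adding noise further down gets into.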
I didn't learn about this trick (deconvolution) until grad school, and even then it seemed like a spooky mystery to me.
One salient point not touched on here is that an awful lot of the time, the thing folks are blurring out is specifically text. And since we know an awful lot about what text ought to look like, we have a lot more information to guide the reconstruction...
Good point, though you have to beware that text-aware image enhancement sometimes replaces characters with what it thinks is a more likely character from context.
I've seen my phone camera's real-time viewfinder show text on a sign with one letter different from the real sign. If I hadn't been looking at the sign at the same time, I might not have noticed the synthetic replacement.
>> sometimes replaces characters with what it thinks is a more likely character from context
Like the JBIG2 algorithm used in a zero click PDF-as-GIF exploit in iMessage a while back: https://projectzero.google/2021/12/a-deep-dive-into-nso-zero...
The vulnerability of that algorithm to character-swapping caused incorrect invoices, incorrect measurements in blueprints, incorrect metering of medicine, etc. https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
>But then, it’s not wrong to scratch your head. Blurring amounts to averaging the underlying pixel values. If you average two numbers, there’s no way of knowing if you’ve started with 1 + 5 or 3 + 3. In both cases, the arithmetic mean is the same and the original information appears to be lost. So, is the advice wrong?
Well, if you have a large enough averaging window (as is the case when blurring letters), the underlying content is constrained (there's only a fixed set of shapes), and information about it is partly retained.
Not very different from the information retained in minesweeper games.
Captain Disillusion recently covered this subject in a more popular science format as well
https://youtu.be/xDLxFGXuPEc
8 months ago, for those of us who got excited by the idea of a "recent" new video from CD.
Can this be applied to camera shake/motion blur? At low shutter speeds the slight shake of the camera produces this type of blur. It's usually addressed with IBIS, which stabilizes the sensor.
The ability to reverse the blur is very dependent on the transformation being well known; in the article's case it is deterministic and known with certainty. Any algorithm to reverse motion blur will depend on the translation and rotation of the camera in physical space, and the best the algorithm could do will be limited by the uncertainty in estimating those values.
If you apply a fake motion blur like in Photoshop or After Effects, then that could probably be reversed pretty well.
> and the best the algorithm could do will be limited by the uncertainty in estimating those values
That's relatively easy if you're assuming simple translation and rotation, as opposed to a squiggly movement or something, because you can simply measure how much sharper the image gets and home in on the right values.
I recall a paper from many years ago (early 2010s) describing methods to estimate the camera motion and remove motion blur from blurry image contents only. I think they used a quality metric on the resulting “unblurred” image as a loss function for learning the effective motion estimate. This was before deep learning took off; certainly today’s image models could do much better at assessing the quality of the unblurred image than a hand-crafted metric.
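I don't remember which paper it was either, but the idea is easy to caricature. Toy version below: the linear-motion PSF builder, the fixed noise-to-signal guess, and the gradient-energy score are all mine, and a crude score like this can be fooled by deconvolution ringing, which is exactly why that line of work put effort into better quality metrics:

```python
import numpy as np

def motion_psf(length, angle, size=15):
    # Straight-line motion blur kernel: a 1-pixel-wide segment of the given
    # length (pixels) and angle (radians), rasterised onto a small grid.
    psf = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 8 * length):
        psf[int(round(c + t * np.sin(angle))), int(round(c + t * np.cos(angle)))] = 1.0
    return psf / psf.sum()

def deconvolve(blurred, psf, nsr=1e-2):
    # Wiener-style inverse with a fixed noise-to-signal guess.
    K = np.fft.fft2(psf, s=blurred.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(K) / (np.abs(K) ** 2 + nsr)))

def sharpness(img):
    # Hand-crafted stand-in for a learned quality metric: mean gradient energy.
    gy, gx = np.gradient(img)
    return np.mean(gx ** 2 + gy ** 2)

def estimate_motion(blurred, lengths=range(3, 16, 2), angles=np.linspace(0, np.pi, 18)):
    # Grid-search the motion parameters whose deblur scores sharpest.
    return max(((l, a) for l in lengths for a in angles),
               key=lambda p: sharpness(deconvolve(blurred, motion_psf(*p))))
```

Once `estimate_motion` has picked a (length, angle), you deconvolve one more time with that PSF to get the final image.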
Record gyro motion at time of shutter?
Absolutely, Photoshop has it:
https://helpx.adobe.com/photoshop/using/reduce-camera-shake-...
Or... per the note at the top, had it? Very strange; features are almost never removed. I really wonder what the architectural reason was here.
The missing piece of the puzzle is how to determine the blur kernel from the blurry image itself. There's a whole body of literature on that, called blind deblurring.
For instance: https://deepinv.github.io/deepinv/auto_examples/blind-invers...
I believe Microsoft of all people solved this a while ago by using the gyroscope in a phone to produce a de-blur kernel that cleaned up the image.
It's somewhere here: https://www.microsoft.com/en-us/research/product/computation...
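To be clear, the sketch below is not Microsoft's algorithm, just my back-of-the-envelope version of the idea: integrate the gyro's angular-velocity samples over the exposure, turn the rotation into a pixel trajectory via the focal length, and rasterise that trajectory into a PSF you can hand to a deconvolution routine. It ignores translation, rolling shutter and lens distortion, and the axis convention is an assumption.

```python
import numpy as np

def psf_from_gyro(gyro_rates, timestamps, focal_px, size=31):
    """Toy point-spread function from gyro samples taken during the exposure.

    gyro_rates: (N, 2) angular velocity about the camera x/y axes [rad/s]
    timestamps: (N,) sample times [s]
    focal_px:   focal length expressed in pixels
    """
    dt = np.diff(timestamps)
    # Integrate rotation over the exposure (small-angle assumption).
    angles = np.cumsum(gyro_rates[:-1] * dt[:, None], axis=0)
    # A small rotation of theta radians shifts the image by ~focal_px * theta pixels.
    shifts = focal_px * angles
    psf = np.zeros((size, size))
    c = size // 2
    psf[c, c] += dt[0]                        # dwell at the starting position
    for (dx, dy), w in zip(shifts, dt):
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += w                    # weight each position by its dwell time
    return psf / psf.sum()
```

That PSF then goes into whatever deconvolution you like; the win is that the IMU already tells you roughly how the camera moved, so you skip the hard blind-estimation step.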
I wonder if the "night mode" on newer phone cameras is doing something similar. Take a long exposure, use the IMU to produce a kernel that tidies up the image post facto. The night mode on my S24 actually produces some fuzzy, noisy artifacts that aren't terribly different from the artifacts in the OP's deblurs.
Reminds me of the guy who used the Photoshop swirl effect to mask his face in CSAM he produced, and who was found out when someone just undid the swirl.
This is the case I always think of when it comes to reversing image filters.
Action Lab just did a video on physical swirling vs mixing. Swirling is reversible.
https://www.bbc.com/news/world-us-canada-39411025
Encode the image as a boundary condition of a laminar flow and you can recover the original image from an observation.
If, however, you observe after turbulence has set in, then some of the information has been lost; it's in the entropy now. How much depends on the turbulent flow.
Don't miss out on this video by Smarter Every Day:
https://youtu.be/j2_dJY_mIys?si=ArMd0C5UzbA8pmzI
Treat the dynamics and time of evolution as your private key: laminar flow is a form of encryption.
If you encode your data directly in the fluid, then turbulence becomes the statistical TTL on the data.
In practice unblurring (deconvolution) doesn't really work as well as you'd hope because it is usually blind (you don't know the blur function), and it is ill-conditioned, so any small mistakes or noise get enormously amplified.
This is classical deconvolution. Modern de-blurring implementations are DNN-based.
How do we apply this to geospatial face and licence plate blurs?
Ok, what about gaussian blur?
My (admittedly superficial) understanding of blur reversibility is that an attacker may know what kind of stuff is behind the blur.
I mean knowledge like "a human face, but the potential set of humans is known to the attacker" or even worse "a text, but the font is obvious from the unblurred part of the doc".
This was also my understanding.
It's essentially like "cracking" a password when you have its hash and know the hashing algorithm. You don't have to know how to reverse the blur; you just need to know how to do it the normal way. You can then essentially brute-force through all possible characters, one at a time, to see which one looks the same after applying the blur.
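Minimal sketch of that forward-search idea in Python. It assumes a lot: you know the font, the glyph size and position, and the blur (here a Gaussian with known sigma). The font path is just a placeholder, and a real attack would also have to search over alignment, scale and the kernel itself:

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from scipy.ndimage import gaussian_filter

# Assumption: we know (or can guess) the font and size used in the original.
FONT = ImageFont.truetype("DejaVuSans.ttf", 28)

def render(ch, size=(32, 32)):
    # Rasterise a candidate character the same way the original text was rendered.
    img = Image.new("L", size, 0)
    ImageDraw.Draw(img).text((4, 0), ch, fill=255, font=FONT)
    return np.asarray(img, dtype=float)

def guess_character(blurred_crop, sigma, candidates="0123456789"):
    # "Hash cracking" for blur: blur each candidate with the same kernel and
    # keep whichever one lands closest to the observed blurred crop.
    return min(candidates,
               key=lambda ch: np.sum((gaussian_filter(render(ch), sigma) - blurred_crop) ** 2))
```

Run it per character position and you get the whole string back, which is why pixelation or blur over account numbers is treated as effectively no redaction at all.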
Thinking about this, adding randomness to the blurring would likely help.
Or far more simply, just mask the sensitive data with a single color which is impossible to reverse (for rasterized images, this is not a good idea for PDFs which tend to maintain the text "hidden" underneath).
> mask the sensitive data with a single color which is impossible to reverse
You note the pitfall of text remaining behind the redaction in PDFs (and other layered formats), but there are also pitfalls here around alpha channels. There have been several incidents where folks drew not-quite-opaque redaction blocks over their images.
> just mask the sensitive data with a single color which is impossible to reverse (for rasterized images, this is not a good idea for PDFs
Also not a good idea for masking already-compressed images of text, like JPEGs, because some of the information might bleed into uncovered areas.
Interesting - does a little extra coverage solve this or is it possible to use distant pixels to find the original?
The countermeasure is easy: just add a small amount of random noise (not even visible to the human eye) to the blurred picture, and suddenly the "blur inversion" fails spectacularly.
Does this actually work? I would have thought that, given the deconvolution step is just a linear operator with reasonable coefficients, adding a small amount of noise to the blurred image would just add a similarly small amount of noise to the unblurred result.
To reconstruct the image one has to cut off those frequencies in the corrupted image where the signal-to-noise ratio is poor. In many original images the signal at high frequencies is expendable, so get rid of those and then invert.
https://en.wikipedia.org/wiki/Wiener_deconvolution
If one blindly inverts the linear blur transform then yes, the reconstruction would usually be a completely unrecognisable mess, because the inverse operator dramatically boosts the noise as well.
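The coefficients of the inverse are the problem: they are not "reasonable" at every frequency. A quick way to see how unreasonable, for an ordinary box blur (sizes picked arbitrarily):

```python
import numpy as np

# Smallest frequency response of a 9x9 box blur on a 256x256 image, and the
# corresponding noise amplification of a plain inverse filter.
kernel = np.ones((9, 9)) / 81.0
K = np.abs(np.fft.fft2(kernel, s=(256, 256)))
print(K.min(), 1.0 / K.min())
# |K| bottoms out at a few millionths here, so noise in those frequency bins
# gets boosted by a factor of a couple hundred thousand by the naive inverse
# (and for some kernel/size combinations |K| is exactly zero).
```

That's why invisible noise can wreck the blind inverse, while a frequency-cutoff or Wiener approach, which simply gives up on those bins, survives.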
Those unblurring methods look "amazing" like that, but they are very fragile: add even a modicum of noise to the blurred image and the deblurring will almost certainly fail completely. This is well known in signal processing.
Did you see the part where he saved the image with more and more lossy compression and showed that it was still recoverable?
Not necessarily.
If, however, one just blindly uses the (generalized) inverse of the point-spread function, then you are absolutely correct: the common point-spread functions we encounter in practice are usually very poorly conditioned.
One way to deal with this is to cut off those frequencies where the signal-to-noise ratio in that frequency bin is poor. This, however, requires some knowledge about the spectra of the noise and the signal. The Wiener filter uses that knowledge to work out an optimal filter.
https://en.wikipedia.org/wiki/Wiener_deconvolution
If one knows neither the statistics of the noise nor the point-spread function, then it gets harder and you are in the territory of blind deconvolution.
So just a word of warning: if you are relying only on sprinkling a little noise on blurred images to save yourself, you are on very, very dangerous ground.
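For the curious, the Wiener filter mentioned above is nearly a one-liner once you write the blur in the frequency domain. Here's a rough numpy sketch with a constant noise-to-signal ratio standing in for the true spectra; the function names and the toy test are mine:

```python
import numpy as np

def wiener_deblur(blurred, kernel, nsr):
    # nsr ~ noise power / signal power (per-bin array or a single constant).
    # Bins where the kernel response is weak relative to nsr get attenuated
    # instead of being blindly divided through.
    K = np.fft.fft2(kernel, s=blurred.shape)
    G = np.conj(K) / (np.abs(K) ** 2 + nsr)          # the Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Toy check: 7x7 box blur plus faint noise. With nsr ~ 0 this degenerates to
# the blind inverse and the error swamps the image; with a sensible nsr the
# reconstruction error stays small.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
kernel = np.ones((7, 7)) / 49.0
K = np.fft.fft2(kernel, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))
noisy = blurred + 1e-3 * rng.standard_normal(img.shape)

for nsr in (1e-12, 1e-3):
    print(nsr, np.abs(wiener_deblur(noisy, kernel, nsr) - img).std())
```

The only design choice here is how to get nsr; in practice you estimate the noise floor from a flat region and the signal spectrum from a prior over natural images.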
Sorry but this post is the blind leading the blind, pun intended. Allow me to explain, I have a DSP degree.
The reason the filters used in the post are easily reversible is that none of them are binomial (i.e. the discrete equivalent of a Gaussian blur). A binomial blur uses the coefficients of a row of Pascal's triangle, and is thus what you get when you repeatedly average each pixel with its neighbor (in 1D).
When you do, the information at the Nyquist frequency is removed entirely, because a signal of the form "-1, +1, -1, +1, ..." ends up blurred _exactly_ into "0, 0, 0, 0...".
All the other blur filters, in particular the moving average, are just poorly conceived. They filter out the middle frequencies the most, not the highest ones. It's equivalent to doing a bandpass filter and then subtracting that from the original image.
Here's an interactive notebook that explains this in the context of time series. One important point is that the "look" people associate with "scientific data series" is actually an artifact of moving averages. If a proper filter is used, the blurriness of the signal is evident. https://observablehq.com/d/a51954c61a72e1ef
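A quick 1-D numpy check of the Nyquist point above: repeatedly averaging neighbours builds up a Pascal's-triangle kernel, and the alternating signal maps to exactly zero, so no amount of deconvolution brings it back.

```python
import numpy as np

x = np.array([-1.0, 1.0] * 8)                 # -1, +1, -1, +1, ...  (Nyquist signal)
kernel = np.array([1.0])
for _ in range(4):                            # average with the neighbour, 4 times
    kernel = np.convolve(kernel, [0.5, 0.5])

print(kernel * 16)                            # [1. 4. 6. 4. 1.] -- a row of Pascal's triangle
print(np.convolve(x, kernel, mode="valid"))   # all exactly zero: that information is gone
```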
> Sorry but this post is the blind leading the blind, pun intended. Allow me to explain, I have a DSP degree.
FWIW, this does not read as constructive.