
Researchers develop a camera that can focus on different distances at once

It's a neat new idea to selectively adjust the focus distance for different regions of the scene!

- processing: while there is no post-processing, it needs scene depth information, which requires pre-computation: segmentation and depth estimation. Not a one-shot technique, and quality depends on the computational depth estimates being good

- no free lunch. The optical setup needs to trade away some light for this cool effect to work. Apart from the limitations of the prototype, how much loss is expected in theory? How does this compare to a regular camera stopped down to a smaller aperture? F/36 seems excessive for comparison.

- resolution: what resolutions have been achieved? (Maybe not the 12 MPixels of the sensor? For practical or theoretical reasons?) What depth range can the prototype capture? ("photo of the Paris Arc de Triomphe displayed on a screen") This is suspiciously omitted

- what does the bokeh look like when out of focus? At the edge of an object? The introduction of weird or unnatural artifacts would seriously limit acceptance

Don't get me wrong - nice technique! But for my liking the paper omits fundamental properties.
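For the f/36 comparison: a back-of-the-envelope sketch of the light trade-off (assuming light gathered scales as 1/N² for f-number N at fixed focal length, ignoring transmission losses; f/1.8 is just a hypothetical reference lens, not from the paper):

```python
# Rough model: light gathered by a lens scales as 1/N^2 for f-number N
# (same focal length assumed; transmission losses ignored).
def relative_light(n_stop, n_ref=1.8):
    """Fraction of light gathered at f/n_stop vs. a reference f/n_ref lens."""
    return (n_ref / n_stop) ** 2

# f/36 vs f/1.8 is a factor of (36/1.8)^2 = 400x less light, i.e. 0.25%
print(relative_light(36))
```

So if the Split-Lohmann setup loses less than a stopped-down f/36 baseline does, the comparison flatters it; that's why the choice of baseline aperture matters.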

32 minutes ago | schobi

Isn't this the lytro camera?

7 hours ago | krackers

I believe the lytro camera was a plenoptic, or light field, camera. Light field cameras capture information about the intensity together with the direction of light emanating from a scene. Conventional cameras record only light intensity at various wavelengths.

While conventional cameras capture a single high-resolution focal plane and light field cameras sacrifice resolution to "re-focus" via software after the fact, the CMU Split-Lohmann camera provides a middle ground, using an adaptive computational lens to physically focus every part of the image independently. This allows it to capture a "deep-focus" image where objects at multiple distances are sharp simultaneously, maintaining the high resolution of a conventional camera while achieving the depth flexibility of a light field camera without the blur or data loss.
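What "multiple distances sharp simultaneously" means can be emulated in software, purely as an illustration: given a focal stack and a depth map, pick per pixel the slice where that pixel is in focus. (The Split-Lohmann camera does this optically in a single exposure; the array shapes and merge rule below are my assumptions, not the paper's method.)

```python
import numpy as np

# Software emulation of a "deep-focus" output: per pixel, select the
# focal-stack slice indicated by the depth map.
def all_in_focus(stack, depth_labels):
    # stack: (n_planes, H, W) images focused at different distances
    # depth_labels: (H, W) index of the plane where each pixel is sharp
    h, w = depth_labels.shape
    return stack[depth_labels, np.arange(h)[:, None], np.arange(w)[None, :]]

# Toy example: two focal planes, a 2x2 image
stack = np.stack([np.full((2, 2), 10.0), np.full((2, 2), 20.0)])
labels = np.array([[0, 1], [1, 0]])
out = all_in_focus(stack, labels)
```

The point of the optical approach is that no such stack is ever captured: the light-loss and resolution questions upthread are about what that single exposure costs.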

Something I find interesting is that while holograms and the CMU camera both manipulate the "phase" of light, they do so for opposite reasons: a hologram records phase to recreate a 3D volume, whereas the CMU camera modulates phase to fix a 2D image.

6 hours ago | stevenjgarner

I remember Lytro. There was a lot of fanfare behind that company and then they fizzled. They had a lauded CEO/founder and their website demonstrated clearly how the post-focus worked. It felt like they were going to be the next camera revolution. Their rise and demise story would make a good Isaacson-style documentary.

3 hours ago | hbarka

I think the product was just too early for its time, and there wasn't much demand for it. For what it's worth, the founder (Ren Ng) went back to academia and has been highly instrumental in computer vision research, e.g. as PI on the NeRF paper: (https://dl.acm.org/doi/abs/10.1145/3503250)

2 hours ago | chychiu

The article mentions a spatial light modulator, which I believe the Lytro camera did not have.

7 hours ago | analog31

The image(s) were also trash unfortunately and a PITA to process. Barely usable even in ideal circumstances.

6 hours ago | Forgeties79

eh??

Processing was as simple as "click on the thing you want in focus", and 4 MP was just fine for the casual use it was targeting.

5 hours ago | NooneAtAll3

Paper has some more useful examples:

https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...

6 hours ago | achille

The paper describes a split Alvarez (Lohmann) lens [1,2] with a phase modulator between them. I didn't do the math, but it looks like the phase modulator is optically equivalent to a mechanical shift of the Alvarez lenses over regions of the field of view. Alvarez lenses have higher aberrations, and are relatively bulky, compared to normal lenses. AR was referenced in the paper, but this lens will be hard to make compact, and have great image quality, over large fields of view.

1. https://www.laserfocusworld.com/optics/article/16555776/alva... 2. https://pdfs.semanticscholar.org/55af/9b325ba16fa471e55b2e49...
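Doing a bit of that math numerically: assuming the textbook cubic Alvarez/Lohmann profile t(x, y) = A(xy² + x³/3), shifting the two complementary plates by ±d yields a purely quadratic (lens) phase whose power scales linearly with d - which is why a local tilt from the phase modulator, acting like a local shift, gives a local focus change:

```python
# Numeric check of the Alvarez/Lohmann principle: two complementary cubic
# plates, laterally shifted by +d and -d, act together as a simple lens.
def cubic_plate(x, y, A=1.0):
    # Textbook Alvarez cubic thickness/phase profile t(x, y) = A*(x*y^2 + x^3/3)
    return A * (x * y**2 + x**3 / 3)

def pair_phase(x, y, d, A=1.0):
    # Plate shifted by +d, minus its complement shifted by -d
    return cubic_plate(x + d, y, A) - cubic_plate(x - d, y, A)

# Algebraically: pair_phase = 2*A*d*(x^2 + y^2) + 2*A*d^3/3,
# i.e. a quadratic lens phase (plus a constant) with power proportional to d.
x, y, d = 0.3, -0.7, 0.2
print(pair_phase(x, y, d), 2*d*(x**2 + y**2) + 2*d**3/3)
```

The cubic terms cancel in the difference, so the residual quadratic is the only x,y-dependent part - consistent with the modulator being equivalent to a per-region mechanical shift.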

40 minutes ago | DarkSucker

It's not even loading for me (probably because it's a huge file).

5 hours ago | John7878781

As soon as I saw the headline, I began thinking about microphotography - no more blurry microbes! I could get excited for something like this.

7 hours ago | Qbit_Enjoyer

I wonder if this camera might somehow record depth information, or be modified to do such a thing.

That would make it really useful, maybe replacing camera+lidar.

5 hours ago | m463

It even requires depth information -

While this method has no post-processing, it requires a pre-processing step to pre-capture the scene, segment it, estimate depth and compute the depth map.
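A minimal sketch of what that pre-processing might produce, assuming the depth map is already estimated (the function name, the diopter binning, and the region count are hypothetical illustrations, not from the paper):

```python
import numpy as np

# Hypothetical pre-processing output: quantize an estimated depth map into a
# few focus regions, with one representative optical power per region.
def depth_to_focus_regions(depth_m, n_regions=4):
    # Depth in meters -> diopters (1/m), the natural unit for lens power
    diopters = 1.0 / np.clip(depth_m, 0.1, None)
    edges = np.linspace(diopters.min(), diopters.max(), n_regions + 1)
    labels = np.clip(np.digitize(diopters, edges) - 1, 0, n_regions - 1)
    # Representative power per region = mean diopter over its pixels
    powers = np.array([diopters[labels == k].mean() for k in range(n_regions)])
    return labels, powers

# Toy depth map in meters: near (0.5 m), mid (2 m), far (4 m)
depth = np.array([[0.5, 0.5, 2.0],
                  [0.5, 4.0, 2.0]])
labels, powers = depth_to_focus_regions(depth, n_regions=2)
```

The optics then only need one focus setting per region, which is why errors in the depth estimate translate directly into mis-focused regions.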

an hour ago | schobi

I also like my 3d games without DOF.