Sunday, February 20, 2011

Fraunhofer Institute Developed 300μm-Thick Camera

Tech-On: Fraunhofer Institute for Applied Optics and Precision Engineering exhibited a "super-slim camera" with a 300μm-thick optical system at Nano Tech 2011.

The camera has a 150 x 100 pixel array and is based on the compound eyes of insects. Its view angle is 80° x 65°. A thin glass part is placed between the lens and the image sensor. The total thickness of the lens, glass part, spacer, etc. is 300μm.

The possible applications for the camera are in various sensors and monitoring devices as well as for medical purposes.


16 comments:

  1. I'd really like to see some images from any of the plenoptic cameras recently posted. A low light image would be a bonus.

    ReplyDelete
  2. I've tried this kind of stuff many years ago by placing a SelFoc lens array, extracted from a cheap document scanner, in front of a CMOS sensor. I saw myself multiplied in the image, quite funny. But the image quality was bad. From what I see, I guess this should also be a SelFoc lens array, no?

    -yang ni

    ReplyDelete
  3. This kind of stuff: http://www.nsgeurope.com/sla.shtml

    -yang ni

    ReplyDelete
  4. I was actually thinking of the post-processed image.

    ReplyDelete
  5. The point of using SelFoc is that the image is not optically reversed. So, depending on the optical configuration, you can also get a uniform optical image on the sensor surface, which is the case in a scanner.

    -yang ni

    ReplyDelete
  6. There are several approaches to such a "super-slim camera" using multi-aperture optics. The one shown here uses micro-optical fabrication technology at wafer level. Hence, a batch of micro-images is created by an array of tiny microlenses. The final image is then indeed formed by post-processing the micro-images from all optical channels, as Mr. Fossum pointed out (a toy sketch of this stitching step appears after this comment). A recent prototype is able to acquire real-time video with VGA resolution at a thickness of only 1.4mm. With future improvements in resolution, the applications could include consumer devices like mobile phones or notebooks.
    Details about this approach and example images which have been acquired with the device may be found here: http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-24-24379

    Andreas Brueckner

    ReplyDelete
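
    As a rough illustration of the stitching step described above, here is a minimal Python sketch: an array of micro-images is combined into one output image by pasting the tiles onto a shared canvas and averaging where neighbouring channels overlap. The grid layout, overlap handling, and function name are hypothetical, not Fraunhofer's actual algorithm (see the linked paper for that).

        import numpy as np

        def stitch_micro_images(micro_images, overlap=2):
            """Combine a grid of micro-images into one image (toy sketch).

            micro_images: array of shape (rows, cols, h, w), one tile per
            optical channel. Adjacent channels are assumed to see slightly
            overlapping parts of the scene; overlaps are simply averaged.
            """
            rows, cols, h, w = micro_images.shape
            step_y, step_x = h - overlap, w - overlap
            out = np.zeros((rows * step_y + overlap, cols * step_x + overlap))
            weight = np.zeros_like(out)
            for r in range(rows):
                for c in range(cols):
                    y, x = r * step_y, c * step_x
                    out[y:y + h, x:x + w] += micro_images[r, c]
                    weight[y:y + h, x:x + w] += 1.0
            return out / weight  # average in the overlap regions

        # Example: a 10 x 10 channel array of 40 x 40 pixel micro-images.
        tiles = np.random.rand(10, 10, 40, 40)
        print(stitch_micro_images(tiles).shape)  # (382, 382)
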
  7. Thanks Andreas. I enjoyed the paper and was very happy to see an image. Not so bad for early work! I hope you can find a solution to the MTF drop off.

    So, is the main advantage of this approach a reduction in camera thickness? The cost seems rather high: MTF (at least for now), noise (which could be severe in low light), and chip size for a given output resolution (e.g., 3 MPix -> ~VGA). Have you been able to produce 3D images yet?

    Any comments you have would be welcome. I like the plenoptic approach in many ways but I am worried about the drawbacks.

    ReplyDelete
  8. Oh, another cost is the post processing hardware and energy.

    ReplyDelete
  9. Very interesting !!!!

    ReplyDelete
  10. To comment on Eric's questions:

    Yes, you're right: the main advantage/purpose is to reduce the thickness of the camera lens module. Mobile phone and notebook makers ask for ever-thinner camera modules, and for a fixed pixel size the only way to achieve that may be a paradigm shift in the optical setup. It may look like one, but it is not a plenoptic camera at all.
    Concerning the MTF performance: we are already working on a new prototype with enhanced image quality per channel so that the overall MTF performance will improve. Dealing with noise is the same as in other miniature camera devices (e.g. see OVT CameraCube) - a task for the image sensor developers.
    At the moment we are not interested in any 3D resolution for such a device. We are even trying to further shrink the lateral dimensions, which will in turn decrease the residual parallax. This, as you pointed out, has to be done in order to achieve a better fit of chip size for a given output resolution. This ratio mostly dominates the cost of such a device.
    For now, the post-processing is quite simple, so it runs in real time on any embedded platform like the ones smartphones carry today.

    Andreas Brueckner

    ReplyDelete
  11. Thanks again Andreas. I was probably too loose with the term "plenoptic". Maybe I should have said "multi-optic".

    On the noise, however: I wonder if there is any subtraction involved in generating the output pixels, or is it all additive computation? Generally, SNR suffers with computed imaging unless it is just positively-weighted summation with shot-noise-limited signals. The classic example is subtraction of two large signals, each with shot noise; the result is a small signal with large noise (a quick numeric illustration follows this comment). It seems you may be doing a lot of distortion correction, etc., that may involve subtraction.

    ReplyDelete
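
    To put numbers on the subtraction example above, here is a quick sketch with assumed signal levels (shot-noise-limited, in electrons):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Two large shot-noise-limited signals of roughly 10,000 e- each.
        a = rng.poisson(10_000, n).astype(float)
        b = rng.poisson(9_900, n).astype(float)

        diff = a - b
        print(diff.mean())  # ~100 e-: a small difference signal
        print(diff.std())   # ~sqrt(10000 + 9900) ~ 141 e- of noise
        # SNR of the difference is ~0.7, versus ~100 for 'a' alone.
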
  12. One drawback seems to be the requirement for tight alignment between the pixel array and the optics. Or you could calibrate in production, but mobile test engineers hate anything that cuts into manufacturing time.

    How fast could the processing run? Video rates? A major drawback with multi-optics is you can't turn them off. You have to run intensive processing for every image, all the time.

    ReplyDelete
  13. In the current version there is a software distortion correction involved, which includes a gray-level interpolation (please see Proc. SPIE 7875, 78750B (2011) for details). We have not yet checked how much this processing step influences the noise. However, from the system-architecture point of view there is the chance to even reduce temporal noise, due to some redundant sampling across the micro-images (a back-of-the-envelope sketch follows this comment).

    Concerning the alignment to the image sensor: the most critical tolerance is the back focal distance (z-height), which has to be mounted within about ±10µm of the nominal value. However, this accuracy level is also found in single-aperture WLO. The only big difference between the two is that for multi-aperture optics the rotation between the optics module and the sensor plane also matters. Residual rotation makes the post-processing much more complex. But keep in mind: the alignment of wafer-level optics is never going to be done by hand.

    With the current approach video rates of 30 fps would be no problem.

    Andreas Brueckner

    ReplyDelete
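
    As a back-of-the-envelope check on the redundant-sampling point above: averaging N independent samples of the same scene point reduces temporal noise by sqrt(N). A toy sketch with assumed numbers, not the actual pipeline:

        import numpy as np

        rng = np.random.default_rng(0)
        true_value, noise_sigma, n_channels = 100.0, 5.0, 4

        # The same scene point sampled in four different micro-images.
        samples = true_value + rng.normal(0, noise_sigma, (n_channels, 100_000))

        print(samples[0].std())            # ~5.0: one channel alone
        print(samples.mean(axis=0).std())  # ~5.0 / sqrt(4) = ~2.5
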
  14. At the moment, processing only uses additions and multiplications with constant coefficients, so I don't expect any detrimental effect on noise (a quick check follows this comment). As soon as you start to do demosaicing or deconvolution, negative coefficients will be involved, but that is true for single-aperture cameras as well.

    Processing can be decreased a lot or even eliminated with clever optics/sensor design.

    Alexander Oberdörster

    ReplyDelete
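
    Alexander's point can be checked directly: an output pixel formed as a weighted sum with fixed non-negative weights that sum to one has a noise gain of sqrt(sum of squared weights), which is at most 1. A small sketch with hypothetical bilinear weights (the offsets and noise level are assumed):

        import numpy as np

        rng = np.random.default_rng(1)

        # Bilinear weights for a sample at fractional offset (0.3, 0.2):
        # fixed, non-negative constants that sum to one.
        dx, dy = 0.3, 0.2
        w = np.array([(1-dx)*(1-dy), dx*(1-dy), (1-dx)*dy, dx*dy])

        # Four neighbouring pixels, each with 5 e- of temporal noise.
        pixels = 100.0 + rng.normal(0, 5.0, (4, 100_000))
        out = w @ pixels

        print(out.std())                    # ~3.1: below the input's 5.0
        print(5.0 * np.sqrt((w**2).sum()))  # predicted: noise gain < 1
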
  15. Well, Alexander, why not take and post an image at 100 lux, comparing your sensor (fairly) against some other benchmark sensor? I am not saying it will be a lot worse, but it is an easy thing to do in the lab and would be useful for promoting your technology.

    ReplyDelete
  16. I'm wondering, are these systems fixed-focus?

    ReplyDelete

All comments are moderated to avoid spam and personal attacks.