[GSM+2007]
Green, P., Sun, W., Matusik, W. and Durand, F. 2007. Multi-aperture photography. In SIGGRAPH '07: ACM SIGGRAPH 2007 papers, ACM, New York, NY, USA, 68.
The emergent field of computational photography is proving that, by coupling generalized imaging optics with software processing, the quality and flexibility of imaging systems can be increased. In this paper, we capture and manipulate multiple images of a scene taken with different aperture settings (f-numbers). We design and implement a prototype optical system and associated algorithms to capture four images of the scene in a single exposure, each taken with a different aperture setting. Our system can be used with commercially available DSLR cameras and photographic lenses without modification to either. We leverage the fact that defocus blur is a function of scene depth and f/# to estimate a depth map. We demonstrate several applications of our multi-aperture camera, such as post-exposure editing of the depth of field, including extrapolation beyond the physical limits of the lens, synthetic refocusing, and depth-guided deconvolution.
@inproceedings{GSM+2007,
author = {Green, Paul and Sun, Wenyang and Matusik, Wojciech and Durand, Fr\'{e}do},
title = {Multi-aperture photography},
booktitle = {SIGGRAPH '07: ACM SIGGRAPH 2007 papers},
year = {2007},
pages = {68},
location = {San Diego, California},
doi = {10.1145/1275808.1276462},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://people.csail.mit.edu/green/multiaperture/},
abstract = {The emergent field of computational photography is proving that, by coupling generalized imaging optics with software processing, the quality and flexibility of imaging systems can be increased. In this paper, we capture and manipulate multiple images of a scene taken with different aperture settings (f-numbers). We design and implement a prototype optical system and associated algorithms to capture four images of the scene in a single exposure, each taken with a different aperture setting. Our system can be used with commercially available DSLR cameras and photographic lenses without modification to either. We leverage the fact that defocus blur is a function of scene depth and f/# to estimate a depth map. We demonstrate several applications of our multi-aperture camera, such as post-exposure editing of the depth of field, including extrapolation beyond the physical limits of the lens, synthetic refocusing, and depth-guided deconvolution.}
}
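The depth cue used above, defocus blur varying with both scene depth and f-number, follows directly from a thin-lens model. A minimal sketch of that relationship (the function name, focal length, focus distance, and f-numbers below are illustrative choices, not the paper's prototype):

    def blur_diameter_mm(depth_m, focus_m, focal_mm=50.0, f_number=2.0):
        """Blur-circle diameter on the sensor (mm) for a point at depth_m
        when a thin lens of focal_mm at f_number is focused at focus_m."""
        f = focal_mm / 1000.0                   # focal length in metres
        aperture = f / f_number                 # aperture diameter
        v_focus = focus_m * f / (focus_m - f)   # sensor position for the focus plane
        v_point = depth_m * f / (depth_m - f)   # image distance of the scene point
        return 1000.0 * aperture * abs(v_focus - v_point) / v_point

    # Blur grows away from the focus plane and scales roughly as 1/N with the
    # f-number, so comparing blur across several apertures constrains depth.
    for depth in (1.0, 1.5, 2.0, 4.0, 8.0):
        print(depth, [round(blur_diameter_mm(depth, 2.0, f_number=N), 3)
                      for N in (2.0, 4.0, 8.0, 16.0)])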
[HK2008]
Hasinoff, S.W. and Kutulakos, K.N. 2008. Light-Efficient Photography. In ECCV '08: Proceedings of the 10th European Conference on Computer Vision, Springer-Verlag, Berlin, Heidelberg, 45–59.
We consider the problem of imaging a scene with a given depth of field at a given exposure level in the shortest amount of time possible. We show that by (1) collecting a sequence of photos and (2) controlling the aperture, focus and exposure time of each photo individually, we can span the given depth of field in less total time than it takes to expose a single narrower-aperture photo. Using this as a starting point, we obtain two key results. First, for lenses with continuously-variable apertures, we derive a closed-form solution for the globally optimal capture sequence, i.e., that collects light from the specified depth of field in the most efficient way possible. Second, for lenses with discrete apertures, we derive an integer programming problem whose solution is the optimal sequence. Our results are applicable to off-the-shelf cameras and typical photography conditions, and advocate the use of dense, wide-aperture photo sequences as a light-efficient alternative to single-shot, narrow-aperture photography.
@inproceedings{HK2008,
author = {Hasinoff, Samuel W. and Kutulakos, Kiriakos N.},
title = {Light-Efficient Photography},
booktitle = {ECCV '08: Proceedings of the 10th European Conference on Computer Vision},
year = {2008},
isbn = {978-3-540-88692-1},
pages = {45--59},
location = {Marseille, France},
doi = {10.1007/978-3-540-88693-8_4},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
project = {http://people.csail.mit.edu/hasinoff/lightefficient/},
abstract = {We consider the problem of imaging a scene with a given depth of field at a given exposure level in the shortest amount of time possible. We show that by (1) collecting a sequence of photos and (2) controlling the aperture, focus and exposure time of each photo individually, we can span the given depth of field in less total time than it takes to expose a single narrower-aperture photo. Using this as a starting point, we obtain two key results. First, for lenses with continuously-variable apertures, we derive a closed-form solution for the globally optimal capture sequence, i.e., that collects light from the specified depth of field in the most efficient way possible. Second, for lenses with discrete apertures, we derive an integer programming problem whose solution is the optimal sequence. Our results are applicable to off-the-shelf cameras and typical photography conditions, and advocate the use of dense, wide-aperture photo sequences as a light-efficient alternative to single-shot, narrow-aperture photography. }
}
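The efficiency argument can be checked with first-order photographic approximations: at a fixed exposure level the exposure time grows roughly as the square of the f-number, while depth of field grows only about linearly with it, so a few wide-aperture shots that tile the target depth of field can finish sooner than one narrow-aperture shot. A back-of-the-envelope sketch; these approximations and numbers are illustrative, not the paper's closed-form or integer-programming solutions:

    def total_capture_time(n_shots, f_number, t_ref=1.0, n_ref=2.0):
        """Total exposure time for n_shots at f_number, assuming a reference
        shot at f/n_ref needs t_ref seconds for the desired exposure level."""
        return n_shots * t_ref * (f_number / n_ref) ** 2

    narrow_N = 8.0                       # single shot whose DOF we want to cover
    wide_N = 2.0                         # aperture used by the focal stack
    shots = round(narrow_N / wide_N)     # DOF ~ N, so roughly 4 focus slices needed

    print(total_capture_time(1, narrow_N))      # 16.0x the reference exposure time
    print(total_capture_time(shots, wide_N))    # 4 x 1.0 = 4.0x: same DOF, ~4x faster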
[HKD+2009]
Hasinoff, S.W., Kutulakos, K.N., Durand, F. and Freeman, W.T. 2009. Time-Constrained Photography. In Proceedings of the IEEE International Conference on Computer Vision (ICCV '09).
Capturing multiple photos at different focus settings is a powerful approach for reducing optical blur, but how many photos should we capture within a fixed time budget? We develop a framework to analyze optimal capture strategies balancing the tradeoff between defocus and sensor noise, incorporating uncertainty in resolving scene depth. We derive analytic formulas for restoration error and use Monte Carlo integration over depth to derive optimal capture strategies for different camera designs, under a wide range of photographic scenarios. We also derive a new upper bound on how well spatial frequencies can be preserved over the depth of field. Our results show that by capturing the optimal number of photos, a standard camera can achieve performance at the level of more complex computational cameras, in all but the most demanding of cases. We also show that computational cameras, although specifically designed to improve one-shot performance, generally benefit from capturing multiple photos as well.
@inproceedings{HKD+2009,
author = {Hasinoff, Samuel W. and Kutulakos, Kiriakos N. and Durand, Fr\'{e}do and Freeman, William T.},
title = {Time-Constrained Photography},
booktitle = {Proceedings of the {IEEE} International Conference on Computer Vision ({ICCV '09})},
year = {2009},
project = {http://people.csail.mit.edu/hasinoff/timecon/},
abstract = {Capturing multiple photos at different focus settings is a powerful approach for reducing optical blur, but how many photos should we capture within a fixed time budget? We develop a framework to analyze optimal capture strategies balancing the tradeoff between defocus and sensor noise, incorporating uncertainty in resolving scene depth. We derive analytic formulas for restoration error and use Monte Carlo integration over depth to derive optimal capture strategies for different camera designs, under a wide range of photographic scenarios. We also derive a new upper bound on how well spatial frequencies can be preserved over the depth of field. Our results show that by capturing the optimal number of photos, a standard camera can achieve performance at the level of more complex computational cameras, in all but the most demanding of cases. We also show that computational cameras, although specifically designed to improve one-shot performance, generally benefit from capturing multiple photos as well. }
}
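The question the paper formalizes, how many shots to take within a fixed time budget, can be mimicked with a deliberately crude Monte Carlo experiment: splitting the budget across more shots tiles depth more finely (less defocus) but gives each shot less light (more noise). The error model below is an invented stand-in chosen only to show the shape of the tradeoff, not the paper's analytic restoration error:

    import random

    def expected_error(n_shots, budget=1.0, depth_range=(1.0, 8.0), samples=20000):
        """Toy expected error: nearest-focus defocus penalty plus a noise term
        that grows as each shot's share of the exposure budget shrinks."""
        lo, hi = depth_range
        foci = [lo + (i + 0.5) * (hi - lo) / n_shots for i in range(n_shots)]
        noise_var = n_shots / budget            # per-shot exposure = budget / n_shots
        total = 0.0
        for _ in range(samples):
            d = random.uniform(lo, hi)                  # depth drawn from a flat prior
            defocus = min(abs(d - f) for f in foci)     # distance to nearest focus setting
            total += defocus ** 2 + 0.5 * noise_var     # invented blur-plus-noise penalty
        return total / samples

    # Sweep the number of shots and keep the one with the lowest expected error.
    best = min(range(1, 9), key=expected_error)
    print(best, expected_error(best))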
[KF2009]
Krishnan, D. and Fergus, R. 2009. Dark flash photography. In SIGGRAPH '09: ACM SIGGRAPH 2009 papers, ACM, New York, NY, USA, 1–11.
Camera flashes produce intrusive bursts of light that disturb or dazzle. We present a prototype camera and flash that uses infra-red and ultra-violet light mostly outside the visible range to capture pictures in low-light conditions. This "dark" flash is at least two orders of magnitude dimmer than conventional flashes for a comparable exposure. Building on ideas from flash/no-flash photography, we capture a pair of images, one using the dark flash, the other using the dim ambient illumination alone. We then exploit the correlations between images recorded at different wavelengths to denoise the ambient image and restore fine details to give a high quality result, even in very weak illumination. The processing techniques can also be used to denoise images captured with conventional cameras.
@inproceedings{KF2009,
author = {Krishnan, Dilip and Fergus, Rob},
title = {Dark flash photography},
booktitle = {SIGGRAPH '09: ACM SIGGRAPH 2009 papers},
year = {2009},
isbn = {978-1-60558-726-4},
pages = {1--11},
location = {New Orleans, Louisiana},
doi = {10.1145/1576246.1531402},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://www.cs.nyu.edu/~fergus/research/dark_flash.html},
abstract = {Camera flashes produce intrusive bursts of light that disturb or dazzle. We present a prototype camera and flash that uses infra-red and ultra-violet light mostly outside the visible range to capture pictures in low-light conditions. This "dark" flash is at least two orders of magnitude dimmer than conventional flashes for a comparable exposure. Building on ideas from flash/no-flash photography, we capture a pair of images, one using the dark flash, other using the dim ambient illumination alone. We then exploit the correlations between images recorded at different wavelengths to denoise the ambient image and restore fine details to give a high quality result, even in very weak illumination. The processing techniques can also be used to denoise images captured with conventional cameras.}
}
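The flash/no-flash idea the paper builds on is often illustrated with a cross (or joint) bilateral filter: the sharp, high-SNR flash image supplies the edge-stopping weights used to average the noisy ambient image. The 1-D sketch below shows only that building block and is not the paper's actual method, which is a gradient-domain optimization with spectral constraints between the visible and IR/UV channels; names and parameters are ad hoc:

    import math
    import random

    def cross_bilateral(noisy, guide, radius=5, sigma_s=2.0, sigma_r=0.1):
        """Smooth `noisy` using spatial weights plus range weights taken from
        the clean `guide` signal, so edges present in the guide are preserved."""
        out = []
        for i in range(len(noisy)):
            acc = wsum = 0.0
            for j in range(max(0, i - radius), min(len(noisy), i + radius + 1)):
                w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))                 # spatial
                w *= math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2))  # guide edges
                acc += w * noisy[j]
                wsum += w
            out.append(acc / wsum)
        return out

    # Toy 1-D scene: a step edge. The "dark flash" copy is clean; the ambient
    # copy is dim and noisy. The filter smooths noise while keeping the edge.
    guide = [0.0] * 50 + [1.0] * 50
    ambient = [g + random.gauss(0.0, 0.15) for g in guide]
    denoised = cross_bilateral(ambient, guide)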
[LFD+2007]
Levin, A., Fergus, R., Durand, F. and Freeman, W.T. 2007. Image and depth from a conventional camera with a coded aperture. In SIGGRAPH '07: ACM SIGGRAPH 2007 papers, ACM, New York, NY, USA, 70.
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.
@inproceedings{LFD+2007,
author = {Levin, Anat and Fergus, Rob and Durand, Fr\'{e}do and Freeman, William T.},
title = {Image and depth from a conventional camera with a coded aperture},
booktitle = {SIGGRAPH '07: ACM SIGGRAPH 2007 papers},
year = {2007},
pages = {70},
location = {San Diego, California},
doi = {10.1145/1275808.1276464},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://groups.csail.mit.edu/graphics/CodedAperture/},
abstract = {A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.}
}
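Depth recovery with a coded aperture can be summarized as a hypothesis test over blur scales: deconvolve the image with the aperture pattern scaled to each candidate depth and keep the scale whose result looks most like a natural image. The 1-D sketch below uses Wiener deconvolution and a heavy-tailed gradient score as stand-ins for the paper's statistical image model; the aperture code, scales, and scene are all illustrative:

    import numpy as np

    def kernel_at_scale(code, scale, n):
        """Aperture code stretched by the defocus scale, zero-padded to length n."""
        k = np.repeat(np.asarray(code, float), scale)
        k /= k.sum()
        out = np.zeros(n)
        out[:len(k)] = k
        return out

    def wiener_deconv(y, k, eps=1e-2):
        K = np.fft.fft(k)
        return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(K) / (np.abs(K) ** 2 + eps)))

    code = [1, 0, 1, 1, 0, 1]                      # illustrative binary aperture pattern
    n = 512
    scene = np.zeros(n); scene[100:180] = 1.0; scene[300:330] = 0.5
    true_scale = 4                                 # the "true depth" of this toy scene
    observed = np.real(np.fft.ifft(np.fft.fft(scene) *
                                   np.fft.fft(kernel_at_scale(code, true_scale, n))))

    scores = {}
    for s in range(1, 9):                          # candidate blur scales ~ candidate depths
        x_hat = wiener_deconv(observed, kernel_at_scale(code, s, n))
        scores[s] = float(np.sum(np.abs(np.diff(x_hat)) ** 0.8))   # sparse-gradient score
    print(min(scores, key=scores.get))             # the best-scoring scale estimates depth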
[LHG+2009]
Levin, A., Hasinoff, S.W., Green, P., Durand, F. and Freeman, W.T. 2009. 4D frequency analysis of computational cameras for depth of field extension. In SIGGRAPH '09: ACM SIGGRAPH 2009 papers, ACM, New York, NY, USA, 1–14.
Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography—wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational imaging modify the acquisition process to extend the DOF through deconvolution. Because deconvolution quality is a tight function of the frequency power spectrum of the defocus kernel, designs with high spectra are desirable. In this paper we study how to design effective extended-DOF systems, and show an upper bound on the maximal power spectrum that can be achieved. We analyze defocus kernels in the 4D light field space and show that in the frequency domain, only a low-dimensional 3D manifold contributes to focus. Thus, to maximize the defocus spectrum, imaging systems should concentrate their limited energy on this manifold. We review several computational imaging systems and show either that they spend energy outside the focal manifold or do not achieve a high spectrum over the DOF. Guided by this analysis we introduce the lattice-focal lens, which concentrates energy at the low-dimensional focal manifold and achieves a higher power spectrum than previous designs. We have built a prototype lattice-focal lens and present extended depth of field results.
@inproceedings{LHG+2009,
author = {Levin, Anat and Hasinoff, Samuel W. and Green, Paul and Durand, Fr\'{e}do and Freeman, William T.},
title = {4D frequency analysis of computational cameras for depth of field extension},
booktitle = {SIGGRAPH '09: ACM SIGGRAPH 2009 papers},
year = {2009},
isbn = {978-1-60558-726-4},
pages = {1--14},
location = {New Orleans, Louisiana},
doi = {10.1145/1576246.1531403},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://www.wisdom.weizmann.ac.il/~levina/papers/lattice/},
abstract = {Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography---wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational imaging modify the acquisition process to extend the DOF through deconvolution. Because deconvolution quality is a tight function of the frequency power spectrum of the defocus kernel, designs with high spectra are desirable. In this paper we study how to design effective extended-DOF systems, and show an upper bound on the maximal power spectrum that can be achieved. We analyze defocus kernels in the 4D light field space and show that in the frequency domain, only a low-dimensional 3D manifold contributes to focus. Thus, to maximize the defocus spectrum, imaging systems should concentrate their limited energy on this manifold. We review several computational imaging systems and show either that they spend energy outside the focal manifold or do not achieve a high spectrum over the DOF. Guided by this analysis we introduce the lattice-focal lens, which concentrates energy at the low-dimensional focal manifold and achieves a higher power spectrum than previous designs. We have built a prototype lattice-focal lens and present extended depth of field results.}
}
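The premise that deconvolution quality is governed by the defocus kernel's power spectrum can be made concrete with a Wiener-style error estimate: frequencies where the kernel's spectrum is weak are essentially lost to noise. The 1-D sketch below applies that estimate to plain defocus kernels of growing size; it only illustrates why high spectra are desirable and is not the paper's 4D light-field analysis or the lattice-focal design:

    import numpy as np

    def expected_restoration_mse(kernel, n=256, sigma2=1e-4, signal_power=1.0):
        """Average per-frequency Wiener restoration error, S*s2 / (S*|K|^2 + s2),
        for a kernel embedded in an n-sample signal with noise variance s2."""
        k = np.zeros(n)
        k[:len(kernel)] = np.asarray(kernel, float) / np.sum(kernel)
        power = np.abs(np.fft.fft(k)) ** 2
        return float(np.mean(signal_power * sigma2 / (signal_power * power + sigma2)))

    # Larger defocus means a lower, null-ridden spectrum and a larger expected
    # restoration error, which is why designs that keep the kernel spectrum
    # high across the depth of field deconvolve more reliably.
    for blur in (1, 5, 9, 17, 33):
        print(blur, expected_restoration_mse(np.ones(blur)))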
[LSC+2008]
Levin, A., Sand, P., Cho, T.S., Durand, F. and Freeman, W.T. 2008. Motion-invariant photography. In SIGGRAPH '08: ACM SIGGRAPH 2008 papers, ACM, New York, NY, USA, 1–9.
Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g. horizontal) we show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, we show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is invariant to object velocity. Thus, a single deconvolution kernel can be used to remove blur and create sharp images of scenes with objects moving at different speeds, without requiring any segmentation and without knowledge of the object speeds. Apart from motion invariance, we prove that the derived parabolic motion preserves image frequency content nearly optimally. That is, while static objects are degraded relative to their image from a static camera, a reliable reconstruction of all moving objects within a given velocities range is made possible. We have built a prototype camera and present successful deblurring results over a wide variety of human motions.
@inproceedings{LSC+2008,
author = {Levin, Anat and Sand, Peter and Cho, Taeg Sang and Durand, Fr\'{e}do and Freeman, William T.},
title = {Motion-invariant photography},
booktitle = {SIGGRAPH '08: ACM SIGGRAPH 2008 papers},
year = {2008},
pages = {1--9},
location = {Los Angeles, California},
doi = {10.1145/1399504.1360670},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://groups.csail.mit.edu/graphics/pubs/MotionInvariant/},
abstract = {Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g. horizontal) we show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, we show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is invariant to object velocity. Thus, a single deconvolution kernel can be used to remove blur and create sharp images of scenes with objects moving at different speeds, without requiring any segmentation and without knowledge of the object speeds. Apart from motion invariance, we prove that the derived parabolic motion preserves image frequency content nearly optimally. That is, while static objects are degraded relative to their image from a static camera, a reliable reconstruction of all moving objects within a given velocities range is made possible. We have built a prototype camera and present successful deblurring results over a wide variety of human motions.}
}
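The central claim, that a parabolic sweep (constant sensor acceleration) produces motion blur that is essentially independent of object velocity, can be checked numerically: histogram the relative displacement between a moving object and the sweeping sensor over the exposure and compare the resulting 1-D kernels. A small simulation sketch with arbitrary units and sweep parameters:

    import numpy as np

    def blur_kernel(velocity, accel=200.0, T=1.0, bins=160, span=80.0):
        """1-D blur kernel for an object at constant velocity seen by a camera
        sweeping along x with position accel * t**2 during an exposure of length T."""
        t = np.linspace(-T / 2, T / 2, 200001)
        rel = velocity * t - accel * t ** 2        # object position minus camera position
        rel -= rel.max()                           # align kernels at the parabola's vertex
        hist, _ = np.histogram(rel, bins=bins, range=(-span, 0.0))
        return hist / hist.sum()

    k_static = blur_kernel(0.0)
    k_moving = blur_kernel(20.0)
    # The two kernels agree except in their faint tails, which is why a single
    # deconvolution kernel serves objects at different speeds within the range.
    print(np.abs(k_static - k_moving).sum())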
[MBN2007]
Moreno-Noguer, F., Belhumeur, P.N. and Nayar, S.K. 2007. Active refocusing of images and videos. In SIGGRAPH '07: ACM SIGGRAPH 2007 papers, ACM, New York, NY, USA, 67.
We present a system for refocusing images and videos of dynamic scenes using a novel, single-view depth estimation method. Our method for obtaining depth is based on the defocus of a sparse set of dots projected onto the scene. In contrast to other active illumination techniques, the projected pattern of dots can be removed from each captured image and its brightness easily controlled in order to avoid under- or over-exposure. The depths corresponding to the projected dots and a color segmentation of the image are used to compute an approximate depth map of the scene with clean region boundaries. The depth map is used to refocus the acquired image after the dots are removed, simulating realistic depth of field effects. Experiments on a wide variety of scenes, including close-ups and live action, demonstrate the effectiveness of our method.
@inproceedings{MBN2007,
author = {Moreno-Noguer, Francesc and Belhumeur, Peter N. and Nayar, Shree K.},
title = {Active refocusing of images and videos},
booktitle = {SIGGRAPH '07: ACM SIGGRAPH 2007 papers},
year = {2007},
pages = {67},
location = {San Diego, California},
doi = {10.1145/1275808.1276461},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://www.cs.columbia.edu/CAVE/projects/active_refocus/},
abstract = {We present a system for refocusing images and videos of dynamic scenes using a novel, single-view depth estimation method. Our method for obtaining depth is based on the defocus of a sparse set of dots projected onto the scene. In contrast to other active illumination techniques, the projected pattern of dots can be removed from each captured image and its brightness easily controlled in order to avoid under- or over-exposure. The depths corresponding to the projected dots and a color segmentation of the image are used to compute an approximate depth map of the scene with clean region boundaries. The depth map is used to refocus the acquired image after the dots are removed, simulating realistic depth of field effects. Experiments on a wide variety of scenes, including close-ups and live action, demonstrate the effectiveness of our method.}
}
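Once the per-pixel depth map exists, the refocusing step itself is conceptually simple: blur each pixel by an amount that grows with its distance from the chosen virtual focus plane. The 1-D gather-blur sketch below shows only that final step, with an arbitrary blur-versus-depth scaling; it ignores the occlusion handling and the color segmentation the paper uses to keep region boundaries clean:

    import numpy as np

    def refocus(image, depth, focus_depth, blur_per_unit=4.0):
        """Average each sample over a window whose radius grows with the
        pixel's depth distance from the virtual focus plane (1-D toy)."""
        out = np.empty(len(image))
        for i in range(len(image)):
            r = int(round(blur_per_unit * abs(depth[i] - focus_depth)))
            lo, hi = max(0, i - r), min(len(image), i + r + 1)
            out[i] = image[lo:hi].mean()
        return out

    # Toy scene: textured near half (depth 1) and textured far half (depth 3).
    image = np.concatenate([np.tile([0.0, 1.0], 32), np.tile([1.0, 0.0, 0.5], 22)[:64]])
    depth = np.concatenate([np.full(64, 1.0), np.full(64, 3.0)])
    near_in_focus = refocus(image, depth, focus_depth=1.0)   # far half becomes blurred
    far_in_focus = refocus(image, depth, focus_depth=3.0)    # near half becomes blurred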
[NKZ+2008]
Nagahara, H., Kuthirummal, S., Zhou, C. and Nayar, S.K. 2008. Flexible Depth of Field Photography. In ECCV '08: Proceedings of the 10th European Conference on Computer Vision, Springer-Verlag, Berlin, Heidelberg, 60–73.
The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector, during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate three applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Applying deconvolution to a captured image gives an image with extended DOF and yet high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness while objects in between are severely blurred. Finally, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
@inproceedings{NKZ+2008,
author = {Nagahara, Hajime and Kuthirummal, Sujit and Zhou, Changyin and Nayar, Shree K.},
title = {Flexible Depth of Field Photography},
booktitle = {ECCV '08: Proceedings of the 10th European Conference on Computer Vision},
year = {2008},
isbn = {978-3-540-88692-1},
pages = {60--73},
location = {Marseille, France},
doi = {10.1007/978-3-540-88693-8_5},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
project = {http://www1.cs.columbia.edu/CAVE/projects/flexible_dof/},
abstract = {The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector, during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate three applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Applying deconvolution to a captured image gives an image with extended DOF and yet high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness while objects in between are severely blurred. Finally, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.}
}
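The extended-DOF mode rests on the claim that translating the detector along the optical axis during the exposure integrates the instantaneous defocus discs into a point spread function that is nearly the same for a wide range of scene depths, so a single deconvolution sharpens the whole range. A 1-D numerical sketch of that integrated-PSF claim under a thin-lens model (the focal length, aperture, sweep range, and depths are arbitrary, not the prototype's):

    import numpy as np

    def integrated_psf(depth_m, sweep_mm, focal_mm=12.5, f_number=1.4,
                       steps=400, half_width_mm=0.5, bins=201):
        """Time-integrated 1-D PSF for a point at depth_m while the detector
        translates between sweep_mm[0] and sweep_mm[1] behind the lens."""
        aperture = focal_mm / f_number
        v_point = depth_m * 1000.0 * focal_mm / (depth_m * 1000.0 - focal_mm)
        x = np.linspace(-half_width_mm, half_width_mm, bins)
        psf = np.zeros(bins)
        for v in np.linspace(sweep_mm[0], sweep_mm[1], steps):
            c = aperture * abs(v - v_point) / v_point    # instantaneous blur width (mm)
            c = max(c, x[1] - x[0])                      # avoid zero-width boxes
            psf += (np.abs(x) <= c / 2) / c              # unit-energy box of width c
        return psf / psf.sum()

    sweep = (12.50, 12.80)                  # detector travel (mm), crossing focus for both depths
    near = integrated_psf(0.7, sweep)
    far = integrated_psf(3.0, sweep)
    print(np.abs(near - far).sum())         # small: the integrated PSF varies only mildly with depth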
[RAT2006]
Raskar, R., Agrawal, A. and Tumblin, J. 2006. Coded exposure photography: motion deblurring using fluttered shutter. In SIGGRAPH '06: ACM SIGGRAPH 2006 Papers, ACM, New York, NY, USA, 795–804.
In a conventional single-exposure photograph, moving objects or moving cameras cause motion blur. The exposure time defines a temporal box filter that smears the moving object across the image by convolution. This box filter destroys important high-frequency spatial details so that deblurring via deconvolution becomes an ill-posed problem. Rather than leaving the shutter open for the entire exposure duration, we "flutter" the camera's shutter open and closed during the chosen exposure time with a binary pseudo-random sequence. The flutter changes the box filter to a broad-band filter that preserves high-frequency spatial details in the blurred image and the corresponding deconvolution becomes a well-posed problem. We demonstrate that manually-specified point spread functions are sufficient for several challenging cases of motion-blur removal including extremely large motions, textured backgrounds and partial occluders.
@inproceedings{RAT2006,
author = {Raskar, Ramesh and Agrawal, Amit and Tumblin, Jack},
title = {Coded exposure photography: motion deblurring using fluttered shutter},
booktitle = {SIGGRAPH '06: ACM SIGGRAPH 2006 Papers},
year = {2006},
isbn = {1-59593-364-6},
pages = {795--804},
location = {Boston, Massachusetts},
doi = {10.1145/1179352.1141957},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://www.umiacs.umd.edu/~aagrawal/sig06/sig06Main.html},
abstract = {In a conventional single-exposure photograph, moving objects or moving cameras cause motion blur. The exposure time defines a temporal box filter that smears the moving object across the image by convolution. This box filter destroys important high-frequency spatial details so that deblurring via deconvolution becomes an ill-posed problem.Rather than leaving the shutter open for the entire exposure duration, we "flutter" the camera's shutter open and closed during the chosen exposure time with a binary pseudo-random sequence. The flutter changes the box filter to a broad-band filter that preserves high-frequency spatial details in the blurred image and the corresponding deconvolution becomes a well-posed problem. We demonstrate that manually-specified point spread functions are sufficient for several challenging cases of motion-blur removal including extremely large motions, textured backgrounds and partial occluders.}
}
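The benefit of fluttering is easiest to see in the frequency domain: a shutter held open for the whole exposure is a temporal box filter whose spectrum contains nulls, so those frequencies of a moving object cannot be recovered by deconvolution, whereas a binary open/closed pattern with the same total open time can keep every frequency away from zero. The sketch below compares the two; the random code is purely illustrative and is not the optimized chop sequence used in the paper:

    import numpy as np

    n = 64
    rng = np.random.default_rng(1)
    box = np.zeros(n); box[:32] = 1.0                              # shutter open in one block
    flutter = np.zeros(n); flutter[rng.permutation(n)[:32]] = 1.0  # same open time, scattered chops

    def min_mtf(pattern):
        """Smallest magnitude of the exposure pattern's normalized spectrum."""
        return float(np.abs(np.fft.rfft(pattern / pattern.sum())).min())

    # The box filter has exact spectral nulls (ill-posed deconvolution); the
    # fluttered pattern's minimum stays clearly above zero (well-posed).
    print(min_mtf(box), min_mtf(flutter))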
[VRA+2007]
Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A. and Tumblin, J. 2007. Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. In SIGGRAPH '07: ACM SIGGRAPH 2007 papers, ACM, New York, NY, USA, 69.
We describe a theoretical framework for reversibly modulating 4D light fields using an attenuating mask in the optical path of a lens based camera. Based on this framework, we present a novel design to reconstruct the 4D light field from a 2D camera image without any additional refractive elements as required by previous light field cameras. The patterned mask attenuates light rays inside the camera instead of bending them, and the attenuation recoverably encodes the rays on the 2D sensor. Our mask-equipped camera focuses just as a traditional camera to capture conventional 2D photos at full sensor resolution, but the raw pixel values also hold a modulated 4D light field. The light field can be recovered by rearranging the tiles of the 2D Fourier transform of sensor values into 4D planes, and computing the inverse Fourier transform. In addition, one can also recover the full resolution image information for the in-focus parts of the scene. We also show how a broadband mask placed at the lens enables us to compute refocused images at full sensor resolution for layered Lambertian scenes. This partial encoding of 4D ray-space data enables editing of image contents by depth, yet does not require computational recovery of the complete 4D light field.
@inproceedings{VRA+2007,
author = {Veeraraghavan, Ashok and Raskar, Ramesh and Agrawal, Amit and Mohan, Ankit and Tumblin, Jack},
title = {Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing},
booktitle = {SIGGRAPH '07: ACM SIGGRAPH 2007 papers},
year = {2007},
pages = {69},
location = {San Diego, California},
doi = {10.1145/1275808.1276463},
publisher = {ACM},
address = {New York, NY, USA},
project = {http://www.umiacs.umd.edu/~aagrawal/sig07/MatlabCodeImages.html},
abstract = {We describe a theoretical framework for reversibly modulating 4D light fields using an attenuating mask in the optical path of a lens based camera. Based on this framework, we present a novel design to reconstruct the 4D light field from a 2D camera image without any additional refractive elements as required by previous light field cameras. The patterned mask attenuates light rays inside the camera instead of bending them, and the attenuation recoverably encodes the rays on the 2D sensor. Our mask-equipped camera focuses just as a traditional camera to capture conventional 2D photos at full sensor resolution, but the raw pixel values also hold a modulated 4D light field. The light field can be recovered by rearranging the tiles of the 2D Fourier transform of sensor values into 4D planes, and computing the inverse Fourier transform. In addition, one can also recover the full resolution image information for the in-focus parts of the scene. We also show how a broadband mask placed at the lens enables us to compute refocused images at full sensor resolution for layered Lambertian scenes. This partial encoding of 4D ray-space data enables editing of image contents by depth, yet does not require computational recovery of the complete 4D light field.}
}
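The heterodyning idea, attenuation by a periodic mask creating shifted spectral copies that are separated by rearranging tiles of the sensor's Fourier transform, can be reduced to a 1-D frequency-multiplexing toy: several band-limited "slices" are modulated onto distinct carriers, summed into one sensor signal, and recovered by moving each spectral tile back to baseband. This is only an analogy for the mechanism, with invented signal names and carrier frequencies; the actual system performs the corresponding rearrangement on the 2D sensor FFT to assemble the 4D light field:

    import numpy as np

    n = 1024
    x = np.arange(n)
    rng = np.random.default_rng(0)

    def bandlimited_signal(cutoff=20):
        """Random real signal whose spectrum lives below `cutoff` cycles."""
        half = np.zeros(n // 2 + 1, complex)
        half[:cutoff] = rng.normal(size=cutoff) + 1j * rng.normal(size=cutoff)
        half[0] = half[0].real
        sig = np.fft.irfft(half, n)
        return sig / np.abs(sig).max()

    slices = [bandlimited_signal() for _ in range(3)]   # stand-ins for angular light-field slices
    carriers = [0, 100, 200]                            # mask harmonics, spaced beyond the bandwidth
    sensor = sum(s * np.cos(2 * np.pi * c * x / n) for s, c in zip(slices, carriers))

    # Demultiplex: cut the tile of the sensor spectrum around each carrier and
    # shift it back to baseband before inverting the transform.
    spectrum = np.fft.fft(sensor)
    half_tile = 40
    recovered = []
    for c in carriers:
        tile = np.zeros(n, complex)
        idx = np.arange(-half_tile, half_tile + 1) % n
        tile[idx] = spectrum[(idx + c) % n]             # move the tile at +c down to baseband
        rec = np.real(np.fft.ifft(tile)) * (2 if c else 1)   # cosine carrier halves the amplitude
        recovered.append(rec)
    print(max(float(np.abs(r - s).max()) for r, s in zip(recovered, slices)))   # ~0: slices recovered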