
Leveraging Dual-Pixel Sensors for Camera Depth of Field Manipulation

dc.contributor.advisor: Brown, Michael S.
dc.contributor.author: Abuolaim, Abdullah Ahmad Taleb
dc.date.accessioned: 2022-03-03T14:04:59Z
dc.date.available: 2022-03-03T14:04:59Z
dc.date.copyright: 2021-10
dc.date.issued: 2022-03-03
dc.date.updated: 2022-03-03T14:04:58Z
dc.degree.discipline: Electrical Engineering & Computer Science
dc.degree.level: Doctoral
dc.degree.name: PhD - Doctor of Philosophy
dc.description.abstract: Capturing a photo with clear scene details is important in photography and for computer vision applications. The range of distances in the real world over which the scene's objects appear in sharp detail is known as the camera's depth of field (DoF). The DoF is controlled by adjusting the lens-to-sensor distance (i.e., the focus distance), the aperture size, and/or the focal length of the camera. At capture time, especially for video recording, DoF adjustment is often restricted to lens movements, as adjusting the other parameters introduces artifacts that can be visible in the recorded video. Nevertheless, the desired DoF is not always achievable at capture time, for reasons such as the physical constraints of the camera optics. This motivates a complementary direction: adjusting the DoF as a post-processing step. Although pre- and post-capture DoF manipulation is essential, few datasets and simulation platforms enable investigating DoF at capture time. Another limitation is the lack of real datasets for DoF extension (i.e., defocus deblurring); prior work relies on synthesizing defocus blur and ignores the physical formation of defocus blur in real cameras (e.g., lens aberration and radial distortion). To address this research gap, this thesis revisits DoF manipulation from two points of view: (1) adjusting the DoF at capture time, a.k.a. camera autofocus (AF), within the context of dynamic scenes (i.e., video AF); (2) computationally manipulating the DoF as a post-capture process. To this end, we leverage a new imaging sensor technology known as the dual-pixel (DP) sensor. DP sensors are used to optimize camera AF and can provide good cues to estimate the amount of defocus blur present at each pixel location. In particular, this thesis provides the first 4D temporal focal stack dataset, along with an AF platform, to examine video AF.
It also presents insights about user preference that lead to two novel video AF algorithms. As for post-capture DoF manipulation, we examine the problem of reducing defocus blur (i.e., extending the DoF) by introducing a new camera aperture adjustment procedure to collect the first dataset that pairs images exhibiting real defocus blur with their corresponding all-in-focus ground truth. We also propose the first end-to-end learning-based defocus deblurring method. We extend image defocus deblurring to a new application domain (i.e., video defocus deblurring) by designing a data synthesis framework that generates realistic DP video data by modeling physical camera constraints such as lens aberration and radial distortion. Finally, we build on this data synthesis framework to synthesize a shallow DoF together with other aesthetic effects, such as multi-view synthesis and image motion.
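The abstract's point that DoF is governed by focus distance, aperture, and focal length can be made concrete with the standard thin-lens circle-of-confusion model: an object off the focus plane projects to a blur disk whose diameter grows with aperture and with distance from the focus plane. The sketch below is illustrative only (it is not code from the thesis, and the function name and parameters are our own); it uses the common approximation c = A · |S₂ − S₁|/S₂ · f/(S₁ − f) for aperture diameter A, focal length f, focus distance S₁, and subject distance S₂.

```python
def coc_diameter(aperture_mm, focal_mm, focus_mm, subject_mm):
    """Approximate circle-of-confusion diameter (mm) on the sensor for a
    thin lens focused at focus_mm when the subject sits at subject_mm.
    Illustrative sketch of the standard thin-lens blur model."""
    return (aperture_mm
            * abs(subject_mm - focus_mm) / subject_mm   # defocus ratio
            * focal_mm / (focus_mm - focal_mm))          # magnification term

# A subject on the focus plane is perfectly sharp:
print(coc_diameter(aperture_mm=25, focal_mm=50, focus_mm=2000, subject_mm=2000))
# Blur grows as the subject moves away from the focus plane:
print(coc_diameter(25, 50, 2000, 4000) > coc_diameter(25, 50, 2000, 3000))
```

Pixels whose blur-disk diameter stays below an acceptable threshold are the ones inside the DoF, which is why widening the aperture (larger A) shrinks the DoF while stopping down extends it.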
dc.identifier.uri: http://hdl.handle.net/10315/39110
dc.language: en
dc.rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject: Artificial intelligence
dc.subject.keywords: Computational photography
dc.subject.keywords: Computational imaging
dc.subject.keywords: Image processing
dc.subject.keywords: Cameras
dc.subject.keywords: Depth of field manipulation
dc.subject.keywords: Dual-pixel sensor
dc.subject.keywords: Optics
dc.subject.keywords: Autofocus
dc.subject.keywords: Video autofocus
dc.subject.keywords: Defocus deblurring
dc.subject.keywords: Synthetic depth of field
dc.subject.keywords: Camera depth of field
dc.subject.keywords: Bokeh effect
dc.subject.keywords: Nimat effect
dc.subject.keywords: Computer vision
dc.subject.keywords: Low-level computer vision
dc.subject.keywords: Artificial intelligence
dc.subject.keywords: Machine learning
dc.subject.keywords: Deep learning
dc.title: Leveraging Dual-Pixel Sensors for Camera Depth of Field Manipulation
dc.type: Electronic Thesis or Dissertation

Files

Original bundle

Name: Abuolaim_Abdullah_AT_2021_PhD.pdf
Size: 144.15 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.87 KB
Format: Plain Text

Name: YorkU_ETDlicense.txt
Size: 3.39 KB
Format: Plain Text