
Structure from motion


Published on February 27, 2008

Author: Denise

Source: authorstream.com


3D from Pictures
Jiajun Zhu, September 29, 2006, University of Virginia

What can we compute from a collection of pictures?
- 3D structure
- camera poses and parameters

This is one of the most important and exciting results in computer vision from the 1990s. It is difficult in practice, largely because of the numerical computation involved, but it is very powerful: this year alone there were 2 SIGGRAPH papers and several sketches based on it. (Show a few demo videos.)

Now let's see how this works!
Input: a collection of pictures.
Output: (1) camera parameters, (2) sparse 3D scene structure.

Consider 1 camera first
What is the relation between pixels and rays in space?

Slide 10: Simplified projective camera model P
- x = P X = K [R | t] X
- P is a 3x4 matrix with 7 degrees of freedom: 1 from focal length, 3 from rotation, 3 from translation.

Consider 1 camera
- x = P X, where P (3x4) has 7 degrees of freedom.
- Given one image, we observe x. Can we recover X or P?
- If P is known, what do we know about X?
- If X is known, can we recover P?
- Number of unknowns = 7; each correspondence X ↔ x gives 2 equations, so 2n ≥ 7, i.e. n ≥ 4.

This is a camera calibration problem
Input: n ≥ 4 world-to-image point correspondences {Xi ↔ xi}.
Output: camera parameters P = K[R | t].

Slides 13-14: Direct Linear Transform (DLT)
- With n ≥ 4 points, minimize ||A p|| subject to the constraint ||p|| = 1.
- Use the SVD of A; p is the last column vector of V: p = vn.

Slide 15: Camera calibration algorithm
Objective: given n ≥ 4 3D-to-2D point correspondences {Xi ↔ xi'}, determine P.
Algorithm:
(1) Linear solution: normalization, then DLT.
(2) Minimization of geometric error: iterative optimization (Levenberg-Marquardt).
(3) Denormalization.
(Implementation in practice.)

Slide 16: How to recover K, R and t from P?
Objective: given the camera projection matrix P, decompose P = K[R | t].
Algorithm: write P = [M | p4] and perform an RQ decomposition of M, so that K is an upper-triangular matrix and R is an orthonormal (rotation) matrix.

This is what we learn from 1 camera.

Let's consider 2 cameras
(i) Correspondence geometry: given an image point x in the first image, how does this constrain the position of the corresponding point x' in the second image?
(ii) Camera geometry (motion): given a set of corresponding image points {xi ↔ xi'}, i = 1, ..., n, what are the cameras P and P' for the two views?

Slides 19-20: The fundamental matrix F
The answer to the correspondence-geometry question is the fundamental matrix F:
x'^T F x = 0

Slide 21: What does the fundamental matrix F tell us?
- F relates corresponding pixels: x'^T F x = 0.
- If the intrinsic parameters (i.e. the focal length in our camera model) of both cameras are known, as K and K', then one can derive (not here) that K'^T F K = [t]× R, where t and R are the translation and rotation of the second camera, i.e. P = [I | 0] and P' = [R | t].

Slide 22: The good thing is that ...
The fundamental matrix F can be computed from a set of pixel correspondences {x' ↔ x}.

Slide 23: Compute F from correspondences
Each correspondence gives one equation x'^T F x = 0, which is linear in the unknown entries of F; separate the known data (the pixel coordinates) from the unknowns (the nine entries of F) and stack the equations into a linear system.
How many correspondences do we need?
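To make Slide 23 concrete, here is a minimal sketch of the normalized eight-point algorithm in Python/NumPy (my own illustration, not code from the deck; the function and variable names are mine). It follows the same recipe as the DLT on Slides 13-14: stack one linear equation per correspondence, minimize ||A f|| subject to ||f|| = 1 via the SVD, then enforce the rank-2 constraint and undo the normalization.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: move the centroid to the origin and scale so the
    mean distance from it is sqrt(2). Returns homogeneous points and the 3x3 transform T."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(x1, x2):
    """Estimate F from n >= 8 correspondences x1[i] <-> x2[i] (pixel coords, shape (n, 2)),
    such that x2_h^T F x1_h = 0 for homogeneous image points."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One linear equation per correspondence in the nine entries of F (row-major order).
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0],            p1[:, 1],            np.ones(len(p1))])
    # Minimize ||A f|| subject to ||f|| = 1: f is the last right singular vector (cf. Slide 14).
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # A valid fundamental matrix has rank 2: zero out the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization so F works on the original pixel coordinates.
    return T2.T @ F @ T1
```

Eight correspondences are enough for a unique (up to scale) linear solution, which answers the question above; with known intrinsics K and K', the essential matrix K'^T F K can then be decomposed into the R and t used on the next slides.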
Slide 24: What can we do now?
(1) Given F, K and K', we can estimate the relative translation and rotation of the two cameras: P = [I | 0] and P' = [R | t].
(2) Given 8 correspondences {x' ↔ x}, we can compute F.
Putting these together: given K and K', and 8 correspondences {x' ↔ x}, we can compute P = [I | 0] and P' = [R | t].

This answers the 2nd question
(i) Correspondence geometry: given an image point x in the first image, how does this constrain the position of the corresponding point x' in the second image?
(ii) Camera geometry (motion): given a set of corresponding image points {xi ↔ xi'}, i = 1, ..., n, what are the cameras P and P' for the two views?

But how to make this automatic?
Given K and K', and 8 correspondences {x' ↔ x}, we can compute P = [I | 0] and P' = [R | t].
(1) Estimating the intrinsics K and K' (auto-calibration) will not be discussed here; it involves a lot of projective geometry.
(2) Let's see how to find correspondences automatically, i.e. feature detection and matching.

Lowe's SIFT features
Invariant to position, orientation and scale.

Scale
- Look for strong responses of a DoG (Difference-of-Gaussian) filter over scale space.
- Only consider local maxima in both position and scale.

Orientation
- Create a histogram of local gradient directions computed at the selected scale.
- Assign the canonical orientation at the peak of the smoothed histogram.
- Each key specifies stable 2D coordinates (x, y, scale, orientation).

Simple matching
- For each feature in image 1, find the most similar feature in image 2 (compute the correlation of the two descriptor vectors), and vice versa.
- Keep mutual best matches.
- On top of this one can design a very robust RANSAC-type algorithm; a sketch of the resulting two-view pipeline follows below.

What have we learnt so far?
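Putting the two-view pieces together, here is a minimal sketch using OpenCV (my choice of library, not the deck's; it assumes OpenCV ≥ 4.4, where SIFT lives in the main module, and the image paths and intrinsic matrix K below are placeholders). It detects SIFT features, filters matches with Lowe's ratio test (the slides suggest mutual best matches, which plays the same role), robustly estimates the essential matrix with RANSAC, and recovers R and t.

```python
import cv2
import numpy as np

# Placeholder inputs: two overlapping photographs and an assumed intrinsic matrix K.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0.0, 320.0],   # focal length and principal point are made up here;
              [0.0, 800.0, 240.0],   # in a real pipeline they come from calibration or EXIF
              [0.0, 0.0, 1.0]])

# 1. Detect SIFT keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match descriptors and keep only distinctive matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Robustly estimate the essential matrix E = K'^T F K with RANSAC,
#    then recover the relative pose: P = [I | 0], P' = [R | t].
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print(f"{int(inlier_mask.sum())} RANSAC inliers")
print("R =\n", R, "\nt (up to scale) =", t.ravel())
```

Triangulating the inlier correspondences from P = [I | 0] and P' = [R | t] then yields sparse 3D structure for this image pair — the seed that the multi-view pipeline on the next slides grows incrementally.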
Consider more than 2 cameras
(Figure: several cameras P, P', P'' with intrinsics K, K', ..., all observing a common 3D point X.)

Slide 34: Objective
Given N images {Q1, ..., QN} with reasonable overlap, compute the N camera projection matrices {P1, ..., PN}, where each Pi = Ki[Ri | ti]; Ki is the intrinsic matrix, and Ri and ti are the rotation and translation respectively.

Slides 35-36: A first, purely linear algorithm
(1) Find M tracks T = {T1, T2, ..., TM}:
    (i) for every pair of images {Qi, Qj}: detect SIFT feature points in Qi and Qj and match them robustly (RANSAC);
    (ii) match features across multiple images and construct tracks.
(2) Estimate {P1, ..., PN} and the 3D position of each track {X1, ..., XM}:
    (i) select one well-conditioned pair of images {Q1', Q2'}; let T1'2' be their associated overlapping tracks;
    (ii) estimate K1' and K2', and compute {P1', P2'} and the 3D positions of T1'2' from the fundamental matrix;
    (iii) incrementally add a new camera Pk to the system, estimating its camera matrix by DLT (calibration);
    (iv) repeat (iii) until all the cameras are estimated.
However, this won't work!

Slide 37: The fix: replace the purely linear steps with more robust non-linear optimization
The algorithm has the same structure, but every linear estimate is refined:
- in step (2)(ii), after computing {P1', P2'} and the 3D positions of T1'2' from the fundamental matrix, non-linearly minimize the reprojection errors (Levenberg-Marquardt);
- in step (2)(iii), when a new camera Pk is added, use DLT only for its initial value, then non-linearly optimize the whole system;
- step (2)(iv) is unchanged: repeat (iii) until all the cameras are estimated.
(A minimal sketch of this reprojection-error refinement appears at the end of the page.)

Tired?

Recall the camera calibration algorithm
Objective: given n ≥ 4 3D-to-2D point correspondences {Xi ↔ xi'}, determine P.
Algorithm:
(1) Linear solution: normalization, then DLT.
(2) Minimization of geometric error: iterative optimization (Levenberg-Marquardt).
(3) Denormalization.

We are lucky!
For the first time, a huge amount of visual data is easily accessible, and high-level descriptions of these data are also becoming available. How do we explore them? Analyze them? Use them wisely?

What's the contribution of this paper? How to extract high-level information?
- Computer vision and machine learning tools: structure from motion and other computer vision tools have become robust enough for graphics applications.
- Internet image search.
- Human labeling: games with a purpose.

What is the space of all the pictures?
In the past, the present, the future?

What is the space of all the videos?
In the past, the present, the future?

What else?

Using a search engine?

Using human computation power?

What else?

References
Book: "Multiple View Geometry in Computer Vision", Hartley and Zisserman.
Online tutorial: http://www.cs.unc.edu/~marc/tutorial.pdf and http://www.cs.unc.edu/~marc/tutorial/
Matlab toolbox: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/TORR1/index.html
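To close the loop on Slide 37's refinement step, here is a minimal sketch of the reprojection-error minimization, written with SciPy's least-squares solver. It is an illustration under my own assumptions, not the author's implementation: cameras use the deck's simplified 7-degree-of-freedom model (3 rotation + 3 translation + 1 focal length), the parameterization and all names are mine, and real systems use sparse bundle-adjustment solvers instead of the dense "lm" call below.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

CAM_DOF = 7  # the deck's simplified camera: 3 rotation + 3 translation + 1 focal length

def project(point, cam):
    """Project one 3D point with a 7-DOF camera (angle-axis r, translation t, focal f)."""
    r, t, f = cam[:3], cam[3:6], cam[6]
    p = Rotation.from_rotvec(r).apply(point) + t   # X_cam = R X + t
    return f * p[:2] / p[2]                        # perspective division, principal point at 0

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed):
    """Reprojection error of every observation (cam_idx[k], pt_idx[k]) -> observed[k]."""
    cams = params[:n_cams * CAM_DOF].reshape(n_cams, CAM_DOF)
    pts = params[n_cams * CAM_DOF:].reshape(n_pts, 3)
    proj = np.array([project(pts[j], cams[i]) for i, j in zip(cam_idx, pt_idx)])
    return (proj - observed).ravel()

def bundle_adjust(cams0, pts0, cam_idx, pt_idx, observed):
    """Jointly refine all cameras and track points, starting from their linear (DLT) estimates.
    Uses Levenberg-Marquardt, as on Slide 37 (dense; only workable for small problems)."""
    x0 = np.hstack([cams0.ravel(), pts0.ravel()])
    res = least_squares(residuals, x0, method="lm",
                        args=(len(cams0), len(pts0), cam_idx, pt_idx, observed))
    split = len(cams0) * CAM_DOF
    return res.x[:split].reshape(-1, CAM_DOF), res.x[split:].reshape(-1, 3)
```

Each residual depends on only one camera and one 3D point, so the Jacobian is extremely sparse; exploiting that sparsity is what makes bundle adjustment tractable for the many cameras targeted on Slide 34.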
