
Accepted as a poster presentation for ICPR 2014, Stockholm, Sweden, August 24-28, 2014.

Revised version will be submitted by May 21, 2014.

Title: Generalized Coupled Line Cameras and Application in Quadrilateral Reconstruction

Author: Joo-Haeng Lee

Affiliation: Human-Robot Interaction Research Team, ETRI, KOREA

Abstract:

Coupled line camera (CLC) provides a geometric framework to derive an analytic solution for reconstructing an unknown scene rectangle and the relevant projective structure from a single image quadrilateral. We extend this approach as generalized coupled line cameras (GCLC) to handle a scene quadrilateral. First, we generalize a single line camera by removing the centering constraint that the principal axis should bisect a scene line. Then, we couple a pair of generalized line cameras to model a frustum with a quadrilateral base. Finally, we show that the scene quadrilateral and the center of projection can be analytically reconstructed from a single view when prior knowledge on the quadrilateral is available. A completely unknown quadrilateral can be reconstructed from four views by numerically approximating the required parameters through non-linear optimization. We also describe an improved method to handle an off-centered case by geometrically inferring a centered proxy quadrilateral using a vanishing line, which accelerates the 2D reconstruction process without relying on homography or calibration. The proposed method is straightforward to implement since each step is expressed as a simple analytic equation. We present experimental results on real and synthetic examples.


Fig. 1. An example of a canonical line camera: m0 = m2 = 1, l0 = 0.6, l2 = 0.4, and α = 0.2. (a) Camera pose when d = 1.7. (b) Circular trajectory of pc for varying d.

II. PRELIMINARIES OF COUPLED LINE CAMERAS

We summarize the previous work on coupled line cameras for self-containment [1], [2].

A. Line Camera

Definition 1. A line camera captures an image line ui ui+2 from a scene line vi vi+2, where vi = (mi, 0, 0) and vi+2 = (−mi+2, 0, 0) for positive mi and mi+2. See Fig. 1a.

Definition 2. In a centered line camera, the principal axis passes through the center vm of the scene line vi vi+2:

    vm = (vi + vi+2) / 2.    (1)

Definition 3. A canonical line camera is a centered line camera with two constraints for simple formulation: vm = (0, 0, 0)^T and equilateral unit division:

    ∥vi − vm∥ = ∥vi∥ = ∥vi+2∥ = 1.    (2)

For a line camera Ci, let d be the length of the principal axis from the center of projection pc to vm, and let θi be the orientation angle of the principal axis, measured between vm pc and vm vi.

Definition 4. For a canonical line camera, its pose equation is expressed as follows:

    cos θi = ((li − li+2) / (li + li+2)) d = αi d    (3)

where li = ∥ui − um∥ is the length of a partial diagonal, and αi is the line division coefficient of the canonical configuration:

    αi = (li − li+2) / (li + li+2).    (4)

Eq.(3) captures the relation among θi, d, and αi. Note that when αi is fixed, pc is defined along a circular trajectory, or on a solution sphere, of radius 0.5/|αi|. See Fig. 1b.

B. Coupled Line Cameras

Definition 5. A coupled line camera is a pair of line cameras that share the principal axis and the center of projection.

By coupling two canonical line cameras, we can represent a projective structure with a rectangle base. See Fig. 2, whose panels show: (a) the scene rectangle G; (b) the 1st line camera C0; (c) the 2nd line camera C1; (d) the coupling of C0 and C1; (e) the projective structure; and (f) the projection of G to Q.

Fig. 2.
Coupling of two canonical line cameras to represent a projective structure with a rectangle base.

Definition 6. For a coupled line camera, we can derive a coupling constraint:

    β = l1/l0 = tan ψ1 / tan ψ0 = sin θ1 (d − cos θ0) / (sin θ0 (d − cos θ1))    (5)

where β is the coupling coefficient defined by the ratio of the lengths, l0 and l1, of two partial diagonals of Q. See Fig. 2f.

C. Projective Reconstruction

Algorithm 1 (Single-View Reconstruction with CLC). The unknown elements of the projective structure, such as the scene rectangle G and the center of projection pc, can be reconstructed from a single image quadrilateral Q as follows. First, the pose equation of Eq.(3) and the coupling constraint of Eq.(5) can be rearranged into a system of equations:

    d = (β sin θ0 cos θ1 − cos θ0 sin θ1) / (β sin θ0 − sin θ1) = cos θ0 / α0 = cos θ1 / α1    (6)

Then, the length d of the common principal axis can be computed from the system of equations in Eq.(6) as follows:

    d = √(A0 / A1)    (7)

where A0 = (1 − α1)² β² − (1 − α0)² and A1 = α0² (1 − α1)² β² − (1 − α0)² α1². Once d is computed, the two orientation angles, θ0 and θ1, can be computed using Eq.(3). The base rectangle G can be reconstructed by computing its unknown shape parameter, the diagonal angle φ:

    cos φ = cos ρ sin θ0 sin θ1 + cos θ0 cos θ1    (8)

where ρ is the diagonal angle of the image quadrilateral Q. Finally, the projective structure can be reconstructed by computing the coordinates of the center of projection pc:

    pc = (d / sin φ) (sin φ cos θ0, cos θ1 − cos φ cos θ0, sin ρ sin θ0 sin θ1)    (9)

CONFIDENTIAL. Limited circulation. For review only. Preprint submitted to 22nd International Conference on Pattern Recognition. Received December 18, 2013.
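Algorithm 1 is a short chain of closed-form evaluations. The following Python sketch (our illustration of Eqs.(3), (7), and (8), not the authors' code; names are ours) recovers d, the orientation angles, and the diagonal angle φ from the image-side coefficients, assuming the input is the valid projection of some rectangle:

```python
import math

def clc_reconstruct(alpha0, alpha1, beta, rho):
    """Single-view CLC reconstruction (Algorithm 1), assuming valid input.

    alpha0, alpha1 : line division coefficients of Q (Eq.(4))
    beta           : coupling coefficient l1/l0 (Eq.(5))
    rho            : diagonal angle of the image quadrilateral Q
    Returns (d, theta0, theta1, phi).
    """
    # Eq.(7): d = sqrt(A0 / A1)
    A0 = (1 - alpha1) ** 2 * beta ** 2 - (1 - alpha0) ** 2
    A1 = (alpha0 ** 2 * (1 - alpha1) ** 2 * beta ** 2
          - (1 - alpha0) ** 2 * alpha1 ** 2)
    d = math.sqrt(A0 / A1)
    # Eq.(3): cos(theta_i) = alpha_i * d
    theta0 = math.acos(alpha0 * d)
    theta1 = math.acos(alpha1 * d)
    # Eq.(8): diagonal angle of the scene rectangle
    cos_phi = (math.cos(rho) * math.sin(theta0) * math.sin(theta1)
               + math.cos(theta0) * math.cos(theta1))
    return d, theta0, theta1, math.acos(cos_phi)
```

A quick self-consistency check is to synthesize α0, α1, and β from chosen values of d, θ0, and θ1 via Eqs.(3) and (5) and confirm that the routine recovers them.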

Fig. 3. An example of a generalized line camera: m0 = 1, m2 = 1.4, l0 = 0.6, l2 = 0.4, and α = 0.2. (a) Camera pose when d = 1.7. (b) Trajectory of pc when d is not fixed.

D. Determinant Condition

For Eq.(7) to have a valid value, two conditions should be satisfied: (1) A0 and A1 have the same sign; and (2) the length d of the common principal axis should not exceed the diameter of each solution sphere: d ≤ min(1/|α0|, 1/|α1|). These conditions can be combined into a Boolean expression:

    D = D0 ∨ D1    (10)
    D0 = (β ≥ (1 − α0)/(1 − α1)) ∧ (1 ≤ α0/α1)    (11)
    D1 = (β ≤ (1 − α0)/(1 − α1)) ∧ (1 ≥ α0/α1)    (12)

where ∧ and ∨ are the Boolean and and or operations, respectively. Since α0, α1, and β are coefficients obtained from a given image quadrilateral Q, we can determine whether Q is an image of any scene rectangle before actual reconstruction. Once the determinant D is satisfied, Algorithm 1 can be applied.

E. Off-Centered Case

CLC assumes the principal axis passes through the centers of the image quadrilateral Q and the scene rectangle G. When handling an off-centered quadrilateral Qg, a centered proxy quadrilateral Q should be found first by solving equations that formulate edge parallelism between Q and Qg, the centering constraint of Q, and a vanishing line derived from Qg [1]. Once Q is found, the centered proxy rectangle G can be reconstructed using Algorithm 1. Since the inferred Q does not guarantee congruency to Qg, the target scene rectangle Gg should be reconstructed using a homography H between Q and G: Gg = HQg.

In this paper, we propose a new method to handle an off-centered case. First, we derive a centered proxy quadrilateral Q that is perspectively congruent to Qg. Then, we show that the target scene rectangle Gg can be geometrically derived without relying on homography. See Section III-E.

III. GENERALIZATION OF COUPLED LINE CAMERAS

As the main contribution of this paper, we generalize a line camera to support a non-canonical configuration.
Then, we show that a pair of generalized line cameras can be coupled to represent a projective structure with a quadrilateral base other than a rectangle. Finally, we describe how to reconstruct a projective structure from a single view when sufficient prior knowledge is available to constrain the solution space. We also describe how to handle off-centered cases.

Fig. 4. Coupling of two generalized line cameras to represent a projective structure with a quadrilateral base: (a) scene quadrilateral G; (b) 1st line camera C0; (c) 2nd line camera C1; (d) coupling of C0 and C1; (e) projective structure; (f) projection of G to Q. A generalized line camera Ci is assigned for each diagonal of a scene quadrilateral G.

A. Generalized Line Camera

Definition 7. In a general configuration of a line camera, the principal axis may not bisect the scene line: the centering constraints of Eqs.(1)-(2) no longer apply. See Fig. 3, where m0 ≠ m2.

Accordingly, the pose equation of a canonical line camera in Eq.(3) should be generalized with two additional parameters, m0 and m2. Assuming m0 > 0 and m2 > 0, the following geometric relation holds:

    li : li+2 = (mi sin θ0 d)/(d − d̂i) : (mi+2 sin θ0 d)/(d + d̂i+2)    (13)

where d̂0 = m0 cos θ0 and d̂2 = m2 cos θ0.

Definition 8. The generalized pose equation can be derived from Eq.(13):

    cos θi = ((mi+2 li − mi li+2) / (mi mi+2 (li + li+2))) d = αg,i d    (14)

where αg,i is the generalized division coefficient:

    αg,i = (mi+2 li − mi li+2) / (mi mi+2 (li + li+2)).    (15)

For a fixed αg,i, the center of projection pc is defined over a circular trajectory as in Fig. 3b, or on a solution sphere [2].

B. Coupling Generalized Line Cameras

By coupling two generalized line cameras, we can represent a projective structure with a quadrilateral base G with vertices v0 = m0 (1, 0), v1 = m1 (cos φ, sin φ), v2 = −(m2/m0) v0, and v3 = −(m3/m1) v1, where the mi's are the relative lengths of partial diagonals, or the diagonal parameters, of G. See Fig. 4.
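The vertex parameterization above fixes G up to the four diagonal parameters and the diagonal angle φ. A minimal sketch (our direct transcription of the vertex formulas, with vm at the origin):

```python
import math

def scene_quad(m0, m1, m2, m3, phi):
    """Vertices of the scene quadrilateral G from diagonal parameters.

    v0 = m0 (1, 0), v1 = m1 (cos phi, sin phi),
    v2 = -(m2/m0) v0,  v3 = -(m3/m1) v1, with vm at the origin.
    """
    v0 = (m0, 0.0)
    v1 = (m1 * math.cos(phi), m1 * math.sin(phi))
    v2 = (-m2, 0.0)                                   # -(m2/m0) * v0
    v3 = (-m3 * math.cos(phi), -m3 * math.sin(phi))   # -(m3/m1) * v1
    return v0, v1, v2, v3
```

For example, m0 = m1 = m2 = m3 = 1 with φ = π/2 yields a square whose diagonals lie along the coordinate axes.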
Definition 9. A generalized coupling constraint βg is defined as follows:

    βg = l1/l0 = (m1 sin θ1 (d − m0 cos θ0)) / (m0 sin θ0 (d − m1 cos θ1))    (16)
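Both coefficients of Definitions 8 and 9 are simple ratios of measured lengths. A small Python sketch (our naming, not the authors' code; beta_g is written in its forward form, from a known pose, while alpha_g comes from image measurements):

```python
import math

def alpha_g(m_i, m_i2, l_i, l_i2):
    """Eq.(15): generalized division coefficient alpha_{g,i}."""
    return (m_i2 * l_i - m_i * l_i2) / (m_i * m_i2 * (l_i + l_i2))

def beta_g(m0, m1, theta0, theta1, d):
    """Eq.(16): generalized coupling coefficient beta_g = l1 / l0."""
    return (m1 * math.sin(theta1) * (d - m0 * math.cos(theta0))
            / (m0 * math.sin(theta0) * (d - m1 * math.cos(theta1))))
```

In the canonical case (mi = mi+2 = 1) these reduce to the coefficients of Eqs.(4) and (5), which is a convenient sanity check.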

C. Projective Reconstruction

Using a trigonometric identity and the pose equation of Eq.(14), we can derive an equation for βg² by squaring both sides of Eq.(16):

    sin² θi = 1 − cos² θi = 1 − αg,i² d²    (17)

    βg² = (m1² (1 − m0 αg,0)² (1 − αg,1² d²)) / (m0² (1 − m1 αg,1)² (1 − αg,0² d²))    (18)

From Eq.(18), the length d of the common principal axis can be expressed with GCLC parameters:

    d = √(Ag,0 / Ag,1)    (19)

where Ag,0 = m0² (1 − m1 αg,1)² βg² − m1² (1 − m0 αg,0)² and Ag,1 = m0² αg,0² (1 − m1 αg,1)² βg² − m1² (1 − m0 αg,0)² αg,1². Eq.(19) states that d can be computed from the known diagonal parameters, mi and li, of a single pair of scene and image quadrilaterals, without relying on their diagonal angles, φ and ρ.

Algorithm 2 (Single-View Reconstruction with GCLC). Once the length d of the common principal axis has been found using Eq.(19) with prior knowledge of the diagonal parameters, we can compute the orientation angles, θ0 and θ1, using the pose equation of Eq.(14). Then, the diagonal angle φ of the scene quadrilateral and the center of projection pc can be computed using Eqs.(8) and (9), respectively.

If we have no prior knowledge of the diagonal parameters mi of G, we can infer them using multiple image quadrilaterals Qj from different views. By setting m0 = 1, the number of unknown diagonal parameters of G is reduced to three: m1, m2, and m3. For each Qj, the crossing angle φj of Eq.(8) is expressed with m1, m2, m3, and coefficients derived from the known diagonal parameters li,j of Qj. Since the reconstructed φj's should be identical regardless of the view, the identity cos φj = cos φj+1 should hold. Hence, if we have four different views, we can formulate three equations in the three unknowns m1, m2, and m3:

    cos φ0 = cos φ1 = cos φ2 = cos φ3    (20)

The number of views required varies according to the degrees of freedom in the diagonal parameters.
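When the diagonal parameters are known, Algorithm 2 is, like Algorithm 1, a chain of closed-form evaluations. A Python sketch of its first two steps (our illustration of Eqs.(14) and (19) under our own naming, not the authors' code):

```python
import math

def gclc_distance(m0, m1, ag0, ag1, bg):
    """Eq.(19): length d of the common principal axis.

    ag0, ag1 : generalized division coefficients alpha_{g,i} (Eq.(15))
    bg       : generalized coupling coefficient beta_g (Eq.(16))
    """
    A0 = m0**2 * (1 - m1 * ag1)**2 * bg**2 - m1**2 * (1 - m0 * ag0)**2
    A1 = (m0**2 * ag0**2 * (1 - m1 * ag1)**2 * bg**2
          - m1**2 * (1 - m0 * ag0)**2 * ag1**2)
    return math.sqrt(A0 / A1)

def gclc_angles(ag0, ag1, d):
    """Eq.(14): generalized pose equation cos(theta_i) = alpha_{g,i} * d."""
    return math.acos(ag0 * d), math.acos(ag1 * d)
```

As with the canonical case, the routine can be checked by synthesizing αg,0, αg,1, and βg from a chosen pose via Eqs.(14) and (16) and confirming that d and the angles are recovered.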
Although an analytic solution for Eq.(20) has not been found yet, the problem can be formulated as minimization of the following objective function:

    fobj = Σ_{j=0}^{n−1} ∥cos φj − cos φj+1∥²    (21)

where n is the number of views. Generally, Eq.(21) can be solved using a numerical non-linear optimization method [7]. Since the optimization may get stuck in a local minimum, we may check the validity of a solution using the determinant condition of Eqs.(22)-(24).

Algorithm 3 (n-View Reconstruction with GCLC). When Algorithm 2 cannot be applied due to a lack of knowledge of the scene quadrilateral G, but multiple image quadrilaterals Qj from n different views are available, we can find the unknown mi's by minimizing the objective function of Eq.(21). Then, we can apply Algorithm 2 to one of the views to reconstruct the projective structure.

The number of views required in Algorithm 3 depends on the number of unknown mi's. For a general quadrilateral with three unknown mi's (m0 = 1 being fixed), at least 4 views are required according to Eq.(20). For a parallelogram with known m0 = m2 = 1 and unknown m1 = m3, at least 2 views are required to find m1. See Section IV for real examples.

Fig. 5. Reconstruction of a synthetic quadrilateral Gg from an off-centered quadrilateral Qg: m0 = 1, m1 = 0.75, m2 = 1.35, m3 = 1.4, and φ = 1.35. The diagonal parameters mi and the vanishing line are given. (a) Reference: Gg and Qg. (b) Inferring a centered Q (in blue). (c) Reconstruction of G and Gg. (d) Congruency of G and Gg.

D. Determinant Condition

As in Section II-D, we can derive from Eqs.(14) and (19) a condition Dg that determines whether Q is the projection of a centered scene quadrilateral G with known mi's:

    Dg = Dg,0 ∨ Dg,1    (22)
    Dg,0 = (βg ≥ m1(1 − m0 αg,0) / (m0(1 − m1 αg,1))) ∧ (1 ≤ αg,0/αg,1)    (23)
    Dg,1 = (βg ≤ m1(1 − m0 αg,0) / (m0(1 − m1 αg,1))) ∧ (1 ≥ αg,0/αg,1)    (24)

E. Off-Centered Case

Let an off-centered image quadrilateral Qg be the projection of a scene quadrilateral Gg, which is also off-centered and unknown yet. See Fig. 5a.
To apply Algorithms 2 and 3, we provide a method to find a centered proxy quadrilateral Q that is an image of a centered scene quadrilateral G. Specifically, G is guaranteed to be congruent to Gg through parallel translation by t. We also show that the translation vector t can be computed in image space. Hence, we do not need to compute a homography H between G and Q to reconstruct Gg as in CLC. See Section II-E and [1].

Algorithm 4 (Reconstruction from an Off-Centered Quadrilateral). An off-centered scene quadrilateral Gg can be reconstructed from its image Qg by adding extra steps to the GCLC methods presented in Section III-C. See Figure 5:

Fig. 6. Derivation of a centered proxy quadrilateral Q that is perspectively congruent to Qg. Assume the vanishing line w0w1 is given.

1) Infer a centered proxy quadrilateral Q from Qg such that Q is the projection of a centered scene quadrilateral G that is congruent to the target quadrilateral Gg. See Algorithm 5.
2) Apply Algorithm 2 to Q to reconstruct the corresponding centered quadrilateral G and the center of projection pc. If multiple Qg,j are available, apply Algorithm 3 instead.
3) The target scene quadrilateral Gg can be computed as a translation of G: Gg = G + t, where t can be computed from the displacement s = um − om between the centers of Q and Qg using Algorithm 6.

Algorithm 5 (Centered Proxy Quadrilateral). Assuming a vanishing line w0w1 is given, we can find a centered proxy quadrilateral Q by perspectively translating an off-centered quadrilateral Qg. See Figure 6:

1) Find the intersection points wd,i between the vanishing line w0w1 and each diagonal ug,i ug,i+2 of Qg.
2) Find the intersection point wm between the vanishing line w0w1 and the line of translation om um.
3) Find the intersection point u0 between the line ug,0 wm and the line om wd,0. Similarly, find u2 from ug,2 wm and om wd,0.
4) Find the intersection point u1 between the line ug,1 wm and the line om wd,1. Similarly, find u3 from ug,3 wm and om wd,1.
5) The i-th vertex of Q is ui.

Note that Algorithm 5 is composed of simple line-line intersections rather than geometric constraint solving as in [1].

Algorithm 6 (Perspective-to-Euclidean Vector Transformation). With a GCLC defined by known Q and G (as in Fig. 4), we can project an image vector s to a scene vector t. First, we perspectively decompose s along the two diagonals of Q:

1) Find the intersection point us,0 between the line u0 om and the line um wd,1. Similarly, find us,1 from u1 om and um wd,0.
2) For each decomposition coefficient si of us,i, compute the coefficient ti for vi using Eq.(26).
3) The corresponding scene vector t can be expressed as the vector sum of two diagonal vectors of G, t0 v0 + t1 v1, assuming vm = 0. See Fig. 4b.

Algorithm 6 is based on the following property of a generalized line camera.

Fig. 7. Perspective-to-Euclidean vector transformation.

Fig. 8. Scaling transformation in a generalized line camera, explained as a cross ratio between four corresponding points.

Using the projective invariance of the cross-ratio [6], the following holds for the two sets of collinear points, (vt,0, v0, vm, v2) and (us,0, u0, um, u2), on the scene and image lines, respectively:

    (si li (li + li+2)) / (li (si li + li+2)) = (ti mi (mi + mi+2)) / (mi (ti mi + mi+2))    (25)

where si = ∥us,i − um∥/li and ti = ∥vt,i − vm∥/mi. (See Fig. 8.) By solving Eq.(25) for ti, we get the following relation between ti and si:

    ti = (si mi+2 (li + li+2)) / (si mi+2 li + ((1 − si) mi + mi+2) li+2)    (26)

Hence, if a line camera is defined, a scaling factor si of the image line can be mapped to ti of the scene line, and vice versa.

IV. EXPERIMENT

We give experimental results on real and synthetic examples. All the experiments were performed in a Mathematica implementation.

We applied Algorithm 4 to real-world quadrilaterals found in web images of modern architecture. We assume each image was independently taken by an unknown camera and not altered (e.g., by cropping). Each input quadrilateral Qg,j is specified in red lines in Fig. 9a and Fig. 10a. To infer a centered proxy quadrilateral Qj using Algorithm 5, we find a vanishing line using patterns of parallel lines such as window frames [8]. Once a set of centered quadrilaterals Qj is found, we estimate the unknown diagonal parameters mi that minimize the objective function fobj of Eq.(21). In the experiment, we used the NMinimize[] function of Mathematica for non-linear optimization [7].
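For readers without Mathematica, the objective of Eq.(21) is only a few lines in any language, and a derivative-free minimizer such as Nelder-Mead [7] (e.g., SciPy's `minimize(..., method="Nelder-Mead")`) can drive it over (m1, m2, m3). A sketch of the objective alone (our notation; each cos φj would be produced by Algorithm 2 from the candidate mi, and we treat the index j + 1 cyclically, which is our reading of Eq.(21)):

```python
def f_obj(cos_phis):
    """Eq.(21): squared disagreement between per-view diagonal angles.

    cos_phis : list of cos(phi_j), one per view, computed by Algorithm 2
               for a candidate set of diagonal parameters m_i.
    """
    n = len(cos_phis)
    # Every view should reconstruct the same scene diagonal angle;
    # the index j + 1 is taken modulo n.
    return sum((cos_phis[j] - cos_phis[(j + 1) % n]) ** 2 for j in range(n))
```

At the true parameters all cos φj coincide and the objective vanishes, so a minimizer searches for the mi that drive it toward zero.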
With the mi known, we can reconstruct the centered scene quadrilateral Gj, which is congruent to the target scene quadrilateral Gg,j. See Fig. 9b and Fig. 10b. The reconstructed 3D view frustums are omitted due to the page limit.

Fig. 9. Reconstruction of a quadrilateral from four views using Algorithm 4. (a) Input: web images of Fountain Place in Dallas, Texas. (b) The reconstructed quadrilateral with the different textures of the given images.

Fig. 10. Reconstruction of a parallelogram from two views using Algorithm 4. (a) Input: web images of the Dockland in Hamburg, Germany. (b) The reconstructed parallelogram with the different textures of the given images.

For the quadrilateral case of Fig. 9, four images were used. The optimization converges when fobj ≤ 3.7 × 10^−4, with m1 = 2.46639, m2 = 0.476389, and m3 = 1.25378. The mean φ of the four φj is 1.77297, with variance 5.9974 × 10^−5. The optimization takes about 3 seconds on a 2.6 GHz Intel Core i7; the time for the other reconstruction steps is trivial, being the evaluation of analytic expressions. For the parallelogram of Fig. 10, the optimization converges when fobj ≤ 10^−30, with m1 = 2.87419 and φ = 0.606594, in 0.06 seconds.

We also applied Algorithm 4 to the synthetic quadrilateral G of Fig. 4 with four different views. The optimization for mi converges when fobj < 10^−15 in 3 seconds. The mean error of the reconstructed mi is 1.2 × 10^−7. Timing is similar to the real example of Fig. 9, but the precision is much higher due to the absence of noise sources such as lens distortion and feature-detection error. When random noise of 1-pixel radius was added to the vertices of Qj in a 1280 × 1024 image, the precision dropped, with errors of 6.9 × 10^−3 and 4.3 × 10^−3 in mi and φ, respectively.

V. CONCLUSION

We proposed a novel method to reconstruct a scene quadrilateral and the relevant projective structure based on generalized coupled line cameras (GCLC). The method gives an analytic solution for single-view reconstruction when prior knowledge of the diagonal parameters is given. Otherwise, the required parameters can be approximated beforehand from multiple views through optimization.
We also provide an improved method to handle off-centered cases by geometrically inferring a centered proxy quadrilateral, which accelerates the 2D reconstruction process without relying on homography or calibration. The overall computation is quite efficient since each key step is represented as a simple analytic equation. Experiments show reliable results on real images from uncalibrated cameras.

To apply the proposed method to a real-world case with an off-centered quadrilateral, a vanishing line should be available for each view. This condition can be easily satisfied for a specially textured quadrilateral of artifacts [9]. Otherwise, we need other types of prior knowledge to infer a centered quadrilateral; for example, a predefined parametric polyhedral model can be a good candidate [10]. Lastly, coupled line projectors (CLP) [11] are a dual of CLC. We expect that generalized CLP can be combined with GCLC for projector-based augmented reality applications.

REFERENCES

[1] J.-H. Lee, "Camera calibration from a single image based on coupled line cameras and rectangle constraint," in ICPR 2012, 2012, pp. 758-762.
[2] ——, "A new solution for projective reconstruction based on coupled line cameras," ETRI Journal, vol. 35, no. 5, pp. 939-942, 2013.
[3] P. Sturm and S. Maybank, "On plane-based camera calibration: A general algorithm, singularities, applications," in CVPR 1999, 1999, pp. 432-437.
[4] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[5] Z. Zhang and L.-W. He, "Whiteboard scanning and image enhancement," Digital Signal Processing, vol. 17, no. 2, pp. 414-432, 2007.
[6] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2004.
[7] J. A. Nelder and R. Mead, "A simplex method for function minimization," The Computer Journal, vol. 7, no. 4, pp. 308-313, 1965.
[8] J.-C. Bazin, Y. Seo, C. Demonceaux, P. Vasseur, K. Ikeuchi, I. Kweon, and M. Pollefeys, "Globally optimal line clustering and vanishing point estimation in Manhattan world," in CVPR 2012, 2012, pp. 638-645.
[9] Z. Zhang, A. Ganesh, X. Liang, and Y. Ma, "TILT: Transform invariant low-rank textures," International Journal of Computer Vision, vol. 99, no. 1, pp. 1-24, 2012.
[10] P. E. Debevec, C. J. Taylor, and J. Malik, "Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach," in SIGGRAPH 1996. ACM, 1996, pp. 11-20.
[11] J.-H. Lee, "An analytic solution to a projector pose estimation problem," ETRI Journal, vol. 34, no. 6, pp. 978-981, 2012.
