Natural Image Stitching with the Global Similarity Prior

Yu-Sheng Chen     Yung-Yu Chuang
National Taiwan University

Although AANAP, SPHP, and our method all use similarity transformations, our method gives much better results; the differences come from how the similarity is utilized. In this example of stitching six images, AutoStitch introduces obvious distortion because of its spherical projection (top left); SPHP cannot handle the 2D topology among images and suffers from distortion (bottom left); AANAP's result exhibits unnatural rotation and shape distortion (top right); our result (bottom right) looks the most natural of all.


This paper proposes a method for stitching multiple images together so that the stitched image looks as natural as possible. Our method adopts the local warp model and guides the warping of each image with a grid mesh. An objective function is designed to specify the desired characteristics of the warps. In addition to good alignment and minimal local distortion, we add a global similarity prior to the objective function. This prior constrains the warp of each image so that it resembles a similarity transformation as a whole. The selection of the similarity transformation is crucial to the naturalness of the results, and we propose methods for selecting the proper scale and rotation for each image. The warps of all images are solved together to minimize distortion globally. A comprehensive evaluation shows that the proposed method consistently outperforms several state-of-the-art methods, including AutoStitch, APAP, SPHP, and AANAP.
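The joint optimization described above can be sketched as a linear least-squares problem over all mesh vertices. The toy example below is only an illustration of the idea, not the paper's implementation: it uses a tiny 2x2 mesh per image, assumes feature correspondences fall on vertices, and assumes the per-image similarity transformations (scale/rotation/translation) have already been selected. The weights and mesh layout are made up for the demo. Three quadratic terms are stacked into one system: an alignment term tying matched vertices together, a local distortion term preserving mesh edge vectors, and the global similarity prior pulling each vertex toward its similarity-transformed position.

```python
import numpy as np

NV = 4  # vertices per image (2x2 toy mesh)

def idx(img, v, c):
    # Flatten (image, vertex, coordinate) into a column index of the unknown vector.
    return (img * NV + v) * 2 + c

# Original mesh vertices shared by both toy images.
orig = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)

# Assumed pre-selected global similarity per image:
# identity for image 0, a pure translation by (0.8, 0) for image 1.
targets = [orig.copy(), orig + np.array([0.8, 0.0])]

rows, rhs = [], []
def add_eq(coeffs, b, w):
    # One weighted row of the least-squares system: sum(c * x_j) ~= b.
    r = np.zeros(2 * NV * 2)
    for j, c in coeffs:
        r[j] += c
    rows.append(w * r)
    rhs.append(w * b)

w_align, w_local, w_global = 10.0, 1.0, 0.5   # illustrative weights
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]      # toy mesh edges

for img in (0, 1):
    # Global similarity prior: each warped vertex stays near its
    # similarity-transformed original position.
    for v in range(NV):
        for c in (0, 1):
            add_eq([(idx(img, v, c), 1.0)], targets[img][v, c], w_global)
    # Local distortion term (simplified): preserve original edge vectors.
    for a, b in edges:
        for c in (0, 1):
            add_eq([(idx(img, a, c), 1.0), (idx(img, b, c), -1.0)],
                   orig[a, c] - orig[b, c], w_local)

# Alignment term: image 0's right-edge vertices coincide with
# image 1's left-edge vertices (correspondences assumed at vertices).
for va, vb in [(1, 0), (3, 2)]:
    for c in (0, 1):
        add_eq([(idx(0, va, c), 1.0), (idx(1, vb, c), -1.0)], 0.0, w_align)

A = np.vstack(rows)
b = np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
V = x.reshape(2, NV, 2)  # warped vertex positions for both images
```

Because the alignment weight dominates, the matched vertices nearly coincide, meeting roughly halfway between the two similarity targets (x near 0.9), while the local term keeps each mesh close to its original shape. The real method operates on dense grid meshes with feature points expressed via bilinear interpolation of cell vertices, but the structure of the stacked system is the same.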


Yu-Sheng Chen and Yung-Yu Chuang. Natural Image Stitching with the Global Similarity Prior.
Proceedings of the European Conference on Computer Vision 2016 (ECCV 2016), Part V, pp. 186-201, October 2016, Amsterdam, The Netherlands. BibTeX


  1. Poster (2.6MB PDF), Short Presentation (44.7MB PDF) and Thesis Presentation (60.8MB PDF)

  2. ECCV 2016 paper (13.6MB PDF)

  3. Supplementary document (25MB PDF)
We tested four state-of-the-art methods and ours on 42 sets of images under the same settings (grid size, feature points, and parameters).

  4. All input data (447MB ZIP)
42 sets of images: 6 from [1], 3 from [2], 3 from [3], 7 from [4], 4 from [5], and 19 collected by ourselves.

  5. All our results (581MB ZIP) and All our debug data (1.6GB ZIP)

  6. Source code (github)


  1. Chang, C.H., Sato, Y., Chuang, Y.Y.: Shape-preserving half-projective warps for image stitching. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3254-3261. CVPR'14 (2014)
  2. Gao, J., Kim, S.J., Brown, M.S.: Constructing image panoramas using dual-homography warping. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. pp. 49-56. CVPR'11 (2011)
  3. Lin, C., Pankanti, S., Ramamurthy, K.N., Aravkin, A.Y.: Adaptive as-natural-as-possible image stitching. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015. pp. 1155-1163 (2015)
  4. Nomura, Y., Zhang, L., Nayar, S.K.: Scene collages and flexible camera arrays. In: Proceedings of the 18th Eurographics Conference on Rendering Techniques. pp. 127-138. EGSR'07 (2007)
  5. Zaragoza, J., Chin, T.J., Brown, M.S., Suter, D.: As-projective-as-possible image stitching with moving DLT. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. pp. 2339-2346. CVPR'13 (2013)

Thanks to Tzu-Mao Li for providing the template of this website.