Hi there Nerian community,
At our company we use the Nerian Scarlet and we are very satisfied with it. I was wondering if there is any out-of-the-box option to stitch the left and right camera images into one image? I realize we could do this with key-point feature extraction and matching on both images, but since this matching is already performed along epipolar lines in order to compute the disparity, is there a way to obtain the homography matrix required to do the stitching? I wasn't able to find anything useful in the docs.
Many thanks and kind regards,
Jan
Image Stitching
Re: Image Stitching
Hi Jan,
do you mean stitching the left and right camera images into a single image? Since the cameras share mostly the same field of view, not much image data would be gained. In the best case (an object placed at the minimum measurable depth), the field of view gained by stitching would be as wide as the configured disparity range.
In any case, the output left and right camera images are both rectified, meaning lens distortions and alignment errors have been corrected. The images therefore only differ by a horizontal shift - the stereo disparity. So if you have the disparity map, you have a correspondence for every point that lies in the field of view of both cameras. For a point that is not within both fields of view, however, you get no disparity measurement. This means you also cannot stitch the image data for such a point into a common frame, unless you guess its depth (and hence disparity).
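To illustrate the idea, here is a minimal sketch in Python/NumPy (not an official Nerian API - the function name and the disparity-guessing heuristic are my own assumptions). Since the rectified images differ only by the horizontal disparity shift, the only extra content the right image can contribute is a border strip on the right, and for that strip you have to guess a disparity; here I simply reuse the median disparity of the last valid column:

```python
import numpy as np

def stitch_pair(left, right, disparity):
    """Extend the left image to the right using the right camera image.

    left, right : (H, W) rectified grayscale images of the same shape
    disparity   : (H, W) float disparity map in the left frame,
                  invalid pixels marked <= 0

    In rectified images a left pixel (x, y) corresponds to the right
    pixel (x - d, y). Content outside the left field of view has no
    disparity measurement, so we guess one for it (a hypothetical
    heuristic: the median disparity of the rightmost valid column).
    """
    h, w = left.shape
    valid = disparity > 0

    # Guess a single disparity d0 for the unmatched right border strip.
    cols_with_valid = np.nonzero(valid.any(axis=0))[0]
    edge = cols_with_valid.max()
    d0 = int(round(np.median(disparity[valid[:, edge], edge])))
    d0 = max(d0, 0)

    # Right pixels with x_r + d0 >= w fall outside the left image;
    # paste them onto a canvas widened by d0 columns.
    canvas = np.zeros((h, w + d0), dtype=left.dtype)
    canvas[:, :w] = left
    if d0 > 0:
        canvas[:, w:] = right[:, w - d0:]
    return canvas
```

With a constant-disparity scene the gained strip is exactly d0 columns wide, which matches the field-of-view argument above: at best you gain the configured disparity range.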
Re: Image Stitching
Hi Konstantin,
thanks, we'll try to work with that.
Kind regards,
Jan