WO2018228436A1 - Dual-view image calibration and image processing method, apparatus, storage medium and electronic device (双视角图像校准及图像处理方法、装置、存储介质和电子设备) - Google Patents

Dual-view image calibration and image processing method, apparatus, storage medium and electronic device

Info

Publication number
WO2018228436A1
WO2018228436A1 · PCT/CN2018/091085 · CN2018091085W
Authority
WO
WIPO (PCT)
Prior art keywords
image
pair
feature point
information
optimized
Prior art date
Application number
PCT/CN2018/091085
Other languages
English (en)
French (fr)
Inventor
孙文秀 (Sun Wenxiu)
Original Assignee
深圳市商汤科技有限公司 (Shenzhen SenseTime Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市商汤科技有限公司
Priority to JP2019569277A (granted as JP6902122B2)
Priority to SG11201912033WA
Publication of WO2018228436A1
Priority to US16/710,033 (granted as US11380017B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/002: Diagnosis, testing or measuring for television systems or their details for television cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85: Stereo camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/593: Depth or shape recovery from multiple images from stereo images
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/246: Calibration of cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images

Definitions

  • The embodiments of the present application relate to computer vision technologies, and in particular to a dual-view image calibration method, apparatus, storage medium, and electronic device, and to an image processing method, apparatus, storage medium, and electronic device.
  • Dual-view image calibration is a key step in processing two images captured from different angles of view (such as two images taken by a dual camera).
  • After calibration, corresponding pixels on the two images lie on the same horizontal line, which is a prerequisite for processing such as image depth-of-field calculation.
  • The embodiments of the present application provide a dual-view image calibration technical solution and an image processing solution.
  • According to one aspect, a dual-view image calibration method is provided, including: performing feature matching on a first image pair to obtain a first feature point pair set, where the first image pair includes two images respectively captured from two different viewing angles of the same scene; acquiring, according to at least the first feature point pair set, a plurality of different first fundamental matrices of the first image pair, and acquiring first image deformation information indicating the relative deformation of the first image pair before and after mapping transformation through each first fundamental matrix; determining a first optimized fundamental matrix from the plurality of first fundamental matrices according to at least the first image deformation information; and calibrating the first image pair according to the first optimized fundamental matrix.
  • Optionally, acquiring the first image deformation information indicating the relative deformation of the first image pair before and after the mapping transformation through the first fundamental matrix includes: performing mapping transformation on the two images in the first image pair according to the first fundamental matrix; and acquiring the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
  • Optionally, at least two different feature point pair subsets of the first feature point pair set are used to respectively generate at least two first fundamental matrices.
  • Optionally, the method further includes: determining matching error information of each feature point pair subset. Correspondingly, determining the first optimized fundamental matrix from the plurality of first fundamental matrices according to at least the first image deformation information includes: determining the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
  • Optionally, the matching error information includes: the proportion of the feature point pairs in a feature point pair subset that do not satisfy a predetermined matching condition, relative to that subset or to the first feature point pair set.
  • Optionally, the method further includes: storing or updating the first optimized fundamental matrix.
  • Optionally, the method further includes: storing or updating information of at least one pair of feature points in the first feature point pair set that satisfies the predetermined matching condition.
  • Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs in the feature point pair set is less than a set threshold.
  • Optionally, the information of the at least one pair of feature points satisfying the predetermined matching condition includes: the coordinates of the at least one pair of feature points satisfying the predetermined matching condition.
  • Optionally, the method further includes: calibrating a second image pair according to the first optimized fundamental matrix.
  • Optionally, the method further includes: performing feature matching on the second image pair to obtain a second feature point pair set; and determining mapping cost information according to the second feature point pair set, the mapping cost information including second image deformation information of the second image pair and/or matching error information of a feature point pair subset. Calibrating the second image pair according to the first optimized fundamental matrix then includes: in response to the mapping cost information satisfying a predetermined threshold condition, calibrating the second image pair according to the first optimized fundamental matrix.
  • Optionally, the method further includes: in response to the mapping cost information not satisfying the predetermined threshold condition, acquiring a second optimized fundamental matrix corresponding to the second image pair, and calibrating the second image pair according to the second optimized fundamental matrix.
  • Optionally, acquiring the second optimized fundamental matrix corresponding to the second image pair includes: performing feature matching on the second image pair to obtain the second feature point pair set of the second image pair; acquiring a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and acquiring second image deformation information corresponding to each second fundamental matrix; and determining the second optimized fundamental matrix from the plurality of second fundamental matrices according to at least the second image deformation information.
  • Optionally, the method further includes: updating the stored first optimized fundamental matrix with the second optimized fundamental matrix; and/or updating the stored feature point pair information with information of at least one pair of feature points in the second feature point pair set that satisfies the predetermined matching condition.
  • Optionally, the method further includes: capturing an image pair by a device provided with two cameras.
  • Optionally, the device provided with two cameras includes: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
  • According to another aspect, an image processing method is provided, including: calibrating, by any one of the foregoing dual-view image calibration methods, at least one image pair respectively captured from two different viewing angles of the same scene; and performing application processing based on the calibrated image pair, the application processing including any one or more of the following: three-dimensional reconstruction, image blurring, depth-of-field calculation, and augmented reality processing.
  • According to another aspect, a dual-view image calibration apparatus is provided, including: a feature matching module, configured to perform feature matching on a first image pair to obtain a first feature point pair set, where the first image pair includes two images respectively captured from two different viewing angles of the same scene; a first acquiring module, configured to acquire a plurality of different first fundamental matrices of the first image pair according to the first feature point pair set, and to acquire first image deformation information indicating the relative deformation of the first image pair before and after mapping transformation through each first fundamental matrix; a first determining module, configured to determine a first optimized fundamental matrix from the plurality of first fundamental matrices according to at least the first image deformation information; and a first calibration module, configured to calibrate the first image pair according to the first optimized fundamental matrix.
  • Optionally, the first acquiring module includes a first acquiring unit, configured to perform mapping transformation on the two images in the first image pair according to the first fundamental matrix, and to acquire the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
  • Optionally, the first acquiring module further includes a second acquiring unit, configured to respectively generate at least two first fundamental matrices according to at least two different feature point pair subsets of the first feature point pair set.
  • Optionally, the apparatus further includes a second determining module, configured to determine matching error information of each feature point pair subset; the first determining module is configured to determine the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
  • Optionally, the matching error information includes: the proportion of the feature point pairs in a feature point pair subset that do not satisfy the predetermined matching condition, relative to that subset or to the first feature point pair set.
  • Optionally, the apparatus further includes a first storage module, configured to store or update the first optimized fundamental matrix.
  • Optionally, the first storage module is further configured to store or update information of at least one pair of feature points in the first feature point pair set that satisfies the predetermined matching condition.
  • Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs in the feature point pair set is less than a set threshold.
  • Optionally, the information of the at least one pair of feature points satisfying the predetermined matching condition includes: the coordinates of the at least one pair of feature points satisfying the predetermined matching condition.
  • Optionally, the apparatus further includes a second calibration module, configured to calibrate a second image pair according to the first optimized fundamental matrix.
  • Optionally, the apparatus further includes a third determining module, configured to perform feature matching on the second image pair to obtain a second feature point pair set, and to determine mapping cost information according to the second feature point pair set, the mapping cost information including second image deformation information of the second image pair and/or matching error information of a feature point pair subset; the second calibration module is configured to calibrate the second image pair according to the first optimized fundamental matrix in response to the mapping cost information satisfying a predetermined threshold condition.
  • Optionally, the apparatus further includes: a second acquiring module, configured to acquire a second optimized fundamental matrix corresponding to the second image pair in response to the mapping cost information not satisfying the predetermined threshold condition; and a third calibration module, configured to calibrate the second image pair according to the second optimized fundamental matrix.
  • Optionally, the second acquiring module includes: a feature matching unit, configured to perform feature matching on the second image pair to obtain the second feature point pair set of the second image pair; a third acquiring unit, configured to acquire a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and to acquire second image deformation information corresponding to each second fundamental matrix; and a determining unit, configured to determine the second optimized fundamental matrix from the plurality of second fundamental matrices according to at least the second image deformation information.
  • Optionally, the apparatus further includes a second storage module, configured to update the stored first optimized fundamental matrix with the second optimized fundamental matrix, and/or to update the stored feature point pair information with information of at least one pair of feature points in the second feature point pair set that satisfies the predetermined matching condition.
  • Optionally, the apparatus further includes a shooting module, configured to capture an image pair by a device provided with two cameras.
  • Optionally, the device provided with two cameras includes: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
  • According to another aspect, an image processing apparatus is further provided, configured to calibrate, by any one of the foregoing dual-view image calibration methods, at least one image pair respectively captured from two different viewing angles of the same scene, and to perform application processing based on the calibrated image pair, the application processing including any one or more of the following: three-dimensional reconstruction, image blurring, depth-of-field calculation, and augmented reality processing.
  • According to another aspect, a computer-readable storage medium is provided, having computer program instructions stored thereon, where the program instructions, when executed by a processor, implement the steps of any one of the foregoing dual-view image calibration methods or of the foregoing image processing method.
  • According to another aspect, an electronic device is provided, including: a processor and a memory; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to any one of the foregoing dual-view image calibration methods, and/or causes the processor to perform operations corresponding to the foregoing image processing method.
  • Optionally, the electronic device further includes at least two cameras, and the processor and the at least two cameras communicate with each other through a communication bus.
  • According to another aspect, a computer program is provided, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing any one of the foregoing dual-view image calibration methods or the foregoing image processing method.
  • With the dual-view image calibration solution provided by the embodiments of the present application, feature matching is performed on a first image pair obtained by capturing the same scene from different angles to obtain a first feature point pair set; a plurality of different first fundamental matrices, and the first image deformation information corresponding to each first fundamental matrix, are acquired according to the first feature point pair set; a first optimized fundamental matrix is determined according to the first image deformation information; and the first image pair is calibrated according to the first optimized fundamental matrix. Automatic calibration of dual-view image pairs is thereby realized, which effectively avoids calibration errors caused by changes in calibration parameters when a camera lens is displaced, for example by a collision.
  • FIG. 1 is a flowchart of a dual-view image calibration method according to an embodiment of the present application.
  • FIG. 2 is a flowchart of a dual-view image calibration method according to another embodiment of the present application.
  • FIG. 3 shows a first image of a first image pair according to another embodiment of the present application.
  • FIG. 4 shows a second image of a first image pair according to another embodiment of the present application.
  • FIG. 5 shows a composite image of a first image pair according to another embodiment of the present application.
  • FIG. 6 shows the first image of a calibrated first image pair according to another embodiment of the present application.
  • FIG. 7 shows the second image of a calibrated first image pair according to another embodiment of the present application.
  • FIG. 8 shows a composite image of a calibrated first image pair according to another embodiment of the present application.
  • FIG. 9 is a logic block diagram showing a dual-view image calibration apparatus according to an embodiment of the present application.
  • FIG. 10 is a logic block diagram showing a dual-view image calibration apparatus according to another embodiment of the present application.
  • FIG. 11 is a schematic structural view showing an electronic device according to an embodiment of the present application.
  • FIG. 12 is a schematic structural view showing a dual-camera mobile phone according to an embodiment of the present application.
  • Embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, distributed cloud computing environments including any of the above systems, and the like.
  • Electronic devices such as terminal devices, computer systems, servers, etc., can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
  • program modules may be located on a local or remote computing system storage medium including storage devices.
  • FIG. 1 is a flowchart of the dual-view image calibration method according to this embodiment of the present application.
  • In step S102, feature matching is performed on the first image pair to obtain a first feature point pair set.
  • the first image pair includes two images respectively captured by two different angles of view corresponding to the same scene.
  • In an optional example, the two images in the first image pair are obtained by two imaging elements capturing the same scene at the same time from two different viewing angles; the two imaging elements may be integrated or separate, for example, an image pair captured in a single shot by a dual-camera device integrating two cameras (such as a dual-camera phone).
  • In another optional example, the two images in the first image pair are obtained by the same camera capturing the same scene at different times from two different viewing angles.
  • When performing feature matching, feature detection and extraction are performed on the two images in the first image pair, and the feature points extracted from the two images are matched to obtain a set of matching feature point pairs on the two images as the first feature point pair set.
  • For feature detection and extraction, a convolutional neural network, a color histogram, a histogram of oriented gradients (HOG), or a corner detection algorithm such as SUSAN (smallest univalue segment assimilating nucleus) may be used, but the choice is not limited thereto.
  • For feature matching of the extracted feature points, gray-scale correlation matching, the SIFT (scale-invariant feature transform) algorithm, or the SURF (speeded-up robust features) algorithm may be used, but the choice is not limited thereto.
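A minimal sketch of this matching step, assuming OpenCV and NumPy (the embodiments do not prescribe a library; the function name and the Lowe ratio value are illustrative):

```python
import cv2
import numpy as np

def match_image_pair(img_left, img_right, ratio=0.75):
    """Detect SIFT features in both views and return matched (N, 2) point arrays."""
    gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray(img_left), None)
    kp2, des2 = sift.detectAndCompute(gray(img_right), None)
    # Brute-force matching with Lowe's ratio test to drop ambiguous matches.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2  # the "feature point pair set"
```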
  • the step S102 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a feature matching module 402 executed by the processor.
  • In step S104, a plurality of different first fundamental matrices of the first image pair are acquired according to at least the first feature point pair set, and first image deformation information indicating the relative deformation of the first image pair before and after mapping transformation through the first fundamental matrices is acquired.
  • The fundamental matrix describes the geometric relationship between two two-dimensional images obtained from two different viewpoints of the same three-dimensional scene.
  • In this embodiment, the fundamental matrix may indicate the matching relationship between feature point pairs on the two images of the first image pair.
  • For example, the fundamental matrix may be a 3 × 3 matrix representing the epipolar geometry between the first image and the second image.
  • This embodiment does not limit the method for acquiring the first fundamental matrices and the first image deformation information; any method that can calculate first fundamental matrices from the first feature point pair set of the first image pair, and calculate the corresponding first image deformation information, can be applied to this embodiment.
  • For example, the 8-point algorithm for linearly computing a fundamental matrix, or the random sample consensus (RANSAC) algorithm, may be used to acquire a plurality of different first fundamental matrices according to the first feature point pair set.
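One concrete (assumed) way to obtain several distinct candidate fundamental matrices from different feature point pair subsets: run the 8-point algorithm over random subsets of the matches; RANSAC, as mentioned above, would serve equally well. Subset size and candidate count below are illustrative:

```python
import cv2
import numpy as np

def candidate_fundamental_matrices(pts1, pts2, n_candidates=10, subset_size=16, seed=0):
    """Estimate one fundamental matrix per random feature-point-pair subset."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(n_candidates):
        idx = rng.choice(len(pts1), size=min(subset_size, len(pts1)), replace=False)
        # FM_8POINT performs a linear least-squares fit over the chosen subset.
        F, _ = cv2.findFundamentalMat(pts1[idx], pts2[idx], cv2.FM_8POINT)
        if F is not None:
            candidates.append((F[:3, :3], idx))  # keep subset indices for error stats
    return candidates
```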
  • When acquiring the first image deformation information, for each of the two images in the first image pair, the change in the number of corresponding feature point pairs before and after the mapping transformation, or the distance between feature point pairs, and the like, may be used to calculate the degree of deformation of that image separately; the first image deformation information is then obtained by combining the two results, for example by weighting or summation.
  • the step S104 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first acquisition module 404 being executed by the processor.
  • In step S106, the first optimized fundamental matrix is determined from the plurality of first fundamental matrices according to at least the first image deformation information.
  • The first optimized fundamental matrix is, among the plurality of first fundamental matrices obtained, the one that most accurately represents the matching relationship between the feature point pairs in the first feature point pair set. Determining the first optimized fundamental matrix according to the first image deformation information is equivalent to determining it according to the degree of image deformation, for example, selecting as the first optimized fundamental matrix a first fundamental matrix under which the first image pair deforms less, thereby improving the accuracy of the acquired first optimized fundamental matrix.
  • In an optional example, the first fundamental matrix causing the smallest relative deformation of the first image pair is selected from the plurality of first fundamental matrices according to the first image deformation information and used as the first optimized fundamental matrix.
  • In another optional example, the first image deformation information may be combined with the matching error of each first fundamental matrix, or with the proportion of feature point pairs satisfying the matching relationship indicated by each first fundamental matrix, to determine the first optimized fundamental matrix.
  • step S106 may be performed by the processor invoking a corresponding instruction stored in the memory, or may be performed by the first determining module 406 being executed by the processor.
  • In step S108, the first image pair is calibrated according to the first optimized fundamental matrix.
  • In an optional example, the first optimized fundamental matrix is decomposed into a first transform matrix and a second transform matrix, and the two images in the first image pair are respectively transformed based on the first transform matrix and the second transform matrix, to implement calibration of the first image pair.
  • On the calibrated first image pair, the matched key point pairs are located on the same horizontal line and can be regarded as located at the same depth, which facilitates image processing operations on the first image pair such as three-dimensional reconstruction, image blurring, depth-of-field calculation, and augmented reality processing; a sketch of one such decomposition follows below.
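A minimal sketch of the decompose-and-warp step, assuming OpenCV: uncalibrated stereo rectification derives a pair of homographies (playing the role of the first and second transform matrices) from a fundamental matrix and the matched points, and each image is warped with its homography. This is one standard realization, not necessarily the embodiments' exact procedure:

```python
import cv2

def calibrate_pair(img1, img2, F, pts1, pts2):
    """Rectify both images so corresponding points land on the same row."""
    h, w = img1.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    if not ok:
        return None
    rect1 = cv2.warpPerspective(img1, H1, (w, h))  # corresponding rows now align
    rect2 = cv2.warpPerspective(img2, H2, (w, h))
    return rect1, rect2
```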
  • step S108 may be performed by the processor invoking a corresponding instruction stored in the memory or by the first calibration module 408 being executed by the processor.
  • In this embodiment, feature matching is performed on the first image pair obtained by capturing the same scene from different angles to obtain the first feature point pair set; a plurality of different first fundamental matrices, and the first image deformation information corresponding to each first fundamental matrix, are acquired according to the first feature point pair set; the first optimized fundamental matrix is then determined according to the first image deformation information, and the first image pair is calibrated according to it, realizing automatic calibration of the dual-view image pair.
  • The dual-view calibration method of this embodiment can thus be used to automatically calibrate image pairs captured by a dual-camera device, which effectively avoids calibration errors caused by displacement of the dual-camera lenses during use; moreover, dual-camera devices no longer need complicated dual-camera calibration equipment before leaving the factory, nor dedicated personnel to calibrate them by shooting checkerboard images, which reduces the production difficulty of dual-camera devices and improves production efficiency.
  • The dual-view image calibration method of this embodiment may be performed by a camera, a processor, or a dual-camera device, etc.; however, it should be apparent to those skilled in the art that, in practical applications, any device or processor having the corresponding image processing and data processing functions can perform the dual-view image calibration method of the embodiments of the present application with reference to this embodiment.
  • In step S202, feature matching is performed on the first image pair to obtain a first feature point pair set.
  • the first image pair includes two images respectively captured by two different angles of view corresponding to the same scene.
  • The first image pair may be obtained by two separate cameras, by one device provided with two cameras, or by one camera sequentially shooting the same scene from different viewing angles.
  • In this embodiment, an image pair taken by a device provided with two cameras (a dual-camera device) is taken as an example to describe the dual-view image calibration method of the present application.
  • FIGS. 3 and 4 illustrate the first image and the second image of a first image pair taken by a dual camera; the two images have the same subject, but the corresponding feature point pairs on the two images are not perfectly aligned. In the composite image of the first image pair shown in FIG. 5, the boy's head, clothes, shoes, and the like are not aligned.
  • After acquiring the first image pair captured by the dual-camera device, a feature extraction operation is performed on the first image pair using an image feature extraction method such as a convolutional neural network or the SUSAN algorithm, and the features extracted from the two images of the first image pair are matched using a feature matching method such as the SIFT algorithm or the SURF algorithm, obtaining the first feature point pair set of the first image pair.
  • step S202 may be performed by the processor invoking a corresponding instruction stored in the memory or by the feature matching module 502 being executed by the processor.
  • In step S204, a plurality of first fundamental matrices are generated from a plurality of different feature point pair subsets of the first feature point pair set.
  • That is, a corresponding first fundamental matrix is separately generated from each feature point pair subset.
  • Each feature point pair subset includes some of the feature point pairs of the first feature point pair set, and the selected feature point pair subsets are not completely identical; that is, the feature point pairs included in the subsets may be completely different from one another or partially identical.
  • the step S204 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first acquisition module 504 being executed by the processor.
  • In step S206, matching error information of each feature point pair subset is determined.
  • Optionally, the corresponding matching error information is determined according to the first fundamental matrix corresponding to each feature point pair subset.
  • Optionally, the matching error information includes the proportion of the feature point pairs in a feature point pair subset that do not satisfy the predetermined matching condition, relative to that subset or to the first feature point pair set. For example, for each feature point pair subset (or, equivalently, for each first fundamental matrix), the proportion of feature point pairs not satisfying the predetermined matching condition in the feature point pair subset or in the first feature point pair set is acquired.
  • The predetermined matching condition may be that the matching error of a feature point pair in the feature point pair subset is less than a preset matching error threshold.
  • For example, if P feature point pairs are considered and T of them satisfy the predetermined matching condition, the obtained proportion is (P - T)/P.
  • Here t1 (for example, t1 = 0.3) is the matching error threshold, used to select from the feature point pair subset the feature point pairs that satisfy the matching relationship indicated by the first fundamental matrix, or equivalently to filter out the point pairs that cannot satisfy that matching relationship. By using this proportion as the matching error information of a feature point pair subset, the number of feature point pairs satisfying the matching relationship indicated by the corresponding first fundamental matrix can be determined, and hence the accuracy of that first fundamental matrix.
  • The matching error information of a feature point pair subset can be regarded as the matching error information of the corresponding first fundamental matrix; the form of the matching error information is not limited to the above proportion, and other forms that can determine the accuracy of the matching relationship represented by the first fundamental matrix may also be used.
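A hedged sketch of the (P - T)/P proportion, taking the symmetric point-to-epipolar-line distance as the matching error (one plausible measure; the embodiments do not fix it):

```python
import numpy as np

def matching_error_ratio(F, pts1, pts2, t1=0.3):
    """Fraction of point pairs whose epipolar error under F is not below t1."""
    ones = np.ones((len(pts1), 1), np.float64)
    p1 = np.hstack([pts1, ones])   # homogeneous coordinates, shape (P, 3)
    p2 = np.hstack([pts2, ones])
    l2 = p1 @ F.T                  # epipolar lines of pts1 in image 2
    l1 = p2 @ F                    # epipolar lines of pts2 in image 1
    d2 = np.abs(np.sum(p2 * l2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(p1 * l1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    err = np.maximum(d1, d2)       # symmetric epipolar distance per pair
    P, T = len(err), int(np.sum(err < t1))
    return (P - T) / P
```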
  • step S206 may be performed by the processor invoking a corresponding instruction stored in the memory, or may be performed by the second determining module 510 being executed by the processor.
  • In step S208, the two images in the first image pair are mapped and transformed according to each first fundamental matrix.
  • Optionally, the first fundamental matrix is decomposed into a first transform matrix and a second transform matrix, and the two images in the first image pair are respectively mapped and transformed based on the first transform matrix and the second transform matrix.
  • the step S208 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first acquisition module 504 or the first acquisition unit 5042 executed by the processor.
  • In step S210, the first image deformation information is acquired according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
  • Optionally, a first distance is measured on the first image and a second distance on the second image; each may be, but is not limited to, a Euclidean distance. For example, the four vertices (0, 0), (0, h-1), (w-1, 0), (w-1, h-1) of the first image may be taken, the first distance may be the average distance D1 between these four vertices and their corresponding mapped points, and the second distance may be the average distance D2 between the four vertices of the second image and their corresponding mapped points. The first image deformation information can then be α(D1 + D2), where α is a weight constant.
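A sketch of the deformation term α(D1 + D2): push each image's four vertices through its rectifying homography (H1/H2, as in the calibration sketch above) and average the vertex displacement. Taking α as the reciprocal of the image diagonal follows the example given later in the text; everything else is an assumed realization:

```python
import cv2
import numpy as np

def deformation_score(H1, H2, w, h):
    """alpha * (D1 + D2): mean corner displacement of both images, normalized."""
    corners = np.float32([[0, 0], [0, h - 1], [w - 1, 0], [w - 1, h - 1]]).reshape(-1, 1, 2)
    def mean_shift(H):
        mapped = cv2.perspectiveTransform(corners, H)
        return float(np.mean(np.linalg.norm(mapped - corners, axis=2)))
    alpha = 1.0 / float(np.hypot(w, h))   # weight constant (assumed choice)
    return alpha * (mean_shift(H1) + mean_shift(H2))
```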
  • the step S210 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first obtaining module 504 or the first acquiring unit 5042 operated by the processor.
  • In step S212, the first optimized fundamental matrix is determined from the plurality of first fundamental matrices based on the matching error information and the first image deformation information.
  • Optionally, a first fundamental matrix with a small matching error and/or a small image deformation is selected from the plurality of first fundamental matrices as the first optimized fundamental matrix.
  • For example, if the first image deformation information is considered preferentially, the first fundamental matrix with the smallest image deformation is selected as the first optimized fundamental matrix, which is equivalent to determining the first optimized fundamental matrix based only on the first image deformation information; if at least two first fundamental matrices yield the smallest deformation, the one with the smallest matching error is then selected from them according to the matching error information. That is, the first optimized fundamental matrix is selected in consideration of both factors.
  • In an optional example, a mapping cost score cost = (P - T)/P + α(D1 + D2) is computed for each first fundamental matrix, and the first fundamental matrix with the minimum mapping cost score is selected from the plurality of first fundamental matrices as the first optimized fundamental matrix.
  • The first term of cost, (P - T)/P, is one optional expression of the matching error information, and the second term, α(D1 + D2), is one optional expression of the image deformation information. It should be understood that the above is merely an example; the matching error information and the image deformation information are not limited to these expressions.
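Combining the two terms: score every candidate with cost = (P - T)/P + α(D1 + D2) and keep the minimizer. This builds on the matching_error_ratio and deformation_score sketches above, which remain assumptions rather than the embodiments' prescribed code:

```python
import cv2

def select_optimized_matrix(candidates, pts1, pts2, w, h):
    """Return the candidate fundamental matrix with the smallest mapping cost."""
    best_F, best_cost = None, float("inf")
    for F, _ in candidates:
        ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
        if not ok:
            continue
        cost = matching_error_ratio(F, pts1, pts2) + deformation_score(H1, H2, w, h)
        if cost < best_cost:
            best_F, best_cost = F, cost
    return best_F, best_cost
```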
  • step S212 may be performed by the processor invoking a corresponding instruction stored in the memory, or may be performed by the first determining module 506 being executed by the processor.
  • In step S214, the first image pair is calibrated according to the first optimized fundamental matrix.
  • Optionally, the first optimized fundamental matrix is decomposed into a first transform matrix and a second transform matrix, and the first image and the second image of the first image pair, shown respectively in FIGS. 3 and 4, are mapped and transformed based on the first transform matrix and the second transform matrix; the transformed images are the calibrated first image and second image shown in FIGS. 6 and 7, respectively.
  • As shown in FIG. 8, after the transformed first image and second image are merged, it can be determined that the feature points on the two transformed images are substantially on the same horizontal line; for example, in the merged image of FIG. 8, the boy's head, clothes, and shoes are aligned.
  • That is, taking the first image pair shown in FIGS. 3 and 4 as input and performing the above steps S202 to S214 (feature matching, computing the fundamental matrices, determining the optimized fundamental matrix, and calibration) outputs the calibrated first image pair shown in FIGS. 6 and 7.
  • step S214 may be performed by the processor invoking a corresponding instruction stored in the memory or by the first calibration module 508 being executed by the processor.
  • In step S216, the first optimized fundamental matrix is stored or updated.
  • If no first optimized fundamental matrix has been stored, the first optimized fundamental matrix is stored so that it can be used to calibrate other image pairs captured by the same imaging device.
  • If a first optimized fundamental matrix has been stored, it is updated with the first optimized fundamental matrix determined this time.
  • Optionally, information of at least one pair of feature points in the first feature point pair set that satisfies the predetermined matching condition is also stored or updated; if feature point pairs were previously stored, the stored feature point pairs are updated.
  • The information of feature point pairs satisfying the predetermined matching condition reflects intrinsic attributes of the imaging device that captured the image pair; when calibrating other image pairs captured by the same imaging device, those image pairs may be calibrated based on their own feature point pairs together with the stored feature point pair information, that is, by incremental calibration.
  • The stored feature point pair information includes at least, but is not limited to, the coordinates of the feature point pairs, so that a corresponding fundamental matrix can be computed from the stored feature point pairs.
  • Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs in the feature point pair set is kept less than a set threshold; in other words, the number of feature point pairs stored each time is limited, to avoid occupying too much storage space.
  • Optionally, the total number of stored feature point pairs may also be limited; when the total number of stored feature point pairs reaches a set number, some previously stored feature point pairs are deleted, for example those stored earliest, or those whose coordinates coincide.
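An illustrative bounded store for the retained feature point pairs: keep at most a fixed fraction of each set and cap the total, evicting the oldest entries first (one of the eviction policies mentioned above). All constants are assumptions, not values from the embodiments:

```python
from collections import deque
import numpy as np

MAX_STORED = 2000                         # assumed cap on total stored pairs
KEEP_RATIO = 0.1                          # assumed per-set retention fraction
stored_pairs = deque(maxlen=MAX_STORED)   # deque drops the oldest pairs automatically

def store_matching_pairs(pts1, pts2, errors, t1=0.3):
    """Retain the coordinates of the best-matching pairs under threshold t1."""
    keep = int(KEEP_RATIO * len(pts1))
    for i in np.argsort(errors)[:keep]:   # prefer pairs with the smallest error
        if errors[i] < t1:                # the predetermined matching condition
            stored_pairs.append((tuple(pts1[i]), tuple(pts2[i])))
```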
  • step S216 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first storage module 512 being executed by the processor.
  • In step S218, feature matching is performed on the second image pair to obtain a second feature point pair set, and mapping cost information is determined according to the second feature point pair set.
  • The mapping cost information includes second image deformation information of the second image pair and/or matching error information of a feature point pair subset.
  • Here, the second image pair and the first image pair are two image pairs captured by the same camera; they may be captured at different times and of different scenes.
  • Feature matching is performed on the second image pair to obtain the second feature point pair set, and the second image deformation information of the second image pair and/or the matching error information of the feature point pair subset are acquired according to the second feature point pair set.
  • For example, in terms of the mapping cost score above, the first term is the matching error information and the second term is the second image deformation information of the second image pair; note that the optional form of the mapping cost information is not limited to the above mapping cost score.
  • step S218 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a third determining module 514 being executed by the processor.
  • In step S220, it is judged whether the mapping cost information satisfies a predetermined threshold condition.
  • If the mapping cost information satisfies the predetermined threshold condition, step S222 is performed; if not, step S224 is performed.
  • The predetermined threshold condition may be used to determine whether the matching relationship indicated by the first optimized fundamental matrix can still accurately reflect the matching relationship between the feature point pairs of the second image pair, and thereby to decide whether to calibrate the second image pair using the first optimized fundamental matrix or to recompute a second optimized fundamental matrix for calibrating the second image pair.
  • For example, suppose the mapping cost information is the mapping cost score cost. The second term of cost is the image deformation information α(D1 + D2), where (D1 + D2) measures the degree of deformation of the two images in the second image pair and generally should not exceed 10% of the image diagonal length, and α can be the reciprocal of the diagonal length of either image in the second image pair. A score threshold of 0.2 can then be preset, and the corresponding predetermined threshold condition may be that the mapping cost score is less than 0.2.
  • step S220 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a third determining module 514 being executed by the processor.
  • In step S222, the second image pair is calibrated according to the first optimized fundamental matrix.
  • That is, in response to the mapping cost information satisfying the predetermined threshold condition, the second image pair is calibrated according to the stored first optimized fundamental matrix.
  • For an optional implementation, refer to the manner of calibrating the first image pair in the foregoing step S214.
  • step S222 can be performed by the processor invoking a corresponding instruction stored in the memory or by the second calibration module 516 being executed by the processor.
  • In step S224, a second optimized fundamental matrix corresponding to the second image pair is acquired, and the second image pair is calibrated according to the second optimized fundamental matrix.
  • That is, in response to the mapping cost information not satisfying the predetermined threshold condition, the second optimized fundamental matrix corresponding to the second image pair is acquired, and the second image pair is calibrated according to it.
  • Optionally, feature matching is performed on the second image pair to obtain the second feature point pair set of the second image pair; a plurality of different second fundamental matrices of the second image pair are acquired according to the second feature point pair set and the stored feature point pairs, along with the second image deformation information corresponding to each second fundamental matrix; the second optimized fundamental matrix is determined from the plurality of second fundamental matrices according to at least the second image deformation information; and the second image pair is calibrated according to the determined second optimized fundamental matrix.
  • Optionally, matching error information of the feature point pair subsets of the second image pair may also be acquired and combined with the second image deformation information to determine the second optimized fundamental matrix; the incremental flow is sketched below.
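A sketch of this incremental flow for a new (second) image pair: if the stored optimized matrix still scores under the threshold on the new matches, reuse it; otherwise re-estimate from the new matches plus the stored pairs. The 0.2 threshold follows the example above; the helper functions are the earlier sketches and remain assumptions:

```python
import cv2
import numpy as np

def calibrate_incrementally(img1, img2, stored_F, stored_pts1, stored_pts2, threshold=0.2):
    """Reuse stored_F when its mapping cost on the new pair is acceptable."""
    pts1, pts2 = match_image_pair(img1, img2)
    h, w = img1.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, stored_F, (w, h))
    cost = (matching_error_ratio(stored_F, pts1, pts2)
            + deformation_score(H1, H2, w, h)) if ok else float("inf")
    if cost < threshold:                         # step S222: reuse the stored matrix
        return calibrate_pair(img1, img2, stored_F, pts1, pts2)
    # Step S224: re-estimate from the new matches plus the stored feature pairs.
    all1 = np.vstack([pts1, stored_pts1])
    all2 = np.vstack([pts2, stored_pts2])
    F2, _ = cv2.findFundamentalMat(all1, all2, cv2.FM_RANSAC)
    return calibrate_pair(img1, img2, F2, all1, all2)
```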
  • the step S224 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a second acquisition module 518 and a third calibration module 520 that are executed by the processor.
  • The above is the dual-view image calibration method of this embodiment. The method can be used to calibrate an image pair taken by a dual-camera device (a device provided with two cameras), and can also calibrate a dual-view image pair of the same scene shot sequentially by an ordinary photographic device.
  • The method may be performed to calibrate captured image pairs during later image processing, or during the process of capturing and generating the image pairs.
  • In the latter case, the method is performed on the acquired image pair so that the calibrated image pair is generated directly; the dual-camera device can thus be combined with other application processing, improving image processing efficiency.
  • the dual camera device includes, but is not limited to, a dual camera mobile terminal, a dual camera smart glasses, a dual camera robot, a dual camera drone or a dual camera unmanned vehicle.
  • For example, a dual-camera mobile terminal (such as a dual-camera phone) performs the method while capturing an image pair and directly obtains the calibrated image pair, which also makes it convenient to perform depth-of-field calculation, image blurring, and similar processing directly on the calibrated image pair.
  • Likewise, a dual-camera device that performs the method while capturing an image pair and generates the calibrated image pair makes it convenient to obtain information directly from the calibrated image pair for stereo matching, three-dimensional scene reconstruction, and the like, so that a stereo vision system can be built efficiently.
  • The dual-view calibration method of this embodiment can automatically calibrate image pairs captured by a dual-camera device, effectively avoiding calibration errors caused by movement of the dual-camera lenses during use; furthermore, dual-camera devices no longer need complicated dual-camera calibration equipment before leaving the factory, which reduces the production difficulty of dual-camera devices and improves production efficiency.
  • In addition, in this embodiment, the first optimized fundamental matrix and the qualifying feature point pairs of the first image pair are stored, and the stored optimized fundamental matrix is used to incrementally calibrate the second image pair, ensuring calibration accuracy while improving processing efficiency.
  • The dual-view image calibration method of this embodiment may be performed by a camera, a processor, or a dual-camera device, etc.; however, it should be apparent to those skilled in the art that, in practical applications, any device or processor having the corresponding image processing and data processing functions can perform the dual-view image calibration method of the embodiments of the present application with reference to this embodiment.
  • This embodiment provides an image processing method, which uses the dual-view image calibration method of the first or second embodiment to calibrate at least one image pair respectively captured from two different viewing angles of the same scene, and performs application processing on the calibrated image pair.
  • The application processing may include, for example but not limited to, any one or more of the following: three-dimensional reconstruction, image blurring, depth-of-field calculation, augmented reality processing, and the like.
  • The image processing method of this embodiment can be performed by an image capturing apparatus, processing the captured image pairs in real time to improve image processing efficiency.
  • For example, during shooting, the captured image pairs are calibrated so that matched feature point pairs are located at the same depth, which facilitates online depth-of-field calculation on the image pair; online image blurring can then be performed to generate an image with a bokeh effect, or online stereo matching, three-dimensional reconstruction, augmented reality display, and the like can be performed to obtain a three-dimensional stereoscopic image.
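After calibration the matched points sit on the same horizontal line, so a standard block-matching disparity step applies directly; disparity then drives depth-of-field calculation or blurring. A sketch using OpenCV's semi-global block matcher, with illustrative parameter values:

```python
import cv2
import numpy as np

def disparity_map(rect1, rect2):
    """Compute a disparity map from a calibrated (rectified) image pair."""
    gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = sgbm.compute(gray(rect1), gray(rect2)).astype(np.float32) / 16.0  # SGBM is fixed-point x16
    return disp  # larger disparity = nearer; use it for blurring or 3D reconstruction
```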
  • The image processing method of this embodiment can also post-process dual-view image pairs input to an image processing program, with the processor calling image processing instructions or running the program.
  • For example, it is convenient to perform depth-of-field calculation on the calibrated image pair and then perform further image processing according to the calculated depth information; moreover, human-computer interaction items can be provided in the image processing program, making it convenient for the user to select image processing options, which increases the operability of image processing and improves the user experience.
  • Any of the dual-view image calibration methods or image processing methods provided by the embodiments of the present application may be performed by any suitable device having data processing capabilities, including but not limited to: a terminal device, a server, and the like.
  • Alternatively, any of the dual-view image calibration methods or image processing methods provided by the embodiments of the present application may be executed by a processor; for example, the processor executes any of the dual-view image calibration methods or image processing methods mentioned in the embodiments of the present application by calling corresponding instructions stored in a memory. This will not be repeated below.
  • The foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • Referring to FIG. 9, the dual-view image calibration apparatus of this embodiment includes: a feature matching module 402, configured to perform feature matching on a first image pair to obtain a first feature point pair set, where the first image pair includes two images respectively captured from two different viewing angles of the same scene; a first acquiring module 404, configured to acquire a plurality of different first fundamental matrices of the first image pair according to the first feature point pair set, and to acquire first image deformation information indicating the relative deformation of the first image pair before and after mapping transformation through each first fundamental matrix; a first determining module 406, configured to determine a first optimized fundamental matrix from the plurality of first fundamental matrices according to the first image deformation information; and a first calibration module 408, configured to calibrate the first image pair according to the first optimized fundamental matrix.
  • The dual-view image calibration apparatus of this embodiment can be used to implement the corresponding dual-view image calibration methods in the foregoing method embodiments and has the beneficial effects of the corresponding method embodiments; details are not repeated here.
  • FIG. 10 is a logic block diagram of a dual-view image calibration apparatus according to another embodiment of the present application.
  • The dual-view image calibration apparatus of this embodiment includes: a feature matching module 502, configured to perform feature matching on a first image pair to obtain a first feature point pair set, where the first image pair includes two images respectively captured from two different viewing angles of the same scene; a first acquiring module 504, configured to acquire a plurality of different first fundamental matrices of the first image pair according to the first feature point pair set, and to acquire first image deformation information indicating the relative deformation of the first image pair before and after mapping transformation through each first fundamental matrix; a first determining module 506, configured to determine a first optimized fundamental matrix from the plurality of first fundamental matrices according to the first image deformation information; and a first calibration module 508, configured to calibrate the first image pair according to the first optimized fundamental matrix.
  • Optionally, the first acquiring module 504 includes a first acquiring unit 5042, configured to perform mapping transformation on the two images in the first image pair according to the first fundamental matrix, and to acquire the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
  • Optionally, the first acquiring module 504 further includes a second acquiring unit 5044, configured to respectively generate at least two first fundamental matrices according to at least two different feature point pair subsets of the first feature point pair set.
  • Optionally, the apparatus further includes a second determining module 510, configured to determine matching error information of each feature point pair subset; the first determining module 506 is configured to determine the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
  • the matching error information includes: a proportion of feature point pairs in the feature point pair subset that do not satisfy the predetermined matching condition in the feature point pair subset or the first feature point pair set.
  • Optionally, the apparatus further includes a first storage module 512, configured to store or update the first optimized fundamental matrix.
  • Optionally, the first storage module 512 is further configured to store or update information of at least one pair of feature points in the first feature point pair set that satisfies a predetermined matching condition.
  • Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs included in the feature point pair set is less than a set threshold.
  • Optionally, the information of the at least one feature point pair satisfying the predetermined matching condition includes: coordinates of at least one feature point pair satisfying the predetermined matching condition.
  • Optionally, the apparatus further includes: a second calibration module 516, configured to calibrate a second image pair according to the first optimized fundamental matrix.
  • Optionally, the apparatus further includes a third determining module 514, configured to feature-match the second image pair to obtain a second feature point pair set, and to determine mapping cost information according to the second feature point pair set, where the mapping cost information includes second image deformation information of the second image pair and/or matching error information of feature point pair subsets; the second calibration module 516 is configured to calibrate the second image pair according to the first optimized fundamental matrix in response to the mapping cost information satisfying a predetermined threshold condition.
  • Optionally, the apparatus further includes: a second obtaining module 518, configured to obtain a second optimized fundamental matrix corresponding to the second image pair in response to the mapping cost information not satisfying the predetermined threshold condition; and a third calibration module 520, configured to calibrate the second image pair according to the second optimized fundamental matrix.
  • Optionally, the second obtaining module 518 includes: a feature matching unit (not shown), configured to feature-match the second image pair to obtain the second feature point pair set of the second image pair; a third obtaining unit (not shown), configured to obtain a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and to obtain second image deformation information corresponding to each second fundamental matrix; and a determining unit (not shown), configured to determine the second optimized fundamental matrix from the plurality of second fundamental matrices at least according to the second image deformation information.
  • Optionally, the apparatus further includes a second storage module 522, configured to update the stored first optimized fundamental matrix with the second optimized fundamental matrix; and/or to update the stored feature point pair information with information of at least one pair of feature points in the second feature point set that satisfies the predetermined matching condition.
  • Optionally, the apparatus further includes a photographing module (not shown), configured to capture image pairs by a device provided with two cameras.
  • Optionally, the device with two cameras may include, but is not limited to: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
  • The dual-view image calibration apparatus of this embodiment can be used to implement the corresponding dual-view image calibration method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which will not be repeated here.
  • The embodiments of the present application further provide an image processing apparatus, configured to calibrate, by using the dual-view image calibration method of the first or second embodiment, at least one image pair captured from two different views of the same scene, and to perform application processing based on the calibrated image pair, which may include, but is not limited to, any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, augmented reality processing, and the like.
  • The image processing apparatus of this embodiment may include the dual-view image calibration apparatus of any of the foregoing embodiments.
  • The image processing apparatus of this embodiment can be used to implement the image processing method of the foregoing embodiment, and has the beneficial effects of the corresponding method embodiments, which will not be repeated here.
  • The embodiments of the present application further provide an electronic device, such as a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like.
  • The electronic device 700 includes one or more first processors, a first communication element, and the like; the one or more first processors are, for example, one or more central processing units (CPUs) 701 and/or one or more graphics processing units (GPUs) 713, and the first processor may execute various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 702 or loaded from a storage section 708 into a random access memory (RAM) 703.
  • The first read-only memory 702 and the random access memory 703 are collectively referred to as a first memory.
  • The first communication element includes a communication component 712 and/or a communication interface 709; the communication component 712 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card, and the communication interface 709 includes a communication interface of a network interface card such as a LAN card or a modem, and performs communication processing via a network such as the Internet.
  • The first processor may communicate with the read-only memory 702 and/or the random access memory 703 to execute executable instructions, is connected to the communication component 712 via the first communication bus 704, and communicates with other target devices via the communication component 712, thereby completing operations corresponding to any dual-view image calibration method provided by the embodiments of the present application, for example: feature-matching a first image pair to obtain a first feature point pair set, where the first image pair includes two images captured from two different views of the same scene.
  • In addition, the RAM 703 may store various programs and data required for the operation of the device.
  • The CPU 701 or GPU 713, the ROM 702, and the RAM 703 are connected to one another through the first communication bus 704.
  • When the RAM 703 is present, the ROM 702 is an optional module.
  • The RAM 703 stores executable instructions, or writes executable instructions into the ROM 702 at runtime, and the executable instructions cause the first processor to perform operations corresponding to the above communication method.
  • An input/output (I/O) interface 705 is also coupled to the first communication bus 704.
  • The communication component 712 may be integrated, or may be configured to have multiple sub-modules (e.g., multiple IB network cards) linked on the communication bus.
  • The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication interface 709 including a network interface card such as a LAN card or a modem.
  • A drive 710 is also connected to the I/O interface 705 as needed.
  • A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage section 708 as needed.
  • It should be noted that the architecture shown in FIG. 11 is only one optional implementation; in practice, the number and types of the components in FIG. 11 may be selected, reduced, increased, or replaced according to actual needs; different functional components may also be arranged separately or integrated, for example, the GPU and the CPU may be arranged separately, or the GPU may be integrated on the CPU, and the communication element may be arranged separately or integrated on the CPU or the GPU, and so on.
  • These alternative implementations all fall within the protection scope of the present application.
  • Embodiments of the present application include a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the steps of the dual-view image calibration method provided by the embodiments of the present application, for example: feature-matching a first image pair to obtain a first feature point pair set, where the first image pair includes two images captured from two different views of the same scene.
  • Alternatively, the program code may include instructions corresponding to the steps of the image processing method provided by the embodiments of the present application, for example: calibrating, by using the dual-view image calibration method of the first or second embodiment, at least one image pair captured from two different views of the same scene.
  • In such embodiments, the computer program may be downloaded and installed from a network via the communication element, and/or installed from the removable medium 711.
  • When the computer program is executed by the first processor, the above-described functions defined in the method of the embodiments of the present application are performed.
  • Optionally, the electronic device 700 further includes at least two cameras, and the first processor (including the central processing unit CPU 701 and/or the graphics processing unit GPU 713) and the at least two cameras communicate with each other through the first communication bus.
  • The electronic device 700 may be a dual-camera mobile phone integrating two cameras A as shown in FIG. 12.
  • Components such as the first processor and the communication bus built inside the dual-camera phone are not shown in FIG. 12.
  • When a user captures an image pair with the phone, the two cameras transmit the captured images to the first processor through the first communication bus, and the first processor can calibrate the image pair by the dual-view image calibration method of the embodiments of the present application; that is, the dual-camera phone can automatically calibrate captured image pairs.
  • The electronic device 700 may also be a dual-camera mobile terminal other than a dual-camera mobile phone, or dual-camera smart glasses, a dual-camera robot, a dual-camera drone, a dual-camera unmanned vehicle, or the like.
  • Optionally, the electronic device 700 further includes at least two cameras, and the second processor (including the central processing unit CPU 701 and/or the graphics processing unit GPU 713) and the at least two cameras communicate with each other through the second communication bus.
  • The electronic device 700 may be a dual-camera mobile phone integrating two cameras A as shown in FIG. 12.
  • When the dual-camera phone captures an image pair, the two cameras transmit the captured images to the second processor through the second communication bus, and the second processor can process the image pair by the image processing method of the embodiments of the present application.
  • The image pair calibrated by the dual-view image calibration method of the embodiments of the present application can be processed directly, so image processing efficiency is high.
  • The electronic device 700 may also be a dual-camera mobile terminal of a type other than a dual-camera mobile phone, as well as other dual-camera devices such as a dual-camera robot, dual-camera smart glasses, a dual-camera drone, or a dual-camera unmanned vehicle.
  • The methods, apparatuses, and devices of the present application may be implemented in many ways.
  • For example, the methods, apparatuses, and devices of the present application may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above-described sequence of steps for the method is for illustrative purposes only, and the steps of the method of the present application are not limited to the order described above unless otherwise specifically stated.
  • The present application may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the present application.
  • the present application also covers a recording medium storing a program for executing the method according to the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a dual-view image calibration method and apparatus, an image processing method and apparatus, a storage medium, and an electronic device. The dual-view image calibration method includes: feature-matching a first image pair to obtain a first feature point pair set, the first image pair including two images captured from two different views of the same scene; obtaining a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and obtaining first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix; determining a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information; and calibrating the first image pair according to the first optimized fundamental matrix. With the technical solution of the present application, automatic calibration of dual-view images can be achieved, avoiding calibration errors caused by errors in the calibration parameters of the image capture device.

Description

Dual-view image calibration and image processing methods, apparatuses, storage medium and electronic device
This application claims priority to Chinese Patent Application No. CN201710448540.X, filed with the Chinese Patent Office on June 14, 2017 and entitled "Dual-view image calibration and image processing methods, apparatuses, storage medium and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to computer vision technology, and in particular to a dual-view image calibration method, apparatus, storage medium and electronic device, and to an image processing method, apparatus, storage medium and electronic device.
Background
Dual-view image calibration is a key step in processing two images captured from different views (for example, two images captured by dual cameras). It is used to bring corresponding pixels of the two images onto the same horizontal line, and is a precondition for processing such as image depth-of-field calculation.
Summary
Embodiments of the present application provide a dual-view image calibration technical solution and an image processing solution.
According to one aspect of the embodiments of the present application, a dual-view image calibration method is provided, including: feature-matching a first image pair to obtain a first feature point pair set, the first image pair including two images captured from two different views of the same scene; obtaining a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and obtaining first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix; determining a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information; and calibrating the first image pair according to the first optimized fundamental matrix.
Optionally, obtaining the first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by the first fundamental matrix includes: mapping-transforming the two images in the first image pair according to the first fundamental matrix; and obtaining the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
Optionally, obtaining the plurality of different first fundamental matrices of the first image pair according to the first feature point pair set includes: generating at least two first fundamental matrices respectively according to at least two different feature point pair subsets of the first feature point pair set.
Optionally, the method further includes: determining matching error information of each feature point pair subset; and determining the first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information includes: determining the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
Optionally, the matching error information includes: the proportion, in the feature point pair subset or in the first feature point pair set, of feature point pairs in the subset that do not satisfy a predetermined matching condition.
Optionally, the method further includes: storing or updating the first optimized fundamental matrix.
Optionally, the method further includes: storing or updating information of at least one pair of feature points in the first feature point pair set that satisfies a predetermined matching condition.
Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs included in the feature point pair set is less than a set threshold.
Optionally, the information of the at least one feature point pair satisfying the predetermined matching condition includes: coordinates of at least one feature point pair satisfying the predetermined matching condition.
Optionally, the method further includes: calibrating a second image pair according to the first optimized fundamental matrix.
Optionally, the method further includes: feature-matching a second image pair to obtain a second feature point pair set; and determining mapping cost information according to the second feature point pair set, the mapping cost information including second image deformation information of the second image pair and/or matching error information of feature point pair subsets; calibrating the second image pair according to the first optimized fundamental matrix includes: in response to the mapping cost information satisfying a predetermined threshold condition, calibrating the second image pair according to the first optimized fundamental matrix.
Optionally, the method further includes: in response to the mapping cost information not satisfying the predetermined threshold condition, obtaining a second optimized fundamental matrix corresponding to the second image pair; and calibrating the second image pair according to the second optimized fundamental matrix.
Optionally, obtaining the second optimized fundamental matrix corresponding to the second image pair includes: feature-matching the second image pair to obtain the second feature point pair set of the second image pair; obtaining a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and obtaining second image deformation information corresponding to each second fundamental matrix; and determining the second optimized fundamental matrix from the plurality of second fundamental matrices at least according to the second image deformation information.
Optionally, the method further includes: updating the stored first optimized fundamental matrix with the second optimized fundamental matrix; and/or updating the stored feature point pair information with information of at least one pair of feature points in the second feature point set that satisfies the predetermined matching condition.
Optionally, the method further includes: capturing image pairs by a device provided with two cameras.
Optionally, the device with two cameras includes: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
According to another aspect of the embodiments of the present application, an image processing method is further provided, including: calibrating, by using any one of the foregoing dual-view image calibration methods, at least one image pair captured from two different views of the same scene; and performing application processing based on the calibrated image pair, the application processing including any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, and augmented reality processing.
According to yet another aspect of the embodiments of the present application, a dual-view image calibration apparatus is further provided, including: a feature matching module, configured to feature-match a first image pair to obtain a first feature point pair set, the first image pair including two images captured from two different views of the same scene; a first obtaining module, configured to obtain a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and to obtain first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix; a first determining module, configured to determine a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information; and a first calibration module, configured to calibrate the first image pair according to the first optimized fundamental matrix.
Optionally, the first obtaining module includes a first obtaining unit, configured to mapping-transform the two images in the first image pair according to the first fundamental matrix, and to obtain the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
Optionally, the first obtaining module further includes a second obtaining unit, configured to generate at least two first fundamental matrices respectively according to at least two different feature point pair subsets of the first feature point pair set.
Optionally, the apparatus further includes a second determining module, configured to determine matching error information of each feature point pair subset; the first determining module is configured to determine the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
Optionally, the matching error information includes: the proportion, in the feature point pair subset or in the first feature point pair set, of feature point pairs in the subset that do not satisfy a predetermined matching condition.
Optionally, the apparatus further includes a first storage module, configured to store or update the first optimized fundamental matrix.
Optionally, the first storage module is further configured to store or update information of at least one pair of feature points in the first feature point pair set that satisfies a predetermined matching condition.
Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs included in the feature point pair set is less than a set threshold.
Optionally, the information of the at least one feature point pair satisfying the predetermined matching condition includes: coordinates of at least one feature point pair satisfying the predetermined matching condition.
Optionally, the apparatus further includes: a second calibration module, configured to calibrate a second image pair according to the first optimized fundamental matrix.
Optionally, the apparatus further includes a third determining module, configured to feature-match the second image pair to obtain a second feature point pair set, and to determine mapping cost information according to the second feature point pair set, the mapping cost information including second image deformation information of the second image pair and/or matching error information of feature point pair subsets; the second calibration module is configured to calibrate the second image pair according to the first optimized fundamental matrix in response to the mapping cost information satisfying a predetermined threshold condition.
Optionally, the apparatus further includes: a second obtaining module, configured to obtain a second optimized fundamental matrix corresponding to the second image pair in response to the mapping cost information not satisfying the predetermined threshold condition; and a third calibration module, configured to calibrate the second image pair according to the second optimized fundamental matrix.
Optionally, the second obtaining module includes: a feature matching unit, configured to feature-match the second image pair to obtain the second feature point pair set of the second image pair; a third obtaining unit, configured to obtain a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and to obtain second image deformation information corresponding to each second fundamental matrix; and a determining unit, configured to determine the second optimized fundamental matrix from the plurality of second fundamental matrices at least according to the second image deformation information.
Optionally, the apparatus further includes a second storage module, configured to update the stored first optimized fundamental matrix with the second optimized fundamental matrix; and/or to update the stored feature point pair information with information of at least one pair of feature points in the second feature point set that satisfies the predetermined matching condition.
Optionally, the apparatus further includes a photographing module, configured to capture image pairs by a device provided with two cameras.
Optionally, the device with two cameras includes: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
According to still another aspect of the embodiments of the present application, an image processing apparatus is further provided, configured to calibrate, by using any one of the foregoing dual-view image calibration methods, at least one image pair captured from two different views of the same scene, and to perform application processing based on the calibrated image pair, the application processing including any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, and augmented reality processing.
According to still another aspect of the embodiments of the present application, a computer-readable storage medium is further provided, on which computer program instructions are stored, where the program instructions, when executed by a processor, implement the steps of any one of the foregoing dual-view image calibration methods or of the foregoing image processing method.
According to still another aspect of the embodiments of the present application, an electronic device is further provided, including: a processor and a memory; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to any one of the foregoing dual-view image calibration methods; and/or the executable instruction causes the processor to perform operations corresponding to the foregoing image processing method.
Optionally, the electronic device further includes at least two cameras, and the processor and the at least two cameras communicate with each other through the communication bus.
According to still another aspect of the embodiments of the present application, a computer program is further provided, including computer-readable code, where when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the steps of any one of the foregoing dual-view image calibration methods or of any one of the foregoing image processing methods.
According to the dual-view image calibration method of the embodiments of the present application, a first image pair obtained by capturing the same scene from different views is feature-matched to obtain the first feature point pair set of the first image pair; a plurality of different first fundamental matrices, and the first image deformation information corresponding to each first fundamental matrix, are obtained according to the first feature point pair set; the first optimized fundamental matrix is then determined according to the first image deformation information, and the first image pair is calibrated according to the first optimized fundamental matrix. Automatic calibration of dual-view image pairs is thus achieved, which can effectively avoid calibration errors caused by errors in calibration parameters due to displacement of camera lenses after collisions.
The technical solutions of the present application are described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings, which constitute a part of the specification, describe embodiments of the present application and, together with the description, serve to explain the principles of the present application.
The present application can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart showing a dual-view image calibration method according to an embodiment of the present application;
FIG. 2 is a flowchart showing a dual-view image calibration method according to another embodiment of the present application;
FIG. 3 shows a first image of a first image pair according to another embodiment of the present application;
FIG. 4 shows a second image of a first image pair according to another embodiment of the present application;
FIG. 5 shows a composite image of the first image pair according to another embodiment of the present application;
FIG. 6 shows the first image of the calibrated first image pair according to another embodiment of the present application;
FIG. 7 shows the second image of the calibrated first image pair according to another embodiment of the present application;
FIG. 8 shows a composite image of the calibrated first image pair according to another embodiment of the present application;
FIG. 9 is a logic block diagram showing a dual-view image calibration apparatus according to an embodiment of the present application;
FIG. 10 is a logic block diagram showing a dual-view image calibration apparatus according to another embodiment of the present application;
FIG. 11 is a schematic structural diagram showing an electronic device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram showing a dual-camera mobile phone according to an embodiment of the present application.
Detailed Description
Implementations of the embodiments of the present application are further described in detail below with reference to the accompanying drawings (the same reference numerals in several drawings denote the same elements) and the embodiments. The following embodiments are used to illustrate the present application, but are not intended to limit its scope.
Those skilled in the art can understand that terms such as "first" and "second" in the embodiments of the present application are only used to distinguish different steps, devices, modules, or the like, and represent neither any specific technical meaning nor a necessary logical order between them.
Meanwhile, it should be understood that, for ease of description, the dimensions of the parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative, and is not intended to limit the present application or its application or use in any way.
Techniques, methods, and devices known to those of ordinary skill in the relevant fields may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
The embodiments of the present application may be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems, and so on.
Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
FIG. 1 is a flowchart showing a dual-view image calibration method according to an embodiment of the present application. Referring to FIG. 1, in step S102, a first image pair is feature-matched to obtain a first feature point pair set. The first image pair includes two images captured from two different views of the same scene.
Optionally, the two images included in the first image pair are obtained by two imaging elements capturing the same scene at the same moment from two different views, and the two imaging elements may be integrated or arranged separately; for example, the first image pair may be an image pair captured in one shot by a dual-camera device integrating two cameras (such as a dual-camera mobile phone). Alternatively, the two images included in the first image pair are obtained by the same camera capturing the same scene at different moments from two different views.
In this embodiment, after the first image pair is obtained, feature detection and extraction are performed on the two images included in the first image pair, the feature points extracted from the two images are matched, and the set of matched feature point pairs on the two images is obtained as the first feature point pair set. For feature detection and extraction on the first image pair, methods such as convolutional neural networks, color histograms, the Histogram of Oriented Gradients (HOG), or the SUSAN (Smallest Univalue Segment Assimilating Nucleus) corner detection algorithm may be used, but the methods are not limited thereto. For feature matching of the extracted feature points, methods such as gray-level correlation matching, the SIFT (Scale-Invariant Feature Transform) algorithm, or the SURF (Speeded-Up Robust Features) algorithm may be used, but the methods are not limited thereto.
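By way of illustration only (the application names the candidate algorithms but gives no code), a rough Python/OpenCV sketch of this matching step might look as follows; the function name and the 0.75 ratio-test threshold are our assumptions:

```python
import cv2
import numpy as np

def match_feature_pairs(img1_path, img2_path):
    """Feature-match an image pair; return matched point coordinates (Nx2, Nx2)."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Nearest-neighbour matching with Lowe's ratio test to drop ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2
```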
In an optional example, step S102 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a feature matching module 402 run by the processor.
In step S104, a plurality of different first fundamental matrices of the first image pair are obtained at least according to the first feature point pair set, and first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix is obtained.
The fundamental matrix describes the geometric relationship between two two-dimensional images of the same three-dimensional scene obtained from two different viewpoints. In this embodiment, the fundamental matrix can indicate the matching relationship between feature point pairs on the two images of the first image pair. For example, the fundamental matrix may be a 3x3 matrix representing the epipolar geometry between the first image and the second image.
This embodiment does not limit the method for obtaining the first fundamental matrices and the first image deformation information; any method capable of computing a plurality of first fundamental matrices from the first feature point pair set of the first image pair and computing the corresponding first image deformation information can be applied to this embodiment. For example, the linear eight-point algorithm, or a nonlinear method such as the RANdom SAmple Consensus (RANSAC) algorithm, may be used to obtain a plurality of different first fundamental matrices according to the first feature point pair set. As another example, when computing the first image deformation information, the degree of deformation of each of the two images in the first image pair may be computed separately, according to the change in the number of corresponding feature point pairs before and after the mapping transformation, or the distance between feature point pairs, and the first image deformation information may then be computed comprehensively through weighting, summation, or the like.
In an optional example, step S104 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first obtaining module 404 run by the processor.
In step S106, a first optimized fundamental matrix is determined from the plurality of first fundamental matrices at least according to the first image deformation information.
The first optimized fundamental matrix is, among the plurality of obtained first fundamental matrices, the one that can more accurately represent the matching relationship of the feature point pairs in the first feature point pair set. Determining the first optimized fundamental matrix according to the first image deformation information amounts to determining it according to the degree of image deformation; for example, the first fundamental matrix that causes a smaller deformation of the first image pair may be determined as the first optimized fundamental matrix, thereby improving the accuracy of the obtained first optimized fundamental matrix.
Optionally, according to the first image deformation information, the first fundamental matrix with the smallest relative deformation of the first image pair is obtained from the plurality of first fundamental matrices as the first optimized fundamental matrix.
In practical applications, the first image deformation information may also be combined with other factors, such as the matching error of the first fundamental matrix and the proportion of feature point pairs satisfying the matching error of the first fundamental matrix, to determine the first optimized fundamental matrix, so as to further improve its accuracy.
In an optional example, step S106 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first determining module 406 run by the processor.
In step S108, the first image pair is calibrated according to the first optimized fundamental matrix.
Optionally, the first optimized fundamental matrix is decomposed into a first transformation matrix and a second transformation matrix, and the two images in the first image pair are transformed respectively based on the first and second transformation matrices, thereby calibrating the first image pair. In the two calibrated images of the first image pair, matched key point pairs lie on the same horizontal line, and the matched key point pairs of the calibrated first image pair can lie at the same depth, which facilitates subsequent image processing operations on the first image pair such as three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, and augmented reality processing.
In an optional example, step S108 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first calibration module 408 run by the processor.
According to the dual-view image calibration method of this embodiment of the present application, a first image pair obtained by capturing the same scene from different views is feature-matched to obtain the first feature point pair set of the first image pair; a plurality of different first fundamental matrices, and the first image deformation information corresponding to each first fundamental matrix, are obtained according to the first feature point pair set; the first optimized fundamental matrix is then determined according to the first image deformation information, and the first image pair is calibrated according to the first optimized fundamental matrix, achieving automatic calibration of dual-view image pairs.
In practical applications, the dual-view calibration method of this embodiment can be used to fully automatically calibrate image pairs captured by a dual-camera device, which can effectively avoid calibration errors caused by calibration-parameter errors due to displacement of the dual-camera lenses from collisions and other factors during use. Moreover, for dual-camera devices, there is no need to set up complex dual-camera calibration equipment before leaving the factory, nor for dedicated personnel to calibrate by photographing checkerboard images, which reduces the production difficulty of dual-camera devices and improves production efficiency.
The dual-view image calibration method of this embodiment may be performed by a camera, a processor, a dual-camera device, or the like; however, it should be clear to those skilled in the art that, in practical applications, any device or processor with corresponding image processing and data processing functions can perform the dual-view image calibration method of the embodiments of the present application with reference to this embodiment.
FIG. 2 is a flowchart showing a dual-view image calibration method according to another embodiment of the present application. Referring to FIG. 2, in step S202, a first image pair is feature-matched to obtain a first feature point pair set.
The first image pair includes two images captured from two different views of the same scene. The first image pair may be captured by two separately arranged cameras, captured in one shot by a device provided with two cameras, or obtained by one camera sequentially capturing the same scene from different views.
In this embodiment, the dual-view image calibration method of the present application is described by taking an image pair captured by a device provided with two cameras (a dual-camera device) as an example.
For example, FIG. 3 and FIG. 4 show the first image and the second image included in a first image pair captured by a dual-camera device; the two images have the same subject, but the corresponding feature point pairs on the two images are not fully aligned. Referring to the composite image of the first image pair shown in FIG. 5, the top of the boy's head, his clothes, his shoes, and so on are not aligned.
In an optional implementation, the first image pair captured by the dual-camera device is obtained; feature extraction is performed on the first image pair by any method capable of image feature extraction, such as a convolutional neural network or the SUSAN algorithm; and the features extracted from the two images of the first image pair are matched by any method capable of feature matching, such as the SIFT algorithm or the SURF algorithm, to obtain the first feature point pair set of the first image pair.
In an optional example, step S202 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a feature matching module 502 run by the processor.
In step S204, a plurality of first fundamental matrices are generated respectively according to a plurality of different feature point pair subsets of the first feature point pair set.
In this embodiment, after the first feature point pair set of the first image pair is obtained, a plurality of (at least two) feature point pair subsets are arbitrarily selected from it, and a corresponding first fundamental matrix is generated according to each feature point pair subset; that is, one corresponding first fundamental matrix is generated from each feature point pair subset. Each feature point pair subset includes some of the feature point pairs in the first feature point pair set, and the selected subsets do not include exactly the same feature point pairs; that is, the feature point pairs included in the subsets may be entirely different or partially the same.
Optionally, when generating a first fundamental matrix, a feature point pair subset including at least 8 feature point pairs is obtained, the RANSAC algorithm is used to compute at least one corresponding matching matrix, and the matching matrix with the smallest matching error is determined as the first fundamental matrix. If x1 and x2 are the coordinates of a feature point pair in a feature point pair subset, x1 and x2 can be expressed in homogeneous coordinates, that is, two-dimensional coordinates are expressed as three-dimensional column vectors. For example, with x1 = [u, v, 1]', the computed matching error of the feature point pair is x2'Fx1, where ' denotes transposition. The smaller the matching error, the more accurately the corresponding matching matrix indicates the matching relationship of the feature point pair; the ideal value of the matching error is zero.
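For illustration, the following sketch samples one feature point pair subset, fits a fundamental matrix with OpenCV's RANSAC (a stand-in for the application's unspecified RANSAC implementation), and evaluates the error x2'Fx1 described above; the subset size and RANSAC parameters are assumed values:

```python
import cv2
import numpy as np

def fundamental_from_subset(pts1, pts2, rng, subset_size=24):
    """Sample a feature point pair subset and fit a fundamental matrix with RANSAC."""
    idx = rng.choice(len(pts1), size=min(subset_size, len(pts1)), replace=False)
    F, _ = cv2.findFundamentalMat(pts1[idx], pts2[idx], cv2.FM_RANSAC,
                                  ransacReprojThreshold=1.0, confidence=0.99)
    return F  # may be None if estimation fails

def epipolar_errors(F, pts1, pts2):
    """Matching error |x2' F x1| for each pair, using homogeneous coordinates."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])   # N x 3
    x2 = np.hstack([pts2, ones])   # N x 3
    return np.abs(np.einsum('ij,jk,ik->i', x2, F, x1))
```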
In an optional example, step S204 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first obtaining module 504 run by the processor.
In step S206, matching error information of each feature point pair subset is determined.
Optionally, the corresponding matching error information is determined according to the first fundamental matrix corresponding to each feature point pair subset.
In an optional implementation, the matching error information includes the proportion, in the feature point pair subset or in the first feature point pair set, of feature point pairs in the subset that do not satisfy the matching condition. For example, for each feature point pair subset (or for each first fundamental matrix), the proportion of feature point pairs not satisfying a predetermined matching condition in the feature point pair subset or in the first feature point pair set is obtained. The predetermined matching condition may be that the matching error of a feature point pair in the subset is less than a preset matching error threshold. For example, if the total number of feature point pairs in the first feature point pair set is P, and the number of key point pairs satisfying the matching error x2'Fx1 < t1 is T, the obtained proportion is (P-T)/P. Here, t1 (for example, t1 = 0.3) is the matching error threshold, used to select from the feature point pair subset the feature point pairs that can satisfy the matching relationship indicated by the first fundamental matrix, or to filter out the key point pairs that cannot. By using this proportion as the matching error information of the feature point pair subset, the number of feature point pairs satisfying the matching relationship indicated by the corresponding first fundamental matrix can be judged, and thus the accuracy of the first fundamental matrix.
It is noted here that the matching error information of a feature point pair subset can be regarded as the matching error information of the corresponding first fundamental matrix, and the form of the matching error information is not limited to the above proportion; it may take other forms that can be used to judge the accuracy of the matching relationship expressed by the first fundamental matrix.
In an optional example, step S206 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a second determining module 510 run by the processor.
In step S208, the first image pair is mapping-transformed according to the first fundamental matrix.
Optionally, the first fundamental matrix is decomposed into a first transformation matrix and a second transformation matrix, and the two images in the first image pair are mapping-transformed respectively based on the first and second transformation matrices.
In an optional example, step S208 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first obtaining module 504, or the first obtaining unit 5042 therein, run by the processor.
In step S210, the first image deformation information is obtained according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
Optionally, a first distance between a first vertex of the first image in the first image pair and the corresponding first mapped point on the mapping-transformed first image is obtained, as well as a second distance between a second vertex of the second image in the first image pair and the corresponding second mapped point on the mapping-transformed second image; the first image deformation information is obtained according to the first distance and the second distance. The first distance and the second distance may be, but are not limited to, Euclidean distances.
For example, the first vertices may include the four corners of the first image, (0,0), (0,h-1), (w-1,0), (w-1,h-1), and the first distance may be the average distance D1 between these four corners and the corresponding mapped points; correspondingly, the second distance may be the average distance D2 between the four corners of the second image and the corresponding mapped points; the first image deformation information may then be a(D1+D2), where a is a weight constant.
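A minimal sketch of this deformation measure, assuming each image's mapping transformation is applied as a 3x3 homography (the helper names are ours):

```python
import cv2
import numpy as np

def corner_displacement(H, w, h):
    """Average Euclidean distance between the four image corners and their mapped points."""
    corners = np.float32([[0, 0], [0, h - 1], [w - 1, 0], [w - 1, h - 1]]).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(corners, H)
    return float(np.linalg.norm(mapped - corners, axis=2).mean())

def deformation_info(H1, H2, size1, size2, alpha):
    """First image deformation information alpha * (D1 + D2)."""
    d1 = corner_displacement(H1, *size1)  # size = (width, height)
    d2 = corner_displacement(H2, *size2)
    return alpha * (d1 + d2)
```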
It is noted here that, in practical applications, steps S208-S210 may also be performed first to obtain the first image deformation information, and step S206 may then be performed to obtain the matching error information.
In an optional example, step S210 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first obtaining module 504, or the first obtaining unit 5042 therein, run by the processor.
In step S212, the first optimized fundamental matrix is determined from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
Optionally, according to the matching error information and the first image deformation information, a first fundamental matrix with a smaller matching error and/or smaller image deformation is selected from the plurality of first fundamental matrices as the first optimized fundamental matrix. For example, priority may be given to the first image deformation information, and the first fundamental matrix with the smallest image deformation is selected as the first optimized fundamental matrix; this case is equivalent to determining the first optimized fundamental matrix only according to the first image deformation information. If there are at least two first fundamental matrices with the smallest image deformation, the one with the smallest matching error is then selected from them according to the matching error information. As another example, different weights may be set for the matching error information and the first image deformation information, and the first optimized fundamental matrix is selected considering both factors.
In a feasible implementation, a mapping cost score cost = (P-T)/P + a(D1+D2) is set, and the first fundamental matrix with the smallest mapping cost score is selected from the plurality of first fundamental matrices as the first optimized fundamental matrix. The first term of cost is an optional representation of the matching error information, (P-T)/P, and the second term is an optional representation of the image deformation information, a(D1+D2). It should be understood that the above is only an example; the matching error information and the image deformation information are not limited to the above expressions.
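Combining the pieces, the following sketch scores each candidate matrix with cost = (P-T)/P + a(D1+D2) and keeps the minimum; it reuses the hypothetical helpers from the sketches above, assumes each candidate is supplied together with its two rectifying transforms (a rectification sketch follows below), and takes a as the reciprocal of the image diagonal, as in the threshold discussion later in this embodiment:

```python
import numpy as np

def select_optimized_matrix(candidates, pts1, pts2, size1, size2, t1=0.3):
    """Pick the fundamental matrix with the smallest mapping cost score."""
    w, h = size1
    alpha = 1.0 / np.hypot(w, h)            # reciprocal of the image diagonal
    best, best_cost = None, np.inf
    for F, H1, H2 in candidates:            # each candidate with its rectifying transforms
        P = len(pts1)
        T = int((epipolar_errors(F, pts1, pts2) < t1).sum())
        cost = (P - T) / P + deformation_info(H1, H2, size1, size2, alpha)
        if cost < best_cost:
            best, best_cost = F, cost
    return best, best_cost
```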
In an optional example, step S212 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first determining module 506 run by the processor.
In step S214, the first image pair is calibrated according to the first optimized fundamental matrix.
For example, the first optimized fundamental matrix is decomposed into a first transformation matrix and a second transformation matrix, and based on these, the first image and the second image of the first image pair shown in FIG. 3 and FIG. 4 are mapping-transformed; the transformed images are the calibrated first and second images shown in FIG. 6 and FIG. 7, respectively. Referring to FIG. 8, after the transformed first and second images are merged, it can be determined that the feature points on the transformed first and second images lie substantially on the same horizontal line; for example, in the merged image shown in FIG. 8, the top of the boy's head, his clothes, his shoes, and so on are aligned.
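The application does not name a specific decomposition algorithm; one common realization (our assumption) is uncalibrated stereo rectification, which derives one homography per image from the fundamental matrix and the matched points:

```python
import cv2

def rectify_pair(img1, img2, F, pts1, pts2):
    """Decompose F into two rectifying homographies and warp both images."""
    h, w = img1.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    if not ok:
        raise RuntimeError("rectification failed")
    rect1 = cv2.warpPerspective(img1, H1, (w, h))
    rect2 = cv2.warpPerspective(img2, H2, (w, h))
    return rect1, rect2
```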
In practical applications, the first image pair shown in FIG. 3 and FIG. 4 may be taken as input and the above steps S202 to S214 performed; after feature matching, computing fundamental matrices, determining the optimized fundamental matrix, calibration, and other processing, the calibrated first image pair shown in FIG. 6 and FIG. 7 is output.
In an optional example, step S214 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first calibration module 508 run by the processor.
In step S216, the first optimized fundamental matrix is stored or updated.
In this embodiment, after the first optimized fundamental matrix is determined, it is stored and can be used to calibrate other image pairs captured by the same image capture device. If a first optimized fundamental matrix was stored previously, the stored matrix is updated with the first optimized fundamental matrix determined this time.
Optionally, information of at least one pair of feature points in the first feature point pair set that satisfies a predetermined matching condition is stored or updated. If feature point pairs were stored previously, the stored feature point pairs are updated. The matching information of feature point pairs satisfying the predetermined matching condition conforms to the basic properties of the image capture device that captured the image pair; when calibrating other image pairs captured by the same device, those image pairs can be calibrated based on the stored feature point pair information in addition to their own feature point pair information, that is, the other image pairs are calibrated in an incremental manner. The stored feature point pair information includes at least, but is not limited to, the coordinates of the feature point pairs, so that corresponding fundamental matrices can be computed from the stored feature point pairs.
Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs included in the feature point pair set is less than a set threshold. That is, the number of feature point pairs stored each time is limited to avoid occupying too much storage space. In addition, the total number of stored feature point pairs may also be limited: when the total number of stored feature point pairs reaches a set number, some previously stored feature point pairs are deleted, for example, the earliest-stored feature point pairs, or feature point pairs with coinciding coordinates.
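A toy sketch of such a bounded store; the cap values and the evict-oldest policy are our assumptions:

```python
from collections import deque

class FeaturePairStore:
    """Keeps the optimized matrix plus a capped, age-ordered set of good point pairs."""
    def __init__(self, max_pairs=500, per_update_ratio=0.1):
        self.F = None
        self.pairs = deque(maxlen=max_pairs)     # oldest entries evicted first
        self.per_update_ratio = per_update_ratio

    def update(self, F, good_pairs):
        self.F = F
        budget = max(1, int(len(good_pairs) * self.per_update_ratio))
        self.pairs.extend(good_pairs[:budget])   # store only a small fraction per update
```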
In an optional example, step S216 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first storage module 512 run by the processor.
In step S218, a second image pair is feature-matched to obtain a second feature point pair set, and mapping cost information is determined according to the second feature point pair set. The mapping cost information includes second image deformation information of the second image pair and/or matching error information of feature point pair subsets.
The second image pair and the first image pair are two image pairs captured by the same camera, and they may be two image pairs captured at different times and in different scenes.
Optionally, the second image pair is feature-matched in the manner of feature-matching the first image pair shown in the foregoing step S202, to obtain the second feature point pair set. Further, with reference to the foregoing steps S204 to S210, the second image deformation information of the second image pair and/or the matching error information of feature point pair subsets are obtained according to the second feature point pair set.
In an optional implementation, the mapping cost information includes the above mapping cost score cost = (P-T)/P + a(D1+D2), where the first term is the matching error information of a feature point pair subset of the second image pair, and the second term is the second image deformation information of the second image pair. It is noted here that the optional form of the mapping cost information is not limited to the above mapping cost score.
In an optional example, step S218 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a third determining module 514 run by the processor.
In step S220, it is judged whether the mapping cost information satisfies a predetermined threshold condition.
If the mapping cost information satisfies the predetermined threshold condition, step S222 is performed; if the mapping cost information does not satisfy the predetermined threshold condition, step S224 is performed. The predetermined threshold condition makes it possible to judge whether the matching relationship indicated by the first optimized fundamental matrix can accurately reflect the matching relationship between the feature point pairs of the second image pair, and thus to decide whether to calibrate the second image pair with the first optimized fundamental matrix, or to recompute a second optimized fundamental matrix to calibrate the second image pair.
Optionally, when the mapping cost information is the above mapping cost score cost, the second term of cost is the image deformation information a(D1+D2), where (D1+D2) measures the degree of image deformation of the two images in the second image pair and generally cannot exceed 10% of the image diagonal length; a may be the reciprocal of the diagonal length of either image of the second image pair, in which case the mapping cost score cost is less than 0.2, so a score threshold of 0.2 may be preset, and the corresponding predetermined threshold condition may be that the mapping cost score is less than 0.2.
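As a small illustration of the reuse-or-recompute decision (the 0.2 threshold is taken from the example above; rectifying_transforms is an assumed helper that would produce the two mapping transforms for a given matrix, and the other helpers come from the earlier sketches):

```python
import numpy as np

def reuse_stored_matrix(store, pts1, pts2, size1, size2, cost_threshold=0.2):
    """Return the stored optimized matrix if its mapping cost on the new pair is low."""
    w, h = size1
    alpha = 1.0 / np.hypot(w, h)
    H1, H2 = rectifying_transforms(store.F, pts1, pts2, size1)   # assumed helper
    P = len(pts1)
    T = int((epipolar_errors(store.F, pts1, pts2) < 0.3).sum())
    cost = (P - T) / P + deformation_info(H1, H2, size1, size2, alpha)
    if cost < cost_threshold:
        return store.F   # step S222: reuse the first optimized matrix
    return None          # step S224: recompute a second optimized matrix
```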
In an optional example, step S220 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a third determining module 514 run by the processor.
In step S222, the second image pair is calibrated according to the first optimized fundamental matrix.
In response to the mapping cost information satisfying the predetermined threshold condition, the second image pair is calibrated according to the stored first optimized fundamental matrix; for an optional manner, refer to the manner of calibrating the first image pair in the foregoing step S214.
In an optional example, step S222 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a second calibration module 516 run by the processor.
In step S224, a second optimized fundamental matrix corresponding to the second image pair is obtained, and the second image pair is calibrated according to the second optimized fundamental matrix.
In response to the mapping cost information not satisfying the predetermined threshold condition, the second optimized fundamental matrix corresponding to the second image pair is obtained, and the second image pair is calibrated according to the second optimized fundamental matrix.
Optionally, when the mapping cost information does not satisfy the predetermined threshold condition, the second image pair is feature-matched to obtain the second feature point pair set of the second image pair; a plurality of different second fundamental matrices of the second image pair are obtained according to the second feature point pair set and the stored feature point pairs, and second image deformation information corresponding to each second fundamental matrix is obtained; the second optimized fundamental matrix is determined from the plurality of second fundamental matrices at least according to the second image deformation information, and the second image pair is calibrated according to the determined second optimized fundamental matrix. Further, matching error information of feature point pair subsets of the second image pair may also be obtained, to determine the second optimized fundamental matrix in combination with the second image deformation information.
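A sketch of this incremental recomputation, assuming (our reading of the scheme) that the stored pairs are simply pooled with the new matches before subset sampling; it reuses the hypothetical helpers from the earlier sketches:

```python
import numpy as np

def recompute_optimized_matrix(store, pts1, pts2, size1, size2, n_subsets=16):
    """Estimate a second optimized matrix from new matches pooled with stored pairs."""
    if store.pairs:
        old1 = np.float32([p for p, q in store.pairs])
        old2 = np.float32([q for p, q in store.pairs])
        pts1 = np.vstack([pts1, old1])   # incremental calibration: reuse stored pairs
        pts2 = np.vstack([pts2, old2])
    rng = np.random.default_rng(0)
    candidates = []
    for _ in range(n_subsets):
        F = fundamental_from_subset(pts1, pts2, rng)
        if F is not None:
            H1, H2 = rectifying_transforms(F, pts1, pts2, size1)   # assumed helper
            candidates.append((F, H1, H2))
    return select_optimized_matrix(candidates, pts1, pts2, size1, size2)
```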
In an optional example, step S224 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a second obtaining module 518 and a third calibration module 520 run by the processor.
The above is the dual-view image calibration method of this embodiment. In practical applications, this method can be used to calibrate image pairs captured by a dual-camera device (a device provided with two cameras), and can also calibrate dual-view images of the same scene captured sequentially by an ordinary photographing device.
For image pairs captured by a dual-camera device, this method may be performed in later image processing to calibrate the captured image pairs, or it may also be performed in the process of capturing and generating the image pairs.
When the method is performed during the capturing and generation of image pairs by a dual-camera device, the obtained image pairs are calibrated so that calibrated image pairs are generated directly, which makes it convenient for the dual-camera device to bind other application processing and improves image processing efficiency. Here, dual-camera devices include, but are not limited to, dual-camera mobile terminals, dual-camera smart glasses, dual-camera robots, dual-camera drones, dual-camera unmanned vehicles, and the like.
For example, a dual-camera mobile terminal (such as a dual-camera mobile phone) performs this method while capturing an image pair and directly obtains the calibrated image pair, which also makes it convenient to directly perform depth-of-field calculation, image blurring processing, and the like on the obtained calibrated image pair. As another example, a dual-camera drone performs this method while capturing an image pair and generates a calibrated image pair, making it convenient to obtain information directly from the calibrated image pair for stereo matching, three-dimensional scene reconstruction, and other processing, so that a stereo vision system can be obtained efficiently.
Moreover, the dual-view calibration method of this embodiment can fully automatically calibrate image pairs captured by a dual-camera device, which can effectively avoid calibration errors caused by calibration-parameter errors due to movement of the dual-camera lenses during use; and for dual-camera devices, there is no need to set up complex dual-camera calibration equipment before leaving the factory, which reduces the production difficulty of dual-camera devices and improves production efficiency.
According to the dual-view image calibration method of this embodiment of the present application, a first image pair obtained by capturing the same scene from different views is feature-matched to obtain the first feature point pair set of the first image pair; a plurality of different first fundamental matrices, and the first image deformation information corresponding to each, are obtained according to the first feature point pair set; the first optimized fundamental matrix is determined according to the first image deformation information, and the first image pair is calibrated according to it, achieving automatic calibration of dual-view image pairs. Furthermore, by storing the first optimized fundamental matrix and the feature point pairs of the first image pair, and selecting the optimized fundamental matrix used to calibrate the second image pair through the predetermined threshold condition, the second image pair is calibrated incrementally, which guarantees accuracy and improves processing efficiency.
The dual-view image calibration method of this embodiment may be performed by a camera, a processor, a dual-camera device, or the like; however, it should be clear to those skilled in the art that, in practical applications, any device or processor with corresponding image processing and data processing functions can perform the dual-view image calibration method of the embodiments of the present application with reference to this embodiment.
This embodiment provides an image processing method, which calibrates, by using the dual-view image calibration method of the first or second embodiment above, at least one image pair captured from two different views of the same scene, and performs application processing based on the calibrated image pair. The application processing may include, but is not limited to, any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, augmented reality processing, and the like.
In practical applications, the image processing method of this embodiment may be performed by an image capture device, processing captured image pairs in real time to improve image processing efficiency. For example, by calibrating captured image pairs with the dual-view image calibration method, the matched feature point pairs in the obtained image pairs lie at the same depth, making it convenient to perform online depth-of-field calculation on the image pairs, and further to perform online image blurring processing to generate images with a bokeh effect, or to perform online stereo matching, three-dimensional reconstruction, augmented reality, and other processing to obtain three-dimensional stereoscopic images.
The image processing method of this embodiment may also be performed by a processor invoking image processing instructions or a program, post-processing dual-view image pairs input to the image processing program. For example, calibrating image pairs with the dual-view image calibration method makes it convenient to perform depth-of-field calculation on the calibrated image pairs, and further image processing can be performed according to the computed depth information; moreover, human-computer interaction items can be provided in the image processing program so that users can select and configure image processing options, increasing the operability of image processing and improving the user experience.
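Not part of the application itself, but as one concrete follow-on step, disparity (and hence depth) can be computed directly from a calibrated pair, for example with semi-global matching; the parameter values below are assumptions:

```python
import cv2

def disparity_from_calibrated_pair(rect1, rect2):
    """Disparity map from a calibrated (rectified) BGR pair, e.g. as input to blurring."""
    gray1 = cv2.cvtColor(rect1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(rect2, cv2.COLOR_BGR2GRAY)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    return sgbm.compute(gray1, gray2).astype('float32') / 16.0  # SGBM output is fixed-point
```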
Any dual-view image calibration method or image processing method provided by the embodiments of the present application may be performed by any appropriate device with data processing capability, including, but not limited to, terminal devices, servers, and the like. Alternatively, any dual-view image calibration method or image processing method provided by the embodiments of the present application may be performed by a processor; for example, the processor performs any dual-view image calibration method or image processing method mentioned in the embodiments of the present application by invoking corresponding instructions stored in a memory. This will not be repeated below.
Those of ordinary skill in the art can understand that all or some of the steps implementing the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium, and when executed, performs the steps including the above method embodiments; the foregoing storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
FIG. 9 is a logic block diagram showing a dual-view image calibration apparatus according to an embodiment of the present application. Referring to FIG. 9, the dual-view image calibration apparatus of this embodiment includes: a feature matching module 402, configured to feature-match a first image pair to obtain a first feature point pair set, the first image pair including two images captured from two different views of the same scene; a first obtaining module 404, configured to obtain a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and to obtain first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix; a first determining module 406, configured to determine a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information; and a first calibration module 408, configured to calibrate the first image pair according to the first optimized fundamental matrix.
The dual-view image calibration apparatus of this embodiment can be used to implement the corresponding dual-view image calibration method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which will not be repeated here.
FIG. 10 is a logic block diagram showing a dual-view image calibration apparatus according to another embodiment of the present application. Referring to FIG. 10, the dual-view image calibration apparatus of this embodiment includes: a feature matching module 502, configured to feature-match a first image pair to obtain a first feature point pair set, the first image pair including two images captured from two different views of the same scene; a first obtaining module 504, configured to obtain a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and to obtain first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix; a first determining module 506, configured to determine a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information; and a first calibration module 508, configured to calibrate the first image pair according to the first optimized fundamental matrix.
Optionally, the first obtaining module 504 includes a first obtaining unit 5042, configured to mapping-transform the two images in the first image pair according to the first fundamental matrix, and to obtain the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
Optionally, the first obtaining module 504 further includes a second obtaining unit 5044, configured to generate at least two first fundamental matrices respectively according to at least two different feature point pair subsets of the first feature point pair set.
Optionally, the apparatus further includes a second determining module 510, configured to determine matching error information of each feature point pair subset; the first determining module 506 is configured to determine the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
Optionally, the matching error information includes: the proportion, in the feature point pair subset or in the first feature point pair set, of feature point pairs in the subset that do not satisfy a predetermined matching condition.
Optionally, the apparatus further includes a first storage module 512, configured to store or update the first optimized fundamental matrix.
Optionally, the first storage module 512 is further configured to store or update information of at least one pair of feature points in the first feature point pair set that satisfies a predetermined matching condition.
Optionally, the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs included in the feature point pair set is less than a set threshold.
Optionally, the information of the at least one feature point pair satisfying the predetermined matching condition includes: coordinates of at least one feature point pair satisfying the predetermined matching condition.
Optionally, the apparatus further includes: a second calibration module 516, configured to calibrate a second image pair according to the first optimized fundamental matrix.
Optionally, the apparatus further includes a third determining module 514, configured to feature-match the second image pair to obtain a second feature point pair set, and to determine mapping cost information according to the second feature point pair set, the mapping cost information including second image deformation information of the second image pair and/or matching error information of feature point pair subsets; the second calibration module 516 is configured to calibrate the second image pair according to the first optimized fundamental matrix in response to the mapping cost information satisfying a predetermined threshold condition.
Optionally, the apparatus further includes: a second obtaining module 518, configured to obtain a second optimized fundamental matrix corresponding to the second image pair in response to the mapping cost information not satisfying the predetermined threshold condition; and a third calibration module 520, configured to calibrate the second image pair according to the second optimized fundamental matrix.
Optionally, the second obtaining module 518 includes: a feature matching unit (not shown), configured to feature-match the second image pair to obtain the second feature point pair set of the second image pair; a third obtaining unit (not shown), configured to obtain a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and to obtain second image deformation information corresponding to each second fundamental matrix; and a determining unit (not shown), configured to determine the second optimized fundamental matrix from the plurality of second fundamental matrices at least according to the second image deformation information.
Optionally, the apparatus further includes a second storage module 522, configured to update the stored first optimized fundamental matrix with the second optimized fundamental matrix; and/or to update the stored feature point pair information with information of at least one pair of feature points in the second feature point set that satisfies the predetermined matching condition.
Optionally, the apparatus further includes a photographing module (not shown), configured to capture image pairs by a device provided with two cameras.
Optionally, the device with two cameras may include, but is not limited to: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
The dual-view image calibration apparatus of this embodiment can be used to implement the corresponding dual-view image calibration method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which will not be repeated here.
The embodiments of the present application further provide an image processing apparatus, configured to calibrate, by using the dual-view image calibration method of the first or second embodiment above, at least one image pair captured from two different views of the same scene, and to perform application processing based on the calibrated image pair; the application processing may include, but is not limited to, any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, augmented reality processing, and the like.
In practical applications, the image processing apparatus of this embodiment may include the dual-view image calibration apparatus of any of the foregoing embodiments.
The image processing apparatus of this embodiment can be used to implement the image processing method of the foregoing embodiment, and has the beneficial effects of the corresponding method embodiments, which will not be repeated here.
The embodiments of the present application further provide an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring now to FIG. 11, there is shown a schematic structural diagram of an electronic device 700 suitable for implementing a terminal device or a server of an embodiment of the present application. As shown in FIG. 11, the electronic device 700 includes one or more first processors, a first communication element, and the like. The one or more first processors are, for example, one or more central processing units (CPUs) 701 and/or one or more graphics processing units (GPUs) 713; the first processor may execute various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 702 or loaded from a storage section 708 into a random access memory (RAM) 703. In this embodiment, the first read-only memory 702 and the random access memory 703 are collectively referred to as the first memory. The first communication element includes a communication component 712 and/or a communication interface 709. The communication component 712 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card; the communication interface 709 includes a communication interface of a network interface card such as a LAN card or a modem, and performs communication processing via a network such as the Internet.
The first processor may communicate with the read-only memory 702 and/or the random access memory 703 to execute executable instructions, is connected to the communication component 712 through the first communication bus 704, and communicates with other target devices via the communication component 712, thereby completing operations corresponding to any dual-view image calibration method provided by the embodiments of the present application, for example: feature-matching a first image pair to obtain a first feature point pair set, the first image pair including two images captured from two different views of the same scene; obtaining a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and obtaining first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix; determining a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information; and calibrating the first image pair according to the first optimized fundamental matrix. Alternatively, the first processor completes operations corresponding to the image processing method provided by the embodiments of the present application, for example: calibrating, by using the dual-view image calibration method of the first or second embodiment above, at least one image pair captured from two different views of the same scene; and performing application processing based on the calibrated image pair, the application processing including any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, augmented reality processing.
In addition, the RAM 703 may also store various programs and data required for the operation of the device. The CPU 701 or GPU 713, the ROM 702, and the RAM 703 are connected to one another through the first communication bus 704. When the RAM 703 is present, the ROM 702 is an optional module. The RAM 703 stores executable instructions, or writes executable instructions into the ROM 702 at runtime, and the executable instructions cause the first processor to perform operations corresponding to the above communication method. An input/output (I/O) interface 705 is also connected to the first communication bus 704. The communication component 712 may be integrated, or may be configured to have multiple sub-modules (for example, multiple IB network cards) linked on the communication bus.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication interface 709 including a network interface card such as a LAN card or a modem. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage section 708 as needed.
It should be noted that the architecture shown in FIG. 11 is only one optional implementation; in optional practice, the number and types of the components in FIG. 11 may be selected, reduced, increased, or replaced according to actual needs; different functional components may also be arranged separately or integrated, for example, the GPU and the CPU may be arranged separately, or the GPU may be integrated on the CPU, and the communication element may be arranged separately or integrated on the CPU or the GPU, and so on. These alternative implementations all fall within the protection scope of the present application.
In particular, according to the embodiments of the present application, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present application include a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the steps of the dual-view image calibration method provided by the embodiments of the present application, for example: feature-matching a first image pair to obtain a first feature point pair set, the first image pair including two images captured from two different views of the same scene; obtaining a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and obtaining first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix; determining a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information; and calibrating the first image pair according to the first optimized fundamental matrix. Alternatively, the program code may include instructions corresponding to the steps of the image processing method provided by the embodiments of the present application, for example: calibrating, by using the dual-view image calibration method of the first or second embodiment above, at least one image pair captured from two different views of the same scene; and performing application processing based on the calibrated image pair, the application processing including any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, augmented reality processing. In such embodiments, the computer program may be downloaded and installed from a network through the communication element, and/or installed from the removable medium 711. When the computer program is executed by the first processor, the above-described functions defined in the method of the embodiments of the present application are performed.
Optionally, the electronic device 700 further includes at least two cameras, and the first processor (including the above central processing unit CPU 701 and/or the above graphics processing unit GPU 713) and the at least two cameras communicate with each other through the first communication bus.
In practical applications, the electronic device 700 may be a dual-camera mobile phone integrating two cameras A as shown in FIG. 12. Components such as the first processor and the communication bus built inside the dual-camera phone are not shown in FIG. 12. When a user captures an image pair with the phone, the two cameras transmit the captured images to the first processor through the first communication bus, and the first processor can calibrate the image pair by the dual-view image calibration method of the embodiments of the present application; that is, the dual-camera phone can automatically calibrate captured image pairs.
Of course, in practical applications, the electronic device 700 may also be a dual-camera mobile terminal other than a dual-camera mobile phone, or dual-camera smart glasses, a dual-camera robot, a dual-camera drone, a dual-camera unmanned vehicle, or the like.
Optionally, the electronic device 700 further includes at least two cameras, and the second processor (including the above central processing unit CPU 701 and/or the above graphics processing unit GPU 713) and the at least two cameras communicate with each other through the second communication bus.
In practical applications, the electronic device 700 may be a dual-camera mobile phone integrating two cameras A as shown in FIG. 12. When the dual-camera phone captures an image pair, the two cameras transmit the captured images to the second processor through the second communication bus, and the second processor can process the image pair by the image processing method of the embodiments of the present application; it can directly process the image pair calibrated by the dual-view image calibration method of the embodiments of the present application, so image processing efficiency is high.
Of course, in practical applications, the electronic device 700 may also be a dual-camera mobile terminal of a type other than a dual-camera mobile phone, as well as other dual-camera devices such as a dual-camera robot, dual-camera smart glasses, a dual-camera drone, or a dual-camera unmanned vehicle.
The methods, apparatuses, and devices of the present application may be implemented in many ways. For example, the methods, apparatuses, and devices of the present application may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order described above unless otherwise specifically stated. In addition, in some embodiments, the present application may also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present application. Thus, the present application also covers a recording medium storing a program for performing the method according to the present application.
The description of the present application is given for the sake of example and description, and is not exhaustive or intended to limit the present application to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were chosen and described in order to better explain the principles and practical applications of the present application, and to enable those of ordinary skill in the art to understand the present application and thereby design various embodiments, with various modifications, suited to particular uses.
The above are only optional implementations of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any variation or replacement that can readily occur to any person skilled in the art within the technical scope disclosed by the embodiments of the present application shall be covered by the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (38)

  1. A dual-view image calibration method, comprising:
    feature-matching a first image pair to obtain a first feature point pair set, the first image pair comprising two images captured from two different views of the same scene;
    obtaining a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and obtaining first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix;
    determining a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information;
    calibrating the first image pair according to the first optimized fundamental matrix.
  2. The method according to claim 1, wherein obtaining the first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by the first fundamental matrix comprises:
    mapping-transforming the two images in the first image pair according to the first fundamental matrix;
    obtaining the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
  3. The method according to claim 1 or 2, wherein obtaining the plurality of different first fundamental matrices of the first image pair according to the first feature point pair set comprises:
    generating at least two first fundamental matrices respectively according to at least two different feature point pair subsets of the first feature point pair set.
  4. The method according to claim 3, further comprising:
    determining matching error information of each feature point pair subset;
    wherein determining the first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information comprises:
    determining the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
  5. The method according to claim 3 or 4, wherein the matching error information comprises: the proportion, in the feature point pair subset or in the first feature point pair set, of feature point pairs in the subset that do not satisfy a predetermined matching condition.
  6. The method according to any one of claims 1 to 5, further comprising:
    storing or updating the first optimized fundamental matrix.
  7. The method according to any one of claims 1 to 6, further comprising:
    storing or updating information of at least one pair of feature points in the first feature point pair set that satisfies a predetermined matching condition.
  8. The method according to claim 7, wherein the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs comprised in the feature point pair set is less than a set threshold.
  9. The method according to claim 7 or 8, wherein the information of the at least one feature point pair satisfying the predetermined matching condition comprises:
    coordinates of at least one feature point pair satisfying the predetermined matching condition.
  10. The method according to any one of claims 6 to 9, further comprising:
    calibrating a second image pair according to the first optimized fundamental matrix.
  11. The method according to claim 10, further comprising:
    feature-matching the second image pair to obtain a second feature point pair set; determining mapping cost information according to the second feature point pair set, the mapping cost information comprising second image deformation information of the second image pair and/or matching error information of feature point pair subsets;
    wherein calibrating the second image pair according to the first optimized fundamental matrix comprises:
    in response to the mapping cost information satisfying a predetermined threshold condition, calibrating the second image pair according to the first optimized fundamental matrix.
  12. The method according to claim 11, further comprising:
    in response to the mapping cost information not satisfying the predetermined threshold condition, obtaining a second optimized fundamental matrix corresponding to the second image pair;
    calibrating the second image pair according to the second optimized fundamental matrix.
  13. The method according to claim 12, wherein obtaining the second optimized fundamental matrix corresponding to the second image pair comprises:
    feature-matching the second image pair to obtain the second feature point pair set of the second image pair;
    obtaining a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and obtaining second image deformation information corresponding to each second fundamental matrix;
    determining the second optimized fundamental matrix from the plurality of second fundamental matrices at least according to the second image deformation information.
  14. The method according to claim 13, further comprising:
    updating the stored first optimized fundamental matrix with the second optimized fundamental matrix; and/or,
    updating the stored feature point pair information with information of at least one pair of feature points in the second feature point set that satisfies the predetermined matching condition.
  15. The method according to any one of claims 1 to 14, further comprising:
    capturing image pairs by a device provided with two cameras.
  16. The method according to claim 15, wherein the device with two cameras comprises: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
  17. An image processing method, comprising:
    calibrating, by using the dual-view image calibration method according to any one of claims 1 to 16, at least one image pair captured from two different views of the same scene;
    performing application processing based on the calibrated image pair, the application processing comprising any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, augmented reality processing.
  18. A dual-view image calibration apparatus, comprising:
    a feature matching module, configured to feature-match a first image pair to obtain a first feature point pair set, the first image pair comprising two images captured from two different views of the same scene;
    a first obtaining module, configured to obtain a plurality of different first fundamental matrices of the first image pair at least according to the first feature point pair set, and to obtain first image deformation information representing the relative deformation of the first image pair before and after mapping transformation by a first fundamental matrix;
    a first determining module, configured to determine a first optimized fundamental matrix from the plurality of first fundamental matrices at least according to the first image deformation information;
    a first calibration module, configured to calibrate the first image pair according to the first optimized fundamental matrix.
  19. The apparatus according to claim 18, wherein the first obtaining module comprises a first obtaining unit, configured to mapping-transform the two images in the first image pair according to the first fundamental matrix, and to obtain the first image deformation information according to the distance between at least one pair of corresponding feature points before and after mapping in each image.
  20. The apparatus according to claim 18 or 19, wherein the first obtaining module further comprises a second obtaining unit, configured to generate at least two first fundamental matrices respectively according to at least two different feature point pair subsets of the first feature point pair set.
  21. The apparatus according to claim 20, further comprising a second determining module, configured to determine matching error information of each feature point pair subset;
    wherein the first determining module is configured to determine the first optimized fundamental matrix from the plurality of first fundamental matrices according to the matching error information and the first image deformation information.
  22. The apparatus according to claim 20 or 21, wherein the matching error information comprises: the proportion, in the feature point pair subset or in the first feature point pair set, of feature point pairs in the subset that do not satisfy a predetermined matching condition.
  23. The apparatus according to any one of claims 18 to 22, further comprising a first storage module, configured to store or update the first optimized fundamental matrix.
  24. The apparatus according to any one of claims 18 to 23, wherein the first storage module is further configured to store or update information of at least one pair of feature points in the first feature point pair set that satisfies a predetermined matching condition.
  25. The apparatus according to claim 24, wherein the ratio of the number of stored or updated feature point pairs to the total number of feature point pairs comprised in the feature point pair set is less than a set threshold.
  26. The apparatus according to claim 24 or 25, wherein the information of the at least one feature point pair satisfying the predetermined matching condition comprises:
    coordinates of at least one feature point pair satisfying the predetermined matching condition.
  27. The apparatus according to any one of claims 23 to 26, further comprising:
    a second calibration module, configured to calibrate a second image pair according to the first optimized fundamental matrix.
  28. The apparatus according to claim 27, further comprising a third determining module, configured to feature-match the second image pair to obtain a second feature point pair set, and to determine mapping cost information according to the second feature point pair set, the mapping cost information comprising second image deformation information of the second image pair and/or matching error information of feature point pair subsets;
    wherein the second calibration module is configured to calibrate the second image pair according to the first optimized fundamental matrix in response to the mapping cost information satisfying a predetermined threshold condition.
  29. The apparatus according to claim 28, further comprising:
    a second obtaining module, configured to obtain a second optimized fundamental matrix corresponding to the second image pair in response to the mapping cost information not satisfying the predetermined threshold condition;
    a third calibration module, configured to calibrate the second image pair according to the second optimized fundamental matrix.
  30. The apparatus according to claim 29, wherein the second obtaining module comprises:
    a feature matching unit, configured to feature-match the second image pair to obtain the second feature point pair set of the second image pair;
    a third obtaining unit, configured to obtain a plurality of different second fundamental matrices of the second image pair according to the second feature point pair set and stored feature point pairs, and to obtain second image deformation information corresponding to each second fundamental matrix;
    a determining unit, configured to determine the second optimized fundamental matrix from the plurality of second fundamental matrices at least according to the second image deformation information.
  31. The apparatus according to claim 30, further comprising a second storage module, configured to update the stored first optimized fundamental matrix with the second optimized fundamental matrix; and/or,
    to update the stored feature point pair information with information of at least one pair of feature points in the second feature point set that satisfies the predetermined matching condition.
  32. The apparatus according to any one of claims 18 to 31, further comprising a photographing module, configured to capture image pairs by a device provided with two cameras.
  33. The apparatus according to claim 32, wherein the device with two cameras comprises: a dual-camera mobile terminal, dual-camera smart glasses, a dual-camera robot, a dual-camera drone, or a dual-camera unmanned vehicle.
  34. An image processing apparatus, configured to calibrate, by using the dual-view image calibration method according to any one of claims 1 to 16, at least one image pair captured from two different views of the same scene; and to perform application processing based on the calibrated image pair, the application processing comprising any one or more of the following: three-dimensional reconstruction processing, image blurring processing, depth-of-field calculation, augmented reality processing.
  35. A computer-readable storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the steps of the dual-view image calibration method according to any one of claims 1 to 16 or of the image processing method according to claim 17.
  36. A computer program comprising computer-readable code, wherein when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the steps of the dual-view image calibration method according to any one of claims 1 to 16 or of the image processing method according to claim 17.
  37. An electronic device, comprising: a processor and a memory;
    wherein the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the dual-view image calibration method according to any one of claims 1 to 16; and/or, the executable instruction causes the processor to perform operations corresponding to the image processing method according to claim 17.
  38. The electronic device according to claim 37, further comprising at least two cameras, wherein the processor and the at least two cameras communicate with each other through the communication bus.
PCT/CN2018/091085 2017-06-14 2018-06-13 双视角图像校准及图像处理方法、装置、存储介质和电子设备 WO2018228436A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2019569277A JP6902122B2 (ja) 2017-06-14 2018-06-13 ダブル視野角画像較正および画像処理方法、装置、記憶媒体ならびに電子機器
SG11201912033WA SG11201912033WA (en) 2017-06-14 2018-06-13 Dual-view angle image calibration method and apparatus, storage medium and electronic device
US16/710,033 US11380017B2 (en) 2017-06-14 2019-12-11 Dual-view angle image calibration method and apparatus, storage medium and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710448540.X 2017-06-14
CN201710448540.XA CN108230395A (zh) 2017-06-14 2017-06-14 双视角图像校准及图像处理方法、装置、存储介质和电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/710,033 Continuation US11380017B2 (en) 2017-06-14 2019-12-11 Dual-view angle image calibration method and apparatus, storage medium and electronic device

Publications (1)

Publication Number Publication Date
WO2018228436A1 true WO2018228436A1 (zh) 2018-12-20

Family

ID=62656659

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091085 WO2018228436A1 (zh) 2017-06-14 2018-06-13 双视角图像校准及图像处理方法、装置、存储介质和电子设备

Country Status (5)

Country Link
US (1) US11380017B2 (zh)
JP (1) JP6902122B2 (zh)
CN (1) CN108230395A (zh)
SG (1) SG11201912033WA (zh)
WO (1) WO2018228436A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020024576A1 (zh) 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 摄像头校准方法和装置、电子设备、计算机可读存储介质
CN109040745B (zh) * 2018-08-01 2019-12-27 Oppo广东移动通信有限公司 摄像头自校准方法和装置、电子设备、计算机存储介质
CN109040746B (zh) * 2018-08-01 2019-10-25 Oppo广东移动通信有限公司 摄像头校准方法和装置、电子设备、计算机可读存储介质
US20210133981A1 (en) * 2019-10-30 2021-05-06 Allen Institute Biology driven approach to image segmentation using supervised deep learning-based segmentation
EP3882857A1 (en) * 2020-03-19 2021-09-22 Sony Group Corporation Extrinsic calibration of multi-camera system
US11232315B2 (en) * 2020-04-28 2022-01-25 NextVPU (Shanghai) Co., Ltd. Image depth determining method and living body identification method, circuit, device, and medium
CN112598751A (zh) * 2020-12-23 2021-04-02 Oppo(重庆)智能科技有限公司 标定方法及装置、终端和存储介质
CN112785519A (zh) * 2021-01-11 2021-05-11 普联国际有限公司 基于全景图的定位误差校准方法、装置、设备及存储介质
CN113628283B (zh) * 2021-08-10 2024-05-17 地平线征程(杭州)人工智能科技有限公司 摄像装置的参数标定方法、装置、介质以及电子设备
CN113884099B (zh) * 2021-12-07 2022-04-12 智道网联科技(北京)有限公司 一种路端移动物***置测量方法及装置
CN114820314A (zh) * 2022-04-27 2022-07-29 Oppo广东移动通信有限公司 图像处理方法及装置、计算机可读存储介质和电子设备
CN115439555A (zh) * 2022-08-29 2022-12-06 佛山职业技术学院 一种无公共视场多相机外参数标定方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581136A (zh) * 2013-10-14 2015-04-29 钰创科技股份有限公司 图像校准***和立体摄像机的校准方法
US20150341618A1 (en) * 2014-05-23 2015-11-26 Leap Motion, Inc. Calibration of multi-camera devices using reflections thereof
CN105654459A (zh) * 2014-11-28 2016-06-08 深圳超多维光电子有限公司 计算场景主体的深度分布方法与装置

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916454B (zh) 2010-04-08 2013-03-27 董洪伟 基于网格变形和连续优化的高分辨率人脸重建方法
CN102065313B (zh) * 2010-11-16 2012-10-31 上海大学 平行式相机阵列的未标定多视点图像校正方法
JP5768684B2 (ja) * 2011-11-29 2015-08-26 富士通株式会社 ステレオ画像生成装置、ステレオ画像生成方法及びステレオ画像生成用コンピュータプログラム
JP5901447B2 (ja) * 2012-06-27 2016-04-13 オリンパス株式会社 画像処理装置及びそれを備えた撮像装置、画像処理方法、並びに画像処理プログラム
CN103345736B (zh) * 2013-05-28 2016-08-31 天津大学 一种虚拟视点绘制方法
US10489912B1 (en) * 2013-12-20 2019-11-26 Amazon Technologies, Inc. Automated rectification of stereo cameras
CN104019799B (zh) * 2014-05-23 2016-01-13 北京信息科技大学 一种利用局部参数优化计算基础矩阵的相对定向方法
CN104316057A (zh) * 2014-10-31 2015-01-28 天津工业大学 一种无人机视觉导航方法
KR102281184B1 (ko) * 2014-11-20 2021-07-23 삼성전자주식회사 영상 보정 방법 및 장치
JP2017059049A (ja) * 2015-09-17 2017-03-23 キヤノン株式会社 画像処理装置およびその制御方法
JP6493885B2 (ja) * 2016-03-15 2019-04-03 富士フイルム株式会社 画像位置合せ装置、画像位置合せ装置の作動方法および画像位置合せプログラム
CN106204731A (zh) * 2016-07-18 2016-12-07 华南理工大学 一种基于双目立体视觉***的多视角三维重建方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581136A (zh) * 2013-10-14 2015-04-29 钰创科技股份有限公司 图像校准***和立体摄像机的校准方法
US20150341618A1 (en) * 2014-05-23 2015-11-26 Leap Motion, Inc. Calibration of multi-camera devices using reflections thereof
CN105654459A (zh) * 2014-11-28 2016-06-08 深圳超多维光电子有限公司 计算场景主体的深度分布方法与装置

Also Published As

Publication number Publication date
JP2020523703A (ja) 2020-08-06
JP6902122B2 (ja) 2021-07-14
SG11201912033WA (en) 2020-01-30
US20200111234A1 (en) 2020-04-09
US11380017B2 (en) 2022-07-05
CN108230395A (zh) 2018-06-29

Similar Documents

Publication Publication Date Title
WO2018228436A1 (zh) 双视角图像校准及图像处理方法、装置、存储介质和电子设备
CN107330439B (zh) 一种图像中物体姿态的确定方法、客户端及服务器
JP7159057B2 (ja) 自由視点映像生成方法及び自由視点映像生成システム
WO2019149206A1 (zh) 深度估计方法和装置、电子设备、程序和介质
US10574974B2 (en) 3-D model generation using multiple cameras
US20210012093A1 (en) Method and apparatus for generating face rotation image
CN108492364B (zh) 用于生成图像生成模型的方法和装置
CN110070564B (zh) 一种特征点匹配方法、装置、设备及存储介质
US20200184726A1 (en) Implementing three-dimensional augmented reality in smart glasses based on two-dimensional data
CN108230384B (zh) 图像深度计算方法、装置、存储介质和电子设备
Wang et al. High-fidelity view synthesis for light field imaging with extended pseudo 4DCNN
CN102289803A (zh) 图像处理设备、图像处理方法及程序
JP2016537901A (ja) ライトフィールド処理方法
WO2019041660A1 (zh) 人脸去模糊方法及装置
CN113220251B (zh) 物体显示方法、装置、电子设备及存储介质
CN108388889B (zh) 用于分析人脸图像的方法和装置
CN113379815A (zh) 基于rgb相机与激光传感器的三维重建方法、装置及服务器
CN114663686A (zh) 物体特征点匹配方法及装置、训练方法及装置
CN113793392A (zh) 一种相机参数标定方法及装置
CN111192308B (zh) 图像处理方法及装置、电子设备和计算机存储介质
CN112950759A (zh) 基于房屋全景图的三维房屋模型构建方法及装置
JP2016114445A (ja) 3次元位置算出装置およびそのプログラム、ならびに、cg合成装置
CN108335329B (zh) 应用于飞行器中的位置检测方法和装置、飞行器
CN115409949A (zh) 模型训练方法、视角图像生成方法、装置、设备及介质
Halperin et al. Clear Skies Ahead: Towards Real‐Time Automatic Sky Replacement in Video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18817652

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019569277

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.08.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18817652

Country of ref document: EP

Kind code of ref document: A1