WO2019100933A1 - Method, apparatus, and *** for three-dimensional measurement - Google Patents
- Publication number
- WO2019100933A1 (PCT/CN2018/114016; CN2018114016W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- camera
- feature point
- abscissa
- ordinate
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
Definitions
- the present invention generally relates to methods, apparatus, and systems for three-dimensional measurements, and more particularly to three-dimensional measurement methods, apparatus, and systems based on computer vision techniques.
- Three-dimensional measurement based on computer vision and three-dimensional reconstruction based on three-dimensional measurement are widely used in industry, security, transportation and entertainment.
- Industrial robots need three-dimensional spatial information to sense the real world and make decisions.
- Security monitoring adds 3D scenes to improve target recognition accuracy.
- Auto-driving, drones, etc. need to sense the position of surrounding objects in real time.
- Building restoration, cultural relics restoration, etc. require three-dimensional reconstruction of buildings and cultural relics, especially three-dimensional reconstruction based on high-density true color point clouds.
- the characters formed by three-dimensional reconstruction are widely used in the movie, animation, and game industries.
- Three-dimensional virtual characters formed based at least in part on three-dimensional reconstruction are also widely used in the VR and AR industries.
- three-dimensional reconstruction based on a binocular camera recovers the three-dimensional coordinates of an object by calculating the positional deviation between the image points that correspond to the same object point in the binocular images. Its core is to select feature points in the images and to find/screen groups of feature points on different images that may correspond to the same object point (i.e., feature point matching).
- a common feature of the above methods for screening matching feature point groups is that they ask which feature point on one image is most similar to the feature point to be matched on another image, and one problem with this approach when processing images of complex scenes is that the matching error rate is often very high. For example, in an image of a periodic structure, two image points that do not belong to the same object point, or that lie in different periods of the same structure, may well have the largest normalized cross-correlation of their neighborhoods or the smallest pixel difference, while the truly corresponding point combination may sit where the normalized cross-correlation is only the second largest or the pixel difference only the second smallest. When there is no other dimension along which to verify that a match is valid, such a matching error persists and is passed on.
- Another problem with the above existing methods of screening matching feature point groups is that the amount of calculation is very large, resulting in excessive processing time and/or computing devices that are difficult to miniaturize and costly.
- a three-dimensional measurement method based on a parallax ratio relationship, comprising: receiving a first image, a second image, and a third image from a first camera, a second camera, and a third camera, respectively,
- wherein the first camera, the second camera, and the third camera have the same focal length and mutually parallel optical axes, and the optical centers of the first camera, the second camera, and the third camera are disposed on the same plane perpendicular to the optical axis; extracting feature points in the first image, the second image, and the third image; and matching the feature points in the first image, the second image, and the third image, the matching including filtering matching feature point groups based on the following parallax ratio relationship: for the same object point, a first parallax d 1 generated in a first direction between the first image and the second image and a second parallax d 2 generated in a second direction between the second image and the third image satisfy d 1 :d 2 = D 1 :D 2 , where D 1 and D 2 are the corresponding optical-center offsets.
- a three-dimensional measuring apparatus comprising: a processor; and a memory storing program instructions, wherein the program instructions, when executed by the processor, cause the processor to perform the following operations: receiving a first image, a second image, and a third image; extracting feature points in the first image, the second image, and the third image, respectively; matching the feature points in the first image, the second image, and the third image,
- the matching including filtering the matched feature point groups based on the following coordinate relationship: the difference between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image has a predetermined proportional relationship with the difference between the abscissa of the feature point in the second image and the abscissa of the feature point in the third image, and the feature points in the first image and the third image have the same ordinate; and calculating the three-dimensional coordinates of the object points corresponding to the matched feature point groups.
- a three-dimensional measuring apparatus for use with a camera array for three-dimensional measurement.
- the camera array includes at least a first camera, a second camera, and a third camera, the first camera, the second camera, and the third camera having the same focal length and mutually parallel optical axes, and the optical centers of the first camera, the second camera, and the third camera being arranged on the same plane perpendicular to the optical axis.
- the three-dimensional measuring apparatus includes a processing unit that receives a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively, and is configured to perform the following processing: extracting feature points in the first image, the second image, and the third image; matching the feature points in the first image, the second image, and the third image, the matching including filtering matched feature point groups based on the following parallax ratio relationship: for the same object point, a first parallax d 1 generated in a first direction between the first image and the second image and a second parallax d 2 generated in a second direction between the second image and the third image satisfy d 1 :d 2 = D 1 :D 2 .
- a three-dimensional measurement system based on a parallax ratio relationship
- the system comprising: a camera array and any one of the above three-dimensional measurement devices, the camera array including at least a first camera, a second camera, and a third camera, the first camera, the second camera, and the third camera having the same focal length and mutually parallel optical axes, and the optical centers of the first camera, the second camera, and the third camera being arranged on the same plane perpendicular to the optical axis.
- FIGS. 1A, 1B and 1C show the relationship between the binocular parallax and the relative position of the camera optical centers;
- FIG. 3 is a schematic structural block diagram showing an example of a three-dimensional measuring system of the present invention.
- Figure 4 shows a schematic overall flow chart of the three-dimensional measuring method of the present invention
- FIG. 5 illustrates an example of a camera array that can be used in conjunction with the three-dimensional measurement method according to the first embodiment of the present invention
- 6A, 6B, and 6C show the parallax ratio relationship between the cameras shown in Fig. 5.
- FIG. 7 is a view schematically showing an example of an image obtained by the camera array shown in FIG. 5 and illustrating coordinate difference values of image points;
- FIG. 8 is a schematic flow chart showing a three-dimensional measuring method according to a first embodiment of the present invention.
- FIG. 9 shows an example of a process of screening a feature point group which can be used for the three-dimensional measurement method according to the first embodiment of the present invention.
- FIG. 10 is a view schematically comparing feature point matching based on similarity calculation and feature point matching based on coordinate relationship in the three-dimensional measurement method according to the first embodiment of the present invention.
- Figure 11 is a flow chart showing an example of a three-dimensional measuring method according to a first embodiment of the present invention.
- FIG. 12 shows an example of a camera array that can be used in combination with the three-dimensional measurement method according to the second embodiment of the present invention and shows its parallax ratio relationship;
- Figure 13 is a view schematically showing an example of an image obtained by the camera array shown in Figure 12 and illustrating coordinate difference values of image points;
- FIG. 14 shows an example of a process of screening a feature point group which can be used for the three-dimensional measurement method according to the second embodiment of the present invention
- FIG. 15 shows an example of a camera array that can be used in combination with the three-dimensional measurement method according to the third embodiment of the present invention and shows its parallax ratio relationship;
- Figure 16 is a view schematically showing an example of an image obtained by the camera array shown in Figure 15 and illustrating coordinate difference values of image points;
- FIG. 17 shows an example of a process of screening a feature point group that can be used in the three-dimensional measurement method according to the third embodiment of the present invention.
- Figure 18 is a flowchart showing one example of a three-dimensional measuring method according to a third embodiment of the present invention.
- FIG. 19 illustrates an example of a camera array arrangement that can be used in a three-dimensional measurement system in accordance with an embodiment of the present invention.
- O l and O r represent the optical centers (the optical centers of the camera lenses) of the left and right cameras, respectively, and I l and I r represent the image planes of the left and right cameras, respectively (hereinafter referred to as the left image and the right image, respectively).
- the image plane of a camera is determined by the position of the photosensitive surface of the image sensor contained in the camera, such as a CCD or CMOS, and is typically located at the focal length f from the optical center.
- the camera usually forms a real image that is inverted with respect to the object, i.e., the image plane is located on the opposite side of the optical center from the object. However, for convenience of illustration and analysis, the image plane is shown at the symmetrical position on the same side as the object. It should be understood that this does not change the parallax relationship discussed in this application.
- Both cameras for binocular vision have the same focal length.
- the optical centers O l and O r of the left and right cameras are separated by a distance D (also referred to as the "baseline"), and the corresponding optical axes Z l and Z r are parallel to each other.
- the optical center O l of the left camera is taken as the origin of the camera coordinate system, and the direction of the optical axes of the left and right cameras is the Z direction.
- the optical centers of the left and right cameras are located in the same plane (ie, the XY plane) perpendicular to the optical axis.
- the direction of the line connecting the optical centers O l and O r is taken as the X direction of the camera coordinate system, and the direction perpendicular to both the optical axis and the optical-center connection line is taken as the Y direction.
- the camera coordinate system may also be set in other ways, for example with the optical center O r of the right camera as the origin; a camera coordinate system set in a different manner does not affect the parallax ratio relationship discussed below.
- the image planes corresponding to the respective cameras have image plane coordinate systems set in the same manner.
- the point at which the camera optical axis intersects the image plane is the origin of the image plane coordinate system, and the directions parallel to the X axis and the Y axis of the camera coordinate system are the x axis and the y axis of the image plane coordinate system, respectively.
- the image coordinate system can also be set in other ways, for example using a corner of the photosensitive surface of the image sensor as the origin; the setting of a different image coordinate system does not affect the parallax ratio relationship discussed below, which is described in the image plane coordinate system.
- Fig. 1A shows the relationship between the binocular parallax and the optical center distance in the direction of the line connecting the camera optical centers.
- Figure 1A shows the projection of the entire imaging system on the XZ plane.
- the same object point P[X, Y, Z] in space is imaged by the left and right cameras to the image points P l and P r , respectively .
- by similar triangles, the parallax d = x l - x r satisfies d/D = f/Z, i.e., d = D·f/Z, where x l and x r are the x-axis coordinates of the image points P l and P r in the left image plane I l and the right image plane I r , respectively.
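The relation d/D = f/Z can be checked with a small numeric sketch; the baseline, focal length, and depth values below are illustrative and not taken from the application.

```python
# Illustrative sketch of the binocular relation d/D = f/Z (values assumed).

def disparity(D, f, Z):
    """Parallax d = x_l - x_r predicted by similar triangles: d = D*f/Z."""
    return D * f / Z

def depth_from_disparity(D, f, d):
    """Invert the relation to recover depth: Z = D*f/d."""
    return D * f / d

# Baseline 100 (e.g. mm), focal length 1000 (px), object depth 2000 (mm):
d = disparity(100.0, 1000.0, 2000.0)        # parallax in pixels
Z = depth_from_disparity(100.0, 1000.0, d)  # recovers the depth
```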
- Fig. 1B shows the binocular parallax in a direction perpendicular to the optical-center connection line of the cameras.
- Figure 1B schematically illustrates the projection of the entire imaging system on the YZ plane.
- in FIG. 1C, the direction of the parallax to be examined is set to the X direction, and the optical-center connection line between the left camera and the right camera does not coincide with the X direction; in other words, the direction of the parallax to be examined can be in any direction relative to the camera optical-center connection line.
- the three cameras have the same focal length f and mutually parallel optical axes (not shown), and the respective optical centers O 1 , O 2 and O 3 are located in the same plane perpendicular to the optical axis.
- for the same object point, the first parallax d 1 = D 1 ·f/Z is generated in the first direction between the images obtained by the first camera and the second camera, where D 1 is the offset of the optical center O 2 of the second camera relative to the optical center O 1 of the first camera in the first direction A; similarly, the second parallax d 2 = D 2 ·f/Z is generated in the second direction between the images obtained by the second camera and the third camera, where D 2 is the offset of the optical center O 3 of the third camera relative to the optical center O 2 of the second camera in the second direction B, so that d 1 :d 2 = D 1 :D 2 .
- the first direction A is parallel to the plane of the camera optical centers and is not perpendicular to the line connecting the optical centers of the first camera and the second camera; the second direction B is parallel to the plane of the camera optical centers and is not perpendicular to the line connecting the optical centers of the second camera and the third camera.
- although the first direction is different from the second direction in FIG. 2, the first direction may also be the same as the second direction.
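Whether or not the two directions coincide, the ratio of the two parallaxes depends only on the optical-center offsets, not on the depth of the object point. A minimal numeric sketch under the pinhole model (all values assumed):

```python
# For one object point at depth Z, each camera pair produces a parallax
# d = D*f/Z, so d1:d2 = D1:D2 holds at every depth (pinhole-model sketch).

f = 1200.0             # shared focal length in pixels (illustrative)
D1, D2 = 60.0, 120.0   # optical-center offsets O1->O2 and O2->O3 (illustrative)

for Z in (500.0, 1000.0, 4000.0):
    d1 = D1 * f / Z
    d2 = D2 * f / Z
    assert abs(d1 / d2 - D1 / D2) < 1e-12  # ratio is independent of Z
```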
- FIG. 3 shows a schematic structural block diagram of one example of a three-dimensional measurement system 10 in accordance with the present invention.
- FIG. 4 shows a schematic overall flow diagram of a three dimensional measurement method 100 of the present invention.
- as shown in FIG. 3, the three-dimensional measurement system 10 includes a camera array CA. The camera array CA includes at least a first camera C 1 , a second camera C 2 , and a third camera C 3 , which have the same focal length and mutually parallel optical axes, and whose optical centers are arranged on the same plane perpendicular to the optical axis.
- preferably, each camera has the same aperture, ISO, shutter time, image sensor, and the like; more preferably, the cameras are of exactly the same type.
- the three-dimensional measurement system 10 includes a processing unit 11 that receives images from a camera array, including first, second, and third images from a first camera C 1 , a second camera C 2 , and a third camera C 3 , and Processing is performed based on these images to achieve three-dimensional measurement.
- the camera array CA can include additional cameras, and the processing unit 11 can receive images from these additional cameras and process them.
- the three-dimensional measurement system 10 can include a control unit 12 operative to control the first camera, the second camera, and the third camera to acquire images simultaneously.
- the control unit 12 can also control camera parameters such as zoom; for example, for a more distant scene, the three cameras need to be identically adjusted to a larger focal length.
- the control unit 12 may be connected to the first camera C 1 , the second camera C 2 , and the third camera C 3 in a wired or wireless manner to realize the above control.
- the control unit 12 can be in communication with the processing unit 11 to receive information from the processing unit 11 to generate the control signals for controlling the camera to acquire images simultaneously, or to operate independently to achieve the above control.
- the three-dimensional measurement method 100 based on the parallax ratio relationship of the present invention is implemented based on, for example, the camera array CA of the three-dimensional measurement system 10.
- the three-dimensional measurement method 100 includes:
- S110 receiving a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively;
- S120 extract feature points in the first image, the second image, and the third image;
- S130 match the feature points in the first image, the second image, and the third image, the matching including filtering matching feature point groups based on the parallax ratio relationship described above;
- S140 Calculate the three-dimensional coordinates of the object points corresponding to the matching feature point group.
- considering factors such as image noise, d 1 :d 2 need not equal D 1 :D 2 exactly; it can be considered as equal in the engineering sense when d 1 :d 2 falls within (1±k)·D 1 :D 2 , where k represents an allowable relative deviation.
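One way to read this engineering-sense equality is as a relative tolerance band around D 1 :D 2 ; the reading and the helper below are our assumption, not text from the application.

```python
def ratios_match(d1, d2, D1, D2, k=0.05):
    """True if d1:d2 equals D1:D2 within relative deviation k (assumed reading)."""
    r = (d1 / d2) / (D1 / D2)
    return (1.0 - k) <= r <= (1.0 + k)

# A parallax pair 50.2:100 matches offsets 1:2 within 5%...
ok = ratios_match(50.2, 100.0, 1.0, 2.0, k=0.05)
# ...but 60:100 deviates by 20% and is rejected.
bad = ratios_match(60.0, 100.0, 1.0, 2.0, k=0.05)
```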
- the processing unit 11 of the three-dimensional measurement system 10 is configured to perform the above-described processes s110 to s140 in the three-dimensional measurement method 100.
- Processing unit 11 may be implemented by a processor and a memory storing program instructions that, when executed by the processor, cause the processor to perform the operations of processes s110-s140 described above.
- the processing unit 11 can constitute a three-dimensional measuring device 20 according to the invention.
- the image may be received directly from the first camera, the second camera, and the third camera, or may be received via other units or devices.
- for example, a grayscale gradient can be used to find points with large changes in gray level as feature points,
- the SIFT algorithm can be used to find SIFT feature points, and
- a corner point detection algorithm such as the Harris algorithm can be used to find corner points of an image as feature points. It should be understood that the present invention is not limited to a specific method for extracting feature points.
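As a concrete illustration of the simplest of these options, the grayscale-gradient idea, here is a pure-Python sketch; the threshold and the test image are invented, and a real system would typically use SIFT or Harris as noted above.

```python
# Minimal sketch of gradient-based feature extraction: mark pixels whose
# grayscale change toward the right/lower neighbor is large (assumed scheme).

def gradient_feature_points(img, thresh):
    """img: 2D list of gray values; returns [(row, col), ...] feature points."""
    pts = []
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            gx = img[r][c + 1] - img[r][c]   # horizontal gray change
            gy = img[r + 1][c] - img[r][c]   # vertical gray change
            if gx * gx + gy * gy > thresh * thresh:
                pts.append((r, c))
    return pts

# A flat image with one bright pixel yields features around that pixel.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
pts = gradient_feature_points(img, thresh=50)
```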
- the processing s130 is configured to match the feature points in each image, and includes processing for filtering the matching feature point groups based on the parallax ratio relationship described above.
- the process of screening matching feature point groups based on the parallax ratio relationship in process s130 will be described in more detail below in conjunction with various embodiments.
- Processing s130 may further include the process of filtering the matched set of feature points in other manners.
- the process s130 may implement feature point matching only by processing based on a parallax ratio relationship.
- feature point matching may be implemented in processing s130 in conjunction with processing based on disparity ratio relationships and processing based on other matching/screening methods.
- Other ways of filtering matching feature point groups include, for example, applying a similarity calculation to a pixel or neighborhood pixel group to filter matching feature point groups.
- the similarity calculation includes, for example, the sum of squared pixel grayscale differences, the zero-mean sum of squared pixel grayscale differences, the sum of absolute pixel grayscale differences, the zero-mean sum of absolute pixel grayscale differences, and the normalized cross-correlation of neighborhood pixel groups.
- the process s130 further includes applying a similarity calculation to the pixel or neighborhood pixel group to further filter the matched feature point group for the matched feature point group selected based on the parallax ratio relationship.
- the similarity calculation may be applied only to two or more feature point groups that contain the same feature point, i.e., only when the matching result is not unique.
- alternatively, the matching feature point groups are first screened by, for example, applying a similarity calculation to the feature points or their neighborhood pixel groups, and it is then judged whether each feature point group obtained by the similarity screening satisfies the parallax ratio relationship, thereby further screening the matching feature point groups.
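The normalized cross-correlation used as such a secondary screen can be sketched as follows (pure Python, with neighborhoods flattened to lists of gray values; the names are illustrative):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length flattened neighborhoods."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

patch = [10, 20, 30, 40]
same = ncc(patch, patch)                         # identical patches -> 1.0
scaled = ncc(patch, [2 * x + 5 for x in patch])  # invariant to gain/offset
```

Because NCC subtracts the mean and normalizes by the variance, it is insensitive to uniform brightness and contrast changes between the two images, which is why it is a common choice among the similarity measures listed above.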
- Each of the matched feature point groups obtained by processing s130 includes one feature point from the first image, the second image, and the third image, respectively.
- the depth (Z coordinate) of the corresponding object point may be calculated based on any two feature points in a matched feature point group, and then the X, Y coordinates of the object point are calculated according to the similar triangle principle.
- a method of calculating a depth value based on two matched feature points and calculating an X, Y coordinate based on the depth value is known, and will not be described herein.
- a depth value may be calculated for each pair of feature points in a matched feature point group, and the average taken as the depth value of the corresponding object point.
- ideally, the calculated depth values should be equal, but in reality, considering the effect of image noise, there will be slight differences among the three values; taking the average therefore reduces the influence of noise and improves the accuracy of the depth value.
- alternatively, two pairs of feature points in the feature point group may be selected to calculate depth values and the average taken as the depth value of the object point.
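A numeric sketch of this averaging; the camera spacing, focal length, and coordinates are invented for illustration, and with three collinear cameras the offset of the 1-3 pair is taken as D 1 + D 2 .

```python
# Each camera pair gives a depth estimate Z = D*f/d; averaging the three
# estimates of one matched group reduces noise (all values illustrative).

f = 1000.0
D1, D2 = 100.0, 100.0             # optical-center offsets (assumed layout)
u1, u2, u3 = 460.0, 410.2, 359.8  # noisy abscissas of matched feature points

pairs = [
    (D1, u1 - u2),        # cameras 1-2
    (D2, u2 - u3),        # cameras 2-3
    (D1 + D2, u1 - u3),   # cameras 1-3 (offset is the sum)
]
depths = [D * f / d for D, d in pairs]
Z = sum(depths) / len(depths)     # averaged depth of the object point
```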
- the process s140 may further include calculating a color value of the object point corresponding to each matched feature point group, such as [R, G, B]. This color value can form voxel information together with the coordinates [X, Y, Z] of the object point, such as [X, Y, Z, R, G, B].
- voxel information of all object points can be combined to form a true color 3D model of the object or scene.
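A toy sketch of assembling such voxel information; averaging the RGB values of the three matched image points is our assumption of one reasonable way to pick the color, not a rule stated in the application.

```python
def object_color(c1, c2, c3):
    """Average the RGB values of the three matched image points (assumed rule)."""
    return tuple(round((a + b + c) / 3) for a, b, c in zip(c1, c2, c3))

# Color of the same object point as seen by the three cameras:
rgb = object_color((200, 100, 50), (204, 98, 52), (196, 102, 48))
voxel = [0.1, 0.2, 2.0] + list(rgb)   # [X, Y, Z, R, G, B]
```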
- the three-dimensional measurement method 100 and the three-dimensional measurement system 10 will be described in more detail below in connection with various embodiments, particularly in which a matching feature point group is screened based on a parallax ratio relationship.
- 5 to 11 illustrate a three-dimensional measurement system and a three-dimensional measurement method according to a first embodiment of the present invention.
- FIG. 5 shows a camera array CA in a three-dimensional measuring system according to a first embodiment of the present invention, wherein the first camera C 1 , the second camera C 2 , and the third camera C 3 have the same focal length and optical axes parallel to each other And the optical centers of the three cameras are arranged on the same straight line (X direction) perpendicular to the optical axis.
- in the arrangement (a), D 1 = D 2 ; in the arrangement (b), D 1 ≠ D 2 .
- FIG. 6A, 6B, and 6C show the parallax ratio relationship between the three cameras shown in Fig. 5.
- Each of the image planes has an image plane coordinate system set in the same manner as described above.
- the x-axis direction of each image plane corresponds to the direction of the line where the camera's optical center is located, and the y-axis is perpendicular to the x-axis.
- where k represents the allowed relative deviation and may be, for example, 0.05 or 0.01.
- FIG. 7 schematically shows an example of the first image IM 1 , the second image IM 2 , and the third image IM 3 obtained by the camera array shown in FIG. 5, where the abscissa axis (u axis) of each image corresponds to the direction of the line on which the camera optical centers are located. Ideally, the relationship shown in FIG. 7 is obtained by superimposing the first image IM 1 , the second image IM 2 , and the third image IM 3 .
- due to manufacturing errors, mounting errors, and changes in camera internal and/or external parameters arising during use, the first image, the second image, and the third image obtained directly from the first camera, the second camera, and the third camera usually deviate from the above ideal situation. The images can be brought closer to the ideal situation by physically calibrating or adjusting the cameras and/or by correcting the images with a calculation program.
- the three-dimensional measurement method may include correction processing of the first image, the second image, and the third image, the correction processing being such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image all correspond to the direction of the straight line on which the optical centers lie, the directions of the abscissa and the ordinate being perpendicular to each other.
- the correction processing is performed before the process s130 shown in Fig. 4, that is, before the feature points are matched, and preferably after the images are received and before the feature points are extracted, that is, between the processes s110 and s120 shown in Fig. 4.
- the three-dimensional measurement method according to an embodiment of the present invention is not limited to the case including the above-described correction processing; for example, in some applications, correction may be achieved by physical adjustment of the camera array without image correction.
- a correction unit 13 may further be included in the three-dimensional measurement system 10 according to the present invention; the correction unit 13 receives images from the camera array CA, generates a correction matrix based at least in part on the images, and provides the correction matrix to the processing unit 11.
- the correction matrix, when applied to the first image, the second image, and the third image by the processing unit 11, implements the above-described correction processing of the three-dimensional measurement method.
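Applying a 3x3 correction matrix to image coordinates amounts to a homography warp; a minimal pure-Python sketch follows. The matrix values are invented, and a real correction matrix would come from calibration.

```python
# Map an image point through a 3x3 correction matrix (homography sketch).

def apply_correction(H, u, v):
    """Map pixel [u, v] through homography H (3x3 nested list)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w   # homogeneous normalization

# A pure translation correcting a 3-pixel vertical misalignment:
H = [[1, 0, 0], [0, 1, -3], [0, 0, 1]]
u, v = apply_correction(H, 100, 53)
```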
- the three-dimensional measuring device 20 can include the correcting unit 13.
- Processing unit 11 and correction unit 13 may be implemented based on the same processor and memory or different processors and memories.
- the processing of matching the feature point groups is implemented by filtering the matched feature point groups based on the following coordinate relationship: the difference s 1 = u 1 - u 2 between the abscissa of the feature point P 1 [u 1 , v 1 ] in the first image and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image, and the difference s 2 = u 2 - u 3 between the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image and the abscissa of the feature point P 3 [u 3 , v 3 ] in the third image, satisfy s 1 :s 2 = D 1 :D 2 .
- the other processing of the three-dimensional measurement method 200 is the same as the corresponding processing in the three-dimensional measurement method 100.
- process 300 includes:
- S310 select a first feature point and a second feature point from two of the first image, the second image, and the third image, the ordinates of the first feature point and the second feature point being within a predetermined range relative to a target ordinate;
- S320 calculate the expected abscissa of the third feature point based on the coordinate relationship described above;
- S330 search for the third feature point in the third image based on the expected position formed by the expected abscissa of the third feature point and the target ordinate.
- Process 300 can be used to scan an image line by line, for example in a certain order of ordinates (increasing or decreasing), to filter matching feature point groups.
- the first feature point is selected from the second image in the above description, but this is merely exemplary; the present invention is not limited as to which image the first feature point is selected from.
- the numbers I, J, and K of feature points meeting the ordinate requirement in each image may first be compared; the first feature point is selected from the image with the fewest such feature points, the second feature point from the image with the next fewest, and the third feature point is searched for in the image with the most feature points.
- the feature points can be searched for and selected within the predetermined ordinate range from v t - δ to v t + δ, where δ is an integer greater than or equal to 0 that can be determined according to, for example, the installation and use conditions of the camera array or the quality of the images, preferably 0 ≤ δ ≤ 2.
- the expected position [u e , v t ] of the feature point P 1 can thus be obtained, and based on this expected position the first image IM 1 is searched for the presence or absence of the feature point P 1 .
- an abscissa tolerance Δu and an ordinate tolerance Δv can be set, and the feature point P 1 is searched for in the first image IM 1 within the range [u e - Δu to u e + Δu, v t - Δv to v t + Δv] (see, for example, the broken-line range in the image IM 1 shown in FIG. 7).
- only one of the abscissa tolerance and the ordinate tolerance may be set, and details are not described herein again.
- if the feature point P 1 is found, the feature points P 1 , P 2 , P 3 are taken as one matching feature point group. If the expected feature point P 1 is not found, the selected first feature point (P 2 ) and second feature point (P 3 ) cannot correspond to the same object point; this round of screening ends, the next second feature point can be selected, and the above processes s320 and s330 are repeated. After traversing all of the second feature points, a new first feature point is selected and the above process is similarly repeated.
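The screening loop above can be sketched for the first embodiment with collinear, equally spaced cameras, where u 1 - u 2 = u 2 - u 3 and the expected abscissa is therefore 2·u 2 - u 3 ; all coordinates and the tolerance are invented for illustration.

```python
# Sketch of process 300 on one scan line: pick candidates P2, P3, predict
# where P1 must lie, and keep the triple only if a feature point of the
# first image exists near the expected position (D1 = D2 assumed).

def screen_row(f1, f2, f3, v_t, du=1):
    """f1..f3: feature points [(u, v), ...] per image; returns matched triples."""
    groups = []
    row1 = [p for p in f1 if p[1] == v_t]
    for (u2, v2) in (p for p in f2 if p[1] == v_t):
        for (u3, v3) in (p for p in f3 if p[1] == v_t):
            u_e = 2 * u2 - u3           # expected abscissa: u1-u2 == u2-u3
            for (u1, v1) in row1:
                if abs(u1 - u_e) <= du:  # within the abscissa tolerance
                    groups.append(((u1, v1), (u2, v2), (u3, v3)))
    return groups

# One true triple (u1=120, u2=110, u3=100) plus a decoy point in image 1:
g = screen_row([(120, 7), (300, 7)], [(110, 7)], [(100, 7)], v_t=7)
```

Note that the inner loops use only coordinate additions, subtractions, and comparisons, which is the source of the computational saving discussed below.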
- FIG. 9 shows only one example of a process of filtering matching feature point groups based on the coordinate relationship.
- alternatively, when candidate matching feature point groups have already been obtained by other means (for example, by similarity calculation on the pixel or the neighborhood pixel group), the coordinate relationship can be used to further screen them.
- in this way, the processing based on the parallax ratio relationship generated among the three cameras is simplified into processing based on the coordinate relationship of the corresponding feature points in the images, by which feature point matching/match screening is performed.
- the former mainly involves addition and subtraction of coordinates and a small number of multiplication operations (in the case of D 1 : D 2 = 1 : 1, only addition and subtraction operations are required), while the latter usually requires dense multiplication operations, such as convolutions of matrices; in comparison, the former greatly reduces the amount of computation in the matching process.
- a significant reduction in the amount of computation is of great significance for three-dimensional measurement and reconstruction based on high-definition images and for real-time three-dimensional measurement and reconstruction, and offers the possibility of implementing the latter.
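To make the contrast concrete, the screening of one candidate feature point triple with the disparity-ratio relation can be sketched as below. This is a minimal illustration for three collinear cameras with baselines D1 and D2; the function name and the tolerance rule (scaled by D1 + D2) are assumptions for illustration, not from the patent text.

```python
# Hedged sketch: screening one candidate triple (p1, p2, p3) with the
# disparity-ratio relation of three collinear cameras (baselines D1, D2).
# The tolerance rule (tol scaled by D1 + D2) is an illustrative assumption.
def satisfies_parallax_ratio(p1, p2, p3, D1, D2, tol=1.0):
    """p1, p2, p3: (u, v) coordinates of the candidate point in the
    rectified first, second, and third images."""
    s1 = p1[0] - p2[0]  # disparity between the first and second images
    s2 = p2[0] - p3[0]  # disparity between the second and third images
    # s1 : s2 should equal D1 : D2; cross-multiplying avoids division.
    return abs(s1 * D2 - s2 * D1) <= tol * (D1 + D2)

# With D1 : D2 = 1 : 1 the check reduces to additions and subtractions only.
print(satisfies_parallax_ratio((120, 50), (100, 50), (80, 50), 1.0, 1.0))
# -> True (s1 = s2 = 20)
```

With equal baselines the cross-multiplication degenerates to comparing s1 and s2 directly, which matches the observation above that only addition and subtraction are needed.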
- FIG. 10(a) shows a certain feature point P l1 in one image obtained by, for example, a binocular camera, and three candidate feature points P r1 , P r2 , P r3 in another image that are very similar to it in self attributes and neighborhood attributes. According to the uniqueness requirement of feature point matching in 3D reconstruction, such a non-unique matching result can only be abandoned.
- FIG. 10 shows that for images from the first camera, the second camera, and the third camera arranged as shown in FIG.
- the feature point matching based on the coordinate relationship can help eliminate erroneous results obtained by matching based on similarity calculation and improve the correctness rate of matching; at the same time, since a unique matching result can be obtained with greater probability, matching failures caused by non-unique matching results are avoided, so the feature point matching based on the parallax ratio relationship / the above coordinate relationship also contributes to improving the matching rate, thereby contributing to the realization of a high-density feature point cloud.
- FIG. 11 is a flow chart showing an example of a three-dimensional measurement method, a three-dimensional measurement method 400, according to a first embodiment of the present invention.
- the three-dimensional measurement method 400 includes:
- S410 receiving a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively;
- S420 performing correction processing on the first image, the second image, and the third image, as discussed above, such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image all correspond to the direction of the straight line on which the optical centers are located;
- S430 extracting feature points in the first image, the second image, and the third image;
- S441 screening matching feature point groups based on the parallax ratio relationship;
- S442 further screening the matching feature point groups obtained by process s441, based on similarity calculation of pixels or neighborhood pixel groups;
- S450 calculating the three-dimensional coordinates of the object points corresponding to the matching feature point groups.
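As a hedged illustration of step S450, the sketch below triangulates an object point from a matched pair of rectified image points using the standard pinhole/disparity relations; the focal length f (in pixels), baseline D1, and principal point (cx, cy) are assumed parameters, not values from the patent.

```python
# Hedged sketch of step S450: triangulating an object point from rectified
# image coordinates. f (pixels), D1, cx, cy are assumed parameters.
def triangulate(p1, p2, f, D1, cx, cy):
    u1, v1 = p1
    u2, _ = p2
    d = u1 - u2              # horizontal disparity between images 1 and 2
    Z = f * D1 / d           # depth along the optical axis
    X = Z * (u1 - cx) / f    # lateral offsets recovered from image coords
    Y = Z * (v1 - cy) / f
    return X, Y, Z

X, Y, Z = triangulate((420, 300), (400, 300), f=800.0, D1=0.1, cx=320, cy=240)
# d = 20 px -> Z = 800 * 0.1 / 20 = 4.0, X = 0.5, Y = 0.3
```

With three cameras, the same computation can be carried out per camera pair and the results combined, as described elsewhere in this document.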
- the process s441 can be implemented as, for example, the process 300 shown in FIG. 9, but is not limited thereto.
- Process s420 may be a correction process implemented using any correction method, including a correction process implemented by applying a certain correction matrix to the image.
- the parallax ratio relationship can also be applied to the correction processing; for example, the correction matrix for the correction processing can be generated based on the parallax ratio relationship.
- the process of generating a matrix for the correction processing based on the parallax ratio relationship may include: extracting feature points in the first image, the second image, and the third image respectively, for example extracting a sparse feature point lattice by using the SIFT algorithm; matching feature points in the first image, the second image, and the third image to obtain a plurality of matching feature point groups, for example by using the RANSAC algorithm; using the coordinates of the feature points of each image in the matching feature point groups to establish an overdetermined system of equations, according to the requirement that, after the correction matrices are applied to the respective images, the feature points in each matching feature point group satisfy the parallax ratio relationship; and solving the overdetermined system of equations by, for example, a least squares method to obtain the correction matrices.
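The least-squares step above can be sketched as follows; the toy system here merely stands in for the overdetermined equations derived from the parallax ratio constraints, and all numbers are illustrative assumptions.

```python
# Hedged sketch: least-squares solution of a small overdetermined system
# A x = b via the normal equations. The rows are a toy stand-in for the
# constraints derived from the parallax ratio relationship.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]  # 4 equations, 2 unknowns
b = [1.0, 2.0, 3.1, -0.9]

# Normal equations (A^T A) x = A^T b, written out for two unknowns.
a11 = sum(r[0] * r[0] for r in A)
a12 = sum(r[0] * r[1] for r in A)
a22 = sum(r[1] * r[1] for r in A)
b1 = sum(r[0] * bi for r, bi in zip(A, b))
b2 = sum(r[1] * bi for r, bi in zip(A, b))
det = a11 * a22 - a12 * a12
x = ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
print(x)  # no exact solution exists; x minimizes the squared residual
```

In practice a linear-algebra routine (e.g. an SVD- or QR-based least-squares solver) would be used instead of hand-written normal equations, especially as the number of matching groups grows.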
- the matching feature point groups are first screened by process s441, so that the number of feature points entering the subsequent similarity-based feature point matching process s442 is greatly reduced, and the amount of computation in the feature point matching process can be significantly reduced.
- the combination of processes s441 and s442 also helps to improve the correctness rate and matching rate of feature point matching, so that a higher-density feature point cloud can be obtained.
- FIG. 12 shows a camera array CA in a three-dimensional measuring system according to a second embodiment of the present invention, in which the first camera C 1 , the second camera C 2 , and the third camera C 3 have the same focal length and mutually parallel optical axes (Z direction); the optical centers O 1 , O 2 of the first camera C 1 and the second camera C 2 are aligned in the X direction, and the optical centers O 2 , O 3 of the second camera C 2 and the third camera C 3 are aligned in the Y direction.
- the shift of the optical center of the second camera with respect to the optical center of the first camera in the X direction is D 1
- the shift of the optical center of the third camera with respect to the optical center of the second camera in the Y direction is D 2 .
- FIG. 13 schematically shows an example of the first image IM 1 , the second image IM 2 , and the third image IM 3 obtained by the camera array shown in FIG. 12, in which the abscissa axis (u axis) of the images corresponds to the direction of the line on which the optical centers of the first camera and the second camera are located (x direction), and the ordinate axis (v axis) corresponds to the direction of the line on which the optical centers of the second camera and the third camera are located (y direction).
- the three-dimensional measurement method may include correction processing of the first image, the second image, and the third image, such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, the abscissa directions of the first image, the second image, and the third image correspond to the direction of the line connecting the optical centers of the first camera and the second camera, and the ordinate directions correspond to the direction of the line connecting the optical centers of the second camera and the third camera.
- the process of screening the matching feature point group based on the parallax ratio relationship in the process s130 shown in FIG. 4 is implemented to include:
- the matching feature point groups are screened based on the following coordinate relationship: the difference s 3 = u 1 - u 2 between the abscissa of the feature point P 1 [u 1 , v 1 ] in the first image and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image, and the difference s 4 = v 2 - v 3 between the ordinate of the feature point P 2 [u 2 , v 2 ] in the second image and the ordinate of the feature point P 3 [u 3 , v 3 ] in the third image, satisfy s 3 : s 4 = D 1 : D 2 .
- FIG. 14 shows an example of a process of screening matching feature point groups based on the above-described coordinate relationship, process 500.
- process 500 includes:
- S510 selecting a first feature point and a second feature point in the first image and the second image respectively, the ordinates of the first feature point and the second feature point being within a predetermined range with respect to a target ordinate;
- S540 Calculate an expected ordinate of the third feature point in the third image that matches the first feature point and the second feature point, such that the difference between the ordinate of the second feature point and the expected ordinate of the third feature point equals the second difference s 4 obtained from the above coordinate relationship;
- S550 Search for the third feature point in the third image based on the expected position formed by the expected ordinate of the third feature point and the abscissa of the second feature point.
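Steps S540 and S550 can be sketched as follows for the L-shaped array of FIG. 12, assuming the relation s 3 : s 4 = D 1 : D 2 described above; the function and variable names are illustrative assumptions, not from the patent text.

```python
# Hedged sketch of steps S540/S550 for the L-shaped array of FIG. 12:
# predicting where the matching point should lie in the third image.
# Function and parameter names are illustrative assumptions.
def expected_position_in_third_image(p1, p2, D1, D2):
    """p1, p2: (u, v) of the candidate points in the first and second images."""
    s3 = p1[0] - p2[0]       # horizontal disparity between images 1 and 2
    s4 = s3 * D2 / D1        # required vertical disparity (s3 : s4 = D1 : D2)
    v3_expected = p2[1] - s4
    # The third point shares the abscissa of p2; the search in the third
    # image is confined to this position, optionally widened by a tolerance.
    return (p2[0], v3_expected)

print(expected_position_in_third_image((120, 50), (100, 50), 1.0, 1.0))
# -> (100, 30.0)
```

The sign convention for s 4 (subtracting from the ordinate of the second point) is an assumption chosen for the illustration; the actual direction depends on how the image axes are oriented relative to the camera offsets.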
- the operation of selecting feature points within the predetermined range with respect to the target ordinate in process s510 may refer to the operations described above for process s310 in connection with process 300.
- the process s550 of the process 500 may also set a tolerance range with respect to the expected position, and details are not described herein again.
- the first camera and the third camera each actually have a paired positional relationship with respect to the second camera; in process s510, the first feature point and the second feature point are selected in the first image and the second image, with the ordinates of the first feature point and the second feature point within a predetermined range with respect to a target ordinate, i.e., the first and second feature points are selected first in the two horizontally aligned images.
- alternatively, based on a feature point in the second image, feature points in the first image having the same ordinate as that feature point and feature points in the third image having the same abscissa as that feature point may be searched for, and the numbers of feature points found in the first image and in the third image may determine the next search order. For example, when the number of feature points having the same ordinate in the first image is greater than the number of feature points having the same abscissa in the third image, feature points may be selected from the third image, and the expected positions of the matching feature points in the first image calculated and searched. Processing s510 is intended to cover this situation.
- the process 500 can be used to traverse the feature points in the second image, for example row by row or column by column, to search the first image and the third image for matching feature points, thereby screening out the matching feature point groups.
- compared with feature point matching based on similarity calculation of pixels or neighborhood pixel groups, the three-dimensional measurement method according to the second embodiment of the present invention greatly reduces the amount of computation in the matching process, which helps to improve the spatial precision and real-time performance of the three-dimensional measurement.
- the three-dimensional measurement method according to the present embodiment may also be combined with a feature point matching method based on similarity calculation of pixels or neighborhood pixel groups, in which case the feature point matching (screening of matching feature point groups) based on the parallax ratio relationship can effectively eliminate mismatching results of the similarity-based feature point matching and improve the correctness rate of matching; it also helps to avoid matching failures due to non-unique matching results, thereby helping to improve the matching rate and obtain a high-density feature point cloud.
- 15 to 18 illustrate a three-dimensional measurement system and a three-dimensional measurement method according to a third embodiment of the present invention.
- Figure 15 shows a camera array CA in a three-dimensional measuring system according to a third embodiment of the present invention, in which the first camera C 1 , the second camera C 2 , and the third camera C 3 have the same focal length and mutually parallel optical axes (Z direction); the first camera C 1 , the second camera C 2 , and the third camera C 3 are arranged in a triangle, with their optical centers O 1 , O 2 , O 3 lying in the same plane perpendicular to the optical axes, and the optical centers O 1 , O 2 of the first camera C 1 and the second camera C 2 are aligned in the X direction.
- the shift of the optical center of the second camera with respect to the optical center of the first camera in the X direction is D 1
- the shift of the optical center of the third camera with respect to the optical center of the second camera in the X direction is D 2 .
- FIG. 16 schematically shows an example of the first image IM 1 , the second image IM 2 , and the third image IM 3 obtained by the camera array shown in FIG. 15, in which the abscissa axis (u axis) of the images corresponds to the direction of the line on which the optical centers of the first camera and the third camera are located (x direction).
- the same object point P[X, Y, Z] forms image points P 1 [u 1 , v 1 ], P 2 [u 2 , v 2 ], and P 3 [u 3 , v 3 ] in the images IM 1 , IM 2 , and IM 3 , respectively.
- the three-dimensional measurement method may include correction processing of the first image, the second image, and the third image, such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image correspond to the direction of the line connecting the optical centers of the first camera and the third camera.
- the process of screening the matching feature point group based on the parallax ratio relationship in the process s130 shown in FIG. 4 is implemented to include:
- the matching feature point groups are screened based on the following coordinate relationship: the difference s 5 = u 1 - u 2 between the abscissa of the feature point P 1 [u 1 , v 1 ] in the first image and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image, and the difference s 6 = u 2 - u 3 between the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image and the abscissa of the feature point P 3 [u 3 , v 3 ] in the third image, satisfy s 5 : s 6 = D 1 : D 2 ; and the feature points in the first image and the third image have the same ordinate.
- FIG. 17 shows an example of a process of screening a matching feature point group based on the above coordinate relationship, process 600.
- process 600 includes:
- S610 respectively selecting a first feature point and a second feature point in the first image and the third image, wherein the ordinate of the second feature point is within a predetermined range with respect to the ordinate of the first feature point;
- S620 calculating, based on the above coordinate relationship, an expected abscissa in the second image of the third feature point that matches the first feature point and the second feature point;
- S630 Search for the third feature point in the second image based on the expected abscissa.
- the operation of selecting feature points within the predetermined range in process s610 may refer to the operations described above for process s310 in connection with process 300. Further, similarly to process 300, a tolerance range may be set with respect to the expected abscissa in process s630 of process 600 (see the dashed-line range of the feature point P 2 in FIG. 16), which is not repeated here.
- compared with the process 300 shown in FIG. 9, the ordinate range of the third feature point in the second image is not constrained in process 600, so when the matching feature point is searched for in the second image based on the expected abscissa, the resulting matching result is more likely to be non-unique than in the first embodiment and the second embodiment.
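Under the relation s 5 : s 6 = D 1 : D 2 with s 5 = u 1 - u 2 and s 6 = u 2 - u 3 , the expected abscissa used in process s630 can be sketched as below; the function and parameter names are illustrative assumptions.

```python
# Hedged sketch for process 600 (triangular array of FIG. 15): the expected
# abscissa in the second image, derived from (u1 - u2) : (u2 - u3) = D1 : D2.
# Names are illustrative assumptions.
def expected_abscissa_in_second_image(u1, u3, D1, D2):
    # (u1 - u2) * D2 == (u2 - u3) * D1  =>  u2 = (D2*u1 + D1*u3) / (D1 + D2)
    return (D2 * u1 + D1 * u3) / (D1 + D2)

print(expected_abscissa_in_second_image(120.0, 80.0, 1.0, 1.0))
# -> 100.0; the ordinate is not constrained, so the search runs along
#    this vertical line in the second image.
```

With equal offsets (D 1 = D 2 ) the expected abscissa is simply the midpoint of u 1 and u 3 , which is consistent with the symmetry remarks elsewhere in this document.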
- FIG. 18 shows an example of a three-dimensional measurement method, a three-dimensional measurement method 700, according to a third embodiment of the present invention.
- the three-dimensional measurement method 700 includes:
- S710 receiving a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively;
- S720 performing correction processing on the first image, the second image, and the third image; as discussed above, the correction processing is such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image correspond to the direction of the line connecting the optical centers of the first camera and the third camera;
- S730 extracting feature points in the first image, the second image, and the third image;
- S741 screening matching feature point groups, for example based on similarity calculation of pixels or neighborhood pixel groups;
- S742 Filter matching feature point groups based on the following coordinate relationship: the difference s 5 = u 1 - u 2 between the abscissa of the feature point P 1 [u 1 , v 1 ] in the first image and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image, and the difference s 6 = u 2 - u 3 between the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image and the abscissa of the feature point P 3 [u 3 , v 3 ] in the third image, satisfy s 5 : s 6 = D 1 : D 2 ;
- S750 Calculate the three-dimensional coordinates of the object points corresponding to the matching feature point group.
- the process s720 may be a correction process implemented by using any correction method, including a correction process implemented by applying a certain correction matrix to the image.
- the three-dimensional measurement method 700 is not limited to using similarity calculation based on pixels or neighborhood pixel groups in process s741; that process may be replaced with other existing or later-emerging processes for screening matching feature point groups / feature point matching.
- the process s742 can be implemented as, for example, the process 600 shown in FIG. 17, but is not limited thereto.
- the three-dimensional measurement method 700 according to the third embodiment of the present invention can help improve the correctness rate and matching rate of feature point matching by employing process s742, and contributes to obtaining a higher-density feature point cloud.
- Figure 19 shows further arrangements of camera arrays that can be used in a three-dimensional measurement system in accordance with the present invention.
- the camera array can include three cameras arranged in an equilateral triangle, and can be further expanded by arranging more than three cameras forming a plurality of equilateral triangles, such as the honeycomb arrangement shown in the lower right corner of FIG. 19.
- a camera array comprising three cameras arranged in a right triangle may be further expanded into a variety of other forms, such as rectangular (including square), T-shaped, cross-shaped, diagonal, and expanded forms in units of these shapes.
- the three-dimensional measurement method according to the present invention can be implemented based on images from three cameras or more than three cameras in a camera array.
- the three-dimensional measurement system and the three-dimensional measurement method according to the first, second, and third embodiments of the present invention are respectively described above; those skilled in the art should understand that these embodiments or the features therein may be combined to form different technical solutions.
- for example, when the camera array includes first, second, third, and fourth cameras, the three-dimensional measurement method according to the present invention may perform the feature point matching processing based on the parallax ratio relationship proposed by the present invention on a set of images from the first, second, and third cameras and on a set of images from the first, third, and fourth cameras respectively, and combine the matching results of the two sets of images to determine the finally matched feature point groups in the images from the first and third cameras, so as to calculate the spatial positions of the corresponding object points.
- in addition to the camera array CA, the three-dimensional measurement system 10 may further include a projection unit 14.
- the projection unit 14 is for projecting a projection pattern onto a shooting area of the camera array CA, which can be captured by a camera in the camera array CA.
- the projection pattern can add more feature points in the shooting area, and in some applications, the feature points can be distributed more evenly or compensate for the lack of feature points in some areas.
- the projected pattern can include dots, lines, or a combination thereof.
- the dots may be enlarged to form larger spots or spots having a specific shape, and the lines may be enlarged to form wider stripes or stripe patterns having other shape characteristics.
- preferably, the projection pattern includes lines whose extending direction is not parallel to the direction of the line connecting the optical centers of at least two of the first camera, the second camera, and the third camera, which helps provide more feature points usable for the matching processing based on the parallax ratio relationship in the three-dimensional measurement method according to the present invention.
- the projected pattern can also be encoded by features such as color, intensity, shape, distribution, and the like.
- the feature points with the same encoding in the images obtained by the cameras are necessarily the matching points, which adds a matching dimension to improve the matching rate and matching accuracy.
- the projection unit 14 may be configured to include a light source and an optical element for forming a projection pattern based on illumination light from the light source, such as a diffractive optical element or grating or the like.
- the illumination light emitted by the light source includes light with wavelengths in the operating wavelength range of the first camera, the second camera, and the third camera; it may be monochromatic or multi-color light, and may include visible light and/or non-visible light, such as infrared light.
- the projection unit 14 can be configured to be able to adjust the projection direction to selectively project a projection pattern to different regions depending on different shooting scenes.
- the projection unit 14 may be configured to enable sequential single pattern projection or sequential multi-pattern projection, or to project different patterns according to different shooting scenes.
- the three-dimensional measurement system 10 can also include a sensor 15 for detecting at least a portion of the pattern features projected by the projection unit 14 to obtain additional information that can be used for three-dimensional measurements.
- for example, the camera array CA operates at visible and infrared wavelengths, the projection unit 14 projects the projection pattern at infrared wavelengths, and the sensor 15 is an infrared sensor or an infrared camera; in this case, the information acquired by the camera array CA includes both the image information formed by visible light and the projected pattern, which can provide more feature points for three-dimensional measurement based on binocular vision, while the projected pattern obtained by the sensor 15 can be used for measurement based on other three-dimensional measurement techniques, such as measurement based on structured light.
- the information obtained by the sensor 15 can be transmitted, for example, to the correcting unit 13, which can use the three-dimensional measurement results obtained based on the information from the sensor 15 and other three-dimensional measurement techniques for calibration and/or correction of the camera array or the images it obtains.
- the three-dimensional measurement system 10 can be implemented as an integrated system or as a distributed system.
- the camera array CA in the three-dimensional measurement system 10 can be mounted on one device, and the processing unit 11 can be implemented based on an internet server to be physically separate from the camera array CA.
- the projection unit 14 and the sensor 15 may be mounted together with the camera array CA or may be provided independently.
- the control unit 12 and the correcting unit 13 may be implemented together with the processing unit 11 by the same processor and associated memory or the like (which may be formed as part of the three-dimensional measuring device 20 represented by the dashed box in FIG. 3), or may be implemented separately.
- the correction unit 13 can be implemented by a processor integrated with the camera array CA and an associated memory or the like.
- the three-dimensional measuring system according to the present invention is implemented as a three-dimensional measuring device based on a mobile phone and an external camera module.
- the camera module consists of three cameras of the same model.
- the centers of the three cameras are in a straight line, and the distances between the centers of adjacent cameras are equal.
- the optical axes of the cameras are parallel to each other and face the same, all perpendicular to the line where the camera center is located.
- the camera module is connected to the phone via WiFi and/or data cable.
- the phone can control the camera module's three cameras to shoot simultaneously (photos and/or videos) at equal magnification.
- Photos and/or videos captured by the camera module are transmitted to the phone via WiFi and/or data lines.
- the mobile phone corrects the image frames in the photo and/or video by the correction application.
- a checkerboard placed in front of the camera module can be utilized during the calibration process, the checkerboard grid size being known. Corrections based on an auxiliary correction tool such as a checkerboard are known in the art and will not be described again.
- the method for correction in this application example is also not limited to this particular method.
- the camera module is used to capture the scenes and objects to be modeled.
- the mobile phone can further integrate or be externally connected to a projection module for projecting stripes onto the object to be photographed, the direction of the stripes being not parallel to the line on which the centers of the three cameras are located.
- a processing unit (consisting of a processing chip and a storage unit of the mobile phone) integrated in the mobile phone extracts feature points and feature areas common to the three camera photos.
- the feature points also include new feature points and feature regions formed on the object by the stripes projected by the projection module.
- the matching feature point group having the image coordinate symmetry relationship and the similar attribute is selected.
- a plurality of depth values are calculated based on each matched feature point group, and the average of the depth values is taken. Then, the three-dimensional space coordinates of the object point corresponding to the feature point group are calculated and fused with color information to form voxel information. Finally, a true-color point cloud of the entire object or scene, or a three-dimensional model reconstructed based on the point cloud, is displayed on the mobile phone.
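A minimal sketch of the depth-averaging step, assuming a collinear three-camera module with equal baselines D and focal length f in pixels (both illustrative parameters, not values from this application example):

```python
# Hedged sketch: averaging the depth values from each camera pair of a
# collinear three-camera module with equal baselines D and focal length f
# (pixels). Both parameters are illustrative assumptions.
def averaged_depth(u1, u2, u3, f, D):
    z12 = f * D / (u1 - u2)        # from the pair camera 1 / camera 2
    z23 = f * D / (u2 - u3)        # from the pair camera 2 / camera 3
    z13 = f * (2 * D) / (u1 - u3)  # from the pair camera 1 / camera 3
    return (z12 + z23 + z13) / 3.0

print(averaged_depth(120.0, 100.0, 80.0, f=800.0, D=0.05))
# -> 2.0 (with exactly symmetric disparities all three pairs agree)
```

Averaging over the pairwise estimates reduces the effect of sub-pixel localization noise in any single disparity measurement.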
- the handset can transmit the received photos and/or videos from the camera module to the cloud server.
- the server can quickly correct the photos, extract feature points, match the corresponding matching feature point groups having the image coordinate symmetry relationship and similar attributes, calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups, and fuse them with color information to form voxel information.
- the true color point cloud result or the three-dimensional model reconstructed based on the point cloud is transmitted back to the mobile phone for display.
- the system can also be integrated with TOF to solve the problem of featureless images (such as large white walls) or occlusion.
- the fusion of the two point clouds not only ensures that large object targets can be sampled, but also is compatible with the surface details of the objects, and can form a three-dimensional point cloud with more comprehensive spatial sampling and higher resolution.
- the "forward camera + TOF" fused 3D sampling module can scan a face with pixel-level precision, whereas current TOF alone can only scan the approximate surface contour.
- the "backward camera + TOF" 3D module complements the shortcoming of the TOF system's limited projection distance.
- the three-dimensional point cloud formed by the TOF is transformed and projected via the spatial coordinate systems to calculate the initial parallax of the corresponding sampling points on the image, which can accelerate the image matching process.
- the basic process of fusing the image three-dimensional measurement system with the TOF is as follows: 1. the TOF generates a three-dimensional point cloud; 2. the three-dimensional coordinates of the point cloud are converted, through the conversion between the TOF coordinate system and the reference camera coordinate system, into the three-dimensional coordinates of the corresponding points in the reference camera coordinate system; 3. according to the projection equation of the camera, the three-dimensional coordinates of the corresponding sampling points in the reference camera coordinate system are converted into two-dimensional image coordinates and parallax; 4. the initial positions of the two-dimensional coordinates on the other camera images are found from the two-dimensional coordinates and parallax of the corresponding sampling points in the reference camera image; 5.
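Steps 2-4 above can be sketched for a single TOF point as follows; the rigid transform (R, t), intrinsics (f, cx, cy), and baseline D are illustrative assumptions, not values from this application example.

```python
# Hedged sketch of steps 2-4: transform one TOF point into the reference
# camera frame (step 2), project it (step 3), and derive the initial search
# position in another camera image from the parallax (step 4).
# R, t, f, cx, cy, and baseline D are illustrative assumptions.
def tof_point_to_initial_position(p_tof, R, t, f, cx, cy, D):
    X = sum(R[0][i] * p_tof[i] for i in range(3)) + t[0]  # rigid transform
    Y = sum(R[1][i] * p_tof[i] for i in range(3)) + t[1]
    Z = sum(R[2][i] * p_tof[i] for i in range(3)) + t[2]
    u = f * X / Z + cx      # pinhole projection into the reference image
    v = f * Y / Z + cy
    d = f * D / Z           # parallax for a horizontal baseline D
    return (u, v), (u - d, v)  # reference position, initial search position

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation for the demo
print(tof_point_to_initial_position((0.5, 0.3, 4.0), I3, (0, 0, 0),
                                    f=800.0, cx=320, cy=240, D=0.1))
# -> ((420.0, 300.0), (400.0, 300.0))
```

The image matching then only needs to search a small neighborhood around the returned initial position, which is how the fusion accelerates the matching process.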
- the system can also be integrated with structured light to solve the problem of images without features such as large white walls.
- Structured light can form high-density features on the surface of the object and is flexible to use: it can merely add feature points to the surface of the object (allowing the pattern arrangements at different positions to repeat), with the three-dimensional point cloud generated by matching in the multi-camera system; or it can project coded patterns (ensuring that the pattern coding or neighborhood pattern coding of each point is unique), in which case the three-dimensional point cloud is calculated from the triangulation principle between the camera and the projection device.
- a combination of "camera + TOF + structured light" can also be formed, which can produce a high-density, ultra-high-resolution three-dimensional point cloud usable in applications such as VR/AR games or movies that require highly realistic three-dimensional virtual objects.
- the three-dimensional measuring system according to the present invention is realized as a device for automatic driving of a vehicle.
- the camera module is mounted on the front of the vehicle and includes three cameras of the same model.
- the centers of the three cameras are in a straight line with the same distance from the center of the camera.
- the optical axes of the cameras are parallel to each other and face the same, all perpendicular to the line where the camera center is located.
- the camera module is connected to the onboard computer via a data cable.
- the on-board computer can control the camera's 3 cameras for simultaneous shooting (photo and/or video) and equal magnification.
- the photos and/or videos captured by the camera module are transmitted to the onboard computer via the data cable.
- the on-board computer generates a correction matrix from multiple images (image frames of photos or videos) taken at the same time. After the correction, the on-board computer extracts the feature points and feature regions shared by the images taken by the three cameras, selects the matched feature point groups having the image coordinate symmetry relationship and similar attributes, calculates the three-dimensional coordinates of the object points corresponding to the matched feature point groups, outputs a true-color three-dimensional point cloud after fusion with the color information, and pushes it to the decision system for automatic driving of the vehicle.
- the system can also be integrated with lidar to handle featureless scenes (such as large white walls) or occlusion problems.
- Lidar is suitable for spatial sampling of large objects, even when the images lack features.
- its spatial sampling rate is limited, however: an object such as a power pole tens of meters away may be missed entirely when the angle it subtends is smaller than the angular sampling resolution of the radar.
- the image system is more sensitive to various features of the object, including edge features.
- the fusion of the two point clouds ensures that large targets can be sampled while also capturing the surface details of objects and sampling small objects, forming a three-dimensional point cloud with more comprehensive spatial sampling and higher resolution.
- the three-dimensional point cloud formed by the laser radar is converted into the initial parallax of the corresponding sampling point on the image, which can accelerate the image matching process.
- the basic process of fusing the image-based 3D measurement system with the lidar is as follows: 1. the lidar generates a 3D point cloud; 2. the lidar 3D coordinates of each sampling point are converted into 3D coordinates in the reference camera coordinate system, using the transformation between the lidar coordinate system and the reference camera coordinate system; 3. according to the projection equation of the camera, the 3D coordinates of each sampling point in the reference camera coordinate system are converted into two-dimensional image coordinates and a disparity; 4. from the two-dimensional coordinates and disparity of the sampling points on the reference camera image, the initial positions of the corresponding two-dimensional coordinates on the other camera images are found; 5.
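Steps 2 to 4 of this fusion flow can be sketched as follows, assuming a simple pinhole projection model. The extrinsics, intrinsics, and baseline values below are hypothetical:

```python
import numpy as np

def lidar_to_initial_disparity(p_lidar, R, t, f, cx, cy, baseline):
    """Steps 2-4 of the fusion flow: transform a lidar point into the
    reference camera frame, project it with the pinhole model, and derive
    an initial disparity for a camera pair with the given baseline."""
    X, Y, Z = R @ np.asarray(p_lidar) + t   # step 2: lidar -> camera frame
    u = f * X / Z + cx                      # step 3: pinhole projection
    v = f * Y / Z + cy
    d = f * baseline / Z                    # initial disparity for step 4
    return (u, v), d

# identity extrinsics, f = 500 px, principal point (320, 240), 0.1 m baseline
uv, d = lidar_to_initial_disparity([1.0, 0.5, 5.0],
                                   np.eye(3), np.zeros(3),
                                   500.0, 320.0, 240.0, 0.1)
print(uv, d)   # (420.0, 290.0) 10.0
```

The initial disparity then seeds the search for matches on the other camera images, which is what accelerates the matching process.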
- the three-dimensional measurement system according to the present invention is realized as a device for performing aerial photography based on a drone and an external camera module.
- the camera module is mounted on the onboard gimbal of the drone and includes five cameras of the same model.
- the five cameras are arranged in a cross shape, with their centers on the same plane.
- the distance between the centers of adjacent cameras in the longitudinal and lateral directions is equal.
- the optical axes of the cameras are parallel to each other, point in the same direction, and are all perpendicular to the plane of the camera centers.
- the camera module is connected to the drone's onboard computer via a data cable.
- the onboard computer can control the five cameras of the camera module to shoot simultaneously (photos and/or video) and to zoom by equal ratios.
- Photos and/or videos captured by the camera module are transmitted to the onboard computer via the data cable.
- the onboard computer generates a correction matrix from multiple images (photos or video frames) taken at the same time. The images are corrected with the correction matrix.
- the onboard computer can, in real time, extract the feature points and feature regions shared by the images of the five cameras of the camera module, select matched feature point groups with the image-coordinate symmetry relationship and similar attributes, calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups, and, combined with the color information, output a true-color 3D point cloud.
- the equal signs in the formulas of the present invention are all equal in the engineering sense, and a certain deviation can be tolerated; that is, if the difference between the two sides of an equal sign lies within a certain range, the two sides can be considered equal.
- the range of the deviation is, for example, plus or minus 5%, or plus or minus 1%.
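This engineering notion of equality can be sketched as a relative-tolerance check; a minimal illustration in which the function name and default tolerance are assumptions:

```python
def engineering_equal(a, b, rel_tol=0.05):
    """Treat a == b as satisfied when the relative deviation is within
    e.g. plus or minus 5% (or 1%), per the engineering sense used here."""
    scale = max(abs(a), abs(b))
    return scale == 0 or abs(a - b) <= rel_tol * scale

print(engineering_equal(100.0, 104.0))                # True  (4% deviation)
print(engineering_equal(100.0, 110.0))                # False (10% deviation)
print(engineering_equal(100.0, 100.5, rel_tol=0.01))  # True at plus/minus 1%
```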
Claims (51)
- A three-dimensional measurement method, comprising: receiving a first image, a second image and a third image respectively from a first camera, a second camera and a third camera; extracting feature points in the first image, the second image and the third image respectively; matching the feature points in the first image, the second image and the third image, the matching comprising screening matched feature point groups based on a first disparity d₁ in a first direction, produced between the first image and the second image by a same object point, and a second disparity d₂ in a second direction, produced between the second image and the third image, wherein the first direction is a direction not perpendicular to the line connecting the optical centers of the first camera and the second camera, and the second direction is a direction not perpendicular to the line connecting the optical centers of the second camera and the third camera; and calculating three-dimensional coordinates of the object points corresponding to the matched feature point groups.
- The three-dimensional measurement method according to claim 1, wherein the first camera, the second camera and the third camera have the same focal length and mutually parallel optical axes, and the optical centers of the first camera, the second camera and the third camera are arranged on a same plane perpendicular to the optical axes, wherein the first direction is the direction of the line connecting the optical centers of the first camera and the second camera, parallel to said plane, and the second direction is the direction of the line connecting the optical centers of the second camera and the third camera, parallel to said plane, and wherein the matching comprises screening matched feature point groups based on the following disparity ratio relation: d₁:d₂ = D₁:D₂, where D₁ is the offset of the optical center of the first camera relative to the optical center of the second camera in the first direction, and D₂ is the offset of the optical center of the second camera relative to the optical center of the third camera in the second direction.
- The three-dimensional measurement method according to claim 2, wherein the optical centers of the first camera, the second camera and the third camera are arranged in sequence on a straight line perpendicular to the optical axes, and the first direction and the second direction are both the direction of said straight line; and the screening of matched feature point groups based on the disparity ratio relation comprises screening matched feature point groups based on the following coordinate relation: the difference s₁ between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image, and the difference s₂ between the abscissa of said feature point in the second image and the abscissa of the feature point in the third image, satisfy s₁:s₂ = D₁:D₂, and said feature points in the first image, the second image and the third image have the same ordinate, the direction of the abscissa corresponding to the direction of said straight line.
- The three-dimensional measurement method according to claim 3, further comprising: performing rectification on the first image, the second image and the third image, such that the points in the first image, the second image and the third image corresponding to the optical axes of the first camera, the second camera and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image and the third image all correspond to the direction of said straight line.
- The three-dimensional measurement method according to claim 3, wherein the screening of matched feature point groups based on the coordinate relation comprises: selecting a first feature point and a second feature point respectively in two of the first image, the second image and the third image, the ordinates of the first feature point and the second feature point lying within a predetermined range of a target ordinate; calculating an expected abscissa of a third feature point, in the third of the first image, the second image and the third image, that matches the first feature point and the second feature point, such that the difference s₁ and the difference s₂ satisfy s₁:s₂ = D₁:D₂; and searching for the third feature point in said third image based on an expected position formed by the expected abscissa of the third feature point and the target ordinate.
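The expected-abscissa computation in claim 5 can be sketched as follows, assuming the collinear camera arrangement of claim 3 with points taken from the first and second images; the function name and sample values are hypothetical:

```python
def expected_third_abscissa(x1, x2, D1, D2):
    """Expected abscissa of the matching point in the third image so that
    s1 : s2 = D1 : D2, with s1 = x1 - x2 and s2 = x2 - x3."""
    s1 = x1 - x2
    s2 = s1 * D2 / D1
    return x2 - s2

print(expected_third_abscissa(120.0, 100.0, 1.0, 1.0))  # 80.0
print(expected_third_abscissa(120.0, 100.0, 2.0, 1.0))  # 90.0
```

The search in the third image would then be restricted to a tolerance region around this expected position, as in claim 7.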
- The three-dimensional measurement method according to claim 5, wherein the first feature point and the second feature point, whose ordinates are equal to the target ordinate, are selected respectively in two of the first image, the second image and the third image.
- The three-dimensional measurement method according to claim 5, wherein the searching for the third feature point based on the expected position further comprises: setting a tolerance range comprising at least one of an abscissa tolerance range and an ordinate tolerance range; and searching for the third feature point, in said third of the first image, the second image and the third image, within a region lying within the tolerance range relative to the expected position.
- The three-dimensional measurement method according to claim 5, wherein the screening of matched feature point groups based on the coordinate relation further comprises: counting, for each of the first image, the second image and the third image, the number of feature points whose ordinates lie within the predetermined range of the target ordinate; and determining said third of the first image, the second image and the third image to be the one of the three images having the largest number of feature points with the given ordinate.
- The three-dimensional measurement method according to claim 5, wherein D₁:D₂ = 1:1, and the calculating of the expected abscissa of the third feature point comprises: calculating the expected abscissa such that the sum of the abscissas of the feature points in the first image and the third image is twice the abscissa of the feature point in the second image.
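The special case of claim 9 reduces to a simple arithmetic relation; a minimal sketch with hypothetical sample values:

```python
def expected_third_abscissa_equal_spacing(x1, x2):
    """With D1 : D2 = 1 : 1 the claim reduces to x1 + x3 = 2 * x2,
    i.e. the three abscissas form an arithmetic progression."""
    return 2 * x2 - x1

x3 = expected_third_abscissa_equal_spacing(120.0, 100.0)
print(x3)                      # 80.0
print(120.0 + x3 == 2 * 100.0) # True
```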
- The three-dimensional measurement method according to claim 2, wherein the optical centers of the first camera and the second camera are arranged on a first straight line perpendicular to the optical axes, and the optical centers of the second camera and the third camera are arranged on a second straight line perpendicular to the optical axes and perpendicular to the first straight line, the first direction being the direction of the first straight line and the second direction being the direction of the second straight line; and the screening of matched feature point groups based on the disparity ratio relation comprises screening matched feature point groups based on the following coordinate relation: the difference s₃ between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image, and the difference s₄ between the ordinate of said feature point in the second image and the ordinate of the feature point in the third image, satisfy s₃:s₄ = D₁:D₂, said feature points in the first image and the second image have the same ordinate, said feature points in the second image and the third image have the same abscissa, the direction of the abscissa corresponds to the first direction, and the direction of the ordinate corresponds to the second direction.
- The three-dimensional measurement method according to claim 10, further comprising: rectification of the first image, the second image and the third image, such that the points in the first image, the second image and the third image corresponding to the optical axes of the first camera, the second camera and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image and the third image correspond to the first direction while the ordinate directions correspond to the second direction.
- The three-dimensional measurement method according to claim 10, wherein the screening of matched feature point groups based on the coordinate relation comprises: selecting a first feature point and a second feature point respectively in the first image and the second image, the ordinates of the first feature point and the second feature point lying within a predetermined range of a target ordinate; calculating the difference s₃ between the abscissa of the first feature point and the abscissa of the second feature point; calculating the difference s₄ such that s₃:s₄ = D₁:D₂; calculating an expected ordinate of a third feature point, in the third image, that matches the first feature point and the second feature point, such that the difference between the ordinate of the second feature point and the expected ordinate of the third feature point is the difference s₄ calculated above; and searching for the third feature point in the third image based on an expected position formed by the expected ordinate of the third feature point and the abscissa of the second feature point.
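The expected position for the L-shaped arrangement of claims 10 and 12 can be sketched as follows; a minimal illustration with hypothetical names and values:

```python
def expected_position_L(p1, p2, D1, D2):
    """Expected position of the third point for the L-shaped camera layout:
    s3 = x1 - x2 (abscissa difference between images 1 and 2),
    s4 = y2 - y3 (ordinate difference between images 2 and 3),
    with s3 : s4 = D1 : D2; the abscissa is taken from the second point."""
    (x1, _), (x2, y2) = p1, p2
    s3 = x1 - x2
    s4 = s3 * D2 / D1
    return (x2, y2 - s4)

print(expected_position_L((120.0, 50.0), (100.0, 50.0), 1.0, 1.0))  # (100.0, 30.0)
```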
- The three-dimensional measurement method according to claim 12, wherein the searching for the third feature point based on the expected position further comprises: setting a tolerance range; and searching for the third feature point, in the third image, within a region lying within the tolerance range relative to the expected position.
- The three-dimensional measurement method according to claim 2, wherein the first direction and the second direction are both the direction of the line connecting the optical centers of the first camera and the third camera.
- The three-dimensional measurement method according to claim 14, wherein the screening of matched feature point groups based on the disparity ratio relation comprises screening matched feature point groups based on the following coordinate relation: the difference s₅ between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image, and the difference s₆ between the abscissa of said feature point in the second image and the abscissa of the feature point in the third image, satisfy s₅:s₆ = D₁:D₂, said feature points in the first image and the third image have the same ordinate, and the direction of the abscissa corresponds to the direction of the line connecting the optical centers of the first camera and the third camera.
- The three-dimensional measurement method according to claim 15, further comprising: performing rectification on the first image, the second image and the third image, such that the points in the first image, the second image and the third image corresponding to the optical axes of the first camera, the second camera and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image and the third image all correspond to the direction of the line connecting the optical centers of the first camera and the third camera.
- The three-dimensional measurement method according to claim 15, wherein the screening of matched feature point groups based on the coordinate relation comprises: selecting a first feature point and a second feature point respectively in the first image and the third image, the ordinate of the second feature point lying within a predetermined range of the ordinate of the first feature point; calculating an expected abscissa of a third feature point, in the second image, that satisfies the relation s₅:s₆ = D₁:D₂ with the first feature point and the second feature point; and searching for the third feature point in the second image based on the expected abscissa.
- The three-dimensional measurement method according to claim 1, wherein the matching of the feature points in the first image, the second image and the third image further comprises: applying, to the matched feature point groups screened based on the disparity ratio relation, a similarity calculation on pixels or neighborhood pixel groups to further screen the matched feature point groups.
- The three-dimensional measurement method according to claim 18, wherein the similarity calculation is applied only to two or more feature point groups that contain a same feature point.
- The three-dimensional measurement method according to claim 1, wherein the matching of the feature points in the first image, the second image and the third image further comprises: applying a similarity calculation to the feature points or their neighborhood pixel groups to screen matched feature point groups; and the screening of matched feature point groups based on the disparity ratio relation is applied to the matched feature point groups screened by the similarity calculation.
- The three-dimensional measurement method according to any one of claims 18-20, wherein the similarity calculation comprises calculating at least one item of the group consisting of: the sum of squared pixel grayscale differences, the sum of squared zero-mean pixel grayscale differences, the sum of absolute pixel grayscale differences, the sum of absolute zero-mean pixel grayscale differences, the normalized cross-correlation between neighborhoods, and the zero-mean normalized cross-correlation.
- The three-dimensional measurement method according to claim 2, further comprising: generating a correction matrix based on the first image, the second image and the third image, the correction matrix being applied to images subsequently received from the first camera, the second camera and the third camera, wherein the generating of the correction matrix comprises: extracting feature points in the first image, the second image and the third image respectively; matching the feature points in the first image, the second image and the third image to obtain a plurality of matched feature point groups; using the coordinates of the feature points of the matched feature point groups in the respective images, establishing an overdetermined system of equations according to the disparity ratio relation to be satisfied between the feature points of the matched feature point groups after the correction matrix is applied to the respective images; and solving the overdetermined system of equations to obtain the correction matrix.
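The final step of claim 22, solving the overdetermined system, can be sketched with a toy least-squares problem; the matrix sizes and values are hypothetical and stand in for the stacked per-group constraints:

```python
import numpy as np

# Each matched feature point group contributes one linear constraint on the
# correction parameters (from the disparity ratio relation it must satisfy
# after correction); stacking many groups yields A @ p ~ b, which is solved
# in the least-squares sense.
A = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])   # toy 3x2 overdetermined system
b = np.array([5.0, 6.0, 3.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
# the least-squares solution satisfies the normal equations A^T A p = A^T b
print(np.allclose(A.T @ A @ p, A.T @ b))   # True
```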
- The three-dimensional measurement method according to claim 1, wherein the calculating, based on the matched feature point groups, of the three-dimensional coordinates of the object point corresponding to each feature point group comprises: calculating depth values based on at least two pairs of feature points in one matched feature point group, and taking the average of the depth values as the depth value of the object point corresponding to that matched feature point group.
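The depth averaging of claim 23 can be sketched as follows, assuming a rectified pinhole model in which each adjacent camera pair gives a depth from its disparity; the function name and sample values are hypothetical:

```python
def depth_from_group(xs, f, D1, D2):
    """Depth of the object point from one matched group (x1, x2, x3):
    each adjacent camera pair gives Z = f * baseline / disparity, and
    the two estimates are averaged, as in the claim."""
    x1, x2, x3 = xs
    z12 = f * D1 / (x1 - x2)   # depth from cameras 1 and 2
    z23 = f * D2 / (x2 - x3)   # depth from cameras 2 and 3
    return (z12 + z23) / 2.0

# f = 500 px, baselines 0.1 m, disparities 20 px each
print(depth_from_group((120.0, 100.0, 80.0), 500.0, 0.1, 0.1))  # 2.5
```

Averaging the per-pair estimates reduces the effect of localization noise in any single image.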
- The three-dimensional measurement method according to claim 1, further comprising: projecting a pattern onto the shooting area of the cameras with light of a wavelength within the operating wavelength ranges of the first camera, the second camera and the third camera.
- The three-dimensional measurement method according to claim 24, wherein the pattern contains stripes, and the direction of the stripes is not parallel to the line connecting the optical centers of at least two of the first camera, the second camera and the third camera.
- A three-dimensional measurement apparatus, comprising: a processor; and a memory storing program instructions which, when executed by the processor, cause the processor to perform the following operations: receiving a first image, a second image and a third image; extracting feature points in the first image, the second image and the third image respectively; matching the feature points in the first image, the second image and the third image, the matching comprising screening matched feature point groups based on the following coordinate relation: the difference between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image is in a predetermined proportional relation to the difference between the abscissa of said feature point in the second image and the abscissa of the feature point in the third image, and said feature points in the first image and the third image have the same ordinate; and calculating three-dimensional coordinates of the object points corresponding to the matched feature point groups.
- The three-dimensional measurement apparatus according to claim 26, wherein the coordinate relation further comprises: said feature point in the second image has the same ordinate as said feature points in the first image and the third image.
- The three-dimensional measurement apparatus according to claim 27, wherein the program instructions, when executed, cause the processor to implement the screening of matched feature point groups based on the coordinate relation through the following operations: selecting a first feature point and a second feature point respectively in two of the first image, the second image and the third image, the ordinates of the first feature point and the second feature point lying within a predetermined range of a target ordinate; calculating an expected abscissa of a third feature point, in the third of the first image, the second image and the third image, that matches the first feature point and the second feature point, such that the difference between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image is in the predetermined proportional relation to the difference between the abscissa of said feature point in the second image and the abscissa of the feature point in the third image; and searching for the third feature point in said third image based on an expected position formed by the expected abscissa of the third feature point and the target ordinate.
- The three-dimensional measurement apparatus according to claim 28, wherein the first feature point and the second feature point, whose ordinates are equal to the target ordinate, are selected respectively in two of the first image, the second image and the third image.
- The three-dimensional measurement apparatus according to claim 28, wherein the program instructions, when executed, cause the processor to implement the searching for the third feature point based on the expected position through the following operations: setting a tolerance range comprising at least one of an abscissa tolerance range and an ordinate tolerance range; and searching for the third feature point, in said third of the first image, the second image and the third image, within a region lying within the tolerance range relative to the expected position.
- A three-dimensional measurement apparatus for use with a camera array to perform three-dimensional measurement, the camera array comprising at least a first camera, a second camera and a third camera, the three-dimensional measurement apparatus comprising: a processing unit that receives a first image, a second image and a third image respectively from the first camera, the second camera and the third camera, and is configured to perform the following processing: extracting feature points in the first image, the second image and the third image respectively; matching the feature points in the first image, the second image and the third image, the matching comprising screening matched feature point groups based on a first disparity d₁ in a first direction, produced between the first image and the second image by a same object point, and a second disparity d₂ in a second direction, produced between the second image and the third image, wherein the first direction is a direction not perpendicular to the line connecting the optical centers of the first camera and the second camera, and the second direction is a direction not perpendicular to the line connecting the optical centers of the second camera and the third camera; and calculating three-dimensional coordinates of the object points corresponding to the matched feature point groups.
- The three-dimensional measurement apparatus according to claim 31, wherein the first camera, the second camera and the third camera have the same focal length and mutually parallel optical axes, and the optical centers of the first camera, the second camera and the third camera are arranged on a same plane perpendicular to the optical axes, the first direction and the second direction being parallel to said plane, wherein the matching comprises screening matched feature point groups based on the following disparity ratio relation: d₁:d₂ = D₁:D₂, where D₁ is the offset of the optical center of the first camera relative to the optical center of the second camera in the first direction, and D₂ is the offset of the optical center of the second camera relative to the optical center of the third camera in the second direction.
- The three-dimensional measurement apparatus according to claim 32, wherein the optical centers of the first camera, the second camera and the third camera are arranged in sequence on a straight line perpendicular to the optical axes, and the first direction and the second direction are the direction of said straight line; and the processing unit is configured to screen matched feature point groups based on the disparity ratio relation through the following processing: screening matched feature point groups based on the following coordinate relation: the difference s₁ between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image, and the difference s₂ between the abscissa of said feature point in the second image and the abscissa of the feature point in the third image, satisfy s₁:s₂ = D₁:D₂, and said feature points in the first image, the second image and the third image have the same ordinate, the direction of the abscissa corresponding to the direction of said straight line.
- The three-dimensional measurement apparatus according to claim 33, further comprising: a correction unit that generates a correction matrix based on images from the first camera, the second camera and the third camera and provides the correction matrix to the processing unit, the correction matrix, when applied by the processing unit to the first image, the second image and the third image, causing the points in the three images corresponding to the optical axes of the first camera, the second camera and the third camera to have the same abscissa and ordinate, and the abscissa directions of the first image, the second image and the third image to all correspond to the direction of said straight line.
- The three-dimensional measurement apparatus according to claim 33, wherein the processing unit is configured to screen matched feature point groups based on the coordinate relation through the following processing: selecting a first feature point and a second feature point respectively in two of the first image, the second image and the third image, the ordinates of the first feature point and the second feature point lying within a predetermined range of a target ordinate; calculating an expected abscissa of a third feature point, in the third of the first image, the second image and the third image, that matches the first feature point and the second feature point, such that the difference s₁ and the difference s₂ satisfy s₁:s₂ = D₁:D₂; and searching for the third feature point in said third image based on an expected position formed by the expected abscissa of the third feature point and the target ordinate.
- The three-dimensional measurement apparatus according to claim 35, wherein the processing unit is configured to search for the third feature point based on the expected position through the following processing: setting a tolerance range comprising at least one of an abscissa tolerance range and an ordinate tolerance range; and searching for the third feature point, in said third of the first image, the second image and the third image, within a region lying within the tolerance range relative to the expected position.
- The three-dimensional measurement apparatus according to claim 32, wherein the optical centers of the first camera and the second camera are arranged on a first straight line perpendicular to the optical axes, and the optical centers of the second camera and the third camera are arranged on a second straight line perpendicular to the optical axes and perpendicular to the first straight line, the first direction being the direction of the first straight line and the second direction being the direction of the second straight line; and the processing unit is configured to screen matched feature point groups based on the disparity ratio relation through the following processing: screening matched feature point groups based on the following coordinate relation: the difference s₃ between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image, and the difference s₄ between the ordinate of said feature point in the second image and the ordinate of the feature point in the third image, satisfy s₃:s₄ = D₁:D₂, said feature points in the first image and the second image have the same ordinate, said feature points in the second image and the third image have the same abscissa, the direction of the abscissa corresponds to the first direction, and the direction of the ordinate corresponds to the second direction.
- The three-dimensional measurement apparatus according to claim 37, further comprising: a correction unit that generates a correction matrix based on images from the first camera, the second camera and the third camera and provides the correction matrix to the processing unit, the correction matrix, when applied by the processing unit to the first image, the second image and the third image, causing the points in the three images corresponding to the optical axes of the first camera, the second camera and the third camera to have the same abscissa and ordinate, and the abscissa directions of the first image, the second image and the third image to all correspond to the direction of said straight line.
- The three-dimensional measurement apparatus according to claim 37, wherein the processing unit is configured to screen matched feature point groups based on the coordinate relation through the following processing: selecting a first feature point and a second feature point respectively in the first image and the second image, the ordinates of the first feature point and the second feature point lying within a predetermined range of a target ordinate; calculating the difference s₃ between the abscissa of the first feature point and the abscissa of the second feature point; calculating the difference s₄ such that s₃:s₄ = D₁:D₂; calculating an expected ordinate of a third feature point, in the third image, that matches the first feature point and the second feature point, such that the difference between the ordinate of the second feature point and the expected ordinate of the third feature point is the difference s₄ calculated above; and searching for the third feature point in the third image based on an expected position formed by the expected ordinate of the third feature point and the abscissa of the second feature point.
- The three-dimensional measurement apparatus according to claim 39, wherein the processing unit is configured to search for the third feature point based on the expected position through the following processing: setting a tolerance range comprising at least one of an abscissa tolerance range and an ordinate tolerance range; and searching for the third feature point, in the third image, within a region lying within the tolerance range relative to the expected position.
- The three-dimensional measurement apparatus according to claim 32, wherein the first direction and the second direction are both the direction of the line connecting the optical centers of the first camera and the third camera; and the processing unit is configured to screen matched feature point groups based on the disparity ratio relation through the following processing: screening matched feature point groups based on the following coordinate relation: the difference s₅ between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image, and the difference s₆ between the abscissa of said feature point in the second image and the abscissa of the feature point in the third image, satisfy s₅:s₆ = D₁:D₂, said feature points in the first image and the third image have the same ordinate, and the direction of the abscissa corresponds to the direction of the line connecting the optical centers of the first camera and the third camera.
- The three-dimensional measurement apparatus according to claim 41, further comprising: a correction unit that generates a correction matrix based on images from the first camera, the second camera and the third camera and provides the correction matrix to the processing unit, the correction matrix, when applied by the processing unit to the first image, the second image and the third image, causing the points in the three images corresponding to the optical axes of the first camera, the second camera and the third camera to have the same abscissa and ordinate, and the abscissa directions of the first image, the second image and the third image to all correspond to the direction of the line connecting the optical centers of the first camera and the third camera.
- The three-dimensional measurement apparatus according to claim 37, wherein the processing unit is configured to screen matched feature point groups based on the coordinate relation through the following processing: selecting a first feature point and a second feature point respectively in the first image and the third image, the ordinate of the second feature point lying within a predetermined range of the ordinate of the first feature point; calculating an expected abscissa of a third feature point, in the second image, that satisfies the relation s₅:s₆ = D₁:D₂ with the first feature point and the second feature point; and searching for the third feature point in the second image based on the expected abscissa.
- The three-dimensional measurement apparatus according to claim 31, wherein the processing unit is further configured to: apply, to the matched feature point groups screened based on the disparity ratio relation, a similarity calculation on pixels or neighborhood pixel groups to further screen the matched feature point groups.
- The three-dimensional measurement apparatus according to claim 31, wherein the processing unit is further configured to: apply a similarity calculation to the feature points or their neighborhood pixel groups to screen matched feature point groups; and perform the screening of matched feature point groups based on the disparity ratio relation on the matched feature point groups screened by the similarity calculation.
- The three-dimensional measurement apparatus according to claim 34, 38 or 42, wherein the correction unit is configured to generate the correction matrix through the following processing: extracting feature points in the first image, the second image and the third image respectively; matching the feature points in the first image, the second image and the third image to obtain a plurality of matched feature point groups; using the coordinates of the feature points of the matched feature point groups in the respective images, establishing an overdetermined system of equations according to the coordinate relation to be satisfied between the feature points of the matched feature point groups after the correction matrix is applied to the respective images; and solving the overdetermined system of equations to obtain the correction matrix.
- A three-dimensional measurement system based on a disparity ratio relation, comprising: a camera array comprising at least a first camera, a second camera and a third camera; and the three-dimensional measurement apparatus according to any one of claims 26-46.
- The three-dimensional measurement system according to claim 47, wherein the first camera, the second camera and the third camera have the same focal length and mutually parallel optical axes, and the optical centers of the first camera, the second camera and the third camera are arranged on a same plane perpendicular to the optical axes, the three-dimensional measurement system further comprising: a control unit connected to and communicating with the camera array, and configured to control the first camera, the second camera and the third camera to capture images synchronously.
- The three-dimensional measurement system according to claim 48, wherein the control unit is further configured to control the first camera, the second camera and the third camera to zoom by equal ratios.
- The three-dimensional measurement system according to claim 49, further comprising: a projection unit comprising a light source and an optical element for forming a projection pattern based on illumination light from the light source, the illumination light containing light of a wavelength within the operating wavelength ranges of the first camera, the second camera and the third camera, and the projection unit projecting the projection pattern onto the shooting area of the camera array.
- The three-dimensional measurement system according to claim 50, wherein the projection pattern contains stripes, and the direction of the stripes is not parallel to the line connecting the optical centers of at least two of the first camera, the second camera and the third camera.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711164746.6 | 2017-11-21 | ||
CN201711164746.6A CN109813251B (zh) | 2017-11-21 | 2017-11-21 | Method, apparatus and system for three-dimensional measurement |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019100933A1 true WO2019100933A1 (zh) | 2019-05-31 |
Family
ID=66599669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/114016 WO2019100933A1 (zh) | 2017-11-21 | 2018-11-05 | Method, apparatus and system for three-dimensional measurement |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109813251B (zh) |
WO (1) | WO2019100933A1 (zh) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110706334A (zh) * | 2019-09-26 | 2020-01-17 | 华南理工大学 | Three-dimensional reconstruction method for industrial parts based on trinocular vision |
CN111368745A (zh) * | 2020-03-06 | 2020-07-03 | 上海眼控科技股份有限公司 | Vehicle identification number image generation method and apparatus, computer device, and storage medium |
CN111612728A (zh) * | 2020-05-25 | 2020-09-01 | 北京交通大学 | 3D point cloud densification method and apparatus based on binocular RGB images |
CN112016570A (zh) * | 2019-12-12 | 2020-12-01 | 天目爱视(北京)科技有限公司 | Three-dimensional model generation method for acquisition with a synchronously rotating background plate |
CN112102404A (zh) * | 2020-08-14 | 2020-12-18 | 青岛小鸟看看科技有限公司 | Object detection and tracking method and apparatus, and head-mounted display device |
CN112183436A (zh) * | 2020-10-12 | 2021-01-05 | 南京工程学院 | Expressway visibility detection method based on grayscale contrast in the eight-neighborhood of pixels |
CN112381874A (zh) * | 2020-11-04 | 2021-02-19 | 北京大华旺达科技有限公司 | Machine-vision-based calibration method and apparatus |
CN113310420A (zh) * | 2021-04-22 | 2021-08-27 | 中国工程物理研究院上海激光等离子体研究所 | Method for measuring the distance between two targets from images |
CN113358020A (zh) * | 2020-03-05 | 2021-09-07 | 青岛海尔工业智能研究院有限公司 | Machine vision inspection system and method |
CN113487686A (zh) * | 2021-08-02 | 2021-10-08 | 固高科技股份有限公司 | Calibration method and apparatus for a multi-view camera, multi-view camera, and storage medium |
CN113487679A (zh) * | 2021-06-29 | 2021-10-08 | 哈尔滨工程大学 | Visual ranging signal processing method for the autofocus system of a laser marking machine |
CN114087991A (zh) * | 2021-11-28 | 2022-02-25 | 中国船舶重工集团公司第七一三研究所 | Underwater target measurement apparatus and method based on line structured light |
CN115082621A (zh) * | 2022-06-21 | 2022-09-20 | 中国科学院半导体研究所 | Three-dimensional imaging method, apparatus and system, electronic device, and storage medium |
CN116503570A (zh) * | 2023-06-29 | 2023-07-28 | 聚时科技(深圳)有限公司 | Three-dimensional image reconstruction method and related apparatus |
CN117611752A (zh) * | 2024-01-22 | 2024-02-27 | 卓世未来(成都)科技有限公司 | 3D model generation method and system for a digital human |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110440712B (zh) * | 2019-08-26 | 2021-03-12 | 英特维科技(苏州)有限公司 | Adaptive large-depth-of-field three-dimensional scanning method and system |
CN111457859B (zh) * | 2020-03-06 | 2022-12-09 | 奥比中光科技集团股份有限公司 | Alignment calibration method and system for a 3D measurement device, and computer-readable storage medium |
CN112033352B (zh) * | 2020-09-01 | 2023-11-07 | 珠海一微半导体股份有限公司 | Robot with multi-camera ranging and visual ranging method |
CN112129262B (zh) * | 2020-09-01 | 2023-01-06 | 珠海一微半导体股份有限公司 | Visual ranging method for a multi-camera group and visual navigation chip |
CN113503830B (zh) * | 2021-07-05 | 2023-01-03 | 无锡维度投资管理合伙企业(有限合伙) | Multi-camera-based aspheric surface form measurement method |
CN115317747B (zh) * | 2022-07-28 | 2023-04-07 | 北京大学第三医院(北京大学第三临床医学院) | Automatic tracheal intubation navigation method and computer device |
CN116524160B (zh) * | 2023-07-04 | 2023-09-01 | 应急管理部天津消防研究所 | AR-recognition-based product consistency auxiliary verification system and method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002063240A1 (en) * | 2001-02-02 | 2002-08-15 | Snap-On Technologies Inc | Method and apparatus for mapping system calibration |
CN101680756A (zh) * | 2008-02-12 | 2010-03-24 | 松下电器产业株式会社 | Compound-eye imaging device, ranging device, disparity calculation method, and ranging method |
US20110122228A1 (en) * | 2009-11-24 | 2011-05-26 | Omron Corporation | Three-dimensional visual sensor |
CN104101293A (zh) * | 2013-04-07 | 2014-10-15 | 鸿富锦精密工业(深圳)有限公司 | System and method for unifying measurement machine coordinate systems |
CN104897065A (zh) * | 2015-06-09 | 2015-09-09 | 河海大学 | Measurement system for the surface displacement field of plate and shell structures |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6859549B1 (en) * | 2000-06-07 | 2005-02-22 | Nec Laboratories America, Inc. | Method for recovering 3D scene structure and camera motion from points, lines and/or directly from the image intensities |
KR101926563B1 (ko) * | 2012-01-18 | 2018-12-07 | 삼성전자주식회사 | Method and apparatus for camera tracking |
DE102012112322B4 (de) * | 2012-12-14 | 2015-11-05 | Faro Technologies, Inc. | Method for optically scanning and measuring an environment |
CN103292710B (zh) * | 2013-05-27 | 2016-01-06 | 华南理工大学 | Distance measurement method applying the binocular-vision disparity ranging principle |
US10750153B2 (en) * | 2014-09-22 | 2020-08-18 | Samsung Electronics Company, Ltd. | Camera system for three-dimensional video |
CN106813595B (zh) * | 2017-03-20 | 2018-08-31 | 北京清影机器视觉技术有限公司 | Feature point matching method for a three-camera group, measurement method, and three-dimensional detection device |
-
2017
- 2017-11-21 CN CN201711164746.6A patent/CN109813251B/zh active Active
-
2018
- 2018-11-05 WO PCT/CN2018/114016 patent/WO2019100933A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109813251A (zh) | 2019-05-28 |
CN109813251B (zh) | 2021-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019100933A1 (zh) | Method, apparatus and system for three-dimensional measurement | |
US10832429B2 (en) | Device and method for obtaining distance information from views | |
US20180051982A1 (en) | Object-point three-dimensional measuring system using multi-camera array, and measuring method | |
CN110363838B (zh) | Optimization method for three-dimensional reconstruction of large-field-of-view images based on a multi-spherical-camera model | |
CN107545586B (zh) | Depth acquisition method and system based on local light-field epipolar-plane images | |
WO2018032841A1 (zh) | Method, device and system for rendering three-dimensional images | |
CN106033614B (zh) | Moving-target detection method for a mobile camera under strong parallax | |
JP2009284188A (ja) | Color imaging device | |
CN110838164A (zh) | Monocular image three-dimensional reconstruction method, system and apparatus based on object point depth | |
CN115035235A (zh) | Three-dimensional reconstruction method and apparatus | |
US8340399B2 (en) | Method for determining a depth map from images, device for determining a depth map | |
CN108805921A (zh) | Image acquisition system and method | |
US20220012905A1 (en) | Image processing device and three-dimensional measuring system | |
JP7489253B2 (ja) | Depth map generation device, program therefor, and depth map generation system | |
JP2015019346A (ja) | Parallax image generation device | |
CN101523436A (zh) | Method and filter for recovering disparity in a video stream | |
KR20220121533A (ko) | Image restoration method and apparatus for restoring images acquired through an array camera | |
WO2021104308A1 (zh) | Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera | |
JP2016114445A (ja) | Three-dimensional position calculation device and program therefor, and CG synthesis device | |
WO2023098362A1 (zh) | Target-area security and surveillance system based on a gigapixel camera | |
KR102031485B1 (ko) | Multi-view image acquisition apparatus and method using a 360-degree camera and planar mirrors | |
CN114332373A (зh) | Magnetic-circuit height-difference detection method and system overcoming reflections from relay metal surfaces | |
US11651475B2 (en) | Image restoration method and device | |
WO2021093804A1 (zh) | Camera configuration system and camera configuration method for omnidirectional stereo vision | |
Sun et al. | Blind calibration for focused plenoptic cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18880684 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18880684 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/02/2021) |
|