WO2020181524A1 - Image depth calculation method, image processing device, and three-dimensional measurement system - Google Patents

Image depth calculation method, image processing device, and three-dimensional measurement system Download PDF

Info

Publication number
WO2020181524A1
WO2020181524A1 (PCT/CN2019/077993)
Authority
WO
WIPO (PCT)
Prior art keywords
image
calibration
calibration image
disparity
structured
Prior art date
Application number
PCT/CN2019/077993
Other languages
English (en)
French (fr)
Inventor
毛一杰 (Mao Yijie)
潘雷雷 (Pan Leilei)
范文文 (Fan Wenwen)
Original Assignee
深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority to CN201980000341.XA priority Critical patent/CN110088563B/zh
Priority to PCT/CN2019/077993 priority patent/WO2020181524A1/zh
Publication of WO2020181524A1 publication Critical patent/WO2020181524A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/2433: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures, for measuring outlines by shadow casting

Definitions

  • the embodiments of the present application relate to the field of data processing technology, and in particular, to an image depth calculation method, image processing device, and three-dimensional measurement system.
  • structured light-based 3D measurement uses the principle of triangulation.
  • the measurement process based on external-parameter calibration is: first calibrate the external parameters of the 3D system, including the projection device and camera, then compute the disparity map, and finally obtain the depth of each corresponding point from the disparity map; in the measurement process based on reference-plane calibration, multiple reference planes are calibrated within the effective measurement range, and the local uniqueness of the specific structured light is then used to obtain the depth of corresponding points by matching against the reference planes.
  • the calibration accuracy of the external parameters usually depends on how accurately the corner points are extracted; to improve accuracy, multiple images usually have to be calibrated, and epipolar rectification is typically also required to obtain the disparity map, which introduces additional error into the depth calculation.
  • one of the technical problems solved by the embodiments of the present invention is to provide an image depth calculation method, an image processing device and a three-dimensional measurement system to overcome the above-mentioned defects in the prior art.
  • An embodiment of the present application provides a method for calculating image depth, which includes:
  • according to a structured image, a first calibration image, and a second calibration image formed by projecting structured light onto the surface of a target object, a first calibration reference surface, and a second calibration reference surface respectively, determining a first disparity and a second disparity of a target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
  • calculating the depth of the target pixel on the structured image according to the first disparity and the second disparity, wherein the first calibration reference surface corresponds to the upper limit of the measurement distance, and the second calibration reference surface corresponds to the lower limit of the measurement distance.
  • the method further includes: projecting the first disparity and the second disparity onto the baseline direction to obtain a first projection disparity and a second projection disparity;
  • correspondingly, calculating the depth of the target pixel on the structured image according to the first disparity and the second disparity includes: calculating the depth of the target pixel on the structured image according to the first projection disparity and the second projection disparity.
  • the method further includes: establishing a first fitting model and a second fitting model of different stripes on the first calibration image and the second calibration image;
  • correspondingly, determining the first disparity and the second disparity of the target pixel on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image includes: determining the first disparity and the second disparity according to the positions of the target pixel on the structured image, the first calibration image, and the second calibration image, together with the first fitting model and the second fitting model.
  • the method further includes: determining the center pixel of each stripe on the first calibration image and the second calibration image, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
  • the method further includes: determining the mask mark assigned to the center pixel of each stripe on the first calibration image and the second calibration image, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
  • the method further includes: performing a neighborhood pixel search using the center pixel of each stripe on the first calibration image and the second calibration image as a reference, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
  • the pixels found in the neighborhood search are statistically analyzed to determine whether the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image need to be established.
  • the method further includes: determining fitted pixels according to the first fitting model and the second fitting model, and determining a fitting error according to the fitted pixels and the corresponding actual pixels.
  • the method further includes: extracting the wave peaks in the first calibration image and the second calibration image to determine the center pixel of each stripe on the first calibration image and the second calibration image.
  • An embodiment of the application provides an image processing device, which includes:
  • a disparity unit, configured to determine, according to a structured image, a first calibration image, and a second calibration image formed by projecting structured light onto the surface of a target object, a first calibration reference surface, and a second calibration reference surface respectively, a first disparity and a second disparity of a target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
  • a depth calculation unit, configured to calculate the depth of the target pixel on the structured image according to the first disparity and the second disparity, wherein the first calibration reference surface corresponds to the upper limit of the measurement distance, and the second calibration reference surface corresponds to the lower limit of the measurement distance.
  • An embodiment of the application provides a three-dimensional measurement system, which includes: a projection device, a camera device, and the image processing device described in any embodiment of the application; the projection device is configured to project a coded image onto a target object by means of structured light, and the camera device is configured to capture the structured image formed when the coded image is projected onto the target object.
  • in the embodiments of the present application, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image are determined; the depth of the target pixel on the structured image is then calculated according to the first disparity and the second disparity, wherein the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit. This makes the depth independent of the intrinsic and extrinsic parameters of the measurement system, avoiding the introduction of additional error as well as the positive correlation between the number of calibration reference planes and the measurement accuracy.
  • FIG. 1 is a schematic diagram of the use of the three-dimensional measurement system in Embodiment 1 of the application;
  • FIG. 2 is a schematic flowchart of the image depth calculation method in Embodiment 2 of the application;
  • FIG. 3 is a schematic diagram of the parallax principle in Embodiment 3 of the application;
  • FIG. 4 is a schematic diagram of the depth calculation principle in Embodiment 4 of the application.
  • in the embodiments of the present application, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image are determined; the depth of the target pixel on the structured image is then calculated according to the first disparity and the second disparity, wherein the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit of the measurement distance.
  • the image depth calculation method makes the depth independent of the intrinsic and extrinsic parameters of the measurement system, avoiding the introduction of additional error as well as the positive correlation between the number of calibration reference planes and the measurement accuracy.
  • FIG. 1 is a schematic diagram of the use of the three-dimensional measurement system in Embodiment 1 of the application; as shown in FIG. 1, it includes: a projection device, a camera device, and an image processing device (not shown in the figure); the projection device is used to project a coded image onto the target object by means of structured light, and the camera device is used to capture the structured image formed when the coded image is projected onto the target object.
  • the image depth calculation device is used to determine, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image, and to calculate the depth of the target pixel accordingly, wherein the first calibration reference surface corresponds to the upper limit of the measurement distance of the three-dimensional measurement system and the second calibration reference surface corresponds to the lower limit.
  • FIG. 2 is a schematic flowchart of the image depth calculation method in Embodiment 2 of the application; as shown in FIG. 2, it includes:
  • S201: the structured light is projected onto the surface of the target object to form a structured image, and, according to the upper and lower limits of the measurement distance, the structured light is projected onto the first calibration reference surface and the second calibration reference surface respectively to form a first calibration image and a second calibration image.
  • step S201 may also include: establishing the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image by determining the center pixel of each stripe on the two calibration images;
  • the formation of the first fitting model is taken as an example for the specific description here, and the first calibration image is then formed according to the first fitting model.
  • the step of forming the first fitting model includes the following steps S211-S291 in detail:
  • the structured light is described taking coded stripes as an example; therefore, in step S211, the wave peaks of the first calibration image are extracted, and the center pixel of each stripe is determined from the peaks.
  • the center pixels are specifically the several pixels at the geometric center of each stripe on the first calibration image.
  • the center pixel of each stripe is extracted; further, to distinguish center pixels from non-center pixels, the center pixel of each stripe is assigned a mask mark, such as 1, and all pixels on the first calibration image other than the center pixels are assigned a mask mark of 0.
  • the so-called mask mark refers to the flag bit of a center pixel;
  • the so-called center pixel refers to a middle pixel on the center line of a stripe.
  • S231: Perform a neighborhood pixel search using the center pixel of the i-th stripe on the first calibration image as a reference;
  • in order to store the positions of the coded stripes stripe by stripe, a neighborhood pixel search is performed.
  • the search proceeds from top to bottom: first find the topmost point marked 1 as the starting point of a stripe, then perform an 8-neighborhood search downward.
  • during the search, the topmost center pixel whose mask mark is 1 is taken as the starting point, a continuous downward search based on the 8-neighborhood is performed, and the found pixels whose mask mark is also 1 are added to a set.
  • the 8-neighborhood specifically refers to the 8 pixels around the reference pixel; the downward search actually only examines the lower 3 pixels of the 8-neighborhood, from which the pixels whose mask mark is 1 are further determined. If a pixel with mask mark 1 is found during the downward search, that pixel is used as the new reference point and the search continues downward to determine other pixels with mask mark 1, and so on, until, in the final search, no pixel with mask mark 1 remains below in the 8-neighborhood.
  • the neighborhood size is not limited to 8 and can also be set flexibly according to the application scenario.
  • S241: when the center pixels found in the neighborhood search are statistically analyzed, all pixels lying on the center line of the same stripe and located in the neighborhood are counted. In step S251 it is then determined whether the number of these pixels exceeds a set threshold; if it does, the coded stripe is a valid stripe, i.e. there is no stripe interruption or spurious stripe caused by image quality or other reasons, so the first fitting model of the corresponding stripe needs to be established; otherwise, step S273B is executed: the mask marks of the found pixels whose mask mark is 1 are cleared, so that the mask marks of these pixels become 0.
  • S251: Determine whether the first fitting model of the i-th stripe in the first calibration image needs to be established; if so, execute step S261; otherwise, execute step S273B.
  • the first fitting model mainly reflects the mathematical relationship between the position of a coded stripe and the positions of the pixels on it, i.e. the position of the corresponding stripe can be known from the coordinates of these pixels. With the top-to-bottom search described above, the fitting takes the upper-left corner of the image as the origin of the coordinate system, the downward vertical direction as the abscissa direction, and the rightward horizontal direction as the ordinate direction; the stripe is then treated as a straight line or curve in this coordinate system and fitted, so as to determine the position of the stripe on the first calibration image and the second calibration image.
  • several pixels can be selected arbitrarily from all the pixels with mask mark 1 obtained by the neighborhood search starting from the stripe's center pixel, and the coordinates of these selected pixels are fitted to obtain the relationship between the position of the corresponding stripe and the pixel coordinates on it.
  • once the first fitting model is obtained, the coordinates of the stripe are known, and the coordinate of a given pixel can be inferred to determine whether that pixel lies on the stripe.
  • if the error between the inferred coordinate of the pixel and its actual coordinate is greater than the set fitting-error threshold, the accuracy of the first fitting model is poor and the fitting needs to be redone.
  • the saved first fitting model is mainly used in the subsequent processing of calculating the depth according to the disparity.
  • S291B: clear the mask marks of the pixels, and jump to step S231.
  • that is, the mask marks of the pixels in the above set are changed from 1 to 0, and the process jumps back to S231 to search again, using the center pixel of the i-th stripe as a reference, for neighborhood pixels whose mask mark is 1, so as to re-establish the first fitting model.
  • the second fitting model is established similarly to steps S211-S291B above, and the details are not repeated here.
  • S202: according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image are determined.
  • FIG. 3 is a schematic diagram of the parallax principle in Embodiment 3 of this application; it takes as an example the parallax corresponding to the target pixel P0 on a certain stripe (say stripe 1) of the structured image.
  • the pixel corresponding to the target pixel P0 on the second calibration image is denoted P1,
  • and the corresponding pixel on the first calibration image is P2.
  • for the target pixel P0 on stripe 1 of the structured image, when it is mapped to the first calibration image and the second calibration image, it can only be determined that it lies on stripe 1; its specific position on stripe 1 requires further analysis.
  • it should be noted that, in other embodiments, the step of establishing the fitting models described above may also be included in step S202.
  • S203: Calculate the depth of the structured light on the structured image according to the first disparity and the second disparity.
  • step S203 may specifically include:
  • S213: Project the first disparity and the second disparity onto the baseline direction to obtain the first projection disparity and the second projection disparity.
  • since the fitting model of each stripe has been established, as long as it is known on which stripe of the first calibration image and the second calibration image the target pixel P0 lies, the ordinate values of the pixels P1 and P2 corresponding to the target pixel P0 on the two calibration images can be determined directly from that stripe's first fitting model and second fitting model.
  • the target pixel P0 and the pixels P1 and P2 are mutually matched points, so in the row direction, i.e. in terms of which pixel row they lie on, the three pixels are consistent.
  • if the coordinates of the target pixel P0 are (x, y0), the coordinates of the pixels P1 and P2 on the first calibration image and the second calibration image, projected onto the baseline direction, are (x, y1) and (x, y2), giving the projection disparities dy0 = y0 - y1 and dy1 = y2 - y0.
  • the fitting models characterize the relationship between a pixel's row coordinate and its column coordinate; therefore, once x is known and the fitting model is known, the ordinate of the pixel on the respective calibration image can be obtained.
  • S223: Calculate the depth of the structured light on the structured image according to the first projection disparity and the second projection disparity.
  • the start distance and the end distance represent the lower limit and the upper limit of the measurement distance, respectively.
  • the corresponding calibration images are actually the second calibration image and the first calibration image, respectively.
  • the spatial points A, C, and F are the projection points of the same point projected by the projector onto planes at different distances;
  • the imaging point of point C on the structured image corresponds to the target pixel P0;
  • the imaging point of point A on the first calibration image corresponds to the pixel P1;
  • the imaging point of point F on the second calibration plane corresponds to the pixel P2;
  • the disparities between the target pixel P0 and the pixels P1 and P2 are the disparity d0 and the disparity d1, respectively; in the pixel coordinate system, the corresponding projection disparities dy0 and dy1 are obtained by projecting onto the direction of the baseline B (which in the pixel coordinate system actually appears as the direction perpendicular to the stripes).
  • the depth Z of the target pixel P0 has nothing to do with the intrinsic and extrinsic parameters of the measurement system, only with the disparities d0 and d1, so the intrinsic and extrinsic parameters of the measurement system do not need to be calibrated; therefore, in this embodiment, the above disparities along the oblique direction are projected onto the direction of the baseline b to form the projection disparities used to calculate the depth.
  • similarly, for the lower limit of the measurement distance, the point Z1 on the first calibration image satisfies the relationship of formula (3).
  • An embodiment of the application provides an image processing device, which includes:
  • a disparity unit, configured to determine, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
  • a depth calculation unit, configured to calculate the depth of the target pixel on the structured image according to the first disparity and the second disparity, wherein the first calibration reference surface corresponds to the upper limit of the measurement distance, and the second calibration reference surface corresponds to the lower limit of the measurement distance.
  • the image processing device may be implemented on an image processing chip or on other chips.
  • a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device; such programming is written in a hardware description language (HDL), of which there are many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), with VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog being the most commonly used at present.
  • the controller can be implemented in any suitable manner.
  • for example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, or embedded microcontrollers.
  • examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic.
  • in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like; such a controller can therefore be regarded as a hardware component, and the means included within it for realizing various functions can also be regarded as structures within the hardware component, or the means for realizing various functions can even be regarded as both software modules implementing the method and structures within the hardware component.
  • the embodiments of the present invention may be provided as methods, systems, or computer program products; therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operating steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM); memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, which can store information by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices; according to the definition herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

An image depth calculation method, an image processing device, and a three-dimensional measurement system. The image depth calculation method includes: according to a structured image, a first calibration image, and a second calibration image formed by projecting structured light onto the surface of a target object, a first calibration reference surface, and a second calibration reference surface respectively, determining a first disparity and a second disparity of a target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively; and calculating the depth of the target pixel on the structured image according to the first disparity and the second disparity, where the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit of the measurement distance. The image depth calculation method makes the depth independent of the intrinsic and extrinsic parameters of the measurement system, avoiding the introduction of additional error as well as the positive correlation between the number of calibration reference planes and the measurement accuracy.

Description

Image depth calculation method, image processing device, and three-dimensional measurement system
Technical Field
The embodiments of the present application relate to the field of data processing technology, and in particular to an image depth calculation method, an image processing device, and a three-dimensional measurement system.
Background Art
In three-dimensional measurement technology, structured-light-based 3D measurement generally uses the triangulation principle. The measurement process based on external-parameter calibration is as follows: first calibrate the external parameters of the 3D system, including the projection device and camera, then compute the disparity map, and finally obtain the depth of each corresponding point from the disparity map. In the measurement process based on reference-plane calibration, multiple reference planes are calibrated within the effective measurement range, and the local uniqueness of the specific structured light is then used to obtain the depth of corresponding points by matching against the reference planes. Both external-parameter calibration and reference-plane calibration have significant limitations. First, the calibration accuracy of the external parameters usually depends on how accurately the corner points are extracted; to improve accuracy, multiple images usually have to be calibrated, and epipolar rectification is typically also required to obtain the disparity map, all of which introduce additional error into the depth calculation. Second, calibrating multiple reference planes is operationally cumbersome, greatly increasing production time and storage cost; since the accuracy is also affected by the number of calibration planes, the approach is difficult to use for higher-precision three-dimensional measurement.
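For reference, the classic triangulation relation that underlies such projector-camera systems (standard stereo geometry, not a formula from this application) relates depth to disparity through the baseline B and focal length f of a rectified pair:

```latex
Z = \frac{f \, B}{d}
```

This dependence on f and B is precisely why conventional pipelines must calibrate the system's intrinsic and extrinsic parameters; the method below is designed to remove that dependence.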
Summary of the Invention
In view of this, one of the technical problems solved by the embodiments of the present invention is to provide an image depth calculation method, an image processing device, and a three-dimensional measurement system, so as to overcome the above defects in the prior art.
An embodiment of the present application provides an image depth calculation method, which includes:
according to a structured image, a first calibration image, and a second calibration image formed by projecting structured light onto the surface of a target object, a first calibration reference surface, and a second calibration reference surface respectively, determining a first disparity and a second disparity of a target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
calculating the depth of the target pixel on the structured image according to the first disparity and the second disparity, where the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit of the measurement distance.
Optionally, in any embodiment of the present application, the method further includes: projecting the first disparity and the second disparity onto the baseline direction to obtain a first projection disparity and a second projection disparity;
correspondingly, calculating the depth of the target pixel on the structured image according to the first disparity and the second disparity includes: calculating the depth of the target pixel on the structured image according to the first projection disparity and the second projection disparity.
Optionally, in any embodiment of the present application, the method further includes: establishing a first fitting model and a second fitting model of the different stripes on the first calibration image and the second calibration image;
determining, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively then includes: determining the first disparity and the second disparity according to the positions of the target pixel on the structured image, the first calibration image, and the second calibration image, together with the first fitting model and the second fitting model.
Optionally, in any embodiment of the present application, the method further includes: determining the center pixel of each stripe on the first calibration image and the second calibration image, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
Optionally, in any embodiment of the present application, the method further includes: determining the mask mark assigned to the center pixel of each stripe on the first calibration image and the second calibration image, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
Optionally, in any embodiment of the present application, the method further includes: performing a neighborhood pixel search using the center pixel of each stripe on the first calibration image and the second calibration image as a reference, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
Optionally, in any embodiment of the present application, the pixels found in the neighborhood search are statistically analyzed to determine whether the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image need to be established.
Optionally, in any embodiment of the present application, the method further includes: determining fitted pixels according to the first fitting model and the second fitting model, and determining a fitting error according to the fitted pixels and the corresponding actual pixels.
Optionally, in any embodiment of the present application, the method further includes: extracting the wave peaks in the first calibration image and the second calibration image to determine the center pixel of each stripe on the first calibration image and the second calibration image.
An embodiment of the present application provides an image processing device, which includes:
a disparity unit, configured to determine, according to a structured image, a first calibration image, and a second calibration image formed by projecting structured light onto the surface of a target object, a first calibration reference surface, and a second calibration reference surface respectively, a first disparity and a second disparity of a target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
a depth calculation unit, configured to calculate the depth of the target pixel on the structured image according to the first disparity and the second disparity, where the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit of the measurement distance.
An embodiment of the present application provides a three-dimensional measurement system, which includes: a projection device, a camera device, and the image processing device described in any embodiment of the present application; the projection device is configured to project a coded image onto a target object by means of structured light, and the camera device is configured to capture the structured image formed when the coded image is projected onto the target object.
In the embodiments of the present application, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image are determined; the depth of the target pixel on the structured image is then calculated according to the first disparity and the second disparity, where the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit of the measurement distance. This makes the depth independent of the intrinsic and extrinsic parameters of the measurement system, avoiding the introduction of additional error as well as the positive correlation between the number of calibration reference planes and the measurement accuracy.
Brief Description of the Drawings
Some specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, in an illustrative rather than restrictive manner. The same reference numerals in the drawings denote the same or similar components or parts. Those skilled in the art should understand that these drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic diagram of the use of the three-dimensional measurement system in Embodiment 1 of the present application;
FIG. 2 is a schematic flowchart of the image depth calculation method in Embodiment 2 of the present application;
FIG. 3 is a schematic diagram of the parallax principle in Embodiment 3 of the present application;
FIG. 4 is a schematic diagram of the depth calculation principle in Embodiment 4 of the present application.
Detailed Description of Embodiments
Implementing any technical solution of the embodiments of the present invention does not necessarily require achieving all of the above advantages at the same time.
The specific implementation of the embodiments of the present invention is further described below with reference to the accompanying drawings of the embodiments.
In the embodiments of the present application, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image are determined; the depth of the target pixel on the structured image is then calculated according to the first disparity and the second disparity, where the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit of the measurement distance. The image depth calculation method makes the depth independent of the intrinsic and extrinsic parameters of the measurement system, avoiding the introduction of additional error as well as the positive correlation between the number of calibration reference planes and the measurement accuracy.
FIG. 1 is a schematic diagram of the use of the three-dimensional measurement system in Embodiment 1 of the present application. As shown in FIG. 1, it includes: a projection device, a camera device, and an image processing device (not shown in the figure). The projection device is configured to project a coded image onto a target object by means of structured light, and the camera device is configured to capture the structured image formed when the coded image is projected onto the target object. The image depth calculation device is configured to determine, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively; and to calculate the depth of the target pixel on the structured image according to the first disparity and the second disparity, where the first calibration reference surface corresponds to the upper limit of the measurement distance of the three-dimensional measurement system and the second calibration reference surface corresponds to the lower limit of the measurement distance of the three-dimensional measurement system.
FIG. 2 is a schematic flowchart of the image depth calculation method in Embodiment 2 of the present application. As shown in FIG. 2, it includes:
S201: Project the structured light onto the surface of the target object to form a structured image, and, according to the upper and lower limits of the measurement distance, project the structured light onto the first calibration reference surface and the second calibration reference surface respectively to form a first calibration image and a second calibration image;
In this embodiment, the structured light is projected onto the first calibration reference surface and the second calibration reference surface respectively to form the first calibration image and the second calibration image; similarly, the structured light is projected onto the surface of the target object to form the structured image.
In addition, in this embodiment, step S201 may further include: establishing the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image by determining the center pixel of each stripe on the two calibration images. Specifically, the formation of the first fitting model is taken as an example for description here, and the first calibration image is then formed according to the first fitting model. Forming the first fitting model includes the following steps S211-S291 in detail:
S211: Determine the center pixel of each stripe on the first calibration image;
In this embodiment, the structured light is described taking coded stripes as an example. Therefore, in step S211 the wave peaks of the first calibration image are extracted, and the center pixel of each stripe is determined from the wave peaks; the center pixels are specifically the several pixels at the geometric center of the stripe on the first calibration image.
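As an illustration of this peak-based center extraction, here is a minimal sketch in Python; the per-row scan, the function name `stripe_center_mask`, the use of SciPy, and the assumption of roughly vertical stripes are illustrative assumptions, not details from the application:

```python
import numpy as np
from scipy.signal import find_peaks

def stripe_center_mask(img: np.ndarray, min_height: float = 0.5) -> np.ndarray:
    """Mark per-row intensity wave peaks as stripe-center pixels.

    img: grayscale calibration image as a float array, stripes roughly vertical.
    Returns a uint8 mask with 1 at peak (center) pixels and 0 elsewhere,
    matching the mask marks assigned in step S221.
    """
    mask = np.zeros(img.shape, dtype=np.uint8)
    for r in range(img.shape[0]):
        peaks, _ = find_peaks(img[r], height=min_height)  # wave peaks of row r
        mask[r, peaks] = 1
    return mask
```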
S221: Assign a mask mark to the center pixel of the i-th stripe on the first calibration image;
In this embodiment, considering that the width of the coded stripes in the structured light may vary at different distances, the center pixel of each stripe is extracted. Further, to distinguish center pixels from non-center pixels, the center pixel of each stripe is assigned a mask mark, such as 1, and all pixels on the first calibration image other than the center pixels are assigned a mask mark of 0. The so-called mask mark is the flag bit of a center pixel, and the so-called center pixel is a middle pixel on the center line of a stripe.
S231: Perform a neighborhood pixel search using the center pixel of the i-th stripe on the first calibration image as a reference;
In this embodiment, in order to store the positions of the coded stripes stripe by stripe, a neighborhood pixel search is performed. As mentioned above, if, for example, there are N stripes in total and they do not intersect one another, every stripe must be searched. The search proceeds from top to bottom: first find the topmost point marked 1 as the starting point of a stripe, then perform an 8-neighborhood search downward. During the search, the topmost center pixel whose mask mark is 1 is taken as the starting point, a continuous downward search based on the 8-neighborhood is performed, and the found pixels whose mask mark is also 1 are added to a set. Here, the 8-neighborhood specifically refers to the 8 pixels around the reference pixel; the downward search actually only examines the lower 3 pixels of the 8-neighborhood, from which the pixels whose mask mark is 1 are further determined. If a pixel whose mask mark is 1 is found during the downward search, that pixel is used as the new reference point and the search continues downward to determine other pixels whose mask mark is 1, and so on, until, in the final search, there is no pixel with mask mark 1 below in the 8-neighborhood.
It should be noted that, in this embodiment, the neighborhood size is not limited to 8 and can also be set flexibly according to the application scenario.
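Continuing the sketch above, the top-to-bottom 8-neighborhood tracing of steps S231-S273B might look as follows; the helper names and the `min_len` validity threshold are illustrative assumptions:

```python
import numpy as np

def trace_stripe(mask: np.ndarray, start: tuple[int, int]) -> list[tuple[int, int]]:
    """Trace one stripe downward from its topmost center pixel.

    Only the lower 3 pixels of the 8-neighborhood are examined, and each
    found mask-1 pixel becomes the new reference point, as described above.
    """
    h, w = mask.shape
    stripe = [start]
    r, c = start
    while True:
        candidates = [(r + 1, c - 1), (r + 1, c), (r + 1, c + 1)]
        nxt = next(((rr, cc) for rr, cc in candidates
                    if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] == 1), None)
        if nxt is None:  # no mask-1 pixel below: the stripe ends here
            break
        stripe.append(nxt)
        r, c = nxt
    return stripe

def find_stripes(mask: np.ndarray, min_len: int = 20) -> list[list[tuple[int, int]]]:
    """Collect all stripes from top to bottom; runs shorter than min_len are
    treated as interrupted/spurious stripes and their marks cleared,
    mirroring steps S241, S251, and S273B."""
    mask = mask.copy()
    stripes = []
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] == 1:
                stripe = trace_stripe(mask, (r, c))
                for rr, cc in stripe:
                    mask[rr, cc] = 0  # consume, so pixels are not re-traced
                if len(stripe) >= min_len:
                    stripes.append(stripe)
    return stripes
```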
S241: Statistically analyze the pixels found in the neighborhood search;
In this embodiment, the pixels whose mask mark is 1 reflect the position of a coded stripe, and the fitting models are established per coded stripe, i.e. each coded stripe corresponds to one first fitting model. Therefore, in step S241, when the center pixels found in the neighborhood search are statistically analyzed, all pixels lying on the center line of the same stripe and located in the neighborhood are counted. In step S251 it is then determined whether the number of these pixels exceeds a set threshold. If it does, the coded stripe is a valid stripe, i.e. there is no stripe interruption or spurious stripe caused by image quality or other reasons, and the first fitting model of the corresponding stripe therefore needs to be established; otherwise, step S273B is executed: the mask marks of the found pixels whose mask mark is 1 are cleared, so that the mask marks of these pixels become 0.
Stripes for which no first fitting model needs to be established can simply be discarded.
S251: Determine whether the first fitting model of the i-th stripe in the first calibration image needs to be established; if so, execute step S261; otherwise, execute step S273B.
S261: Establish the first fitting model of the i-th stripe on the first calibration image;
In this embodiment, the first fitting model mainly reflects the mathematical relationship between the position of a coded stripe and the positions of the pixels on it, i.e. the position of the corresponding stripe can be known from the coordinates of these pixels. With the top-to-bottom search described above, the fitting takes the upper-left corner of the image as the origin of the coordinate system, the downward vertical direction as the abscissa direction, and the rightward horizontal direction as the ordinate direction; the stripe is then treated as a straight line or curve in this coordinate system and fitted, thereby determining the position of the stripe on the first calibration image and the second calibration image.
Further, in a specific application scenario, several pixels can be selected arbitrarily from all the pixels with mask mark 1 obtained by the neighborhood search starting from the stripe's center pixel, and the coordinates of these selected pixels are fitted, thereby obtaining the relationship between the position of the corresponding stripe and the pixel coordinates on it.
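The application leaves the model form open (the stripe is treated as a straight line or curve); as a sketch under that freedom, a low-order polynomial fitted by least squares to an arbitrary subset of the collected center pixels, with the error check of steps S271-S281:

```python
import numpy as np

def fit_stripe_model(stripe: list[tuple[int, int]], degree: int = 2,
                     n_samples: int = 30) -> np.poly1d:
    """Fit the ordinate (column) as a function of the abscissa (row),
    using an arbitrarily selected subset of the stripe's center pixels."""
    pts = np.array(stripe)
    idx = np.random.choice(len(pts), size=min(n_samples, len(pts)), replace=False)
    return np.poly1d(np.polyfit(pts[idx, 0], pts[idx, 1], deg=degree))

def fitting_error(model: np.poly1d, stripe: list[tuple[int, int]]) -> float:
    """Largest deviation between fitted and actual column coordinates; if it
    exceeds the threshold, a new subset is drawn and the model refitted."""
    pts = np.array(stripe)
    return float(np.max(np.abs(model(pts[:, 0]) - pts[:, 1])))
```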
S271: Determine fitted pixels according to the first fitting model, and determine a fitting error according to the fitted pixels and the corresponding actual pixels.
In this embodiment, this serves to verify the correctness of the first fitting model: once the first fitting model is obtained, the coordinates of the stripe are known, so the coordinate of a given pixel can be inferred to determine whether that pixel lies on the stripe. If the error between the inferred coordinate of the pixel and its actual coordinate is greater than the set fitting-error threshold, the accuracy of the first fitting model is poor and the fitting must be redone. When refitting, several pixels are again selected arbitrarily from the above set, and the fitting is performed based on the coordinates of the newly selected pixels, so as to obtain anew the relationship between the coordinates of the corresponding stripe and the pixel coordinates on it, until the resulting fitting error is smaller than the set fitting-error threshold.
S281: Determine whether the fitting error is smaller than the set fitting-error threshold; if so, execute step S291A; otherwise, execute step S291B;
S291A: Save the first fitting model;
In this embodiment, the saved first fitting model is mainly used in the subsequent processing of calculating depth from disparity.
S291B: Clear the mask marks of the pixels, and jump to step S231.
In this embodiment, when the fitting error is greater than the set fitting-error threshold, the mask marks of the pixels in the above set are changed from 1 to 0, and the process jumps back to S231 to search again, using the center pixel of the i-th stripe as a reference, for neighborhood pixels whose mask mark is 1, so as to re-establish the first fitting model.
In this embodiment, the second fitting model is established similarly to steps S211-S291B above, and the details are not repeated here.
S202: According to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, determine the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
Referring to FIG. 3, a schematic diagram of the parallax principle in Embodiment 3 of the present application, the disparity corresponding to a target pixel P0 on a certain stripe (say stripe 1) of the structured image is taken as an example. As shown in FIG. 3, the pixel corresponding to the target pixel P0 on the second calibration image is denoted P1, and the corresponding pixel on the first calibration image is P2. In fact, for the target pixel P0 on stripe 1 of the structured image, when it is mapped to the first calibration image and the second calibration image, it can only be determined that it lies on stripe 1; its specific position on stripe 1 requires further analysis.
Referring to FIG. 3, as the measurement distance goes from near to far, the trajectory of the target pixel P0 actually follows the oblique direction in FIG. 3; it can thus be seen that the disparities of the target pixel P0 relative to the pixels P1 and P2 are shown as d1 and d0 in the figure.
It should be noted that, in other embodiments, the above step of establishing the fitting models may also be included in step S202.
S203: Calculate the depth of the structured light on the structured image according to the first disparity and the second disparity.
In this embodiment, step S203 may specifically include:
S213: Project the first disparity and the second disparity onto the baseline direction to obtain a first projection disparity and a second projection disparity;
Further, referring again to FIG. 3 above, since the exact positions of P1 and P2 cannot be determined, the disparities d0 and d1 along the oblique direction cannot participate directly in the depth calculation; therefore, the above oblique disparities d0 and d1 are projected onto the baseline direction to obtain the projection disparities dy0 and dy1.
Specifically, as described above, since a fitting model has been established for each stripe, once it is known on which stripe of the first calibration image and the second calibration image the target pixel P0 lies, the ordinate values of the pixels P1 and P2 corresponding to the target pixel P0 on the first calibration image and the second calibration image can be determined directly from that stripe's first fitting model and second fitting model. As for the abscissa, the target pixel P0 and the pixels P1 and P2 are mutually matched points, so in the row direction, i.e. in terms of which pixel row they lie on, the three pixels are consistent. If the coordinates of the target pixel P0 are (x, y0), then the coordinates of P1 and P2 on the first calibration image and the second calibration image, projected onto the baseline direction, are (x, y1) and (x, y2) respectively, and the corresponding disparities are dy0 = y0 - y1 and dy1 = y2 - y0, where y1 and y2 actually correspond to the first fitting model and the second fitting model. It can thus be seen that the first fitting model and the second fitting model characterize the relationship between a pixel's row coordinate and its column coordinate; therefore, once x is known and the fitting model is known, the ordinate of the pixel on the respective calibration image can be obtained.
In fact, referring to FIG. 4 below, since d1/d0 = dy1/dy0, dy1 and dy0 can be used in place of d1 and d0; the detailed reason can be seen in the explanation of the formula for the depth Z below.
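Reusing the fitted models from the sketches above, reading off the projection disparities is then a direct lookup (the parameter names and the poly1d model type are assumptions carried over from the earlier sketches; the sign conventions dy0 = y0 - y1 and dy1 = y2 - y0 follow the text):

```python
import numpy as np

def projection_disparities(p0: tuple[float, float], model_first: np.poly1d,
                           model_second: np.poly1d) -> tuple[float, float]:
    """Given target pixel P0 = (x, y0) on the structured image and the fitted
    stripe models of the two calibration images, return (dy0, dy1)."""
    x, y0 = p0
    y1 = model_first(x)   # ordinate of P1 on the first calibration image
    y2 = model_second(x)  # ordinate of P2 on the second calibration image
    return y0 - y1, y2 - y0
```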
S223: Calculate the depth of the structured light on the structured image according to the first projection disparity and the second projection disparity.
As shown in FIG. 4, a schematic diagram of the depth calculation principle in Embodiment 4 of the present application, the start distance and the end distance represent the lower limit and the upper limit of the measurement distance respectively, and the corresponding calibration images are actually the second calibration image and the first calibration image respectively. The spatial points A, C, and F are the projection points of the same point projected by the projector onto planes at different distances. The imaging point of point C on the structured image corresponds to the above target pixel P0, the imaging point of point A on the first calibration image corresponds to the above pixel P1, and the imaging point of point F on the second calibration plane corresponds to the above pixel P2. The disparities of the target pixel P0 relative to the pixel P1 and the pixel P2 are the disparity d0 and the disparity d1 respectively; in the pixel coordinate system, projecting them onto the direction of the baseline B (which in the pixel coordinate system actually appears as the direction perpendicular to the stripes) yields the corresponding projection disparities dy0 and dy1. In fact, according to the relationships between similar triangles, the following formula holds:
[Formula (1), rendered as an image in the original: PCTCN2019077993-appb-000001]
As the above formula shows, the depth Z of the target pixel P0 has nothing to do with the intrinsic and extrinsic parameters of the measurement system and is related only to the disparities d0 and d1, so the intrinsic and extrinsic parameters of the measurement system do not need to be calibrated. This is also why, in this embodiment, the above disparities along the oblique direction are projected onto the direction of the baseline b to form the projection disparities used to calculate the depth.
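A sketch of the final computation, assuming the closed form for Z obtained by solving formula (6) in the derivation below (Z1 and Z2 denote the calibrated distances of the two reference planes; this reconstruction, rather than the image-only formula (1) itself, is the assumption here):

```python
def depth_from_disparities(dy0: float, dy1: float, z1: float, z2: float) -> float:
    """Depth of the target pixel from the two projection disparities.

    Z = Z1*Z2*(dy0 + dy1) / (Z1*dy0 + Z2*dy1): depends only on the two
    disparities and the calibrated plane distances, not on the camera's
    intrinsic or extrinsic parameters.
    """
    return z1 * z2 * (dy0 + dy1) / (z1 * dy0 + z2 * dy1)
```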
Referring to FIG. 4 above, the formula for the depth Z is derived as follows. According to the principle of similar triangles, the point C on the structured image satisfies the relationship of formula (2) below:
[Formula (2), rendered as an image in the original: PCTCN2019077993-appb-000002]
Similarly, for the lower limit of the measurement distance, the point Z1 on the first calibration image satisfies the relationship of formula (3) below:
[Formula (3), rendered as an image in the original: PCTCN2019077993-appb-000003]
At the same time, the relationship of formula (4) below holds:
[Formula (4), rendered as an image in the original: PCTCN2019077993-appb-000004]
From the above formulas (2) and (3), formula (5) can be obtained:
[Formula (5), rendered as an image in the original: PCTCN2019077993-appb-000005]
Referring again to the above formulas (4) and (5), it can be seen that:
(Z2 - Z1)·d1·Z = Z1·(d0 + d1)·(Z2 - Z)    (6)
Solving the above formula (6) yields the above formula (1).
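Formula (1) is rendered only as an image in this text. Solving formula (6) for Z gives the following closed form, offered as a reconstruction consistent with formula (6) rather than a quotation of the original:

```latex
(Z_2 - Z_1)\,d_1\,Z = Z_1\,(d_0 + d_1)\,(Z_2 - Z)
\;\Longrightarrow\;
Z = \frac{Z_1\,Z_2\,(d_0 + d_1)}{Z_1\,d_0 + Z_2\,d_1}
```

As a quick check, d0 = 0 gives Z = Z1 and d1 = 0 gives Z = Z2; for example, with Z1 = 300 mm, Z2 = 1200 mm, d0 = 2 and d1 = 6 (pixels), the formula gives Z = 300*1200*8 / (300*2 + 1200*6) ≈ 369 mm, between the two calibration planes as expected.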
An embodiment of the present application provides an image processing device, which includes:
a disparity unit, configured to determine, according to the structured image, the first calibration image, and the second calibration image formed by projecting the structured light onto the surface of the target object, the first calibration reference surface, and the second calibration reference surface respectively, the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
a depth calculation unit, configured to calculate the depth of the target pixel on the structured image according to the first disparity and the second disparity, where the first calibration reference surface corresponds to the upper limit of the measurement distance and the second calibration reference surface corresponds to the lower limit of the measurement distance.
It should be noted here that the above image processing device may be implemented on an image processing chip or on other chips.
Specific embodiments of the present subject matter have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). However, with the development of technology, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with hardware entity modules. For example, a programmable logic device (PLD) (such as a field programmable gate array (FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. Designers program a digital system "integrated" onto a piece of PLD by themselves, without needing a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller can be implemented in any suitable manner. For example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for realizing various functions can also be regarded as structures within the hardware component; or the means for realizing various functions can even be regarded as both software modules implementing the method and structures within the hardware component.
For convenience of description, the above device is described in terms of functions divided into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operating steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, which can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The above are merely embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (11)

  1. An image depth calculation method, characterized by comprising:
    according to a structured image, a first calibration image, and a second calibration image formed by projecting structured light onto the surface of a target object, a first calibration reference surface, and a second calibration reference surface respectively, determining a first disparity and a second disparity of a target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
    calculating the depth of the target pixel on the structured image according to the first disparity and the second disparity, wherein the first calibration reference surface corresponds to an upper limit of a measurement distance, and the second calibration reference surface corresponds to a lower limit of the measurement distance.
  2. The method according to claim 1, characterized by further comprising: projecting the first disparity and the second disparity onto a baseline direction to obtain a first projection disparity and a second projection disparity;
    correspondingly, calculating the depth of the target pixel on the structured image according to the first disparity and the second disparity comprises: calculating the depth of the target pixel on the structured image according to the first projection disparity and the second projection disparity.
  3. The method according to claim 1, characterized by further comprising: establishing a first fitting model and a second fitting model of different stripes on the first calibration image and the second calibration image;
    wherein determining the first disparity and the second disparity of the target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively comprises: determining the first disparity and the second disparity according to the positions of the target pixel on the structured image, the first calibration image, and the second calibration image, together with the first fitting model and the second fitting model.
  4. The method according to claim 3, characterized by further comprising: determining a center pixel of each stripe on the first calibration image and the second calibration image, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
  5. The method according to claim 4, characterized by further comprising: determining a mask mark assigned to the center pixel of each stripe on the first calibration image and the second calibration image, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
  6. The method according to claim 5, characterized by further comprising: performing a neighborhood pixel search using the center pixel of each stripe on the first calibration image and the second calibration image as a reference, so as to establish the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image.
  7. The method according to claim 6, characterized in that the pixels found in the neighborhood search are statistically analyzed to determine whether the first fitting model and the second fitting model of the different stripes on the first calibration image and the second calibration image need to be established.
  8. The method according to claim 4, characterized by further comprising: determining fitted pixels according to the first fitting model and the second fitting model, and determining a fitting error according to the fitted pixels and the corresponding actual pixels.
  9. The method according to any one of claims 4-8, characterized by further comprising: extracting wave peaks in the first calibration image and the second calibration image to determine the center pixel of each stripe on the first calibration image and the second calibration image.
  10. An image processing device, characterized by comprising:
    a disparity unit, configured to determine, according to a structured image, a first calibration image, and a second calibration image formed by projecting structured light onto the surface of a target object, a first calibration reference surface, and a second calibration reference surface respectively, a first disparity and a second disparity of a target pixel of the structured light on the structured image relative to the target pixel of the structured light on the first calibration image and the second calibration image respectively;
    a depth calculation unit, configured to calculate the depth of the target pixel on the structured image according to the first disparity and the second disparity, wherein the first calibration reference surface corresponds to an upper limit of a measurement distance, and the second calibration reference surface corresponds to a lower limit of the measurement distance.
  11. A three-dimensional measurement system, characterized by comprising: a projection device, a camera device, and the image processing device according to claim 10, wherein the projection device is configured to project a coded image onto a target object by means of structured light, and the camera device is configured to capture a structured image formed when the coded image is projected onto the target object.
PCT/CN2019/077993 2019-03-13 2019-03-13 Image depth calculation method, image processing device, and three-dimensional measurement system WO2020181524A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980000341.XA CN110088563B (zh) 2019-03-13 2019-03-13 Image depth calculation method, image processing device, and three-dimensional measurement system
PCT/CN2019/077993 WO2020181524A1 (zh) 2019-03-13 2019-03-13 Image depth calculation method, image processing device, and three-dimensional measurement system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/077993 WO2020181524A1 (zh) 2019-03-13 2019-03-13 Image depth calculation method, image processing device, and three-dimensional measurement system

Publications (1)

Publication Number Publication Date
WO2020181524A1 true WO2020181524A1 (zh) 2020-09-17

Family

ID=67424510

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077993 WO2020181524A1 (zh) 2019-03-13 2019-03-13 Image depth calculation method, image processing device, and three-dimensional measurement system

Country Status (2)

Country Link
CN (1) CN110088563B (zh)
WO (1) WO2020181524A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112752088B (zh) * 2020-07-28 2023-03-28 Tencent Technology (Shenzhen) Co., Ltd. Depth image generation method and apparatus, reference image generation method, and electronic device
CN112085752B (zh) * 2020-08-20 2024-01-30 浙江华睿科技股份有限公司 Image processing method, apparatus, device, and medium
CN113099120B (zh) * 2021-04-13 2023-04-18 南昌虚拟现实研究院股份有限公司 Depth information acquisition method and apparatus, readable storage medium, and depth camera
CN113592706B (zh) * 2021-07-28 2023-10-17 北京地平线信息技术有限公司 Method and apparatus for adjusting homography matrix parameters
CN114160961B (zh) * 2021-12-14 2023-10-13 深圳快造科技有限公司 System and method for calibrating laser processing parameters

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1888815A (zh) * 2006-07-13 2007-01-03 Shanghai Jiao Tong University Multi-point fitting calibration method for the spatial position and shape of projected structured light
US20130093854A1 (en) * 2011-10-17 2013-04-18 Canon Kabushiki Kaisha Three dimensional shape measurement apparatus, control method therefor, and storage medium
CN104408732A (zh) * 2014-12-10 2015-03-11 Northeastern University Large-field-of-view depth measurement system and method based on omnidirectional structured light
CN106875435A (zh) * 2016-12-14 2017-06-20 深圳奥比中光科技有限公司 Method and system for obtaining a depth image
CN109461181A (zh) * 2018-10-17 2019-03-12 北京华捷艾米科技有限公司 Depth image acquisition method and system based on speckle structured light

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9514522B2 (en) * 2012-08-24 2016-12-06 Microsoft Technology Licensing, Llc Depth data processing and compression
KR102081778B1 (ko) * 2014-02-19 2020-02-26 LG Electronics Inc. Apparatus and method for calculating the three-dimensional shape of an object
CN109405765B (zh) * 2018-10-23 2020-11-20 北京的卢深视科技有限公司 High-precision depth calculation method and system based on speckle structured light

Also Published As

Publication number Publication date
CN110088563A (zh) 2019-08-02
CN110088563B (zh) 2021-03-19

Similar Documents

Publication Publication Date Title
WO2020181524A1 (zh) Image depth calculation method, image processing device, and three-dimensional measurement system
TWI607412B (zh) Multi-dimensional dimension measuring system and method thereof
KR102318023B1 (ko) Three-dimensional model generation using edges
US9117267B2 (en) Systems and methods for marking images for three-dimensional image generation
CN109752003B (zh) Robot vision-inertial point-line feature localization method and apparatus
JP4964801B2 (ja) Method and apparatus for generating a three-dimensional model from two-dimensional live-action video
WO2020048152A1 (zh) Method and system for extracting underground garage parking spaces in high-precision map production
CN109461181A (zh) Depth image acquisition method and system based on speckle structured light
US9996966B2 (en) Ray tracing method and apparatus
US9360307B2 (en) Structured-light based measuring method and system
WO2022052582A1 (zh) Image registration method and apparatus, electronic device, and storage medium
JP2019145085A (ja) Method, apparatus, and computer-readable medium for adjusting a point cloud data collection trajectory
CN111123242B (zh) Joint calibration method based on lidar and camera, and computer-readable storage medium
WO2020125637A1 (zh) Stereo matching method and apparatus, and electronic device
CN109903346A (zh) Camera pose detection method, apparatus, device, and storage medium
JP2011242183A (ja) Image processing apparatus, image processing method, and program
JP6453908B2 (ja) Feature point matching method for a planar array of a four-camera set, and measurement method based thereon
US8963920B2 (en) Image processing apparatus and method
KR102166426B1 (ko) Rendering system and rendering method thereof
US10049488B2 (en) Apparatus and method of traversing acceleration structure in ray tracing system
CN113074634B (zh) Fast phase matching method, storage medium, and three-dimensional measurement system
US10115224B2 (en) Method and apparatus generating acceleration structure
CN110148086B (zh) Depth completion method and apparatus for a sparse depth map, and three-dimensional reconstruction method and apparatus
CN117109430A (zh) Computation method and apparatus based on point cloud data
CN103559710A (zh) Calibration method for a three-dimensional reconstruction system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19918729

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19918729

Country of ref document: EP

Kind code of ref document: A1