WO2018032841A1 - Method for rendering a three-dimensional image, and device and system therefor - Google Patents

Method for rendering a three-dimensional image, and device and system therefor

Info

Publication number
WO2018032841A1
WO2018032841A1 (PCT/CN2017/085147)
Authority
WO
WIPO (PCT)
Prior art keywords
image
viewpoint
color
invisible
color image
Prior art date
Application number
PCT/CN2017/085147
Other languages
English (en)
French (fr)
Inventor
黄源浩
肖振中
刘龙
许星
Original Assignee
深圳奥比中光科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳奥比中光科技有限公司
Publication of WO2018032841A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 2013/0074: Stereoscopic image analysis
    • H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals

Definitions

  • The present invention relates to the field of three-dimensional display technology, and in particular to a method for rendering a three-dimensional image, and to a device and system therefor.
  • Three-dimensional display technology produces a stereoscopic effect by having two simultaneously captured binocular images received by the corresponding eyes. Because this technology brings a new stereoscopic viewing experience, demand for 3D image resources has grown in recent years.
  • One current method of obtaining a three-dimensional image is to convert a two-dimensional image with image-processing techniques: the scene depth information of an existing two-dimensional image is computed, virtual images at other viewpoints are then rendered, and the existing two-dimensional image and the virtual viewpoint images together form the three-dimensional image. Because that depth information is itself the product of computation, image detail is lost along the way, degrading the three-dimensional display.
  • The technical problem mainly solved by the present invention is to provide a method for rendering a three-dimensional image, and a device and system therefor, capable of improving the three-dimensional display effect.
  • One technical solution adopted by the present invention is a method for rendering a three-dimensional image, comprising: acquiring an invisible-light image of a target captured from a first viewpoint and a first color image of the target captured from a second viewpoint; calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shifting the pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and forming a three-dimensional image from the first color image and the second color image.
  • The invisible-light image is obtained by a projection module projecting a structured-light pattern onto the target and an invisible-light image collector placed at the first viewpoint capturing the target; the first color image is obtained by a color camera placed at the second viewpoint capturing the target.
  • Calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image comprises: computing, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image; and computing the parallax between the first viewpoint and the second viewpoint from that displacement, the displacement being linearly related to the parallax.
  • Computing the parallax between the first viewpoint and the second viewpoint from the displacement comprises calculating the parallax d with Equation 1 below:

    d = (B2 / B1) · Δu + (B2 · f) / Z0        (Equation 1)

    where B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth of the plane of the reference structured-light image relative to the invisible-light image collector; f is the focal length of the invisible-light image collector and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image.
  • Shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint comprises: establishing, from the parallax d, the correspondence Iir(uir, vir) = Ir(ur + d, vr) between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image; setting the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the corresponding second pixel coordinate of the first color image, thereby forming the second color image of the target at the first viewpoint; and smoothing and denoising the second color image.
  • The method further includes: computing a depth image at the first viewpoint from the invisible-light image; and computing, using three-dimensional image-warping theory, a third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image.
  • Forming the three-dimensional image from the first color image and the second color image then comprises: averaging, or weighted-averaging, the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image at the first viewpoint; and forming the three-dimensional image from the first color image and the fourth color image.
  • The positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera, the invisible-light image collector, and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera.
  • The color camera and the invisible-light image collector have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
  • To solve the above technical problem, the present invention adopts another technical solution: an image processing device comprising an input interface, a processor, and a memory. The input interface obtains the images captured by an invisible-light image collector and a color camera; the memory stores a computer program; and the processor executes the computer program to: acquire, through the input interface, an invisible-light image of a target captured by the invisible-light image collector at a first viewpoint and a first color image of the target captured by the color camera at a second viewpoint; calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shift the pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • The present invention adopts a further technical solution: a three-dimensional image rendering system comprising a projection module, an invisible-light image collector, a color camera, and an image processing device connected to the invisible-light image collector and the color camera. The image processing device is configured to: acquire an invisible-light image of a target captured by the invisible-light image collector at a first viewpoint and a first color image of the target captured by a color camera at a second viewpoint; calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shift the pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • The present invention derives the parallax between the first and second viewpoints from the captured invisible-light image of the first viewpoint, obtains the second color image at the first viewpoint from the first color image of the second viewpoint and that parallax, and then forms a three-dimensional image from the first and second color images. Because the parallax is obtained directly from captured image data rather than from intermediate image processing, less image detail information is lost and the color images of the two viewpoints are recovered more accurately, which reduces distortion of the synthesized three-dimensional image and improves the three-dimensional display generated from a two-dimensional image.
  • Moreover, unlike existing depth-image-based rendering, this solution needs no computed depth information, avoiding the errors introduced by repeated calculation and further improving the three-dimensional display.
  • FIG. 1 is a flowchart of an embodiment of the method for rendering a three-dimensional image of the present invention.
  • FIG. 2 is a schematic diagram of an application scenario of the method for rendering a three-dimensional image of the present invention.
  • FIG. 3 is a partial flowchart of another embodiment of the method for rendering a three-dimensional image of the present invention.
  • FIG. 4 is a partial flowchart of still another embodiment of the method for rendering a three-dimensional image of the present invention.
  • FIG. 5 is a flowchart of yet another embodiment of the method for rendering a three-dimensional image of the present invention.
  • FIG. 6 is a schematic structural diagram of an embodiment of the three-dimensional image rendering apparatus of the present invention.
  • FIG. 7 is a schematic structural diagram of an embodiment of the three-dimensional image rendering system of the present invention.
  • FIG. 8 is a schematic structural diagram of another embodiment of the three-dimensional image rendering system of the present invention.
  • Referring to FIG. 1, FIG. 1 is a flowchart of an embodiment of the method for rendering a three-dimensional image of the present invention. In this embodiment, the method may be performed by a three-dimensional image rendering apparatus and includes the following steps:
  • S11: Acquire an invisible-light image of the target captured from the first viewpoint and a first color image of the target captured from the second viewpoint.
  • It should be noted that the invisible-light image and the color image in the present invention are both two-dimensional images. The invisible-light image is formed by recording the intensity of the invisible light on the target.
  • The first viewpoint and the second viewpoint are located at different positions relative to the target, so that images of the target are obtained at two viewpoints. Since the three-dimensional percept is formed by superimposing the different images seen by the two eyes, the first and second viewpoints serve as the two viewpoints of a human observer's eyes; that is, the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body. For example, if the typical interocular distance is t, the distance between the first viewpoint and the second viewpoint is set to t, specifically e.g. 6.5 cm. Moreover, to keep the image depths at the two viewpoints the same or similar, the first and second viewpoints are placed at the same distance from the target, or at distances differing by no more than a set threshold; in a specific application, the threshold may be set to a value of no more than 10 cm or 20 cm.
  • In a specific application, as shown in FIG. 2, the invisible-light image is obtained by the projection module 25 projecting a structured-light pattern onto the target 23 and the invisible-light image collector 21 placed at the first viewpoint capturing the target 23; the first color image is captured by the color camera 22 placed at the second viewpoint.
  • The invisible-light image collector 21 and the color camera transmit their captured images to the three-dimensional image rendering apparatus 24 for the three-dimensional image acquisition described below. Because the color camera and the invisible-light image collector are at different positions, the same pixel coordinates in the first color image and in the invisible-light image do not correspond to the same spatial three-dimensional point.
  • In FIG. 2, the color camera 22, the invisible-light image collector 21, and the projection module 25 lie on the same straight line, so that all three are at the same depth from the target. Of course, FIG. 2 is only one embodiment; in other applications the three need not be on the same line.
  • Specifically, the projection module 25 generally consists of a laser and a diffractive optical element.
  • The laser may be an edge-emitting laser or a vertical-cavity surface-emitting laser, and emits invisible light that the invisible-light image collector can capture.
  • The diffractive optical element may be configured for collimation, beam splitting, diffusion, and other functions according to the structured-light pattern required.
  • The structured-light pattern may be an irregularly distributed speckle pattern; the energy level at the speckle centers must meet the requirement of harmlessness to the human body, so the laser power and the configuration of the diffractive optical element must be considered together.
  • The density of the speckle pattern affects the speed and accuracy of the depth-value calculation: the more speckle grains, the slower the computation but the higher the accuracy. The projection module 25 may therefore choose a suitable speckle-grain density according to the approximate depth of the target region being imaged, maintaining computation speed while retaining high accuracy. The speckle-grain density may also be determined by the three-dimensional image rendering apparatus 24 according to its own computational requirements, with the determined density information sent to the projection module 25.
  • The projection module 25 projects the speckle pattern onto the target region, typically but not necessarily at a certain diffusion angle.
  • After the projection module 25 projects the structured-light pattern onto the target, the invisible-light image collector 21 captures the invisible-light image of the target. The invisible light may be any invisible light: for example, the collector 21 may be an infrared collector, such as an infrared camera, the invisible-light image then being an infrared image; or it may be an ultraviolet collector, such as an ultraviolet camera, the invisible-light image then being an ultraviolet image.
  • To achieve good acquisition and avoid redundant subsequent computation, the color camera and the invisible-light image collector may be set to capture synchronously with the same number of frames, so that the resulting color images and invisible-light images correspond one to one, simplifying later calculations.
  • S12: Calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image.
  • For example, a matching algorithm from digital image processing, such as the digital image correlation (DIC) algorithm, yields the parallax between the image at the first viewpoint and the image at the second viewpoint, i.e., the relative positional relationship between the pixel coordinates of the two images.
  • S13: Shift the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint. For example, each pixel coordinate of the first color image is shifted by the image disparity value d for that pixel, the pixel value (also called the RGB value) at the shifted coordinate (u1 + d, v1) being the pixel value at coordinate (u1, v1) of the first color image.
  • S14: Form a three-dimensional image from the first color image and the second color image. For example, the first and second color images serve as the two binocular images for synthesizing a three-dimensional image, which may specifically be in top-bottom, side-by-side, or red-blue format for 3D display. Further, after synthesis the three-dimensional image may be displayed, or output to a connected external display device for display.
  • In this embodiment, the parallax between the first and second viewpoints is obtained from the captured invisible-light image of the first viewpoint, the second color image at the first viewpoint is obtained from the first color image of the second viewpoint and that parallax, and a three-dimensional image is formed from the first and second color images. Because the parallax is obtained from captured image data without intermediate image processing, less image detail information is lost and the color images of the two viewpoints are obtained more accurately, which reduces distortion of the synthesized three-dimensional image and improves the three-dimensional display generated from a two-dimensional image. Moreover, compared with existing depth-image-based rendering (DIBR), this embodiment needs no computed depth information, avoiding the errors introduced by repeated calculation and further improving the three-dimensional display.
  • Referring to FIG. 3, in another embodiment the invisible-light image is likewise obtained by the projection module projecting a structured-light pattern onto the target and the invisible-light image collector placed at the first viewpoint capturing the target, and the first color image is obtained by the color camera placed at the second viewpoint capturing the target. This embodiment differs from the one above in that the foregoing S12 includes the following sub-steps:
  • S121: Using a matching algorithm from digital image processing, compute the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image.
  • The matching algorithm may be, for example, a digital image correlation algorithm. The reference structured-light image is obtained in advance by the already-installed projection module projecting the reference structured-light pattern onto a plane at a set distance and the already-installed invisible-light image collector capturing that pattern on the plane; "already installed" means that, once set up, the image collector and projection module are not moved during the subsequent capture of the invisible-light image.
  • For example, a digital image correlation algorithm yields the displacement value Δu of each corresponding pixel between the invisible-light image and the reference structured-light pattern, e.g., a reference speckle image. Current digital image correlation algorithms reach sub-pixel measurement accuracy, such as 1/8 pixel; that is, Δu takes values in multiples of 1/8, in units of pixels.
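  • The patent names digital image correlation but does not reproduce it; as a rough stand-in, the sketch below estimates the per-pixel horizontal displacement Δu by integer block matching with zero-mean normalized cross-correlation. It conveys the idea only: real DIC implementations add sub-pixel refinement (hence the 1/8-pixel accuracy mentioned above) and run far faster than this brute-force loop.

```python
import numpy as np

def block_match_du(ir_img: np.ndarray, ref_img: np.ndarray,
                   win: int = 9, search: int = 32) -> np.ndarray:
    """Integer-pixel horizontal displacement of ir_img relative to ref_img."""
    h, w = ir_img.shape
    r = win // 2
    du = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r + search, w - r - search):
            patch = ir_img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
            patch -= patch.mean()
            best_ncc, best_s = -np.inf, 0
            for s in range(-search, search + 1):  # slide along the epipolar line
                cand = ref_img[y - r:y + r + 1, x + s - r:x + s + r + 1].astype(np.float64)
                cand -= cand.mean()
                denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum()) + 1e-12
                ncc = float((patch * cand).sum() / denom)
                if ncc > best_ncc:
                    best_ncc, best_s = ncc, s
            du[y, x] = best_s
    return du
```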
  • S122: Compute the parallax between the first viewpoint and the second viewpoint from the displacement. The displacement between each pixel of the invisible-light image and the reference structured-light image is linearly related to the parallax, so the parallax between the first and second viewpoints can be computed from the displacement and this linear relationship.
  • For example, the parallax d between the first viewpoint and the second viewpoint is calculated with Equation 11 below:

    d = (B2 / B1) · Δu + (B2 · f) / Z0        (Equation 11)

    where B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length of the invisible-light image collector and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image.
  • The plane of the reference structured-light image is the plane onto which the reference structured-light pattern was projected, and Z0 denotes the distance from that plane to the image collector, obtainable from the distance information recorded when the reference structured-light image was captured. In this embodiment f is in pixels, and its value may be obtained in advance by calibration. When the computed parallax d is not an integer, it may be rounded or truncated to an integer.
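  • Equation 11 itself reduces to one line of array arithmetic. A minimal sketch, assuming Δu has already been computed per pixel; the names mirror B1, B2, Z0, and f above, and the rounding follows the remark on non-integer d:

```python
import numpy as np

def disparity_from_displacement(du: np.ndarray, b1: float, b2: float,
                                z0: float, f: float) -> np.ndarray:
    """Equation 11: d = (B2/B1)*Δu + B2*f/Z0.

    du and f are in pixels; b1, b2, z0 share one metric unit.
    """
    d = (b2 / b1) * du + (b2 * f) / z0
    return np.rint(d).astype(np.int32)  # round non-integer parallax values
```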
  • Referring to FIG. 4, a further embodiment differs from the above in that S13 includes the following sub-steps:
  • S131: Establish, from the parallax, the correspondence between the first pixel coordinates of the invisible-light image and the second pixel coordinates of the first color image. For example, from the parallax d, the correspondence is established as Iir(uir, vir) = Ir(ur + d, vr).
  • S132: Set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the corresponding second pixel coordinate of the first color image, forming the second color image of the target at the first viewpoint. For example, the pixel values (also called RGB values) of the first color image are assigned to the invisible-light image according to the correspondence to generate the second color image. Taking one pixel coordinate as an example, if d is 1, pixel coordinate (1, 1) of the invisible-light image corresponds to pixel coordinate (2, 1) of the first color image, so the pixel value at (1, 1) of the invisible-light image is set to the pixel value (r, g, b) at (2, 1) of the first color image.
  • S133: Smooth and denoise the second color image.
  • Because the displacement data Δu often contains bad points, holes and similar defects appear in the resulting color image, and further processing in later steps would amplify them and seriously degrade the three-dimensional display. To avoid the influence of such bad points or bad regions on the three-dimensional display, this sub-step denoises and smooths the obtained second color image. Of course, in other embodiments step S13 may include only sub-steps S131 and S132.
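  • Sub-steps S131 and S132 amount to a horizontal warp, and S133 to light filtering. Below is a minimal sketch assuming equal image sizes and an integer per-pixel disparity map d; the median filter is our choice of smoother, since the patent does not prescribe one.

```python
import numpy as np
from scipy.ndimage import median_filter  # assumption: SciPy available for smoothing

def render_second_color_image(first_color: np.ndarray, d: np.ndarray) -> np.ndarray:
    """S131/S132: set I_ir(u_ir, v_ir) = I_r(u_ir + d, v_ir) per pixel."""
    h, w, _ = first_color.shape
    vs, us = np.indices((h, w))        # v = row index, u = column index
    src_u = np.clip(us + d, 0, w - 1)  # u_r = u_ir + d, clamped at the borders
    second = first_color[vs, src_u]
    # S133: smooth/denoise to suppress holes left by bad displacement values
    return median_filter(second, size=(3, 3, 1))
```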
  • Referring to FIG. 5, in yet another embodiment the following steps follow S11:
  • S15: Compute the depth image at the first viewpoint from the invisible-light image. For example, the depth image at the first viewpoint is computed from the infrared image; the specific computation may use existing algorithms for this purpose.
  • S16: Using three-dimensional image-warping theory, compute a third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image.
  • According to 3D image warping theory, any three-dimensional coordinate point in space and the corresponding two-dimensional coordinate point on an image-acquisition plane are related by a perspective transformation. By this theory the pixel coordinates of the images at the first and second viewpoints can be placed in correspondence, and, according to this correspondence and the pixel values of the first color image at the second viewpoint, each image pixel coordinate at the first viewpoint is assigned the pixel value of the corresponding pixel coordinate in that first color image.
  • For example, S16 includes the following sub-steps:
  • a: Use Equation 12 below to obtain the correspondence between the first pixel coordinates (uD, vD) of the depth image at the first viewpoint and the second pixel coordinates (uR, vR) of the first color image:

    ZR · [uR, vR, 1]^T = Mg · (R · ZD · MD^-1 · [uD, vD, 1]^T + T)        (Equation 12)

    where ZD is the depth information in the first depth image, i.e. the depth of the target from the depth camera; ZR is the depth of the target from the color camera; [uR, vR, 1]^T are the homogeneous pixel coordinates in the image coordinate system of the color camera; [uD, vD, 1]^T are the homogeneous pixel coordinates in the image coordinate system of the depth camera; Mg is the intrinsic matrix of the color camera and MD is the intrinsic matrix of the depth camera; R is the rotation matrix and T the translation matrix in the extrinsic matrix of the depth camera relative to the color camera.
  • The intrinsic and extrinsic matrices of the camera and collector may be preset: the intrinsic matrices can be computed from the configuration parameters of the camera and collector, and the extrinsic matrix is determined by the positional relationship between the invisible-light image collector and the color camera.
  • In a specific embodiment, the intrinsic matrix is formed from the pixel focal length of the image-acquisition lens of the camera or collector and the coordinates of the center of the image-acquisition target surface. Because the positional relationship between the first and second viewpoints is set to that of the two human eyes, between which there is no relative rotation but only a separation of the set value t, the rotation matrix R of the color camera relative to the invisible-light image collector is the identity matrix, and the translation matrix is T = [t, 0, 0]^T.
  • Further, the set value t may be adjusted according to the distance between the invisible-light image collector and color camera and the target. In a further embodiment, the following steps precede S11: obtain the distance between the target and the invisible-light image collector and the color camera; when both distances are greater than a first distance value, increase the set value t; when both distances are smaller than a second distance value, decrease the set value t. The first distance value is greater than or equal to the second distance value.
  • For example, when the distance between the target and the invisible-light image collector is 100 cm and the distance between the target and the color camera is also 100 cm, then, since 100 cm is smaller than the second distance value of 200 cm, the set value is decreased by one step value, or the decrease is computed from the current distances between the target and the invisible-light image collector and the color camera and then applied. When the distance between the target and the invisible-light image collector and the color camera is 300 cm, since 300 cm is greater than the second distance value of 200 cm and smaller than the first distance value of 500 cm, the set value is not adjusted.
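  • The adjustment of t is a simple threshold rule; a sketch with the example thresholds from the text (200 cm and 500 cm) and an assumed step size:

```python
def adjust_baseline(t_cm: float, dist_collector_cm: float, dist_camera_cm: float,
                    first_dist_cm: float = 500.0, second_dist_cm: float = 200.0,
                    step_cm: float = 0.5) -> float:  # step size is illustrative
    """Increase t when the target is far from both sensors, decrease it when near."""
    if dist_collector_cm > first_dist_cm and dist_camera_cm > first_dist_cm:
        return t_cm + step_cm
    if dist_collector_cm < second_dist_cm and dist_camera_cm < second_dist_cm:
        return t_cm - step_cm
    return t_cm  # between the thresholds: leave t unchanged
```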
  • b: Set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the corresponding second pixel coordinate of the first color image, forming the third color image of the target at the first viewpoint.
  • For example, substituting the depth information ZD of the invisible-light image at the first viewpoint into Equation 12 yields, on the left-hand side, the depth information of the second viewpoint, i.e. the depth information ZR of the first color image, together with the homogeneous pixel coordinates in the image coordinate system of the first color image. In this embodiment the invisible-light image collector and the color camera are at the same distance from the target, so the resulting ZR and ZD are equal. From the homogeneous pixel coordinates, the second pixel coordinates (uR, vR) of the first color image corresponding one-to-one to the first pixel coordinates (uD, vD) of the invisible-light image are obtained, for example as (uR, vR) = (uD + d, vD); the pixel values of the first color image are then assigned to the invisible-light image according to this correspondence to generate the third color image.
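  • Sub-steps a and b can be sketched as a forward warp. The snippet below assumes calibrated 3x3 intrinsics Mg and MD, R equal to the identity, and T = [t, 0, 0]^T as above; it omits the occlusion and hole handling a full implementation would need.

```python
import numpy as np

def warp_third_color_image(depth: np.ndarray, first_color: np.ndarray,
                           m_d: np.ndarray, m_g: np.ndarray,
                           t_vec: np.ndarray) -> np.ndarray:
    """Equation 12 with R = I: Z_R * p_R = M_g * (Z_D * M_D^-1 * p_D + T)."""
    h, w = depth.shape
    vs, us = np.indices((h, w))
    p_d = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])  # 3 x N homogeneous
    pts = (np.linalg.inv(m_d) @ p_d) * depth.ravel()          # back-project to 3D
    proj = m_g @ (pts + t_vec.reshape(3, 1))                  # into the color camera
    u_r = np.rint(proj[0] / proj[2]).astype(int)              # divide out Z_R
    v_r = np.rint(proj[1] / proj[2]).astype(int)
    third = np.zeros_like(first_color)
    ok = (u_r >= 0) & (u_r < w) & (v_r >= 0) & (v_r < h) & (proj[2] > 0)
    third[vs.ravel()[ok], us.ravel()[ok]] = first_color[v_r[ok], u_r[ok]]
    return third
```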
  • In this embodiment, the foregoing S14 includes the following steps:
  • S141: Average, or weighted-average, the pixel values of corresponding pixels in the second color image and the third color image to obtain the fourth color image at the first viewpoint. Taking one pixel coordinate as an example, if the pixel values at coordinate (Ur, Vr) in the second and third color images are (r1, g1, b1) and (r2, g2, b2) respectively, the pixel value at (Ur, Vr) in the fourth color image at the first viewpoint is set to ((r1 + r2)/2, (g1 + g2)/2, (b1 + b2)/2).
  • S142: Form a three-dimensional image from the first color image and the fourth color image. For example, the first color image and the fourth color image serve as the two binocular images for synthesizing the three-dimensional image.
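  • The fusion in S141 is an elementwise average; a one-function sketch with an optional weight (w = 0.5 reproduces the plain average):

```python
import numpy as np

def fuse_views(second: np.ndarray, third: np.ndarray, w: float = 0.5) -> np.ndarray:
    """S141: per-pixel (weighted) average of the second and third color images."""
    fourth = w * second.astype(np.float32) + (1.0 - w) * third.astype(np.float32)
    return np.clip(np.rint(fourth), 0, 255).astype(np.uint8)
```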
  • It will be appreciated that in the above embodiments, the image-acquisition target surfaces of the invisible-light image collector and the color camera may be set equal in size, with the same resolution and the same focal length. Alternatively, at least one of target-surface size, resolution, and focal length may differ between the color camera and the invisible-light image collector; for example, the color camera's target surface and resolution may both be larger than the collector's. In that case, after S13 the method further includes interpolating and segmenting the first color image and/or the second color image so that the two images cover the same target region with the same image size and resolution. Because assembly of the color camera and the invisible-light image collector involves tolerances, "equal target-surface size, same resolution, and same focal length" should be understood as equal within the allowable tolerance range.
  • Moreover, the above images include photos or video. When the images are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized; or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
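  • One simple way to equalize two unsynchronized video streams, as described above, is to resample one stream at the other's timestamps. A minimal sketch using linear interpolation between the two nearest frames (illustrative only; production systems may prefer motion-compensated interpolation):

```python
import bisect
import numpy as np

def resample_frame(frames: list, times: list, t: float) -> np.ndarray:
    """Linearly interpolate a frame at time t from a timestamped sequence."""
    i = bisect.bisect_left(times, t)
    if i == 0:
        return frames[0]
    if i >= len(frames):
        return frames[-1]
    t0, t1 = times[i - 1], times[i]
    a = (t - t0) / (t1 - t0)
    mix = (1 - a) * frames[i - 1].astype(np.float32) + a * frames[i].astype(np.float32)
    return mix.astype(frames[0].dtype)
```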
  • Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an embodiment of the three-dimensional image rendering apparatus of the present invention. In this embodiment, the rendering apparatus 60 includes an acquisition module 61, a calculation module 62, a formation module 63, and an obtaining module 64. Specifically:
  • the acquisition module 61 is configured to acquire an invisible-light image of the target captured from the first viewpoint and a first color image of the target captured from the second viewpoint;
  • the calculation module 62 is configured to calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image;
  • the obtaining module 64 is configured to shift the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint;
  • the formation module 63 is configured to form a three-dimensional image from the first color image and the second color image.
  • Optionally, the invisible-light image is obtained by the projection module projecting a structured-light pattern onto the target and the invisible-light image collector placed at the first viewpoint capturing the target; the first color image is obtained by the color camera placed at the second viewpoint capturing the target.
  • Optionally, the calculation module 62 is specifically configured to compute, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image, and to compute the parallax between the first and second viewpoints from that displacement, the displacement being linearly related to the parallax. Further optionally, in computing the parallax from the displacement, the calculation module 62 calculates the parallax d between the first and second viewpoints with Equation 11 above.
  • Optionally, the obtaining module 64 is specifically configured to establish, from the parallax d, the correspondence Iir(uir, vir) = Ir(ur + d, vr) between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image; to set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the corresponding second pixel coordinate of the first color image, forming the second color image of the target at the first viewpoint; and to smooth and denoise the second color image.
  • Optionally, the calculation module 62 is further configured to compute the depth image at the first viewpoint from the invisible-light image and, using three-dimensional image-warping theory, to compute the third color image of the target at the first viewpoint from that depth image and the first color image; the formation module 63 is then specifically configured to average or weighted-average the pixel values of corresponding pixels in the second and third color images to obtain the fourth color image at the first viewpoint, and to form the three-dimensional image from the first color image and the fourth color image.
  • Optionally, the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera, the invisible-light image collector, and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera.
  • Optionally, the color camera and the invisible-light image collector have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
  • Optionally, the invisible-light image and the first color image are photos or video; when they are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized, or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation. The above modules of the rendering apparatus execute the corresponding steps of the method embodiments described above, and their specific operation is as described there.
  • Referring to FIG. 7, FIG. 7 is a schematic structural diagram of an embodiment of the three-dimensional image rendering system of the present invention. In this embodiment, the system 70 includes a projection module 74, an invisible-light image collector 71, a color camera 72, and an image processing device 73 connected to the invisible-light image collector 71 and the color camera 72. The image processing device 73 includes an input interface 731, a processor 732, and a memory 733; further, the image processing device 73 may also be connected to the projection module 74.
  • The input interface 731 is used to obtain the images captured by the invisible-light image collector 71 and the color camera 72.
  • The memory 733 is used to store a computer program and provide it to the processor 732, and can store the data used in the processor 732's processing, such as the intrinsic and extrinsic matrices of the invisible-light image collector 71 and the color camera 72, as well as the images obtained through the input interface 731.
  • The processor 732 is configured to: acquire, through the input interface 731, an invisible-light image of the target captured by the invisible-light image collector 71 at the first viewpoint and a first color image of the target captured by the color camera 72 at the second viewpoint; calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shift the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • In this embodiment, the image processing device 73 may further include a display screen 734 for displaying the three-dimensional image, realizing three-dimensional display. Alternatively, in another embodiment the image processing device 73 does not itself display the three-dimensional image; as shown in FIG. 8, the three-dimensional image rendering system 70 then further includes a display device 75 connected to the image processing device 73, which receives the three-dimensional image output by the image processing device 73 and displays it.
  • Optionally, the processor 732 is specifically configured to compute, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image, and to compute the parallax between the first and second viewpoints from that displacement, the displacement being linearly related to the parallax. Further optionally, in computing the parallax from the displacement, the processor 732 calculates the parallax d between the first and second viewpoints with Equation 11 above.
  • Optionally, in shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint, the processor 732 is configured to: establish, from the parallax d, the correspondence Iir(uir, vir) = Ir(ur + d, vr) between the first pixel coordinates of the invisible-light image and the second pixel coordinates of the first color image; set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the corresponding second pixel coordinate of the first color image, forming the second color image of the target at the first viewpoint; and smooth and denoise the second color image.
  • Optionally, the processor 732 is further configured to compute the depth image at the first viewpoint from the invisible-light image and, using three-dimensional image-warping theory, to compute the third color image of the target at the first viewpoint from that depth image and the first color image; forming the three-dimensional image then comprises averaging or weighted-averaging the pixel values of corresponding pixels in the second and third color images to obtain the fourth color image at the first viewpoint, and forming the three-dimensional image from the first color image and the fourth color image.
  • Optionally, the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera 72, the invisible-light image collector 71, and the projection module 74 lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector 71 is an infrared camera.
  • Optionally, the color camera 72 and the invisible-light image collector 71 have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
  • Optionally, the invisible-light image and the first color image are photos or video; when they are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized, or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
  • The image processing device 73 may serve as the three-dimensional image rendering apparatus described above, for executing the methods of the above embodiments.
  • The methods disclosed in the above embodiments of the present invention may also be applied in the processor 732, or implemented by the processor 732.
  • The processor 732 may be an integrated-circuit chip with signal-processing capability. In implementation, the steps of the above methods may be completed by integrated logic circuits in hardware within the processor 732 or by instructions in software form.
  • The processor 732 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
  • The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as execution by a hardware decoding processor, or executed by a combination of the hardware and software modules within a decoding processor.
  • A software module may reside in a storage medium mature in the art, such as random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register.
  • The storage medium resides in the memory 733; the processor 732 reads the information in the corresponding memory and completes the steps of the above methods in combination with its hardware.
  • In the above solution, the parallax between the first and second viewpoints is obtained from the captured invisible-light image of the first viewpoint, the second color image at the first viewpoint is obtained from the first color image of the second viewpoint and that parallax, and a three-dimensional image is formed from the first and second color images. Because the parallax is obtained from captured image data without intermediate image processing, less image detail information is lost and the color images of the two viewpoints are obtained more accurately, which reduces distortion of the synthesized three-dimensional image and improves the three-dimensional display generated from a two-dimensional image. Moreover, compared with existing DIBR technology, no image depth information needs to be computed, avoiding the errors introduced by repeated calculation and further improving the three-dimensional display.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention discloses a method for rendering a three-dimensional image, and a device and system therefor. The method comprises: acquiring an invisible-light image of a target captured from a first viewpoint and a first color image of the target captured from a second viewpoint; calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shifting the pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and forming a three-dimensional image from the first color image and the second color image. In this way, the three-dimensional display effect can be improved.

Description

Method for rendering a three-dimensional image, and device and system therefor
[Technical Field]
The present invention relates to the field of three-dimensional display technology, and in particular to a method for rendering a three-dimensional image, and to a device and system therefor.
[Background Art]
Because the two human eyes are located at different positions, they perceive visual differences when viewing an object at some distance, and it is precisely this parallax that gives people a three-dimensional sense. Based on this principle, three-dimensional display technology produces a three-dimensional effect by having two simultaneously captured binocular images received by the corresponding eyes. Because this technology brings a new stereoscopic viewing experience, demand for three-dimensional image resources has grown in recent years.
One current method of obtaining a three-dimensional image is to convert a two-dimensional image into a three-dimensional image with image-processing techniques. Specifically, the scene depth information of an existing two-dimensional image is computed with image-processing techniques, virtual images at other viewpoints are then rendered, and the existing two-dimensional image and the virtual other-viewpoint images form the three-dimensional image.
Because the depth information of the existing two-dimensional image used to render the other-viewpoint images is obtained by computation, this process loses image detail information and degrades the three-dimensional display.
[Summary of the Invention]
The technical problem mainly solved by the present invention is to provide a method for rendering a three-dimensional image, and a device and system therefor, capable of improving the three-dimensional display effect.
To solve the above technical problem, one technical solution adopted by the present invention is to provide a method for rendering a three-dimensional image, comprising: acquiring an invisible-light image of a target captured from a first viewpoint and a first color image of the target captured from a second viewpoint; calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shifting the pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and forming a three-dimensional image from the first color image and the second color image.
The invisible-light image is obtained by a projection module projecting a structured-light pattern onto the target and an invisible-light image collector placed at the first viewpoint capturing the target; the first color image is obtained by a color camera placed at the second viewpoint capturing the target.
Calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image comprises: computing, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image; and computing the parallax between the first viewpoint and the second viewpoint from the displacement, the displacement being linearly related to the parallax.
Computing the parallax between the first viewpoint and the second viewpoint from the displacement comprises calculating the parallax d with Equation 1 below:

d = (B2 / B1) · Δu + (B2 · f) / Z0        (Equation 1)

where B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length of the invisible-light image collector and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image.
Shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint comprises: establishing, from the parallax d, the correspondence between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image as Iir(uir, vir) = Ir(ur + d, vr); setting the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image corresponding to that first pixel coordinate, to form the second color image of the target at the first viewpoint; and smoothing and denoising the second color image.
The method further comprises: computing a depth image at the first viewpoint from the invisible-light image; and computing, using three-dimensional image-warping theory, a third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image.
Forming the three-dimensional image from the first color image and the second color image then comprises: averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image at the first viewpoint; and forming the three-dimensional image from the first color image and the fourth color image.
The positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera, the invisible-light image collector, and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera.
The color camera and the invisible-light image collector have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
To solve the above technical problem, another technical solution adopted by the present invention is to provide an image processing device comprising an input interface, a processor, and a memory. The input interface is used to obtain the images captured by an invisible-light image collector and a color camera; the memory is used to store a computer program; and the processor executes the computer program to: acquire, through the input interface, an invisible-light image of a target captured by the invisible-light image collector at a first viewpoint and a first color image of the target captured by the color camera at a second viewpoint; calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shift the pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
To solve the above technical problem, another technical solution adopted by the present invention is to provide a three-dimensional image rendering system comprising a projection module, an invisible-light image collector, a color camera, and an image processing device connected to the invisible-light image collector and the color camera. The image processing device is configured to: acquire an invisible-light image of a target captured by the invisible-light image collector at a first viewpoint and a first color image of the target captured by a color camera at a second viewpoint; calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shift the pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
The present invention derives the parallax between the first and second viewpoints from the captured invisible-light image of the first viewpoint, obtains the second color image at the first viewpoint from the first color image of the second viewpoint and that parallax, and then forms a three-dimensional image from the first and second color images. Because the parallax between the first and second viewpoints is obtained from captured image data without intermediate image processing, less image detail information is lost and the color images of the two viewpoints are obtained more accurately, which reduces distortion of the synthesized three-dimensional image and improves the three-dimensional display generated from a two-dimensional image. Moreover, compared with existing DIBR technology, this embodiment needs no computed image depth information, avoiding the errors introduced by repeated calculation and further improving the three-dimensional display.
[Brief Description of the Drawings]
FIG. 1 is a flowchart of an embodiment of the method for rendering a three-dimensional image of the present invention;
FIG. 2 is a schematic diagram of an application scenario of the method for rendering a three-dimensional image of the present invention;
FIG. 3 is a partial flowchart of another embodiment of the method for rendering a three-dimensional image of the present invention;
FIG. 4 is a partial flowchart of still another embodiment of the method for rendering a three-dimensional image of the present invention;
FIG. 5 is a flowchart of yet another embodiment of the method for rendering a three-dimensional image of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of the three-dimensional image rendering apparatus of the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of the three-dimensional image rendering system of the present invention;
FIG. 8 is a schematic structural diagram of another embodiment of the three-dimensional image rendering system of the present invention.
[Detailed Description]
For a better understanding of the technical solutions of the present invention, the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The terms used in the embodiments of the present invention are for the purpose of describing particular embodiments only and are not intended to limit the present invention. The singular forms "a", "said", and "the" used in the embodiments of the present invention and the appended claims are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Referring to FIG. 1, FIG. 1 is a flowchart of an embodiment of the method for rendering a three-dimensional image of the present invention. In this embodiment, the method may be performed by a three-dimensional image rendering apparatus and includes the following steps:
S11: Acquire an invisible-light image of a target captured from a first viewpoint and a first color image of the target captured from a second viewpoint.
It should be noted that the invisible-light image and the color image in the present invention are both two-dimensional images. The invisible-light image is an image formed by recording the intensity of the invisible light on the target.
The first viewpoint and the second viewpoint are located at different positions relative to the target, so as to obtain images of the target at two viewpoints. Usually, since the three-dimensional percept is formed by superimposing the different images seen by the two eyes, the first and second viewpoints serve as the two viewpoints of the human eyes; that is, the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body. For example, if the typical interocular distance is t, the distance between the first viewpoint and the second viewpoint is set to t, specifically e.g. 6.5 cm. Moreover, to ensure that the image depths at the first and second viewpoints are the same or similar, the two viewpoints are placed at the same distance from the target, or at distances differing by no more than a set threshold; in a specific application, the threshold may be set to a value of no more than 10 cm or 20 cm.
In a specific application, as shown in FIG. 2, the invisible-light image is obtained by the projection module 25 projecting a structured-light pattern onto the target 23 and the invisible-light image collector 21 placed at the first viewpoint capturing the target 23; the first color image is captured by the color camera 22 placed at the second viewpoint. The invisible-light image collector 21 and the color camera transmit their captured images to the three-dimensional image rendering apparatus 24 for the three-dimensional image acquisition described below. Because the color camera and the invisible-light image collector are at different positions, the same pixel coordinates in the first color image and in the invisible-light image do not correspond to the same spatial three-dimensional point. In FIG. 2, the color camera 22, the invisible-light image collector 21, and the projection module 25 lie on the same straight line, so that all three are at the same depth from the target. Of course, FIG. 2 is only one embodiment; in other applications the three need not be on the same line.
Specifically, the projection module 25 generally consists of a laser and a diffractive optical element. The laser may be an edge-emitting laser or a vertical-cavity surface-emitting laser, and emits invisible light that can be captured by the invisible-light image collector. The diffractive optical element may be configured for collimation, beam splitting, diffusion, and other functions according to the structured-light pattern required. The structured-light pattern may be an irregularly distributed speckle pattern; the energy level at the speckle centers must meet the requirement of harmlessness to the human body, so the laser power and the configuration of the diffractive optical element must be considered together.
The density of the speckle pattern affects the speed and accuracy of the depth-value computation: the more speckle grains, the slower the computation but the higher the accuracy. The projection module 25 may therefore choose a suitable speckle-grain density according to the approximate depth of the target region being imaged, maintaining computation speed while retaining high computational accuracy. Of course, the speckle-grain density may also be determined by the three-dimensional image rendering apparatus 24 according to its own computational requirements, with the determined density information sent to the projection module 25.
The projection module 25 projects the speckle pattern onto the target region, typically but not necessarily at a certain diffusion angle.
After the projection module 25 projects the structured-light image onto the target, the invisible-light image collector 21 captures the invisible-light image of the target. Specifically, the invisible light may be any invisible light: for example, the invisible-light image collector 21 may be an infrared collector, such as an infrared camera, in which case the invisible-light image is an infrared image; or the invisible-light image collector 21 may be an ultraviolet collector, such as an ultraviolet camera, in which case the invisible-light image is an ultraviolet image.
To achieve a good acquisition result and avoid redundant subsequent computation, the color camera and the invisible-light image collector may be set to capture synchronously with the same number of frames, so that the resulting color images and invisible-light images correspond one to one, simplifying later calculations.
S12: Calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image.
For example, a matching algorithm from digital image processing, such as the digital image correlation (DIC) algorithm, yields the parallax between the image at the first viewpoint and the image at the second viewpoint, i.e., the relative positional relationship between the pixel coordinates of the two images.
S13: Shift the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint.
For example, each pixel coordinate of the first color image is shifted by the image disparity value d for that pixel, the pixel value (also called the RGB value) at the shifted pixel coordinate (u1 + d, v1) being the pixel value at pixel coordinate (u1, v1) of the first color image.
S14: Form a three-dimensional image from the first color image and the second color image.
For example, the first color image and the second color image serve as the two binocular images for synthesizing a three-dimensional image, which may specifically be in top-bottom, side-by-side, or red-blue format for 3D display. Further, after the three-dimensional image is synthesized, it may be displayed, or output to a connected external display device for display.
In this embodiment, the parallax between the first and second viewpoints is obtained from the captured invisible-light image of the first viewpoint, the second color image at the first viewpoint is obtained from the first color image of the second viewpoint and that parallax, and a three-dimensional image is then formed from the first and second color images. Because the parallax is obtained from captured image data without intermediate image processing, less image detail information is lost and the color images of the two viewpoints are obtained more accurately, which reduces distortion of the synthesized three-dimensional image and improves the three-dimensional display generated from a two-dimensional image. Moreover, compared with existing depth-image-based rendering (DIBR) technology, this embodiment needs no computed depth information of the image, avoiding the errors introduced by repeated calculation and further improving the three-dimensional display.
Referring to FIG. 3, in another embodiment the invisible-light image is obtained by the projection module projecting a structured-light pattern onto the target and the invisible-light image collector placed at the first viewpoint capturing the target, and the first color image is obtained by the color camera placed at the second viewpoint capturing the target. This embodiment differs from the one above in that S12 includes the following sub-steps:
S121: Using a matching algorithm from digital image processing, compute the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image.
The matching algorithm may be, for example, a digital image correlation algorithm. The reference structured-light image is obtained in advance by the already-installed projection module projecting the reference structured-light pattern onto a plane at a set distance and the already-installed invisible-light image collector capturing the reference structured-light pattern on that plane; "already installed" means that, once set up, the image collector and the projection module are not moved during the subsequent capture of the invisible-light image.
For example, a digital image correlation algorithm yields the displacement value Δu of each corresponding pixel between the invisible-light image and the reference structured-light pattern, e.g., a reference speckle image. Current digital image correlation algorithms reach sub-pixel measurement accuracy, such as 1/8 pixel; that is, Δu takes values in multiples of 1/8, in units of pixels.
S122: Compute the parallax between the first viewpoint and the second viewpoint from the displacement.
The displacement between each pixel of the invisible-light image and the reference structured-light image is linearly related to the parallax, so the parallax between the first and second viewpoints can be computed from the displacement and this linear relationship.
For example, the parallax d between the first viewpoint and the second viewpoint is calculated with Equation 11 below:

d = (B2 / B1) · Δu + (B2 · f) / Z0        (Equation 11)

where B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length of the invisible-light image collector and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image. The plane of the reference structured-light image is the plane onto which the reference structured-light pattern was projected, and Z0 denotes the distance from that plane to the image collector, obtainable from the distance information recorded when the reference structured-light image was captured. In this embodiment, f is in pixels, and its value may be obtained in advance by calibration.
When the computed parallax d is not an integer, it may be rounded or truncated to an integer.
Referring to FIG. 4, a further embodiment differs from the above embodiments in that S13 includes the following sub-steps:
S131: Establish, from the parallax, the correspondence between the first pixel coordinates of the invisible-light image and the second pixel coordinates of the first color image.
For example, from the parallax d, the correspondence between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image is established as Iir(uir, vir) = Ir(ur + d, vr).
S132: Set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image corresponding to that first pixel coordinate, to form the second color image of the target at the first viewpoint.
For example, according to the correspondence, the pixel values (also called RGB values) of the first color image are assigned to the invisible-light image to generate the second color image. Taking one pixel coordinate of the image as an example, if d is 1, pixel coordinate (1, 1) of the invisible-light image corresponds to pixel coordinate (2, 1) of the first color image. The pixel value at pixel coordinate (1, 1) of the invisible-light image is then set to the pixel value (r, g, b) at pixel coordinate (2, 1) of the first color image.
S133: Smooth and denoise the second color image.
Because the displacement data Δu often contains bad points, holes and similar defects appear in the resulting color image; further processing in later steps would amplify these defects and seriously degrade the three-dimensional display. To avoid the influence of bad points or bad region data of the depth image on the three-dimensional display, this sub-step denoises and smooths the obtained second color image.
Of course, in other embodiments, step S13 may include only sub-steps S131 and S132.
Referring to FIG. 5, in yet another embodiment, the following steps are further included after S11:
S15: Compute the depth image at the first viewpoint from the invisible-light image.
For example, the depth image at the first viewpoint is computed from the infrared image; the specific computation may use existing algorithms for this purpose.
S16: Using three-dimensional image-warping theory, compute a third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image.
According to 3D image warping theory, any three-dimensional coordinate point in space and the two-dimensional coordinate points on an image-acquisition plane can be related through perspective transformation; by this theory the pixel coordinates of the images at the first and second viewpoints can be placed in correspondence, and, according to this correspondence and the pixel values of the first color image of the second viewpoint, each image pixel coordinate of the first viewpoint is assigned the pixel value of the corresponding pixel coordinate in the first color image of the second viewpoint.
For example, S16 includes the following sub-steps:
a: Use Equation 12 below to obtain the correspondence between the first pixel coordinates (uD, vD) of the depth image at the first viewpoint and the second pixel coordinates (uR, vR) of the first color image:

ZR · [uR, vR, 1]^T = Mg · (R · ZD · MD^-1 · [uD, vD, 1]^T + T)        (Equation 12)

where ZD is the depth information in the first depth image, representing the depth of the target from the depth camera; ZR represents the depth of the target from the color camera; [uR, vR, 1]^T are the homogeneous pixel coordinates in the image coordinate system of the color camera; [uD, vD, 1]^T are the homogeneous pixel coordinates in the image coordinate system of the depth camera; Mg is the intrinsic matrix of the color camera, and MD is the intrinsic matrix of the depth camera; R is the rotation matrix and T the translation matrix in the extrinsic matrix of the depth camera relative to the color camera.
The intrinsic and extrinsic matrices of the camera and collector may be preset: specifically, the intrinsic matrix can be computed from the configuration parameters of the camera and collector, and the extrinsic matrix can be determined from the positional relationship between the invisible-light image collector and the color camera. In a specific embodiment, the intrinsic matrix is formed from the pixel focal length of the image-acquisition lens of the camera or collector and the coordinates of the center of the image-acquisition target surface. Because the positional relationship between the first and second viewpoints is set to that of the two human eyes, between which there is no relative rotation but only a separation of the set value t, the rotation matrix R of the color camera relative to the invisible-light image collector is the identity matrix, and the translation matrix is T = [t, 0, 0]^T.
Further, the set value t may be adjusted according to the distance between the invisible-light image collector and color camera and the target. In a further embodiment, the following steps are included before S11: obtain the distance between the target and the invisible-light image collector and the color camera; when both distances are judged to be greater than a first distance value, increase the set value t; when both distances are judged to be smaller than a second distance value, decrease the set value t.
The first distance value is greater than or equal to the second distance value. For example, when the distance between the target and the invisible-light image collector is 100 cm and the distance between the target and the color camera is also 100 cm, then, since 100 cm is smaller than the second distance value of 200 cm, the set value is decreased by one step value, or the decrease is computed from the current distances between the target and the invisible-light image collector and the color camera and then applied. When the distance between the target and the invisible-light image collector and the color camera is 300 cm, since 300 cm is greater than the second distance value of 200 cm and smaller than the first distance value of 500 cm, the set value is not adjusted.
b: Set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image corresponding to that first pixel coordinate, to form the third color image of the target at the first viewpoint.
For example, substituting the depth information ZD of the invisible-light image at the first viewpoint into Equation 12 yields, on the left-hand side of Equation 12, the depth information of the second viewpoint, i.e. the depth information ZR of the first color image, together with the homogeneous pixel coordinates in the image coordinate system of the first color image. In this embodiment, the invisible-light image collector and the color camera are at the same distance from the target, so the resulting ZR and ZD are equal. From the homogeneous pixel coordinates, the second pixel coordinates (uR, vR) of the first color image corresponding one-to-one to the first pixel coordinates (uD, vD) of the invisible-light image are obtained, for example as (uR, vR) = (uD + d, vD). Then, according to the correspondence, the pixel values of the first color image are assigned to the invisible-light image to generate the third color image.
In this further embodiment, S14 includes the following steps:
S141: Average or weighted-average the pixel values of corresponding pixels in the second color image and the third color image to obtain the fourth color image at the first viewpoint.
Taking one pixel coordinate in the color images as an example, if the pixel values at pixel coordinate (Ur, Vr) in the second and third color images are (r1, g1, b1) and (r2, g2, b2) respectively, the pixel value at pixel coordinate (Ur, Vr) in the fourth color image at the first viewpoint is set to ((r1 + r2)/2, (g1 + g2)/2, (b1 + b2)/2).
S142: Form a three-dimensional image from the first color image and the fourth color image.
For example, the first color image and the fourth color image serve as the two binocular images for synthesizing the three-dimensional image.
It will be appreciated that in the above embodiments, the image-acquisition target surfaces of the invisible-light image collector and the color camera may be set equal in size, with the same resolution and the same focal length. Alternatively, at least one of the image-acquisition target-surface size, resolution, and focal length of the color camera and of the invisible-light image collector may differ; for example, the color camera's target surface and resolution may both be larger than those of the invisible-light image collector. In that case, after the above S13, the method further includes: interpolating and segmenting the first color image and/or the second color image so that the first and second color images cover the same target region with the same image size and resolution. Because the assembly of the color camera and the invisible-light image collector involves tolerances, "equal image-acquisition target-surface size, same resolution, and same focal length" above should be understood as equal within the allowable tolerance range.
Moreover, the above images include photos or video. When the images are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized; or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an embodiment of the three-dimensional image rendering apparatus of the present invention. In this embodiment, the rendering apparatus 60 includes an acquisition module 61, a calculation module 62, a formation module 63, and an obtaining module 64. Specifically:
the acquisition module 61 is configured to acquire an invisible-light image of a target captured from a first viewpoint and a first color image of the target captured from a second viewpoint;
the calculation module 62 is configured to calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image;
the obtaining module 64 is configured to shift the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint;
the formation module 63 is configured to form a three-dimensional image from the first color image and the second color image.
Optionally, the invisible-light image is obtained by the projection module projecting a structured-light pattern onto the target and the invisible-light image collector placed at the first viewpoint capturing the target; the first color image is obtained by the color camera placed at the second viewpoint capturing the target.
Optionally, the calculation module 62 is specifically configured to compute, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image, and to compute the parallax between the first and second viewpoints from the displacement, the displacement being linearly related to the parallax.
Further optionally, in computing the parallax between the first viewpoint and the second viewpoint from the displacement, the calculation module 62 calculates the parallax d between the first viewpoint and the second viewpoint with Equation 11 above.
Optionally, the obtaining module 64 is specifically configured to establish, from the parallax d, the correspondence Iir(uir, vir) = Ir(ur + d, vr) between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image; to set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image corresponding to that first pixel coordinate, to form the second color image of the target at the first viewpoint; and to smooth and denoise the second color image.
Optionally, the calculation module 62 is further configured to compute the depth image at the first viewpoint from the invisible-light image and, using three-dimensional image-warping theory, to compute the third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image; the formation module 63 is then specifically configured to average or weighted-average the pixel values of corresponding pixels in the second and third color images to obtain the fourth color image at the first viewpoint, and to form the three-dimensional image from the first color image and the fourth color image.
Optionally, the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera, the invisible-light image collector, and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera.
Optionally, the color camera and the invisible-light image collector have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
Optionally, the invisible-light image and the first color image are photos or video; when the invisible-light image and the first color image are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized, or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
The above modules of the rendering apparatus are respectively configured to execute the corresponding steps of the above method embodiments; the specific execution is as described in the method embodiments above and is not repeated here.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of an embodiment of the three-dimensional image rendering system of the present invention. In this embodiment, the system 70 includes a projection module 74, an invisible-light image collector 71, a color camera 72, and an image processing device 73 connected to the invisible-light image collector 71 and the color camera 72. The image processing device 73 includes an input interface 731, a processor 732, and a memory 733. Further, the image processing device 73 may also be connected to the projection module 74.
The input interface 731 is used to obtain the images captured by the invisible-light image collector 71 and the color camera 72.
The memory 733 is used to store a computer program and provide the computer program to the processor 732, and can store the data used in the processor 732's processing, such as the intrinsic and extrinsic matrices of the invisible-light image collector 71 and the color camera 72, as well as the images obtained through the input interface 731.
The processor 732 is configured to:
acquire, through the input interface 731, an invisible-light image of a target captured by the invisible-light image collector 71 at the first viewpoint and a first color image of the target captured by the color camera 72 at the second viewpoint;
calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image;
shift the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint;
form a three-dimensional image from the first color image and the second color image.
In this embodiment, the image processing device 73 may further include a display screen 734 for displaying the three-dimensional image, realizing three-dimensional display. Of course, in another embodiment, the image processing device 73 is not used to display the three-dimensional image; as shown in FIG. 8, the three-dimensional image rendering system 70 then further includes a display device 75 connected to the image processing device 73, the display device 75 being used to receive the three-dimensional image output by the image processing device 73 and display it.
Optionally, the processor 732 is specifically configured to compute, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and the preset reference structured-light image, and to compute the parallax between the first and second viewpoints from the displacement, the displacement being linearly related to the parallax.
Further optionally, in computing the parallax between the first viewpoint and the second viewpoint from the displacement, the processor 732 calculates the parallax d between the first viewpoint and the second viewpoint with Equation 1 above.
Optionally, in shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint, the processor 732 is configured to: establish, from the parallax d, the correspondence Iir(uir, vir) = Ir(ur + d, vr) between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image; set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image corresponding to that first pixel coordinate, to form the second color image of the target at the first viewpoint; and smooth and denoise the second color image.
Optionally, the processor 732 is further configured to compute the depth image at the first viewpoint from the invisible-light image and, using three-dimensional image-warping theory, to compute the third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image; in forming the three-dimensional image from the first color image and the second color image, the processor 732 is configured to average or weighted-average the pixel values of corresponding pixels in the second and third color images to obtain the fourth color image at the first viewpoint, and to form the three-dimensional image from the first color image and the fourth color image.
Optionally, the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera 72, the invisible-light image collector 71, and the projection module 74 lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector 71 is an infrared camera.
Optionally, the color camera 72 and the invisible-light image collector 71 have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
Optionally, the invisible-light image and the first color image are photos or video; when the invisible-light image and the first color image are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized, or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
The image processing device 73 may serve as the above three-dimensional image rendering apparatus for executing the methods described in the above embodiments. For example, the methods disclosed in the above embodiments of the present invention may also be applied in the processor 732 or implemented by the processor 732. The processor 732 may be an integrated-circuit chip with signal-processing capability. In implementation, the steps of the above methods may be completed by integrated logic circuits in hardware in the processor 732 or by instructions in software form. The processor 732 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as execution by a hardware decoding processor, or executed by a combination of the hardware and software modules in a decoding processor. A software module may reside in a storage medium mature in the art, such as random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium resides in the memory 733; the processor 732 reads the information in the corresponding memory and completes the steps of the above methods in combination with its hardware.
In the above solution, the parallax between the first and second viewpoints is obtained from the captured invisible-light image of the first viewpoint, the second color image at the first viewpoint is obtained from the first color image of the second viewpoint and that parallax, and a three-dimensional image is then formed from the first and second color images. Because the parallax is obtained from captured image data without intermediate image processing, less image detail information is lost and the color images of the two viewpoints are obtained more accurately, which reduces distortion of the synthesized three-dimensional image and improves the three-dimensional display generated from a two-dimensional image. Moreover, compared with existing DIBR technology, no image depth information needs to be computed, avoiding the errors introduced by repeated calculation and further improving the three-dimensional display.
The above is only an embodiment of the present invention and does not thereby limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (20)

  1. A method for rendering a three-dimensional image, comprising:
    acquiring an invisible-light image of a target captured from a first viewpoint and a first color image of the target captured from a second viewpoint;
    calculating a parallax between the first viewpoint and the second viewpoint from the invisible-light image;
    shifting pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and
    forming a three-dimensional image from the first color image and the second color image.
  2. The method according to claim 1, wherein the invisible-light image is obtained by a projection module projecting a structured-light pattern onto the target and an invisible-light image collector placed at the first viewpoint capturing the target, and the first color image is obtained by a color camera placed at the second viewpoint capturing the target.
  3. The method according to claim 2, wherein calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image comprises:
    computing, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and a preset reference structured-light image; and
    computing the parallax between the first viewpoint and the second viewpoint from the displacement, wherein the displacement is linearly related to the parallax.
  4. The method according to claim 3, wherein computing the parallax between the first viewpoint and the second viewpoint from the displacement comprises:
    calculating the parallax d between the first viewpoint and the second viewpoint with Equation 1 below:

    d = (B2 / B1) · Δu + (B2 · f) / Z0        (Equation 1)

    where B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length of the invisible-light image collector and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image.
  5. The method according to claim 1, wherein shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint comprises:
    establishing, from the parallax d, the correspondence between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image as:
    Iir(uir, vir) = Ir(ur + d, vr);
    setting the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image corresponding to that first pixel coordinate, to form the second color image of the target at the first viewpoint; and
    smoothing and denoising the second color image.
  6. The method according to claim 2, further comprising:
    computing a depth image at the first viewpoint from the invisible-light image; and
    computing, using three-dimensional image-warping theory, a third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image;
    wherein forming the three-dimensional image from the first color image and the second color image comprises:
    averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image at the first viewpoint; and
    forming the three-dimensional image from the first color image and the fourth color image.
  7. The method according to claim 2, wherein the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera, the invisible-light image collector, and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera.
  8. The method according to claim 1, wherein the color camera and the invisible-light image collector have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
  9. An image processing device, comprising an input interface, a processor, and a memory;
    the input interface being configured to obtain images captured by an invisible-light image collector and a color camera;
    the memory being configured to store a computer program;
    the processor executing the computer program to:
    acquire, through the input interface, an invisible-light image of a target captured by the invisible-light image collector at a first viewpoint and a first color image of the target captured by the color camera at a second viewpoint;
    calculate a parallax between the first viewpoint and the second viewpoint from the invisible-light image;
    shift pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and
    form a three-dimensional image from the first color image and the second color image.
  10. The image processing device according to claim 9, wherein the invisible-light image is obtained by a projection module projecting a structured-light pattern onto the target and the invisible-light image collector placed at the first viewpoint capturing the target, and the first color image is obtained by the color camera placed at the second viewpoint capturing the target.
  11. The image processing device according to claim 10, wherein the processor is specifically configured to:
    compute, with a matching algorithm from digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and a preset reference structured-light image; and
    compute the parallax between the first viewpoint and the second viewpoint from the displacement, wherein the displacement is linearly related to the parallax.
  12. The image processing device according to claim 11, wherein, in computing the parallax between the first viewpoint and the second viewpoint from the displacement, the processor is configured to:
    calculate the parallax d between the first viewpoint and the second viewpoint with Equation 1 below:

    d = (B2 / B1) · Δu + (B2 · f) / Z0        (Equation 1)

    where B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length of the invisible-light image collector and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image.
  13. The image processing device according to claim 9, wherein, in shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image at the first viewpoint, the processor is configured to:
    establish, from the parallax d, the correspondence between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image as:
    Iir(uir, vir) = Ir(ur + d, vr);
    set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image corresponding to that first pixel coordinate, to form the second color image of the target at the first viewpoint; and
    smooth and denoise the second color image.
  14. The image processing device according to claim 10, wherein the processor is further configured to:
    compute a depth image at the first viewpoint from the invisible-light image; and
    compute, using three-dimensional image-warping theory, a third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image;
    wherein forming the three-dimensional image from the first color image and the second color image comprises:
    averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image at the first viewpoint; and
    forming the three-dimensional image from the first color image and the fourth color image.
  15. The image processing device according to claim 10, further comprising a display screen configured to display the three-dimensional image.
  16. A three-dimensional image rendering system, comprising a projection module, an invisible-light image collector, a color camera, and an image processing device connected to the invisible-light image collector and the color camera;
    the image processing device being configured to:
    acquire an invisible-light image of a target captured by the invisible-light image collector at a first viewpoint and a first color image of the target captured by the color camera at a second viewpoint;
    calculate a parallax between the first viewpoint and the second viewpoint from the invisible-light image;
    shift pixel coordinates of the first color image according to the parallax to obtain a second color image at the first viewpoint; and
    form a three-dimensional image from the first color image and the second color image.
  17. The three-dimensional image rendering system according to claim 16, wherein the invisible-light image is obtained by the projection module projecting a structured-light pattern onto the target and the invisible-light image collector placed at the first viewpoint capturing the target, and the first color image is obtained by the color camera placed at the second viewpoint capturing the target.
  18. The three-dimensional image rendering system according to claim 17, wherein the image processing device is further configured to:
    compute a depth image at the first viewpoint from the invisible-light image; and
    compute, using three-dimensional image-warping theory, a third color image of the target at the first viewpoint from the depth image at the first viewpoint and the first color image;
    wherein forming the three-dimensional image from the first color image and the second color image comprises:
    averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image at the first viewpoint; and
    forming the three-dimensional image from the first color image and the fourth color image.
  19. The three-dimensional image rendering system according to claim 17, wherein the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body; the color camera, the invisible-light image collector, and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera; and/or
    the color camera and the invisible-light image collector have image-acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
  20. The three-dimensional image rendering system according to claim 16, further comprising a display device connected to the image processing device, the display device being configured to display the three-dimensional image output by the image processing device.
PCT/CN2017/085147 2016-08-19 2017-05-19 Method for rendering a three-dimensional image, and device and system therefor WO2018032841A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610698004.0A CN106170086B (zh) 2016-08-19 2016-08-19 Method for rendering a three-dimensional image, and apparatus and system therefor
CN201610698004.0 2016-08-19

Publications (1)

Publication Number Publication Date
WO2018032841A1 (zh) 2018-02-22

Family

ID=57375861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085147 WO2018032841A1 (zh) 2016-08-19 2017-05-19 Method for rendering a three-dimensional image, and device and system therefor

Country Status (2)

Country Link
CN (1) CN106170086B (zh)
WO (1) WO2018032841A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106170086B (zh) 2016-08-19 2019-03-15 深圳奥比中光科技有限公司 Method for rendering a three-dimensional image, and apparatus and system therefor
CN106875435B (zh) 2016-12-14 2021-04-30 奥比中光科技集团股份有限公司 Method and system for obtaining a depth image
CN107105217B (zh) 2017-04-17 2018-11-30 深圳奥比中光科技有限公司 Multi-mode depth computing processor and 3D image device
CN108460368B (zh) 2018-03-30 2021-07-09 百度在线网络技术(北京)有限公司 Three-dimensional image synthesis method and apparatus, and computer-readable storage medium
CN113436129B (zh) 2021-08-24 2021-11-16 南京微纳科技研究院有限公司 Image fusion system, method, apparatus, device, and storage medium
CN114119680B (zh) 2021-09-09 2022-09-20 合肥的卢深视科技有限公司 Image acquisition method and apparatus, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060114320A1 (en) * 2004-11-30 2006-06-01 Honda Motor Co. Ltd. Position detecting apparatus and method of correcting data therein
CN102999939A (zh) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system and method, and stereoscopic interactive device
CN104185006A (zh) * 2013-05-24 2014-12-03 索尼公司 Imaging device and imaging method
CN104918035A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Method and system for obtaining a three-dimensional image of a target
CN106170086A (zh) * 2016-08-19 2016-11-30 深圳奥比中光科技有限公司 Method for rendering a three-dimensional image, and apparatus and system therefor
CN106604020A (zh) * 2016-11-24 2017-04-26 深圳奥比中光科技有限公司 Special-purpose processor for 3D display
CN106791763A (zh) * 2016-11-24 2017-05-31 深圳奥比中光科技有限公司 Special-purpose processor for 3D display and 3D interaction

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101502372B1 (ko) * 2008-11-26 2015-03-16 삼성전자주식회사 Image acquisition apparatus and method
CN101662695B (zh) * 2009-09-24 2011-06-15 清华大学 Method and apparatus for obtaining a virtual view
US9406132B2 (en) * 2010-07-16 2016-08-02 Qualcomm Incorporated Vision-based quality metric for three dimensional video
CN102289841B (zh) * 2011-08-11 2013-01-16 四川虹微技术有限公司 Method for adjusting the viewer-perceived depth of a stereoscopic image
WO2014002849A1 (ja) * 2012-06-29 2014-01-03 富士フイルム株式会社 Three-dimensional measurement method, apparatus, and system, and image processing apparatus
KR101904718B1 (ko) * 2012-08-27 2018-10-05 삼성전자주식회사 Apparatus and method for capturing color images and depth images
US10517483B2 (en) * 2012-12-05 2019-12-31 Accuvein, Inc. System for detecting fluorescence and projecting a representative image
CN103824318B (zh) * 2014-02-13 2016-11-23 西安交通大学 Depth perception method for a multi-camera array
CN103796004B (zh) * 2014-02-13 2015-09-30 西安交通大学 Binocular depth perception method using active structured light
CN105791662A (zh) * 2014-12-22 2016-07-20 联想(北京)有限公司 Electronic device and control method
CN105120257B (zh) * 2015-08-18 2017-12-15 宁波盈芯信息科技有限公司 Vertical depth sensing device based on structured-light coding

Also Published As

Publication number Publication date
CN106170086A (zh) 2016-11-30
CN106170086B (zh) 2019-03-15

Similar Documents

Publication Publication Date Title
US11354840B2 (en) Three dimensional acquisition and rendering
WO2018032841A1 (zh) Method for rendering a three-dimensional image, and device and system therefor
JP5887267B2 (ja) Three-dimensional image interpolation apparatus, three-dimensional imaging apparatus, and three-dimensional image interpolation method
WO2019100933A1 (zh) Method, apparatus, and system for three-dimensional measurement
US8736672B2 (en) Algorithmic interaxial reduction
CN103115613B (zh) Spatial three-dimensional positioning method
CN106254854B (zh) Method, apparatus, and system for obtaining a three-dimensional image
US9813693B1 (en) Accounting for perspective effects in images
TWI591584B (zh) Three-dimensional sensing method and three-dimensional sensing device
US9615081B2 (en) Method and multi-camera portable device for producing stereo images
CN110827392B (zh) Monocular-image three-dimensional reconstruction method, system, and apparatus
JP2010113720A (ja) Method and apparatus for combining distance information with an optical image
WO2009140908A1 (zh) Cursor processing method, apparatus, and system
TWI788739B (zh) 3D display device and 3D image display method
CN104599317A (zh) Mobile terminal and method implementing a 3D scanning and modeling function
KR101853269B1 (ko) Apparatus for stitching depth maps of stereo images
JP2013115668A (ja) Image processing apparatus, image processing method, and program
KR20190044439A (ko) Method for stitching depth maps of stereo images
Krutikova et al. Creation of a depth map from stereo images of faces for 3D model reconstruction
TWI820246B (zh) Apparatus with disparity estimation, method of estimating disparity from a wide-angle image, and computer program product
KR20110025083A (ko) Apparatus and method for displaying stereoscopic images in a stereoscopic image system
CN106331672B (zh) Method, apparatus, and system for obtaining viewpoint images
CN104463958A (zh) Three-dimensional super-resolution method based on disparity-map fusion
KR101358432B1 (ko) Display apparatus and method
TWI725620B (zh) Omnidirectional stereo vision camera configuration system and camera configuration method

Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17840817; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 17840817; Country of ref document: EP; Kind code of ref document: A1)