CN109889803B - Structured light image acquisition method and device - Google Patents


Info

Publication number
CN109889803B
Authority
CN
China
Prior art keywords
image
structured light
space
frame
frequency
Prior art date
Legal status
Active
Application number
CN201910023968.9A
Other languages
Chinese (zh)
Other versions
CN109889803A (en)
Inventor
王兆民
许星
郭胜男
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN201910023968.9A
Publication of CN109889803A
Application granted
Publication of CN109889803B
Legal status: Active

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The invention belongs to the technical field of image acquisition and provides a structured light image acquisition method and device. The structured light image acquisition method comprises the following steps: projecting structured light to a space to be measured and collecting multiple frames of images according to a preset projection frequency and a preset acquisition frequency, wherein the multiple frames include at least a first image and a second image, the first image containing ambient light information and the second image containing both ambient light information and structured light information; judging, according to the multiple frames, whether the space to be measured is in a dynamic state; if the space to be measured is in a dynamic state, performing 3D noise reduction processing on the multiple frames; and outputting an accurate structured light image. By controlling the projection frequency and the acquisition frequency so that they do not correspond to each other one to one, and by comparatively analyzing the first image and the second image, the influence of ambient light can be effectively removed, the accuracy of the acquired structured light image is improved, and the image depth information is therefore more accurate.

Description

Structured light image acquisition method and device
Technical Field
The invention belongs to the technical field of image acquisition, and particularly relates to a structured light image acquisition method and device.
Background
The structured light depth camera projects a coded structured light pattern into a target space, collects the structured light image reflected from the target space, and computes image depth information from it. Based on the depth image, functions such as 3D modeling, face recognition, and gesture interaction can be realized. At the same time, structured light depth cameras offer high resolution, high precision, and low power consumption, and are widely used in smart devices such as mobile phones, computers, robots, and virtual and augmented reality systems.
However, structured light depth cameras currently face significant challenges. For example, when a structured light depth camera is used in a scene with strong ambient light, such as outdoors, the intensity of the ambient light can equal or even exceed that of the projected structured light pattern, seriously degrading the accuracy of the acquired structured light image and, in turn, the calculation of the depth image.
Disclosure of Invention
In view of this, an embodiment of the present invention provides a structured light image acquisition method, so as to solve the technical problem in the prior art that the accuracy of structured light images collected by a structured light depth camera is low.
A first aspect of an embodiment of the present invention provides a structured light image acquisition method, including:
projecting structured light to a space to be measured according to a preset projection frequency and a preset collection frequency, and collecting multi-frame images, wherein the multi-frame images at least comprise a first image and a second image, the first image comprises ambient light information, and the second image comprises ambient light information and structured light information;
judging whether the space to be detected is in a dynamic state or not according to the multi-frame image;
if the space to be detected is in a dynamic state, performing 3D noise reduction processing on the multi-frame image;
an accurate structured light image is output.
A second aspect of an embodiment of the present invention provides a structured light image capturing apparatus, including:
the projection module is used for projecting the structured light to the space to be measured;
the acquisition module is used for acquiring images;
the data processing module is used for controlling the projection frequency of the projection module and the acquisition frequency of the acquisition module, so that the projection module projects structured light to a space to be measured and acquires multi-frame images according to the preset projection frequency and the acquisition frequency, wherein the multi-frame images at least comprise a first image and a second image, the first image comprises ambient light information, and the second image comprises ambient light information and structured light information;
the device is used for judging whether the space to be detected is in a dynamic state or not according to the multi-frame images, and if the space to be detected is in the dynamic state, performing 3D noise reduction processing on the multi-frame images;
for outputting a precise structured-light image.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described method.
In the embodiment of the invention, the projection frequency and the acquisition frequency are controlled and are not in one-to-one correspondence, so that the acquired multi-frame image at least comprises a first image and a second image, wherein the first image only comprises ambient light information, and the second image comprises ambient light information and structured light information. Moreover, before image processing, whether the space to be measured is in a dynamic scene is judged, and a corresponding processing mode is selected according to the judgment result, so that the precision of the acquired structured light image is further improved, and the acquired image depth information is more accurate.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the embodiments or the prior-art descriptions are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a structured light image capturing apparatus according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a data processing module in the structured light image capturing apparatus according to an embodiment of the present invention;
fig. 3 is a first schematic flow chart of an implementation of a structured light image acquisition method according to an embodiment of the present invention;
fig. 4 is a second schematic flow chart of an implementation of a structured light image acquisition method according to an embodiment of the present invention;
fig. 5 is a third schematic flow chart of an implementation of a structured light image acquisition method according to an embodiment of the present invention;
fig. 6 is a fourth schematic flow chart of an implementation of a structured light image acquisition method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a schematic structural diagram of a structured light image acquisition device 10 according to this embodiment. The structured light image acquisition device 10 is configured to acquire a structured light image and calculate image depth information from it. The structured light image acquisition device 10 includes a projection module 11, an acquisition module 12, and a data processing module 13; the projection module 11 and the acquisition module 12 are both connected to the data processing module 13. Of course, the structured light image acquisition device 10 may also include other modules, which are not listed here.
The projection module 11 includes a light source and an optical assembly (which may include a diffractive optical element, etc.). The light source projects structured light into the space 20 to be measured; the structured light may have a random speckle texture or be linear light, which is not limited here. The structured light projected by the light source may form randomly or regularly distributed projection patterns in the space 20 to be measured, and the acquisition module 12 is configured to collect these patterns. In actual use, because ambient light is present, the image actually collected by the acquisition module 12 is formed by both the ambient light and the structured light reflected from surfaces in the space 20 to be measured. The data processing module 13 receives and stores the images acquired by the acquisition module 12, identifies the structured light image within them, and calculates image depth information from it. Optionally, the light source is an infrared light source that projects infrared light into the space 20 to be measured, and the acquisition module 12 is correspondingly an infrared camera. Of course, in other embodiments the light source may be of another type and is not limited to the above.
The data processing module 13 can synchronously control the projection frequency f1 of the projection module 11 and the acquisition frequency f2 of the acquisition module 12. Because the acquisition frequency f2 differs from the projection frequency f1, the frames captured by the acquisition module 12 are generated in an interleaved manner and include at least a first image and a second image (the first and second images are different frames), where the first image contains ambient light information and the second image contains both ambient light information and structured light information. It should be understood that "first" and "second" are used only for convenience of description; the first image may equally be the one containing ambient light and structured light information, with the second image containing only ambient light information, which is not limited here.
Fig. 3 shows an implementation flowchart of a structured light image acquisition method according to an embodiment of the present invention. The method may be executed by a structured light image acquisition device, which may be configured in a mobile terminal and implemented in software, in hardware, or in a combination of both. As shown in fig. 3, the structured light image acquisition method may include the following steps:
step S10: according to a preset projection frequency and a preset collection frequency, projecting structured light to a space to be measured and collecting multi-frame images, wherein the multi-frame images at least comprise a first image and a second image, the first image comprises ambient light information, and the second image comprises ambient light information and structured light information.
Referring to the embodiment shown in fig. 1, the projection module 11 of the structured light image acquisition device 10 can project structured light toward the space 20 to be measured at a preset projection frequency f1. The light source in the projection module 11 may be an infrared light source, and the structured light may have a random speckle texture or be linear light, which is not limited here. During projection, the data processing module 13 controls the projection module 11 so that it projects structured light periodically at the preset projection frequency, for example by periodically turning the light source on and off.
The projection frequency f1 corresponds to a projection period T1 = 1/f1. Within one projection period T1, the infrared light source has a preset on period T11 and an off period T12 (T1 = T11 + T12); during the off period T12 the infrared light source projects no structured light, while during the on period T11 it projects structured light toward the space 20 to be measured. The light source alternates between on and off, and the durations of the on period T11 and the off period T12 may be equal or different, set as required. The projection period T1 of the light source may likewise be configured as needed: for example, one period may consist of one off interval followed by one on interval, or of one off interval followed by two consecutive on intervals; other patterns are also possible and are not limited here.
To meet laser safety standards and limit power consumption, the structured light projected by the projection module 11 is generally of low brightness and is therefore not easily distinguished from strong ambient light; when the projection module is used in an environment with strong ambient light, the ambient light easily interferes with the structured light, greatly reducing the accuracy of the structured light image acquired by the acquisition module 12. Although one could use a higher-precision filter to reduce the interference of reflected ambient light, or a higher-precision amplifier to amplify the reflected laser light in the target wavelength band, both approaches increase circuit complexity and production cost, which is unfavorable for the miniaturization and low-cost control of the structured light image acquisition device 10.
Referring to the embodiment shown in fig. 1, the acquisition module 12 of the structured light image acquisition device 10 collects images reflected from the space 20 to be measured at a preset acquisition frequency f2. When the light source in the projection module 11 is an infrared light source, the acquisition module 12 may be an infrared camera. The structured light projected by the infrared light source may form randomly or regularly distributed projection patterns in the space 20 to be measured, and the acquisition module 12 collects these patterns. In actual use, because ambient light is present, the image actually collected by the acquisition module 12 is formed by both the ambient light and the structured light reflected from surfaces in the space 20 to be measured.
During image acquisition, the data processing module 13 controls the acquisition module 12. Because the acquisition frequency f2 differs from the projection frequency f1, the multiple frames captured by the acquisition module 12 include at least one image containing both ambient light information and structured light information. When the acquisition module 12 captures an image during the off period T12 of the infrared light source, the captured image contains only ambient light information; when it captures an image during the on period T11, the captured image contains both ambient light information and structured light information.
In one embodiment, the ratio of the projection frequency f1 of the projection module 11 to the acquisition frequency f2 of the acquisition module 12 satisfies 1/2 ≤ f1/f2 < 1. For example, the projection frequency f1 may be half the acquisition frequency f2 (f1/f2 = 1/2), say a projection frequency f1 of 30 fps (frames per second) and an acquisition frequency f2 of 60 fps. Among the images captured by the acquisition module 12, the first image I1 then contains ambient light information while the second image I2 contains both ambient light information and structured light information, and the structured light image is I20 = I2 - I1. As another example, the projection frequency f1 may be greater than half the acquisition frequency f2, say f1 of 40 fps and f2 of 60 fps. Among the captured images, I1 is then an ambient light image while I2 and I3 are structured light images containing ambient light; the corresponding structured light images are I20 = I2 - I1 and I30 = I3 - I1, respectively. Because the ambient light information of the first frame serves as the ambient light information for two consecutive structured light frames, the frame rate of the depth image can be effectively increased. Of course, the projection frequency f1 and the acquisition frequency f2 may take other values and are not limited to the above.
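The subtraction above can be sketched numerically. The following is a minimal illustration and not part of the patent; it assumes frames are 8-bit grayscale numpy arrays, and the helper name `extract_structured_light` is invented for this sketch:

```python
import numpy as np

def extract_structured_light(ambient_frame, lit_frames):
    """Subtract the ambient-only frame I1 from each frame captured while
    the projector was on, giving I20 = I2 - I1, I30 = I3 - I1, etc."""
    ambient = ambient_frame.astype(np.int16)  # widen to avoid uint8 wrap-around
    results = []
    for lit in lit_frames:
        diff = lit.astype(np.int16) - ambient
        results.append(np.clip(diff, 0, 255).astype(np.uint8))
    return results

# Toy 2x2 frames: I1 is ambient only; I2 adds 100 counts of structured
# light in the right-hand column (the f1/f2 = 1/2 case, one lit frame per pair).
I1 = np.array([[10, 20], [30, 40]], dtype=np.uint8)
I2 = np.array([[10, 120], [30, 140]], dtype=np.uint8)
I20 = extract_structured_light(I1, [I2])[0]
```

In the f1/f2 = 2/3 case described above, the same call would receive two lit frames, `extract_structured_light(I1, [I2, I3])`, reusing the single ambient frame for both.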
It should be understood that the above processing of the first image I1 and the second image I2 is suitable only for static scenes, or for dynamic scenes whose motion is slow relative to the acquisition frequency. For a dynamic scene, the accuracy of this processing degrades, and a 3D noise reduction method is required instead. Therefore, before selecting the corresponding processing mode, it is necessary to determine whether the space 20 to be measured is in a dynamic state.
Step S20: and judging whether the space to be detected is in a dynamic state or not according to the multi-frame image.
The multi-frame image refers to a multi-frame image acquired by the acquisition module 12, and the multi-frame image at least includes a first image and a second image. After the data processing module 13 receives and stores the multi-frame images acquired by the acquisition module 12, it determines whether the space to be measured is in a dynamic state according to the multi-frame images, so as to further confirm the processing mode to be adopted for the images, thereby being beneficial to obtaining more accurate structured light patterns. Referring to fig. 2, in an embodiment, the data processing module 13 includes a receiving unit 131 for receiving the multiple frames of images acquired by the acquiring module 12.
Referring to fig. 4, in an embodiment, a method for the data processing module 13 to determine whether the space 20 to be measured is dynamic may be as follows:
step S201: pattern information formed by the structured light is identified.
Step S202: the first image and the second image are determined based on the pattern information.
Referring to fig. 2, the structured light projected by the infrared light source may form randomly or regularly distributed projection patterns in the space 20 to be measured. After the acquisition module 12 collects an image of the space 20 to be measured, the image is transmitted to the data processing module 13, which includes an identifying unit 132 for identifying the pattern information formed by the structured light. Because the frames collected by the acquisition module 12 include some images containing only ambient light and others containing both ambient light and structured light, the first image and the second image must be determined before further image processing.
Step S203: and judging whether the motion intensity of the space to be detected exceeds a judgment threshold value or not according to the first image and the second image.
Referring to fig. 2, when determining the dynamic and static conditions of the space to be measured 20, a motion intensity threshold (i.e., a determination threshold) may be set, the data processing module 13 includes a determining unit 133, and the determining unit 133 may obtain the motion intensity of the space to be measured 20 according to the obtained first image and the obtained second image, and compare the motion intensity with a preset motion intensity threshold to determine whether the motion intensity of the space to be measured exceeds the motion intensity threshold.
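As a rough sketch of such a judgment (illustrative only; the patent does not specify the motion-intensity metric, so the mean absolute frame difference used here is an assumption), the determining unit's comparison might look like:

```python
import numpy as np

def motion_intensity(prev_ambient, curr_ambient):
    """One possible (assumed) motion-intensity measure: mean absolute
    per-pixel difference between two ambient-light frames."""
    a = prev_ambient.astype(np.int16)
    b = curr_ambient.astype(np.int16)
    return float(np.mean(np.abs(a - b)))

def is_dynamic(prev_ambient, curr_ambient, threshold=5.0):
    """Compare the motion intensity against a preset judgment threshold."""
    return motion_intensity(prev_ambient, curr_ambient) > threshold

static_a = np.zeros((4, 4), dtype=np.uint8)
static_b = np.zeros((4, 4), dtype=np.uint8)
moving = static_a.copy()
moving[0, 0] = 160  # one changed pixel: mean difference 160/16 = 10
```

The threshold value itself would be tuned per device; 5.0 here is a placeholder.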
If the motion intensity of the space 20 to be measured does not exceed the motion intensity threshold, then:
step S204: if the space to be measured is determined to be in the static scene, the following step S40 is continued.
Step S40: and carrying out comparative analysis on the first image and the second image.
When the space 20 to be measured is in a static scene, the ambient light can be considered not to change significantly, so the only difference between the first image I1 and the second image I2 is whether structured light is included. The data processing module 13 therefore only needs to comparatively analyze the first image I1 and the second image I2 during image processing. Referring to fig. 2, in one embodiment, the data processing module 13 further comprises a comparative analysis unit 134 for comparing the first image I1 with the second image I2 to obtain the structured light image I20 = I2 - I1.
If the motion intensity of the space 20 to be measured exceeds the motion intensity threshold, then:
step S205: and determining that the space to be measured is in the dynamic scene.
When the space 20 to be measured is in a dynamic scene, its background changes continuously, so the ambient light also changes continuously. As a result, the ambient light information differs between the first image I1 and the second image I2 of the multiple frames collected by the acquisition module 12; if the structured light image I20 were obtained by simply comparing the first image I1 and the second image I2, the error would be large. Therefore, 3D noise reduction processing must be performed on the multiple frames of images to obtain a more accurate structured light image.
If the space to be measured is in a dynamic state, then:
step S30: and 3D noise reduction processing is carried out on the multi-frame images.
Referring to fig. 5, when performing 3D noise reduction, the data processing module 13 filters out non-overlapping information (this information is noise caused by ambient light and other factors) by comparing adjacent frames, obtaining a clean structured light image. One possible sequence of operations is as follows:
step S301: a current block in the current frame image is determined.
Referring to fig. 2, the data processing module 13 includes a 3D noise reduction unit 135. The 3D noise reduction unit 135 first divides the current frame image into blocks of a certain pixel size; each current block may contain a plurality of pixels and serves as the basic processing unit.
Step S302: searching a matching block of the current block in a frame image adjacent to the current frame image.
When searching, a search window is first determined according to the current block in the current frame; optionally, the search window is the same size as the current block. A frame adjacent to the current frame is selected as the search object, for example the previous frame. The search is performed in the previous frame using the predetermined search window to find the reference block most similar to the current block, and that reference block is taken as the best matching block corresponding to the current block in the previous frame.
In one embodiment, the search for the matching block may be implemented with a mean absolute difference (MAD) algorithm, which works as follows: take a search window of the same size as the current block in the previous frame and compute its similarity to the current block; traverse the whole previous frame and, among all candidate reference blocks, find the one most similar to the current block (the smaller the mean absolute difference, the more similar the two blocks), taking it as the best matching block (the reference block with the minimum mean absolute difference from the current block). Of course, in other embodiments the matching block may be searched for in other ways, which are not limited here.
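A minimal exhaustive MAD search over the previous frame might look as follows. This is an illustrative sketch rather than the patent's implementation, and the function names are invented:

```python
import numpy as np

def mad(block_a, block_b):
    """Mean absolute difference between two equally sized blocks."""
    a = block_a.astype(np.int16)
    b = block_b.astype(np.int16)
    return float(np.mean(np.abs(a - b)))

def best_match(prev_frame, current_block):
    """Slide a window the size of the current block over the whole previous
    frame; return the top-left corner and score of the reference block with
    the smallest MAD (smaller MAD means more similar)."""
    bh, bw = current_block.shape
    h, w = prev_frame.shape
    best_pos, best_score = None, float("inf")
    for y in range(h - bh + 1):
        for x in range(w - bw + 1):
            score = mad(prev_frame[y:y + bh, x:x + bw], current_block)
            if score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos, best_score

# The current block is a bright 2x2 patch; in the previous frame it sat at (2, 3).
prev = np.zeros((6, 6), dtype=np.uint8)
prev[2:4, 3:5] = 200
cur_block = np.full((2, 2), 200, dtype=np.uint8)
pos, score = best_match(prev, cur_block)
```

A real implementation would restrict the search to a neighborhood of the current block's position rather than scanning the whole frame.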
In one embodiment, when the above average absolute difference algorithm is used for searching the matching block, the number of pixels involved in calculation can be set as required.
For example, if all pixels of the current block participate in the calculation, then during the search every pixel in the search window must be compared with every pixel in the current block to obtain the mean absolute difference for each window position. The result is more accurate, but the number of pixels involved is large, which increases the overall computation.
As another example, to reduce computation, only a portion of the pixels in the current block may participate in the calculation; the participating pixels can be selected within the current block as needed. During the search, the corresponding pixels in the search window are selected for the calculation, yielding a mean absolute difference for each window position. If all pixels of the current block form a set, the participating pixels form a subset of that set, so this search method may be called subset matching. Because far fewer pixels participate in the calculation, the computation is greatly reduced and the average search time is effectively shortened.
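The subset-matching idea can be sketched by computing the MAD over a regularly sampled subset of pixels. The stride-based sampling here is an assumption for illustration; the patent leaves the choice of subset open:

```python
import numpy as np

def subset_mad(block_a, block_b, stride=2):
    """MAD computed only on every `stride`-th pixel in each dimension,
    i.e. on a subset of the current block's pixel set."""
    sa = block_a[::stride, ::stride].astype(np.int16)
    sb = block_b[::stride, ::stride].astype(np.int16)
    return float(np.mean(np.abs(sa - sb)))

# On an 8x8 block, stride 2 uses 16 of the 64 pixels: a quarter of the work.
a = np.arange(64, dtype=np.uint8).reshape(8, 8)
sampled_count = a[::2, ::2].size
```

Any fixed subset works, as long as the same pixel positions are sampled in both the current block and the search window.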
Step S303: judging whether the searching is successful or not according to the similarity of the current block and the matching block;
in order to ensure the reliability of the inter-frame filtering boundary, the matching degree (i.e., similarity) between the matching block and the current block needs to be determined. This is because, in the above searching process, only the reference block with the smallest average absolute difference between the current block and the previous frame image can be found, but it cannot be guaranteed whether the similarity between the current block and the reference block meets the requirement. For example, an arbitrary frame of image not including the current block may also be obtained by using the above average absolute difference algorithm, and a matching block closest to the current block may be obtained, but the matching block is only more similar to the current block with respect to other reference blocks, but the average absolute difference may be very large, so that the matching block may not include information of the current block.
Therefore, to improve the search effect and accuracy, after the matching block is obtained its matching degree with the current block must be judged. In one embodiment, the mean absolute difference (MAD value) between the matching block and the current block is compared with a preset threshold. If the MAD value is not greater than the preset threshold, the matching degree is high and meets the preset requirement; the search is considered successful, and the matching-block search proceeds to the frame adjacent to the previous frame. If the MAD value is greater than the preset threshold, the matching degree is low and does not meet the preset requirement; the search is considered failed, no matching-block search is performed on the frame adjacent to the previous frame (avoiding invalid searches), and the method returns to step S10, where the data processing module 13 controls the projection module 11 and the acquisition module 12 to perform structured light projection and image acquisition according to the preset projection frequency f1 and acquisition frequency f2.
Further, if the search is successful:
step S304: and performing inter-frame filtering processing.
After the search obtains, in the frames adjacent to the current frame, the matching blocks corresponding to the current block, the motion trajectory of the current block across consecutive frames can be determined, and inter-frame filtering can be performed along that trajectory. In several consecutive frames, the repeated information includes the structured light pattern, while the ambient light, which changes continuously because the space 20 to be measured is in a dynamic scene, is not repeated. Inter-frame filtering therefore retains the repeated information and filters out the non-overlapping information, yielding a clean structured light image.
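One simple form of such inter-frame filtering is to average the matched blocks from consecutive frames, so that the repeated structured-light pattern survives while uncorrelated ambient variation is attenuated. This is an illustrative sketch; the patent does not prescribe a particular filter:

```python
import numpy as np

def temporal_filter(matched_blocks):
    """Average co-located matched blocks from consecutive frames.
    Repeated content (the structured-light pattern) is preserved by the
    average; frame-to-frame ambient variation and noise are attenuated."""
    stack = np.stack([b.astype(np.float32) for b in matched_blocks])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# A fixed speckle pattern plus ambient fluctuation of +10 / -10 counts.
pattern = np.array([[100, 50], [50, 100]], dtype=np.int16)
frame1 = (pattern + 10).astype(np.uint8)
frame2 = (pattern - 10).astype(np.uint8)
filtered = temporal_filter([frame1, frame2])
```

With more frames along the motion trajectory, the zero-mean ambient fluctuation averages out further while the pattern stays intact.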
After step S40 is completed when the space to be measured is in a static scene, or after step S304 is completed when it is in a dynamic scene, the following step is performed:
step S50: an accurate structured light image is output.
Referring to fig. 2, the data processing module 13 further includes an output unit 136 and a calculating unit 137. The output unit 136 outputs the structured light image to the calculating unit 137, and the calculating unit 137 calculates image depth information from the structured light image. After a clean structured light image is obtained in the manner described above, it is output to the depth-information calculating unit 137 so that the depth information of the structured light image can be further calculated.
In one embodiment, the space 20 to be measured is in a static scene, and the data processing module 13 compares and analyzes the first image I1 and the second image I2 (e.g., I20 = I2 - I1); the structured light image I20 output at this point is the structured light image corresponding to the static scene.
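The subtraction I20 = I2 - I1 can be sketched directly; the only practical caveat (an implementation assumption, not stated in the patent) is to subtract in a signed type so sensor noise cannot wrap below zero in 8-bit images.

```python
import numpy as np

def extract_structured_light(first_image: np.ndarray,
                             second_image: np.ndarray) -> np.ndarray:
    """Static-scene case: the first image holds only ambient light, the
    second holds ambient + structured light, so I20 = I2 - I1 isolates
    the projected pattern. Signed subtraction avoids uint8 wraparound."""
    i20 = second_image.astype(np.int16) - first_image.astype(np.int16)
    return np.clip(i20, 0, 255).astype(np.uint8)
```

A pixel lit by the pattern (200 over an ambient level of 50) comes out as 150, while pure-ambient pixels come out as 0.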
In an embodiment, the space 20 to be measured is in a dynamic scene, the structured light image is obtained by the data processing module 13 after performing 3D noise reduction processing, and the output structured light image is the structured light image corresponding to the dynamic scene.
Referring to fig. 6, in an embodiment, after step S50, the method further includes:
step S60: image depth information is calculated from the structured light image.
After receiving the structured light image, the depth-information calculating unit 137 calculates the image depth information, so the depth information of the image can be obtained. Because the received structured light image is cleaner and of higher precision, the calculated image depth information is more accurate.
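The patent does not describe how unit 137 computes depth. Disparity-based structured-light systems commonly use the classic triangulation relation z = f * b / d; the sketch below assumes that scheme and uses hypothetical parameter values.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_mm: float) -> float:
    """Triangulation for a disparity-based structured-light system:
    depth z = f * b / d, with focal length f in pixels, projector-camera
    baseline b in millimetres, and disparity d in pixels. All values
    here are illustrative, not taken from the patent."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px
```

For example, with a 600 px focal length and a 50 mm baseline, a 10 px disparity corresponds to 3000 mm of depth.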
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The structured light image acquisition method provided by the embodiment has the beneficial effects that:
At present, in order to meet laser safety standards and limit power consumption, the intensity of the structured light projected by the projection module 11 is generally low, so it is not easily distinguished from strong ambient light. When the apparatus is used outdoors, where the ambient light is strong, and the ambient light intensity equals or even exceeds that of the projected structured light pattern, the accuracy of the acquired structured light image is seriously affected: the precision of the image collected by the acquisition module 12 is greatly reduced, which in turn degrades the depth-image calculation. One could use a higher-precision filter to suppress reflected ambient light, or a higher-precision amplifier to amplify the reflected laser light in the target wavelength band, but both increase circuit complexity and production cost, which works against the miniaturization and low-cost design of the structured light image capturing apparatus 10.
This embodiment provides a new solution. First, the projection frequency and the acquisition frequency are controlled so that they are not in one-to-one correspondence; the collected frames then include at least a first image, which contains only ambient light information, and a second image, which contains both ambient light information and structured light information. Second, before image processing, whether the space to be measured is in a dynamic scene is judged, and the corresponding processing mode is selected according to the result, which further improves the precision of the acquired structured light image and makes the derived image depth information more accurate.
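The non one-to-one frequency scheme can be illustrated with a simple frame-labelling sketch. The 2:1 acquisition-to-projection ratio below is an assumption for illustration; the patent only requires that the two frequencies differ so that both frame types are produced.

```python
def frame_schedule(num_frames: int, capture_ratio: int = 2):
    """Sketch of the non one-to-one frequency scheme: with the acquisition
    frequency set to capture_ratio times the projection frequency, only
    every capture_ratio-th captured frame coincides with a projector pulse.
    Returns one label per frame: 'ambient+structured' (a second image) or
    'ambient' (a first image)."""
    return ['ambient+structured' if i % capture_ratio == 0 else 'ambient'
            for i in range(num_frames)]
```

With `capture_ratio=2`, the capture stream alternates between second images and first images, guaranteeing that every structured-light frame has a recent ambient-only frame to compare against.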
Fig. 7 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 7, the terminal device 70 of this embodiment includes: a processor 71, a memory 72 and a computer program 73, such as a structured light image acquisition program, stored in said memory 72 and executable on said processor 71. The processor 71, when executing the computer program 73, implements the steps in the above-described structured light image acquisition method embodiment, such as the steps 201 to 205 shown in fig. 2. Alternatively, the processor 71, when executing the computer program 73, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the units 131 to 135 shown in fig. 7.
Illustratively, the computer program 73 may be divided into one or more modules/units, which are stored in the memory 72 and executed by the processor 71 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 73 in the terminal device 70.
The terminal device 70 may be a desktop computer, a notebook computer, a tablet (PAD), or the like. The terminal device 70 may include, but is not limited to, the processor 71 and the memory 72. Those skilled in the art will appreciate that fig. 7 is merely an example of the terminal device 70 and does not limit it; the terminal device may include more or fewer components than shown, combine some components, or use different components. For example, it may also include input/output devices, network access devices, buses, and the like.
The Processor 71 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 72 may be an internal storage unit of the terminal device 70, such as a hard disk or memory of the terminal device 70. The memory 72 may also be an external storage device of the terminal device 70, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the terminal device 70. Further, the memory 72 may include both an internal storage unit and an external storage device of the terminal device 70. The memory 72 stores the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practice, the above functions may be distributed among different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in hardware or as a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (7)

1. A structured light image acquisition method, comprising:
projecting structured light to a space to be measured and collecting multi-frame images according to a preset projection frequency and a collection frequency, wherein the multi-frame images at least comprise a first image and a second image, the first image comprises ambient light information, the second image comprises ambient light information and structured light information, and the projection frequency and the collection frequency are different;
judging whether the space to be detected is in a dynamic state or not by setting a motion intensity threshold value according to the multi-frame image;
if the space to be measured is in a static state, comparing and analyzing the first image and the second image, and if the space to be measured is in a dynamic state, performing 3D noise reduction processing on the multi-frame image to filter noise waves caused by ambient light;
outputting a precise structured light image;
wherein the 3D noise reduction processing includes:
determining a current block in a current frame image;
searching a matching block of the current block in an adjacent frame image of the current frame image;
judging whether the searching is successful or not according to the similarity of the current block and the matching block;
if the search is successful, performing inter-frame filtering processing;
the step of searching for a matching block of the current block in a neighboring frame image of the current frame includes:
determining a search range in the front and rear adjacent frame images of the current frame;
calculating an average absolute difference algorithm value of a search range and the current block;
and determining the search range with the minimum mean absolute difference algorithm value as a matching block.
2. The method for acquiring a structured light image according to claim 1, wherein the step of determining whether the space to be measured is in a dynamic state according to the multi-frame image comprises:
identifying pattern information formed by the structured light;
determining the first image and the second image according to the pattern information;
judging whether the motion intensity of the space to be detected exceeds a judgment threshold value or not according to the first image and the second image;
if the motion intensity of the space to be detected exceeds a judgment threshold value, determining that the space to be detected is in a dynamic state;
and if the motion intensity of the space to be measured does not exceed a judgment threshold value, determining that the space to be measured is in a static state.
3. The method of claim 1, wherein the step of determining whether the search is successful according to the similarity between the current block and the matching block comprises:
judging whether the average absolute difference algorithm value of the matching block and the current block exceeds a threshold value;
if the search result does not exceed the threshold, determining that the search is successful;
and if the threshold is exceeded, determining that the search is unsuccessful, and returning to the step of projecting structured light to the space to be measured and collecting multi-frame images according to the preset projection frequency and acquisition frequency.
4. The structured light image acquisition method according to any one of claims 1 to 3, further comprising, after the step of outputting the precise structured light image:
and calculating image depth information according to the structured light image.
5. A structured light image capture device, comprising:
the projection module is used for projecting the structured light to the space to be measured;
the acquisition module is used for acquiring images;
the data processing module is used for controlling the projection frequency of the projection module and the acquisition frequency of the acquisition module, so that the projection module projects structured light to a space to be measured and acquires multi-frame images according to a preset projection frequency and an acquisition frequency, wherein the multi-frame images at least comprise a first image and a second image, the first image comprises ambient light information, the second image comprises ambient light information and structured light information, and the projection frequency and the acquisition frequency are different;
the device is used for judging whether the space to be detected is in a dynamic state or not by setting a motion intensity threshold value according to the multi-frame images, comparing and analyzing the first image and the second image if the space to be detected is in a static state, and carrying out 3D noise reduction processing on the multi-frame images if the space to be detected is in a dynamic state to filter noise waves caused by ambient light;
for outputting a precise structured light image;
wherein the 3D noise reduction processing includes:
determining a current block in a current frame image;
searching a matching block of the current block in an adjacent frame image of the current frame image;
judging whether the searching is successful or not according to the similarity of the current block and the matching block;
if the search is successful, performing inter-frame filtering processing;
the step of searching for a matching block of the current block in a neighboring frame image of the current frame includes:
determining a search range in the front and rear adjacent frame images of the current frame;
calculating an average absolute difference algorithm value of a search range and the current block;
and determining the search range with the minimum mean absolute difference algorithm value as a matching block.
6. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201910023968.9A 2019-01-10 2019-01-10 Structured light image acquisition method and device Active CN109889803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910023968.9A CN109889803B (en) 2019-01-10 2019-01-10 Structured light image acquisition method and device


Publications (2)

Publication Number Publication Date
CN109889803A CN109889803A (en) 2019-06-14
CN109889803B true CN109889803B (en) 2022-03-29

Family

ID=66925871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910023968.9A Active CN109889803B (en) 2019-01-10 2019-01-10 Structured light image acquisition method and device

Country Status (1)

Country Link
CN (1) CN109889803B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110376606A (en) * 2019-07-26 2019-10-25 信利光电股份有限公司 Structure light processing method and structure optical mode group
CN112164003A (en) * 2020-09-11 2021-01-01 珠海市一微半导体有限公司 Method for acquiring laser image by mobile robot, chip and robot
CN113221910A (en) * 2021-03-29 2021-08-06 追创科技(苏州)有限公司 Structured light image processing method, obstacle detection method, module and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102611830A (en) * 2012-01-13 2012-07-25 深圳市黄河数字技术有限公司 Image noise reducing method and image noise reducing system
CN103528518A (en) * 2013-10-18 2014-01-22 中国科学院西安光学精密机械研究所 Flash frequency laser speckle three-dimensional target obtaining system and method
CN107667527A (en) * 2015-03-30 2018-02-06 X开发有限责任公司 Imager for detecting visible light and projected patterns
CN107682607A (en) * 2017-10-27 2018-02-09 广东欧珀移动通信有限公司 Image acquiring method, device, mobile terminal and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co., Ltd

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

GR01 Patent grant