WO2022241943A1 - 一种基于飞行时间的深度计算方法、***及存储介质 - Google Patents

一种基于飞行时间的深度计算方法、***及存储介质 Download PDF

Info

Publication number
WO2022241943A1
WO2022241943A1 PCT/CN2021/107952 CN2021107952W WO2022241943A1 WO 2022241943 A1 WO2022241943 A1 WO 2022241943A1 CN 2021107952 W CN2021107952 W CN 2021107952W WO 2022241943 A1 WO2022241943 A1 WO 2022241943A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase
signal
flight
time
charge
Prior art date
Application number
PCT/CN2021/107952
Other languages
English (en)
French (fr)
Inventor
余洪涛
蒙敏荣
谷涛
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司 filed Critical 奥比中光科技集团股份有限公司
Publication of WO2022241943A1 publication Critical patent/WO2022241943A1/zh
Priority to US18/226,052 priority Critical patent/US20230366992A1/en

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491Details of non-pulse systems
    • G01S7/4912Receivers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/4865Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G01S7/4866Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak by fitting a model or function to the received signal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/32Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/36Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491Details of non-pulse systems
    • G01S7/4912Receivers
    • G01S7/4913Circuits for detection, sampling, integration or read-out
    • G01S7/4914Circuits for detection, sampling, integration or read-out of detector arrays, e.g. charge-transfer gates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491Details of non-pulse systems
    • G01S7/4912Receivers
    • G01S7/4915Time delay measurement, e.g. operational details for pixel components; Phase measurement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/28Indexing scheme for image data processing or generation, in general involving image processing hardware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present application belongs to the technical field of image processing, and in particular relates to a time-of-flight-based depth calculation method, system and storage medium.
  • the method of obtaining depth images is generally to obtain multiple phase images, calculate the inverse trigonometric function according to the difference of multiple phase images, and then obtain the true
  • the phase of , the depth image is obtained according to the phase.
  • the inverse trigonometric function is a non-linear function, and the phase value can be obtained by selecting the closest value by looking up the table and matching the iterative algorithm. It is not possible to solve multiple results in parallel, and it is particularly time-consuming when calculating floating-point numbers.
  • the depth resolution of the depth camera is VGA (the standard VGA display area is 640 ⁇ 480), it needs to solve the values of 640 ⁇ 480 inverse trigonometric functions, so that it is very time-consuming and cannot achieve high frame rate.
  • Output if the product uses a processor with strong computing power, it will increase the cost.
  • Embodiments of the present application provide a time-of-flight-based depth calculation method, system, and storage medium, which can solve the problem of low depth measurement efficiency.
  • the embodiment of the present application provides a time-of-flight-based depth calculation method, including:
  • phase image Acquiring a phase image, wherein the phase image is generated according to the reflection signal reflected by the target area collected by the signal acquisition module within a single frame period;
  • the first phase is obtained based on the phase transformation model and the difference ratio of the charge signal
  • a depth value of the target area is calculated.
  • the embodiment of the present application provides a time-of-flight based depth calculation system, including:
  • a signal transmitting module is used to transmit an infrared beam to a target area
  • a signal acquisition module including at least one tap, used to collect charge signals of reflected signals reflected back from the target area at different timings, and form a phase image based on the charge signals;
  • a processing module configured to calculate the depth value of the target area according to the phase image and the time-of-flight-based depth calculation method described in the first aspect.
  • an embodiment of the present application provides a terminal device, including: a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor executes The computer program implements the time-of-flight-based depth calculation method described in any one of the first aspects above.
  • an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and it is characterized in that, when the computer program is executed by a processor, any of the above-mentioned first aspects can be implemented.
  • an embodiment of the present application provides a computer program product, which, when the computer program product is run on a terminal device, causes the terminal device to execute the time-of-flight-based depth calculation method described in any one of the above-mentioned first aspects.
  • the embodiment of the present application has the beneficial effects that: the present application obtains the phase image, and based on the phase image, obtains the difference ratio of the charge signal corresponding to the reflection signal collected by the signal acquisition module at different times; When the difference ratio is greater than or equal to the preset threshold, the first phase is obtained based on the difference ratio between the phase transformation model and the charge signal; based on the first phase, the depth value of the target area is calculated; in this application, the difference ratio of the charge signal is greater than When or is equal to the preset threshold, the phase transformation model is used to calculate the phase, which can ensure that the obtained depth value between the target area and the signal acquisition module is more accurate and the calculation speed is faster.
  • FIG. 1 is a schematic structural diagram of a time-of-flight-based depth calculation system provided by an embodiment of the present application
  • FIG. 2 is a schematic flow chart of a time-of-flight-based depth calculation method provided by an embodiment of the present application
  • FIG. 3 is a schematic flow chart of a calculation method for the difference ratio of charge signals provided by an embodiment of the present application
  • Fig. 4 is a schematic flow chart of a calculation method of a depth value provided by an embodiment of the present application.
  • Fig. 5 is a schematic flowchart of a calculation method of a depth value provided by another embodiment of the present application.
  • Fig. 6 is a schematic structural diagram of a processing module provided by an embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the term “if” may be construed depending on the context as “when” or “once” or “in response to determining” or “in response to detecting” .
  • the phrase “if determined” or “if [the described condition or event] is detected” may be construed, depending on the context, to mean “once determined” or “in response to the determination” or “once detected [the described condition or event] ]” or “in response to detection of [described condition or event]”.
  • references to "one embodiment” or “some embodiments” or the like in the specification of the present application means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” “in other embodiments,” “in other embodiments,” etc. in various places in this specification are not necessarily All refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • Fig. 1 is a schematic structural diagram of a time-of-flight-based depth calculation system provided according to the present invention, the system comprising:
  • a signal transmitting module 10 configured to transmit an infrared beam to a target area
  • the signal acquisition module 20 includes at least one tap, which is used to collect the charge signals of the reflected signals reflected back by the target area at different timings, and form a phase image based on the charge signals;
  • the processing module 30 obtains the difference ratios of charge signals of different time sequences based on the phase images, and calculates the corresponding phase according to the relationship between the difference ratios of the charge signals and the preset threshold, and then uses the phases to calculate the depth value of the target area.
  • the signal transmitting module 10 includes a light source, which may be a light emitting diode (LED), an edge emitting laser (EEL), a vertical cavity surface emitting laser (VCSEL) and other light sources, or may be a light source array composed of multiple light sources.
  • the light beam emitted by the light source can be visible light, ultraviolet light, etc. in addition to infrared light.
  • the power supply can be a stable DC power supply, and the light source emits infrared light beams of different intensities at a certain frequency under the control of stable DC power supplies of different powers, which can be used for indirect time-of-flight (Indirect-TOF) measurement.
  • Set according to the measurement distance for example, it can be set to 1MHz to 100MHz, and the measurement distance is from several meters to hundreds of meters.
  • the amplitude of the light beam emitted by the light source can be modulated into a pulsed light beam, a square wave light beam, a sine wave light beam, etc., which is not limited here.
  • the signal acquisition module 20 may be an image sensor composed of a Charge Coupled Device (CCD), Complementary Metal Oxide Semiconductor (CMOS), Avalanche Diode (AD), Single Photon Avalanche Diode (SPAD) and the like.
  • CCD Charge Coupled Device
  • CMOS Complementary Metal Oxide Semiconductor
  • AD Avalanche Diode
  • SPD Single Photon Avalanche Diode
  • a readout circuit composed of one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC) and other devices is also connected to the image sensor.
  • TDC time-to-digital converter
  • ADC analog-to-digital converter
  • the processing module 30 may also be used to control the signal transmitting module 10 to transmit a transmitting signal, and the reflected signal may be an infrared beam.
  • the signal acquisition module 20 is used for receiving a transmission signal, which may be a transmission light beam.
  • the signal acquisition module 20 may be a TOF image sensor, which includes at least one pixel.
  • each pixel of the TOF image sensor of the present application can contain 4 or more taps (taps, used to store and read or discharge the taps generated by the corresponding electrodes under the control of the corresponding electrodes.
  • the charge signal generated by the reflected light pulse each tap switches the taps in a certain order within a single frame period T (or a single exposure time) to collect the charge signal generated by the pixel receiving the reflected light pulse.
  • T or a single exposure time
  • Fig. 2 shows the schematic flow chart of the depth calculation based on the time of flight provided by the present application, with reference to Fig. 2, the detailed description of the method is as follows:
  • phase image Acquire a phase image, wherein the phase image is generated according to a reflection signal reflected by a target area collected by a signal acquisition module within a single frame period.
  • the signal acquisition module can collect the reflection signal reflected by the target object at a preset timing in a single frame period or a single exposure time, the reflection signal can generate a charge signal, and the tap collects the reflection signal at the same timing.
  • a single frame period refers to the time for acquiring a frame of images
  • the preset timing refers to a preset time and sequence.
  • the transmitted signal is an infrared beam
  • the reflected signal may be a reflected beam.
  • the taps on the pixels of the signal acquisition module can receive electrons generated by reflected infrared light through a certain timing acquisition pixel within a single frame period T (or within a single exposure time), and convert the electrons into charge signals, and convert the charge signals into After the grayscale value is converted, the grayscale value is saved to the corresponding pixel.
  • the gray values stored in all the pixels on the signal acquisition module are integrated into one image, which is the phase image. It should be noted that the gray value stored in the pixel represents the intensity of the reflected light signal; multiple gray values can be stored in one pixel to represent the number of electrons collected by multiple taps at different timings.
  • the charge signals corresponding to the reflected signals collected by the taps in the signal acquisition module according to different time sequences can be obtained, and the difference ratio is calculated according to the charge signals.
  • step S102 may include:
  • the charge signal (that is, the number of electrons) can be calculated according to the gray value, the bias of the signal acquisition module, and the gain used during signal acquisition.
  • one grayscale value can represent one electron number
  • one pixel can include one or more taps
  • one tap corresponds to one grayscale value
  • multiple taps correspond to multiple grayscale values, that is, one pixel can include Multiple grayscale values.
  • the phase delay between the transmitted signal and the transmitted signal can be obtained from the demodulated signals of the four phases, and the four-phase
  • the phase difference between the demodulated signals is 90 degrees. If each pixel corresponds to four taps, the duty cycle of the reflected signal detected by the four taps is 50% continuous wave, and the delay of the reflected signal detected by the four taps to the transmitted signal is 0°, 90°, and 180° respectively. and 270°.
  • the reflected signal is sampled at phases 0°, 90°, 180° and 270°.
  • the sampling point at 0° will be recorded as the first phase sampling point, and the sampling point at 90° will be recorded as the second phase sampling point.
  • the 180° sampling point is recorded as the third phase sampling point, and the 180° sampling point is recorded as the fourth phase sampling point.
  • the difference ratio of the charge signal obtained based on the difference ratio calculation model includes:
  • A is the difference ratio of the charge signals acquired by the taps at different timings in a single frame period
  • Q1 is the charge signal of the reflected signal collected by the signal acquisition module at the first phase sampling point
  • Q2 is the signal acquisition The charge signal of the reflected signal collected by the module at the second phase sampling point
  • Q3 is the charge signal of the reflected signal collected by the signal acquisition module at the third phase sampling point
  • Q4 is the sampled signal of the signal acquisition module at the fourth phase
  • the charge signal of the reflection signal collected at one point, the first phase sampling point, the second phase sampling point, the third phase sampling point and the fourth phase sampling point correspond to different times within a single frame period.
  • the phase transformation model can be used to calculate the phase after a certain transformation of the difference ratio of the charge signal, which is recorded as the first phase in this application. A more accurate phase can be obtained, thereby making the final depth value more accurate.
  • the phase transformation model includes:
  • A is the difference ratio of the charge signal
  • B is the preset value
  • the preset value B can be set as required.
  • the preset threshold can be set according to the nature of the inverse trigonometric function.
  • the preset threshold can be set as 0.5, when the signal strength difference is less than 0.5, directly using the phase calculation model to calculate the phase can guarantee the calculation accuracy.
  • the time-of-flight can be calculated, and then the depth value can be calculated, and the depth value represents the distance between the target area and the signal acquisition module.
  • step S104 may include:
  • the flight time can be calculated according to the time-of-flight model Among them, ⁇ t is the flight time, is the phase, when the difference ratio of the charge signal is greater than or equal to the preset threshold is the first phase, and f m is the modulation frequency of the signal acquisition module.
  • the depth value can be calculated according to the depth model Among them, d is the depth value, c is the speed of light in vacuum, and ⁇ t is the flight time.
  • the distance between the optical centers of the signal transmitting module and the signal collecting module is smaller than a preset value, and the optical axes between them are parallel to each other, the distance can be directly calculated according to the depth calculation model.
  • the phase image formed by the reflected light beam reflected back from the target area is collected by the acquisition signal acquisition module within a single frame period; based on the phase image, the charge signal of each tap is obtained and the difference of the charge signal is calculated ratio.
  • the difference ratio of the charge signal is greater than or equal to the preset threshold
  • the first phase is obtained based on the phase transformation model and the difference ratio of the charge signal
  • the depth value of the target area is calculated; the application uses the charge signal
  • the phase transformation model is used to calculate the first phase, which can ensure that the obtained depth value of the target area is more accurate, and the present application has simple calculation and high calculation efficiency compared with the prior art.
  • the above method may further include:
  • the phase when the difference ratio of the charge signal is less than a preset threshold, the phase can be directly calculated using the phase calculation model, which is referred to as the second phase in this application.
  • the method for calculating the depth value based on the second phase is the same as the above-mentioned method for calculating the depth value based on the first phase, and reference can be made to the above-mentioned method for calculating the depth value based on the first phase, which will not be repeated here.
  • the above method may further include:
  • the calculation amount of floating-point data is large, which is slower than that of fixed-point data. Therefore, when the first phase is a floating-point phase, the floating-point phase can be converted into a fixed-point phase first. Then, the calculation can be performed to improve the calculation efficiency.
  • the flight time can be calculated according to the third phase, and then the depth value of the target area can be calculated.
  • step S301 may include:
  • the number of fixed-point numbers required when using fixed-point numbers to characterize the accuracy of floating-point phases can be determined according to the accuracy of the first phase, that is, the accuracy of the floating-point phase, based on the number of fixed-point numbers.
  • the number of digits of the fixed-point phase is recorded as the first digit in this application.
  • the precision of the first phase is 0.000001
  • 1,000,000 fixed-point numbers are required to represent the precision of the floating-point phase
  • the dynamic range is 0-1,000,000, so the number of digits of the fixed-point phase is 20, expressed as 20bit.
  • the precision of the first phase is 0.001
  • 1000 fixed-point numbers are needed to represent the precision of the floating-point phase
  • the dynamic range is 0-1000, so the number of digits of the fixed-point phase is 10, which means It is 10bit.
  • the third phase can be based on the phase conversion model ⁇ is the third phase, round() means rounding, is the first phase, and n is the first digit of the fixed-point phase.
  • the third phase is substituted into the time-of-flight calculation model , get flight time
  • the above method may further include:
  • the second phase is converted into a fixed-point phase to obtain a fourth phase.
  • a depth value of the target area is calculated according to the fourth phase.
  • the method for converting the second phase into the fourth phase is the same as the method for converting the first phase into the third phase in S301 above, please refer to the description of step S301 , which will not be repeated here.
  • the above method may further include:
  • the depth value and the accurate distance of the above target area it is determined whether the depth value of the above target area meets the requirements.
  • the difference between the depth value of the target area and the accurate distance is calculated. If the above difference is within the preset range, it is determined that the depth value of the target area satisfies the requirements, otherwise it does not.
  • the accuracy of the above method can be determined by verifying whether the depth value of the target area meets the requirements.
  • multiple frames of phase images can also be obtained continuously, and the average value of multiple depth values can be calculated through multiple solutions, so as to obtain a more accurate depth value between the target area and the signal acquisition module.
  • the processing module 400 may include: a data acquisition unit 410 , a first calculation unit 420 , a second calculation unit 430 and a depth calculation unit 440 .
  • the data acquisition unit 410 is configured to acquire a phase image, wherein the phase image is generated according to the reflection signal reflected by the target area collected by the signal acquisition module within a single frame period;
  • the first calculation unit 420 is configured to obtain, based on the phase image, a difference ratio of charge signals corresponding to reflection signals collected by the signal collection module at different times;
  • the second calculation unit 430 is configured to obtain the first phase based on the phase transformation model and the difference ratio of the charge signal when the difference ratio of the charge signal is greater than or equal to a preset threshold;
  • a depth calculation unit 440 configured to calculate a depth value of the target area based on the first phase.
  • the phase transformation model includes:
  • A is the difference ratio of the charge signal
  • B is the preset value
  • what is connected to the first computing unit 420 also includes:
  • the third computing unit for purposes of
  • a second phase between the transmitted signal and the reflected signal is obtained based on a phase calculation model, wherein the phase calculation model includes is the second phase, A is the difference ratio of the charge signal;
  • the first computing unit 420 may be specifically configured to:
  • the first computing unit 420 may be specifically configured to:
  • Calculation model based on difference ratio Calculate the difference ratio of the charge signal, wherein A is the difference ratio of the charge signal, Q1 is the charge signal of the reflected signal collected by the signal acquisition module at the first phase sampling point, and Q2 is the signal acquisition The charge signal of the reflected signal collected by the module at the second phase sampling point, Q3 is the charge signal of the reflected signal collected by the signal acquisition module at the third phase sampling point, and Q4 is the sampled signal of the signal acquisition module at the fourth phase
  • the charge signal of the reflection signal collected at one point, the first phase sampling point, the second phase sampling point, the third phase sampling point and the fourth phase sampling point correspond to different times in a single frame period.
  • the depth calculation unit 440 may specifically be used for:
  • time of flight is calculated, wherein the time of flight represents the time from the time when the signal transmission module sends out the transmission signal to the time when the signal collection module collects the transmission signal;
  • a depth value for the target area is calculated.
  • the first phase is a floating-point phase
  • the second calculation unit 430 further includes:
  • the depth calculation unit 440 can specifically be used for:
  • a depth value of the target area is calculated.
  • the embodiment of the present application also provides a terminal device.
  • the terminal device 500 may include: at least one processor 510, a memory 520, and A running computer program, when the processor 510 executes the computer program, implements the steps in any of the foregoing method embodiments, for example, step S101 to step S104 in the embodiment shown in FIG. 2 .
  • the processor 510 executes the computer program, it realizes the functions of the modules/units in the above-mentioned device embodiments, for example, the functions of the modules 410 to 440 shown in FIG. 6 .
  • the computer program can be divided into one or more modules/units, and one or more modules/units are stored in the memory 520 and executed by the processor 510 to complete the present application.
  • the one or more modules/units may be a series of computer program segments capable of accomplishing specific functions, and the program segments are used to describe the execution process of the computer program in the terminal device 500 .
  • FIG. 7 is only an example of a terminal device, and does not constitute a limitation on the terminal device. It may include more or less components than those shown in the figure, or combine certain components, or different components, such as Input and output devices, network access devices, buses, etc.
  • the processor 510 can be a central processing unit (Central Processing Unit, CPU), and can also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 520 can be an internal storage unit of the terminal device, or an external storage device of the terminal device, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card) etc.
  • the memory 520 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 520 can also be used to temporarily store data that has been output or will be output.
  • the bus may be an Industry Standard Architecture (Industry Standard Architecture, ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus, etc.
  • ISA Industry Standard Architecture
  • PCI Peripheral Component Interconnect
  • EISA Extended Industry Standard Architecture
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the buses in the drawings of the present application are not limited to only one bus or one type of bus.
  • the distance measuring method provided by the embodiment of the present application can be applied to terminal devices such as computers, tablet computers, notebook computers, netbooks, personal digital assistants (PDAs), and the embodiments of the present application do not make any restrictions on the specific types of terminal devices .
  • terminal devices such as computers, tablet computers, notebook computers, netbooks, personal digital assistants (PDAs), and the embodiments of the present application do not make any restrictions on the specific types of terminal devices .
  • PDAs personal digital assistants
  • the embodiment of the present application also provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each embodiment of the above-mentioned distance measuring method can be realized.
  • An embodiment of the present application provides a computer program product.
  • the computer program product When the computer program product is run on a mobile terminal, the mobile terminal can realize the steps in each embodiment of the distance measuring method when executed.
  • the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the above embodiments in the present application can be completed by instructing related hardware through computer programs, and the computer programs can be stored in a computer-readable storage medium.
  • the computer program When executed by a processor, the steps in the above-mentioned various method embodiments can be realized.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable medium may at least include: any entity or device capable of carrying computer program codes to the photographing device/terminal device, recording medium, computer memory, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electrical carrier signal, telecommunication signal and software distribution medium.
  • ROM read-only memory
  • RAM random access memory
  • electrical carrier signal telecommunication signal and software distribution medium.
  • U disk mobile hard disk
  • magnetic disk or optical disk etc.
  • computer readable media may not be electrical carrier signals and telecommunication signals under legislation and patent practice.
  • the disclosed device/network device and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Quality & Reliability (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

本申请适用于图像处理技术领域,提供了一种基于飞行时间的深度计算方法、***及存储介质,该方法包括:获取相位图像,基于相位图像,得到信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值;在电荷信号的差值比值大于或等于预设阈值时,基于相位变换模型和电荷信号的差值比值,得到第一相位;基于第一相位,计算目标区域的深度值;本申请在电荷信号的差值比值大于或等于预设阈值时,采用相位变换模型计算相位,可以保证得到的目标区域与信号采集模块之间的深度值更准确,且计算速度更快。

Description

一种基于飞行时间的深度计算方法、***及存储介质 技术领域
本申请属于图像处理技术领域,尤其涉及一种基于飞行时间的深度计算方法、***及存储介质。
背景技术
随着Tof(Time-of-flight,飞行时间)技术日趋成熟,其获取深度图像的方法一般均是通过获取多个相位图,根据多个相位图的差值计算其反三角函数,进而获得真正的相位,根据相位获得深度图像。然而,反三角函数为非线性函数,可通过查表选择最接近的值并匹配迭代算法获取相位值,计算过程复杂,利用该方法计算相位时不仅需要查找表,查找表会消耗内存,导致性能降低,并无法并行处理求解多个结果,且在浮点数计算时,特别耗时。
进一步地,如果深度相机的深度分辨率为VGA(标准VGA显示区域为640×480),即需要求解640×480个反三角函数的值,以致于其非常耗时,达不到高帧率的输出,如果产品采用计算能力强的处理器,又会增加成本。
发明内容
本申请实施例提供了一种基于飞行时间的深度计算方法、***及存储介质,可以解决深度测量效率低的问题。
第一方面,本申请实施例提供了一种基于飞行时间的深度计算方法,包括:
获取相位图像,其中,相位图像根据信号采集模块在单个帧周期内采集的经过目标区域反射的反射信号生成;
基于所述相位图像,得到所述信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值;
在所述电荷信号的差值比值大于或等于预设阈值时,基于相位变换模型和所述电荷信号的差值比值,得到第一相位;
基于所述第一相位,计算所述目标区域的深度值。
第二方面,本申请实施例提供了一种基于飞行时间的深度计算***,包括:
信号发射模块,用于向目标区域发射红外光束;
信号采集模块,包括至少一个抽头,用于以不同时序采集经所述目标区域反射回的反射信号的电荷信号,并基于电荷信号形成相位图像;
处理模块,用于根据所述相位图像和上述第一方面所述的基于飞行时间的深度计算方法计算所述目标区域的深度值。
第三方面,本申请实施例提供了一种终端设备,包括:存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现上述第一方面中任一项所述的基于飞行时间的深度计算方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现上述第一方面中任一项所述的基于飞行时间的深度计算方法。
第五方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中任一项所述的基于飞行时间的深度计算方法。
可以理解的是,上述第二方面至第五方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
本申请实施例与现有技术相比存在的有益效果是:本申请获取相位图像,基于相位图像,得到信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值;在电荷信号的差值比值大于或等于预设阈值时,基于相位变换模型和电荷信号的差值比值,得到第一相位;基于第一相位,计算目标区域的深度值;本申请在电荷信号的差值比值大于或等于预设阈值时,采用相位变换模 型计算相位,可以保证得到的目标区域与信号采集模块之间的深度值更准确,且计算速度更快。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一实施例提供的基于飞行时间的深度计算***的结构示意图;
图2是本申请一实施例提供的基于飞行时间的深度计算方法的流程示意图;
图3是本申请一实施例提供的电荷信号的差值比值的计算方法的流程示意图;
图4是本申请一实施例提供的深度值的计算方法的流程示意图;
图5是本申请另一实施例提供的深度值的计算方法的流程示意图;
图6是本申请一实施例提供的处理模块的结构示意图;
图7是本申请一实施例提供的终端设备的结构示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定***结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的***、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当……时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
图1为根据本发明提供的一种基于飞行时间的深度计算***的结构示意图,该***包括:
信号发射模块10,用于向目标区域发射红外光束;
信号采集模块20,包括至少一个抽头,用于以不同时序采集经目标区域反射回的反射信号的电荷信号,并基于电荷信号形成相位图像;
处理模块30,基于相位图像得到不同时序的电荷信号的差值比值,并根据电荷信号的差值比值与预设阈值的关系计算对应的相位,进而利用相位计算目标区域的深度值。
在本实施例中,信号发射模块10包括光源,光源可以是发光二极管(LED)、 边发射激光器(EEL)、垂直腔面发射激光器(VCSEL)等光源,也可以是多个光源组成的光源阵列,光源所发射的光束除红外光外,还可以是可见光、紫外光等。
在本实施例中,电源可以为稳定直流电源,光源在不同功率的稳定直流电源的控制下以一定的频率发射不同强度的红外光束,可以用于间接飞行时间法(Indirect-TOF)测量,频率根据测量距离进行设定,比如可以设置成1MHz~100MHz,测量距离在几米至几百米。具体的,光源发射的光束振幅可被调制为脉冲光束、方波光束、正弦波光束等光束,此处不作限制。
在本实施例中,信号采集模块20可以是电荷耦合元件(CCD)、互补金属氧化物半导体(CMOS)、雪崩二极管(AD)、单光子雪崩二极管(SPAD)等组成的图像传感器。一般地,与图像传感器连接的还包括由信号放大器、时数转换器(TDC)、模数转换器(ADC)等器件中的一种或多种组成的读出电路。
在本实施例中,处理模块30还可以用于控制信号发射模块10发射发射信号,反射信号可以是红外光束。信号采集模块20用于接收发射信号,发射信号可以是发射光束。
在本实施例中,信号采集模块20可以为TOF图像传感器,其包括至少一个像素。与传统的仅用于拍照的图像传感器相比,本申请的TOF图像传感器每个像素可以包含4个或4个以上的抽头(tap,用于在相应电极的控制下存储并读取或者排出由反射光脉冲产生的电荷信号),每个抽头在单个帧周期T(或单次曝光时间内)内以一定的次序依次切换抽头以采集像素接收反射光脉冲产生的电荷信号。需要说明的是,每个像素包含的抽头数可根据具体情况进行设计,每个像素亦可仅包括一个抽头,该抽头按时序进行电荷信号采集即可,此处不作限制。
以下结合图1对本申请实施例的基于飞行时间的深度计算方法进行详细说明。
图2示出了本申请提供的基于飞行时间的深度计算的示意性流程图,参照 图2,对该方法的详述如下:
S101,获取相位图像,其中,相位图像根据信号采集模块在单个帧周期内采集的经过目标区域反射的反射信号生成。
在本实施例中,信号采集模块在单个帧周期或单次曝光时间内可以通过预设时序采集经过目标物体反射回来的反射信号,反射信号可以产生电荷信号,抽头以相同的时序采集反射信号。其中,单个帧周期指的是获取一帧图像的时间,预设时序指的是预设的时间和顺序。在发射信号为红外光束时,反射信号可以是反射光束。
具体的,信号采集模块像素上的抽头在单个帧周期T内(或单次曝光时间内)可通过一定时序采集像素接收反射红外光产生的电子,并将电子转换为电荷信号,将电荷信号转换成灰度值后,将灰度值保存至对应的像素上。将信号采集模块上的所有像素中保存的灰度值整合为一幅图像,即为相位图像。需要说明的是,像素中保存的灰度值表征反射光信号强度;一个像素中可存储多个灰度值以表示多个抽头在不同时序采集的电子数。
S102,基于所述相位图像,得到所述信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值。
具体的,基于相位图像,可以得到信号采集模块中的抽头按照不同时序采集的反射信号对应的电荷信号,并根据电荷信号计算差值比值。
如图3所示,在一个实施例中,步骤S102的实现过程可以包括:
S1021,基于相位图像中的灰度值,得到所述灰度值对应的电荷信号。
在本实施例中,电荷信号(即电子数)可以根据灰度值、信号采集模块的偏置、信号采集时使用的增益计算得到。具体的,电荷信号可以基于电子数计算模型Q=(ADU-m)×G,其中,Q为电荷信号,ADU为灰度值,m为信号采集模块的偏置,G为增益。
需要说明的是,一个灰度值可表征一个电子数,一个像素可包括一个或多个抽头,一个抽头对应一个灰度值,多个抽头对应多个灰度值,也即一个像素 中可包括多个灰度值。
S1022,基于所述电荷信号,计算抽头不同时序采集的电荷信号的差值比值。
更具体地,基于间接飞行时间测量法,在发射信号为正弦波信号或方波信号时,发射信号和发射信号之间的相位延迟,可以根据四个相位的解调信号得到,四个相位的解调信号互相之间的相位差值为90度。若每个像素对应四个抽头,四抽头探测到的反射信号的占空比都为50%的连续波,四抽头探测到的反射信号对于发射信号的延迟分别是0°、90°、180°以及270°。在反射信号在相位0°、90°、180°以及270°进行采样,本申请中将在0°采样点记为第一相位采样点,将90°采样点记为第二相位采样点,将180°采样点记为第三相位采样点,将180°采样点记为第四相位采样点。基于差值比值计算模型得到电荷信号的差值比值包括:
Figure PCTCN2021107952-appb-000001
其中,A为单个帧周期内抽头以不同时序获取的电荷信号的差值比值,Q 1为所述信号采集模块在第一相位采样点采集的反射信号的电荷信号,Q 2为所述信号采集模块在第二相位采样点采集的反射信号的电荷信号,Q 3为所述信号采集模块在第三相位采样点采集的反射信号的电荷信号,Q 4为所述信号采集模块在第四相位采样点采集的反射信号的电荷信号,所述第一相位采样点、所述第二相位采样点、所述第三相位采样点和所述第四相位采样点对应单个帧周期内的不同时间。
S103,在所述电荷信号的差值比值大于或等于预设阈值时,基于相位变换模型和所述电荷信号的差值比值,得到第一相位。
在本实施例中,在电荷信号的差值比值大于或等于预设阈值时,如果直接利用间接飞行时间测量法中计算相位的模型计算相位,得到的目标区域的深度值的精度较低,误差较大,因此,在电荷信号的差值比值大于或等于预设阈值时,可以利用相位变换模型,将电荷信号的差值比值进行一定的变换后计算相 位,本申请中记为第一相位,可以得到更准确的相位,进而使最终得到的深度值更准确。
在一个实施例中,相位变换模型包括:
Figure PCTCN2021107952-appb-000002
其中,
Figure PCTCN2021107952-appb-000003
为所述第一相位,
Figure PCTCN2021107952-appb-000004
A为电荷信号的差值比值,B为预设值。
在本实施例中,计算
Figure PCTCN2021107952-appb-000005
时可以对
Figure PCTCN2021107952-appb-000006
进行泰勒展开计算。对
Figure PCTCN2021107952-appb-000007
进行泰勒展开,拟合度高,可以确保
Figure PCTCN2021107952-appb-000008
的精度。
在本实施例中,假设arctanx=arctan((A+B)/(1-A×B))=arctan(A)+arctan(B),因此,arctan(A)=arctanx-arctanB,又因为A大于预设阈值,x=(A+B)/(1-A×B)则1/x一定小于预设阈值,因此,
Figure PCTCN2021107952-appb-000009
因此可得,
Figure PCTCN2021107952-appb-000010
在本实施例中,预设值B可以根据需要设置。
具体的,预设阈值可以根据反三角函数的性质设置,信号强度差值大于0.5时,直接利用相位计算模型计算信号强度差值的反三角函数,精度会降低,因此,预设阈值可以设置为0.5,在信号强度差值小于0.5时,直接利用相位计算模型计算相位可以保证计算精度。
S104,基于所述第一相位,计算所述目标区域的深度值。
在本实施例中,根据间接飞行时间测量法,得到相位后,可以计算飞行时间,进而计算深度值,深度值表征目标区域与信号采集模块之间的距离。
如图4所示,具体的,步骤S104的实现过程可以包括:
S1041,基于所述第一相位,计算飞行时间,其中,所述飞行时间表征所述信号发射模块发出所述发射信号的时间至所述信号采集模块采集到所述发射信号的时间。
在本实施例中,飞行时间可以根据飞行时间计算模型
Figure PCTCN2021107952-appb-000011
其中,Δt为 飞行时间,
Figure PCTCN2021107952-appb-000012
为相位,在电荷信号的差值比值大于或等于预设阈值时
Figure PCTCN2021107952-appb-000013
为第一相位,f m为信号采集模块的调制频率。
S1042,基于所述飞行时间,计算所述目标区域的深度值。
在本实施例中,深度值可以根据深度计算模型
Figure PCTCN2021107952-appb-000014
其中,d为深度值,c为真空中的光速,Δt为飞行时间。
在本实施例中,如果信号发射模块和信号采集模块两者之间的光心间的距离小于预设值,且两者之间的光轴相互平行,则可以直接根据深度计算模型计算距离。反之,则需要对信号采集模块和信号发射模块进行标定,获得信号采集模块的内外参数,然后利用内外参数和深度计算模型计算深度值。
本申请实施例中,通过获取信号采集模块在单个帧周期内采集经目标区域反射回的反射光束形成的相位图像;基于所述相位图像,获取每个抽头的电荷信号并计算电荷信号的差值比值。在电荷信号差值比值大于或等于预设阈值时,基于相位变换模型和所述电荷信号的差值比值,得到第一相位;基于第一相位,计算目标区域的深度值;本申请在电荷信号的差值比值大于或等于预设阈值时,采用相位变换模型计算第一相位,可以保证得到的目标区域的深度值更准确,且本申请相对于现有技术计算简单,计算效率高。
如图5所示,在一种可能的实现方式中,在步骤S104之后,上述方法还可以包括:
S201,在所述电荷信号的差值比值小于预设阈值时,基于相位计算模型,得到第二相位,其中,所述相位计算模型包括
Figure PCTCN2021107952-appb-000015
为第二相位,A为电荷信号的差值比值。
在本实施例中,在电荷信号的差值比值小于预设阈值时,可以直接利用相位计算模型计算相位,本申请中记为第二相位。
S202,基于所述第二相位,计算所述目标区域的深度值。
在本实施例中,根据第二相位计算深度值的方法与上述根据第一相位计算 深度值的方法相同,可以参考上述根据第一相位计算深度值的方法,在此不再赘述。
在一种可能的实现方式中,如果第一相位为浮点型相位,在步骤S103之后,上述方法还可以包括:
S301,将所述第一相位转换成定点型相位,得到第三相位。
在本实施例中,浮点型数据的计算量大,相较于定点型数据的计算慢,因此,在第一相位为浮点型相位时,可以先将浮点型相位转换成定点型相位然后再进行计算,可以提高计算效率。
在本实施例中,得到第三相位后,可以根据第三相位计算飞行时间,进而计算目标区域的深度值。
具体的,步骤S301的实现方法可以包括:
S3011,基于第一相位的精度,确定定点型相位的第一位数。
在本实施例中,可以根据第一相位的精度,也就是浮点型相位的精度确定使用定点数表征浮点型相位的精度时所需要的定点数的个数,基于定点数的个数确定定点型相位的位数,本申请中记为第一位数。
作为举例,如果第一相位的精度为0.000001,转换为定点型相位时,则需要1000000个定点数才可表示浮点型相位的精度,动态范围为0-1000000,则定点型相位的位数为20,表示为20bit。如果第一相位的精度为0.001,转换为定点型相位时,则需要1000个定点数才可表示浮点型相位的精度,动态范围为0-1000,则定点型相位的位数为10,表示为10bit。
S3012,基于上述定点型相位的第一位数,得到第三相位。
在本实施例中,第三相位可以根据相位转换模型
Figure PCTCN2021107952-appb-000016
Δθ为第三相位,round()表示取整,
Figure PCTCN2021107952-appb-000017
为第一相位,n为定点型相位的第一位数。
在本实施例中,将第三相位代入飞行时间计算模型
Figure PCTCN2021107952-appb-000018
中,得到飞行 时间
Figure PCTCN2021107952-appb-000019
本申请实施例中,通过将浮点型相位转换成定点型相位,利用定点型相位计算目标区域的深度值,既保留了原浮点型相位的精度,亦提高计算效率,使本申请可在嵌入式等计算能力较弱的设备中使用。
在一种可能的实现方式中,如果第二相位为浮点型相位,在步骤S201之后,上述方法还可以包括:
将所述第二相位转换成定点型相位,得到第四相位。根据第四相位计算目标区域的深度值。
在本实施例中,将第二相位转换成第四相位的方法与上述S301中将第一相位转换成第三相位的方法相同,请参照步骤S301的说明,在此不再赘述。
在一种可能的实现方式中,为了验证在电荷信号的差值比值大于或等于预设阈值时所计算的目标区域的深度值的准确性,在步骤S104之后,上述方法还可包括:
根据上述目标区域的深度值和准确距离,确定上述目标区域的深度值是否满足要求。
在本实施例中,计算目标区域的深度值与准确距离的差值。若上述差值在预设范围内,则确定目标区域的深度值满足要求,反之则不满足。
本申请实施例中,通过验证目标区域的深度值是否满足要求,可以确定上述方法的准确性。
需要说明的是,还可以连续获取多帧相位图像,通过多次求解计算多个深度值的平均值,进而获取目标区域与信号采集模块之间更加精确的深度值。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
参考图6,该处理模块400可以包括:数据获取单元410、第一计算单元 420、第二计算单元430和深度计算单元440。
其中,数据获取单元410,用于获取相位图像,其中,相位图像根据信号采集模块在单个帧周期内采集的经过目标区域反射的反射信号生成;
第一计算单元420,用于基于所述相位图像,得到所述信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值;
第二计算单元430,用于在所述电荷信号的差值比值大于或等于预设阈值时,基于相位变换模型和所述电荷信号的差值比值,得到第一相位;
深度计算单元440,用于基于所述第一相位,计算所述目标区域的深度值。
在一种可能的实现方式中,所述相位变换模型包括:
Figure PCTCN2021107952-appb-000020
其中,
Figure PCTCN2021107952-appb-000021
为所述第一相位,
Figure PCTCN2021107952-appb-000022
A为电荷信号的差值比值,B为预设值。
在一种可能的实现方式中,与第一计算单元420相连的还包括:
第三计算单元,用于
在所述电荷信号的差值比值小于预设阈值时,基于相位计算模型,得到所述发射信号和所述反射信号之间的第二相位,其中,所述相位计算模型包括
Figure PCTCN2021107952-appb-000023
为第二相位,A为所述电荷信号的差值比值;
基于所述第二相位,计算所述目标区域的深度值。
在一种可能的实现方式中,第一计算单元420具体可以用于:
基于所述相位图像中的灰度值,得到各个所述灰度值对应的电荷信号;
基于各个所述电荷信号,计算所述电荷信号的差值比值。
在一种可能的实现方式中,第一计算单元420具体可以用于:
基于差值比值计算模型
Figure PCTCN2021107952-appb-000024
计算所述电荷信号的差值比值,其中,A为电荷信号的差值比值,Q 1为所述信号采集模块在第一相位采样点采集的反射信号的电荷信号,Q 2为所述信号采集模块在第二相位采样点采集的反射信号 的电荷信号,Q 3为所述信号采集模块在第三相位采样点采集的反射信号的电荷信号,Q 4为所述信号采集模块在第四相位采样点采集的反射信号的电荷信号,所述第一相位采样点、所述第二相位采样点、所述第三相位采样点和所述第四相位采样点对应单个帧周期中的不同时间。
在一种可能的实现方式中,深度计算单元440具体可以用于:
基于所述第一相位,计算飞行时间,其中,所述飞行时间表征所述信号发射模块发出所述发射信号的时间至所述信号采集模块采集到所述发射信号的时间;
基于所述飞行时间,计算所述目标区域的深度值。
在一种可能的实现方式中,所述第一相位为浮点型相位,与第二计算单元430相连的还包括:
将所述第一相位转换成定点型相位,得到第三相位;
相应的,深度计算单元440具体可以用于:
基于所述第三相位,计算所述目标区域的深度值。
需要说明的是,上述装置/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其具体功能及带来的技术效果,具体可参见方法实施例部分,此处不再赘述。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述***中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程, 在此不再赘述。
本申请实施例还提供了一种终端设备,参见图7,该终端设备500可以包括:至少一个处理器510、存储器520以及存储在所述存储器520中并可在所述至少一个处理器510上运行的计算机程序,所述处理器510执行所述计算机程序时实现上述任意各个方法实施例中的步骤,例如图2所示实施例中的步骤S101至步骤S104。或者,处理器510执行所述计算机程序时实现上述各装置实施例中各模块/单元的功能,例如图6所示模块410至440的功能。
示例性的,计算机程序可以被分割成一个或多个模块/单元,一个或者多个模块/单元被存储在存储器520中,并由处理器510执行,以完成本申请。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序段,该程序段用于描述计算机程序在终端设备500中的执行过程。
本领域技术人员可以理解,图7仅仅是终端设备的示例,并不构成对终端设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如输入输出设备、网络接入设备、总线等。
处理器510可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
存储器520可以是终端设备的内部存储单元,也可以是终端设备的外部存储设备,例如插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。所述存储器520用于存储所述计算机程序以及终端设备所需的其他程序和数据。所述存储器520还可以用于暂时地存储已经输出或者将要输出的数据。
总线可以是工业标准体系结构(Industry Standard Architecture,ISA)总线、 外部设备互连(Peripheral Component,PCI)总线或扩展工业标准体系结构(Extended Industry Standard Architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,本申请附图中的总线并不限定仅有一根总线或一种类型的总线。
本申请实施例提供的距离测量方法可以应用于计算机、平板电脑、笔记本电脑、上网本、个人数字助理(personal digital assistant,PDA)等终端设备上,本申请实施例对终端设备的具体类型不作任何限制。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现可实现上述距离测量方法各个实施例中的步骤。
本申请实施例提供了一种计算机程序产品,当计算机程序产品在移动终端上运行时,使得移动终端执行时实现可实现上述距离测量方法各个实施例中的步骤。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质至少可以包括:能够将计算机程序代码携带到拍照装置/终端设备的任何实体或装置、记录介质、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质。例如U盘、移动硬盘、磁碟或者光盘等。在某些司法管辖区,根据立法和专利实践,计算机可读介质不可以是电载波信号和电信信号。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详 述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置/网络设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/网络设备实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (10)

  1. 一种基于飞行时间的深度计算方法,其特征在于,包括:
    获取相位图像,其中,所述相位图像根据信号采集模块在单个帧周期内采集的经过目标区域反射的反射信号生成;
    基于所述相位图像,得到所述信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值;
    在所述电荷信号的差值比值大于或等于预设阈值时,基于相位变换模型和所述电荷信号的差值比值,得到第一相位;
    基于所述第一相位,计算所述目标区域的深度值。
  2. 如权利要求1所述的基于飞行时间的深度计算方法,其特征在于,所述相位变换模型包括:
    Figure PCTCN2021107952-appb-100001
    其中,
    Figure PCTCN2021107952-appb-100002
    为所述第一相位,
    Figure PCTCN2021107952-appb-100003
    A为所述电荷信号的差值比值,B为预设值。
  3. 如权利要求1所述的基于飞行时间的深度计算方法,其特征在于,在基于所述相位图像,得到所述信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值之后,包括:
    在所述电荷信号的差值比值小于预设阈值时,基于相位计算模型,得到第二相位,其中,所述相位计算模型包括
    Figure PCTCN2021107952-appb-100004
    Figure PCTCN2021107952-appb-100005
    为第二相位,A为所述电荷信号的差值比值;
    基于所述第二相位,计算所述目标区域的深度值。
  4. 如权利要求1所述的基于飞行时间的深度计算方法,其特征在于,所述基于所述相位图像,得到所述信号采集模块在不同时间采集的反射信号对应的电荷信号的差值比值,包括:
    基于所述相位图像中的灰度值,得到各个所述灰度值对应的电荷信号;
    基于各个所述电荷信号,计算所述电荷信号的差值比值。
  5. 如权利要求4所述的基于飞行时间的深度计算方法,其特征在于,在所述基于各个所述电荷信号,计算所述电荷信号的差值比值,包括:
    基于差值比值计算模型
    Figure PCTCN2021107952-appb-100006
    计算所述电荷信号的差值比值,其中,A为电荷信号的差值比值,Q 1为所述信号采集模块在第一相位采样点采集的反射信号的电荷信号,Q 2为所述信号采集模块在第二相位采样点采集的反射信号的电荷信号,Q 3为所述信号采集模块在第三相位采样点采集的反射信号的电荷信号,Q 4为所述信号采集模块在第四相位采样点采集的反射信号的电荷信号,所述第一相位采样点、所述第二相位采样点、所述第三相位采样点和所述第四相位采样点对应单个帧周期中的不同时间。
  6. 如权利要求1至5任一项所述的基于飞行时间的深度计算方法,其特征在于,所述基于所述第一相位,计算所述目标区域的深度值,包括:
    基于所述第一相位,计算飞行时间,其中,所述飞行时间表征所述信号发射模块发出发射信号的时间至所述信号采集模块采集到所述发射信号的时间;
    基于所述飞行时间,计算所述目标区域的深度值。
  7. 如权利要求1所述的基于飞行时间的深度计算方法,其特征在于,所述第一相位为浮点型相位,在基于相位变换模型和所述电荷信号的差值比值,得到第一相位之后,包括:
    将所述第一相位转换成定点型相位,得到第三相位;
    相应的,基于所述第一相位,计算所述目标区域的深度值,包括:
    基于所述第三相位,计算所述目标区域的深度值。
  8. 一种基于飞行时间的深度计算***,其特征在于,包括:
    信号发射模块,用于向目标区域发射红外光束;
    信号采集模块,包括至少一个抽头,用于以不同时序采集经所述目标区域反射回的反射信号的电荷信号,并基于电荷信号形成相位图像;
    处理模块,用于根据所述相位图像和上述权利要求1至7任一项所述的基 于飞行时间的深度计算方法计算所述目标区域的深度值。
  9. 一种终端设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如权利要求1至7任一项所述的基于飞行时间的深度计算方法。
  10. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至7任一项所述的基于飞行时间的深度计算方法。
PCT/CN2021/107952 2021-05-21 2021-07-22 一种基于飞行时间的深度计算方法、***及存储介质 WO2022241943A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/226,052 US20230366992A1 (en) 2021-05-21 2023-07-25 Depth calculation method and system based on time of flight, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110558527.6 2021-05-21
CN202110558527.6A CN113298778B (zh) 2021-05-21 2021-05-21 一种基于飞行时间的深度计算方法、***及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/226,052 Continuation US20230366992A1 (en) 2021-05-21 2023-07-25 Depth calculation method and system based on time of flight, and storage medium

Publications (1)

Publication Number Publication Date
WO2022241943A1 true WO2022241943A1 (zh) 2022-11-24

Family

ID=77323662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107952 WO2022241943A1 (zh) 2021-05-21 2021-07-22 一种基于飞行时间的深度计算方法、***及存储介质

Country Status (3)

Country Link
US (1) US20230366992A1 (zh)
CN (1) CN113298778B (zh)
WO (1) WO2022241943A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320667A (zh) * 2022-09-07 2023-06-23 奥比中光科技集团股份有限公司 一种消除运动伪影的深度相机及方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150193938A1 (en) * 2014-01-06 2015-07-09 Microsoft Corporation Fast general multipath correction in time-of-flight imaging
CN111487648A (zh) * 2020-04-16 2020-08-04 北京深测科技有限公司 一种基于飞行时间的非视域成像方法和***
CN111580067A (zh) * 2019-02-19 2020-08-25 光宝电子(广州)有限公司 基于飞行时间测距的运算装置、感测装置及处理方法
CN111736173A (zh) * 2020-05-24 2020-10-02 深圳奥比中光科技有限公司 一种基于tof的深度测量装置、方法及电子设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3548642A (en) * 1967-03-02 1970-12-22 Magnaflux Corp Synthetic aperture ultrasonic imaging systems
KR101666020B1 (ko) * 2010-06-25 2016-10-25 삼성전자주식회사 깊이 영상 생성 장치 및 그 방법
KR101904720B1 (ko) * 2012-12-28 2018-10-05 삼성전자주식회사 영상 처리 장치 및 방법
KR102194233B1 (ko) * 2014-05-19 2020-12-22 삼성전자주식회사 깊이 영상 생성 장치 및 방법
US9858672B2 (en) * 2016-01-15 2018-01-02 Oculus Vr, Llc Depth mapping using structured light and time of flight
KR102618542B1 (ko) * 2016-09-07 2023-12-27 삼성전자주식회사 ToF (time of flight) 촬영 장치 및 ToF 촬영 장치에서 깊이 이미지의 블러 감소를 위하여 이미지를 처리하는 방법
KR102560397B1 (ko) * 2018-09-28 2023-07-27 엘지이노텍 주식회사 카메라 장치 및 그의 깊이 정보 추출 방법
KR102562361B1 (ko) * 2018-10-05 2023-08-02 엘지이노텍 주식회사 깊이 정보를 획득하는 방법 및 카메라 모듈
CN109544617B (zh) * 2018-12-05 2024-04-16 光微信息科技(合肥)有限公司 应用于相位式tof传感器的温度补偿方法以及温度补偿装置
CN109889809A (zh) * 2019-04-12 2019-06-14 深圳市光微科技有限公司 深度相机模组、深度相机、深度图获取方法以及深度相机模组形成方法
CN111311615A (zh) * 2020-02-11 2020-06-19 香港光云科技有限公司 基于ToF的场景分割方法及***、存储介质及电子设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150193938A1 (en) * 2014-01-06 2015-07-09 Microsoft Corporation Fast general multipath correction in time-of-flight imaging
CN111580067A (zh) * 2019-02-19 2020-08-25 光宝电子(广州)有限公司 基于飞行时间测距的运算装置、感测装置及处理方法
CN111487648A (zh) * 2020-04-16 2020-08-04 北京深测科技有限公司 一种基于飞行时间的非视域成像方法和***
CN111736173A (zh) * 2020-05-24 2020-10-02 深圳奥比中光科技有限公司 一种基于tof的深度测量装置、方法及电子设备

Also Published As

Publication number Publication date
CN113298778B (zh) 2023-04-07
US20230366992A1 (en) 2023-11-16
CN113298778A (zh) 2021-08-24

Similar Documents

Publication Publication Date Title
US9768785B2 (en) Methods and apparatus for counting pulses representing an analog signal
US10094915B2 (en) Wrap around ranging method and circuit
CN107968658B (zh) 用于lidar***的模数转换器
US8432304B2 (en) Error correction in thermometer codes
WO2022227405A1 (zh) 道路病害检测方法及装置、电子设备和存储介质
WO2022241943A1 (zh) 一种基于飞行时间的深度计算方法、***及存储介质
CN112558096B (zh) 一种基于共享内存的测距方法、***以及存储介质
CN110488311B (zh) 深度距离测量方法、装置、存储介质及电子设备
WO2023273094A1 (zh) 一种光谱反射率的确定方法、装置及设备
US10416294B2 (en) Ranging device read-out circuit
CN107817484B (zh) 激光雷达放大电路的放大倍数处理方法及装置
CN115755078A (zh) 一种激光雷达的测距方法、激光雷达及存储介质
WO2022241942A1 (zh) 一种深度相机及深度计算方法
CN113487678A (zh) 一种相机校准方法、***及处理电路
WO2022160622A1 (zh) 一种距离测量方法、装置及***
CN112255635A (zh) 一种距离测量方法、***及设备
CN110612429A (zh) 三维影像测距***及方法
WO2022188885A1 (zh) 飞行时间测量方法、装置及时间飞行深度相机
CN107657078B (zh) 基于fpga的超声相控阵浮点聚焦发射实现方法
US11953620B2 (en) Arrangement and method for runtime measurement of a signal between two events
Sheehan et al. Hardware friendly spline sketched lidar
WO2023206352A1 (en) Apparatus for light detection and ranging
US20230221419A1 (en) Lidar adaptive single-pass histogramming for low power lidar system
CN115390423B (zh) 一种高精度多事件时间数字转换器及转换方法
Riccardo et al. Event-driven SPAD camera with 60 ps IRF and up to 1.6· 10^ 8 photon time-tagging measurements per second

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21940392

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21940392

Country of ref document: EP

Kind code of ref document: A1