CN112950694A - Image fusion method, single camera module, shooting device and storage medium


Info

Publication number
CN112950694A
Authority
CN
China
Prior art keywords
light source
tof
resolution
lens module
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110181961.7A
Other languages
Chinese (zh)
Inventor
黄毅鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110181961.7A
Publication of CN112950694A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 - Constructional details
    • H04N23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of Optical Distance (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The embodiment of the invention discloses an image fusion method, a single camera module, a shooting device and a storage medium, which are used for outputting a depth map with low power consumption and high resolution by using a single iToF module. The embodiment of the invention is applied to a single camera module, and the single camera module comprises a time-of-flight (ToF) receiving lens module and a ToF transmitting lens module. The method provided by the embodiment of the invention comprises the following steps: emitting a first light source through the ToF emission lens module; acquiring, through the ToF receiving lens module, a second light source returned by an object from the first light source, and calculating, according to the second light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold; and fusing the intensity map and the depth map to obtain a target depth map.

Description

Image fusion method, single camera module, shooting device and storage medium
Technical Field
The present invention relates to the field of images, and in particular, to an image fusion method, a single camera module, a shooting device, and a storage medium.
Background
In the prior art, outputting a dense depth map requires two hardware modules, namely a direct time-of-flight (dToF) module and an RGB wide-angle camera module, which places high demands on hardware.
Disclosure of Invention
The embodiment of the invention provides an image fusion method, a single camera module, a shooting device and a storage medium, which are used for outputting a depth map with low power consumption and high resolution by using a single iToF module.
Optionally, in a first aspect of the present application, an image fusion method is provided, where the method is applied to a single camera module, where the single camera module includes a time-of-flight ToF receiving lens module and a ToF transmitting lens module, and the method may include:
emitting a first light source through the ToF emission lens module;
acquiring a second light source returned by the object from the first light source through the ToF receiving lens module, and calculating according to the second light source to obtain an intensity map with the resolution being greater than a first threshold value and a depth map with the resolution being less than a second threshold value;
and fusing the intensity map and the depth map to obtain a target depth map.
This application second aspect provides a single camera module, single camera module includes time of flight ToF receiving lens module and ToF transmitting lens module, single camera module can include:
the transmitting module is used for transmitting a first light source through the ToF transmitting lens module;
the acquisition module is used for acquiring a second light source returned by the object from the first light source through the ToF receiving lens module, and calculating an intensity map with the resolution being greater than a first threshold value and a depth map with the resolution being less than a second threshold value according to the second light source;
this application third aspect provides a single camera module, a serial communication port, single camera module includes time of flight TOF receiving lens module and TOF transmitting lens module, single camera module includes:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for performing the method of the first aspect of the application.
In another aspect, an embodiment of the present invention provides a shooting device, which may include the single camera module according to the second aspect or the third aspect.
A further aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method according to the first aspect of the embodiments of the present invention.
In another aspect, an embodiment of the present invention discloses a computer program product, which, when running on a computer, causes the computer to execute the method of the first aspect of the embodiment of the present invention.
In another aspect, an embodiment of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is caused to execute the method according to the first aspect of the embodiment of the present invention.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the present application, the method is applied to a single camera module, where the single camera module includes a time-of-flight ToF receiving lens module and a ToF transmitting lens module, and the method includes: emitting a first light source through the ToF emission lens module; acquiring a second light source returned by the object from the first light source through the ToF receiving lens module, and calculating according to the second light source to obtain an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold; and fusing the intensity map and the depth map to obtain a target depth map. By adopting a single iToF module, the purpose of outputting a depth map with low power consumption and high resolution is achieved. That is, a high-resolution intensity map and a low-resolution depth map can be acquired and finally fused into a high-resolution depth map.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from these drawings.
FIG. 1 is a schematic diagram of several currently mainstream 3D vision schemes;
FIG. 2 is a schematic diagram of the principle of iToF measurement;
FIG. 3 is a schematic diagram of the dToF measurement principle;
FIG. 4 is a schematic diagram of a conventional CMOS pixel;
FIG. 5 is a schematic diagram of an iToF pixel circuit;
FIG. 6 is a schematic diagram of an embodiment of a method for image fusion in an embodiment of the present application;
fig. 7A is a schematic view of a single camera module in the embodiment of the present application;
fig. 7B is a functional schematic diagram of each element in a single camera module in the embodiment of the present application;
fig. 7C is a schematic diagram of a point light source VCSEL chip emitting 10 × 10 light emitting arrays through a point light source emitting lens in the embodiment of the present application;
FIG. 7D is a schematic diagram of an embodiment of a lattice light source illuminating an array of iToF sensor photosensitive pixels;
FIG. 7E is a schematic diagram of a single camera module including a DOE element according to an embodiment of the present application;
FIG. 7F is a schematic view of a single camera module without a DOE element in an embodiment of the present application;
FIG. 7G is a schematic diagram of a high resolution depth map generated by fusing a high resolution intensity map and a low resolution depth map according to an embodiment of the present disclosure;
fig. 8A is a schematic view of an embodiment of a single camera module according to the embodiment of the present invention;
FIG. 8B is a diagram of an embodiment of a camera in an embodiment of the invention;
fig. 9 is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides an image fusion method, a single camera module, a shooting device and a storage medium, which are used for outputting a depth map with low power consumption and high resolution by using a single iToF module.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained based on the embodiments of the present invention shall fall within the protection scope of the present invention.
The optical ranging imaging technology can obtain complete 3D information of a scene, and helps a machine to realize high-precision identification, positioning, scene reconstruction and understanding, and has become one of essential basic technologies for VR (Virtual Reality)/AR (Augmented Reality) application.
Fig. 1 is a schematic diagram of several currently mainstream 3D vision schemes and summarizes the current mainstream optical 3D (3-dimensional) imaging techniques. It can be seen that optical 3D imaging technology can be divided into two major categories, active and passive, according to whether an additional light source is needed for supplementary lighting. Passive 3D imaging is similar to the human eye and can be achieved with only one or more conventional two-dimensional imaging chips. However, for scenes with few feature points, such as a large textureless surface, or scenes that are too bright or too dark, ranging is prone to errors. Moreover, ranging that relies only on the imaging sensor involves a large amount of calculation and consumes more time and energy. Active 3D imaging technology is currently more widely used in the field of consumer electronics. By contrast, active imaging techniques require a special light source to emit a signal, which is reflected back from the target object and received by a conventional or customized imaging chip. The signal returned by the object is influenced in time and space by the reflecting object and therefore carries distance information of the measured object. Thus, the distance information of the scene can be obtained by processing and interpreting the echo signal. Among active 3D imaging systems, the technologies most widely used in consumer electronics are structured light and time-of-flight (ToF). Structured light requires a certain baseline distance between the laser emitter and the receiving chip, so the module is relatively bulky; by comparison, the more compact ToF technology is favored by many terminal manufacturers.
ToF technology is very close to the radar principle: a signal is transmitted by an active transmitting end (a radar transmits electromagnetic waves; ToF technology transmits optical signals, which are in fact also electromagnetic waves); after the signal is reflected by an object, a receiving device receives the return signal (echo) carrying object information (position, speed, etc.), and the information of the measured object is calculated after the return signal is processed. In the specific case of ToF technology, the core element of the receiving device is a customized sensor. The time taken by the light to travel from the emitting end to the object and back to the sensor can be measured or calculated, and the distance between the object and the sensor device can then be calculated by combining this time with the speed of light.
According to the principle used to calculate the time of flight, ToF technology can be divided into two major categories: indirect time of flight (iToF) and direct time of flight (dToF). iToF calculates the time of flight of light between the sensor and the measured object by measuring the phase difference between the transmitted wave and the echo, and thereby the distance; dToF calculates the distance by directly measuring the return time of the echo.
Fig. 2 is a schematic diagram illustrating the principle of iToF measurement, and fig. 3 is a schematic diagram illustrating the principle of dToF measurement. The main hardware difference between iToF and dToF is that the sensor at the receiving end operates on a completely different principle. The photosensitive element of iToF is a PD (photodiode), the same in principle as the photosensitive device of a commonly used complementary metal oxide semiconductor (CMOS) imaging chip. The main difference between the two lies in how the electrical signal is processed after the PD converts the optical signal into the electrical signal.
Fig. 4 is a schematic diagram of a conventional CMOS pixel, and fig. 5 is a schematic diagram of an iToF pixel circuit. In fig. 4, a single pixel of a conventional CMOS camera chip consists of 1 PD and 3-4 transistors; its main working principle is to convert the optical signal into an electrical signal, i.e. electrons (current), by means of the PD. The electrons are stored in the corresponding floating diffusion (FD) through several switching transistors and are finally read out in a certain order under the control of gate signals. An iToF single pixel is more complex and may consist of 1 PD and around 8 transistors. Its main working principle is to convert the optical signal into an electrical signal by means of the PD, and then transfer the pulsed optical signal to the left and right FDs through the alternately gated transfer gates TG (i.e. TX_A and TX_B in fig. 5). Since the signal waveform at the transmitting end is controlled by a dedicated laser driving circuit, it can be considered known. The phase of the received signal of each pixel can then be calculated by comparing the charge difference between the left and right FDs of each pixel in the receiving sensor. The distance of the object can be calculated from the difference between the phase at the receiving end and the phase at the transmitting end.
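As an illustrative sketch of the phase-to-distance relationship described above, the snippet below uses the common four-phase (0°, 90°, 180°, 270°) demodulation formula; the specific sampling scheme, function and variable names are assumptions for illustration, not details taken from the patent.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(q0, q90, q180, q270, f_mod):
    """Estimate the distance (m) of one pixel from four accumulated charge samples.

    q0..q270 : charge accumulated at the four demodulation phases
    f_mod    : modulation frequency of the emitted light, Hz
    """
    # Phase shift between the emitted and the received signal.
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    # Light travels to the object and back, hence the factor 4*pi rather than 2*pi.
    return C * phase / (4 * math.pi * f_mod)

# Example: a phase shift of pi/2 at 100 MHz modulation corresponds to roughly 0.37 m.
print(itof_distance(q0=0.0, q90=1.0, q180=0.0, q270=-1.0, f_mod=100e6))
```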
However, ranging with a surface (flood) light source has a short working distance, and reducing power consumption comes at the cost of resolution. At present, the main approach to reducing power consumption is to replace the surface light source with a dot matrix light source, for example in laser radar (LiDAR). However, the spatial resolution obtained directly from such hardware does not exceed 24 × 24 (LiDAR). To obtain higher resolution, an RGB camera (R for red, G for green, B for blue, the three primary colors) has to be turned on as well; a high-resolution scene is restored by a machine learning method, and, with the 24 × 24 depth information added, a depth map close to QQVGA resolution is recovered. Therefore, a solution is needed that can increase the measurement distance without reducing the resolution. QQVGA is a quarter of QVGA, with a resolution of 120 × 160; QVGA (Quarter VGA) is a quarter of the standard VGA (Video Graphics Array) resolution, i.e. 320 × 240, and is mainly used in mobile phones and portable players.
As mentioned above, a solution that overcomes the above-mentioned problems of the prior art is the LiDAR solution. The scheme is described as follows:
the lattice light source is adopted to replace a surface light source, so that the power consumption is saved (the total power consumption of the scheme is not more than 300mW, and for comparison, the power consumption of the current typical iToF system is more than 1W and cannot be generally adopted in consumer electronics products); each point source can collect a spatial distance, so the distance information spatial resolution collected by the LiDAR scheme does not exceed its total number of points. The total points are 24 × 24, and the device is divided into 4 regions to work in a time-sharing mode.
Meanwhile, an RGB camera works to capture high-resolution image information of the same scene. A machine learning algorithm is used to distinguish the objects in the scene and preliminarily judge the distance information; the RGB image obtained by machine learning is combined with the sparse depth map information acquired by the ToF to accurately restore the depth of the scene, and finally high-resolution (greater than QQVGA) image depth information is output from the 24 × 24 depth resolution.
However, this prior art places high requirements on hardware, because two hardware modules, dToF (i.e. LiDAR) and an RGB wide-angle camera, are necessary to complete the output of the dense depth map.
The technical solution of the present invention is further described below by way of an embodiment, as shown in fig. 6, which is a schematic diagram of an embodiment of a method for image fusion in an embodiment of the present application, where the method is applied to a single camera module, and the single camera module includes a time-of-flight ToF receiving lens module and a ToF transmitting lens module, and may include:
601. and emitting the first light source through the ToF emission lens module.
Optionally, the single camera module includes an iToF receiving lens module and an iToF transmitting lens module; or, the single camera module comprises a dToF receiving lens module and a dToF transmitting lens module.
Optionally, the ToF emission lens module may include a surface light source emission lens and a point light source emission lens.
Optionally, the ToF emission lens module may include a surface light source emission lens.
Optionally, the shooting device may include a single camera module, and the shooting device may be provided in a terminal device. It can be understood that the terminal device according to the embodiment of the present invention has a photographing and shooting function. It may include a general handheld electronic terminal such as a mobile phone, a smart phone, a portable terminal, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP) device, a notebook (Note Pad), a Wireless Broadband (Wibro) terminal, a tablet PC, a smart PC, a POS (Point of Sales) terminal, a car computer, and the like.
The terminal device may also comprise a wearable device. The wearable device may be worn directly on the user or may be a portable electronic device integrated into the user's clothing or accessories. A wearable device is not merely a piece of hardware; it can realize powerful intelligent functions through software support, data interaction and cloud interaction, for example functions of calculation, positioning and alarming, and can be connected with a mobile phone and various terminals. Wearable devices may include, but are not limited to, wrist-supported watch types (e.g., wrist watches, wrist-supported products), foot-supported shoe types (e.g., shoes, socks, or other leg-worn products), head-supported glass types (e.g., glasses, helmets, headbands, etc.), and various types of non-mainstream products such as smart clothing, bags, crutches, accessories, and the like.
602. And acquiring a second light source returned by the object from the first light source through the ToF receiving lens module, and calculating according to the second light source to obtain an intensity map with the resolution being greater than a first threshold value and a depth map with the resolution being smaller than a second threshold value.
(1) In a case that the ToF emission lens module includes a surface light source emission lens and a point light source emission lens, the emitting a first light source through the ToF emission lens module may include:
emitting a first area array light source through the area light source emission lens; emitting a first point array light source through the point light source emitting lens;
the acquiring, by the ToF receiving lens module, a second light source returned by the object from the first light source, and calculating an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold according to the second light source may include:
collecting, through the ToF receiving lens module, a second area array light source returned by the object from the first area array light source and a second dot matrix light source returned by the object from the first dot array light source; and calculating, according to the second area array light source and the second dot matrix light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold.
Optionally, the emitting a first area array light source through the area light source emission lens and emitting a first dot array light source through the point light source emission lens may include: emitting the first area array light source at a first moment through the area light source emission lens; and emitting the first dot array light source at a second moment through the point light source emission lens;
the collecting, through the ToF receiving lens module, the second area array light source returned by the object from the first area array light source and the second dot matrix light source returned by the object from the first dot matrix light source may include: acquiring, through the ToF receiving lens module, the second area array light source returned by the object from the first area array light source at a third moment, and acquiring the second dot matrix light source returned by the object from the first dot matrix light source at a fourth moment.
Optionally, the calculating, according to the second area array light source and the second dot matrix light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold may include: calculating the intensity map with a resolution greater than the first threshold according to the second area array light source, and calculating the depth map with a resolution less than the second threshold according to the second dot matrix light source.
(2) In a case where the ToF emission lens module includes a surface light source emission lens, the emitting a first light source through the ToF emission lens module may include:
emitting a third area array light source through the surface light source emission lens;
the acquiring, by the ToF receiving lens module, a second light source returned by the object from the first light source, and calculating an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold according to the second light source may include:
collecting, through the ToF receiving lens module, a fourth area array light source returned by the object from the third area array light source; and calculating, according to the fourth area array light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold.
Optionally, the emitting a third area array light source through the surface light source emission lens may include: emitting the third area array light source at a fifth moment through the surface light source emission lens;
the collecting, through the ToF receiving lens module, a fourth area array light source returned by the object from the third area array light source may include: collecting, through the ToF receiving lens module, the fourth area array light source returned by the object from the third area array light source at a sixth moment.
For example, fig. 7A is a schematic view of a single camera module in the embodiment of the present application. In fig. 7A, the single camera module includes a receiving lens, a point light source emission lens, and a surface light source emission lens.
It is understood that the chip model corresponding to the receiving lens is not limited. The number of points of the dot matrix light source emitted by the point light source emitting lens is not limited, and is generally not more than 1000 points in order to save power consumption. Here, 12 × 12 or 10 × 10 may be taken as an example for explanation. The surface light source emitted by the surface light source emission lens can be described by taking a scene covering the whole Field of view (FOV) as an example.
Fig. 7B is a schematic functional diagram of each element in a single camera module according to an embodiment of the present disclosure. The single camera module comprises an iToF receiving lens module, an iToF transmitting lens module and a timing and signal processing circuit; wherein, the iToF receiving lens module may include: an iToF sensor and a receiving lens; the iToF emission lens module may include: a point light source Vertical Cavity Surface-Emitting Laser (VCSEL) chip, a point light source Emitting lens, a Surface light source VCSEL chip, and a Surface light source Emitting lens.
For example, the point light source VCSEL chip may emit the dot matrix light source through the point light source emission lens at time t1, and the surface light source VCSEL chip may emit the area array light source through the surface light source emission lens at time t2. The iToF sensor receives, through the iToF receiving lens, the dot matrix light returned by the object at time t3 and calculates the corresponding depth information, and receives the area array light returned by the object at time t4 and outputs dense scene intensity information.
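As a rough sketch of this alternating (time-multiplexed) acquisition, the snippet below shows one capture cycle; the driver interface and all names (emit_dot_pattern, emit_flood, read_depth_frame, read_intensity_frame) are hypothetical, used only to illustrate the sequence.

```python
# Hypothetical driver interface; not a real API from the patent or any SDK.
def capture_one_fusion_frame(module):
    # t1: the dot-matrix VCSEL fires through the point light source emission lens.
    module.emit_dot_pattern()
    # t3: the iToF sensor integrates the returned spots -> sparse, low-resolution depth.
    sparse_depth = module.read_depth_frame()      # e.g. on the order of 10x10 to 24x24 points

    # t2: the flood VCSEL fires through the surface light source emission lens.
    module.emit_flood()
    # t4: the iToF sensor integrates the returned flood light -> dense intensity map.
    intensity = module.read_intensity_frame()     # e.g. 240x180 or 320x240

    return intensity, sparse_depth
```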
It is understood that, in this embodiment, a specific implementation of the dot matrix light source may emit a 10 × 10 laser array through the point light source VCSEL chip. Fig. 7C is a schematic diagram of the point light source VCSEL chip emitting a 10 × 10 light-emitting array through the point light source emission lens in the embodiment of the present application.
Fig. 7C shows the arrangement of the light-emitting points of the point light source VCSEL chip used in this embodiment. Considering that the size of the single camera module places certain requirements on the size of the point light source VCSEL chip, the chip size is reduced in order to further reduce the module size. The present application may adopt a point light source VCSEL chip with a 10 × 10 physical light-emitting lattice, with a collimating lens group and a DOE (Diffractive Optical Element) added above the point light source VCSEL chip. The effect of the DOE here is to replicate the dot pattern into 3 × 3 = 9 copies. For example, the number of laser points finally projected from the single camera module is 10 × 10 × (3 × 3) = 900 points.
When the measurement starts, for example at time t1, a group of dot matrix light is first emitted from the point light source VCSEL chip. This dot matrix light passes through the point light source emission lens, illuminates the object in the measured scene, is reflected by the object, and, through the receiving lens module, falls on the iToF sensor. Fig. 7D is a schematic diagram of the dot matrix light source illuminating the iToF sensor photosensitive pixel array in the embodiment of the present application, where the circular portions represent the dot matrix signal light. For convenience of explanation, fig. 7D shows 10 pixels A to J. It can be seen that not all pixels of the iToF sensor receive the signal light, so not every pixel outputs depth information (for example, pixels C and H do not receive the signal light, so the corresponding regions do not output true depth information of the scene). Here, A, B, F and G receive the same spot and therefore theoretically output the same distance information. In practice, of course, the energy of the light spot is hardly perfectly evenly distributed among the A, B, F and G pixels, so the distance calculated by each pixel has a certain error depending on the spot energy it receives. To address this, the distances are calculated for the four pixels A, B, F and G, and the results are then averaged.
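A minimal sketch of this per-spot averaging step is given below; it assumes the mapping from each projected spot to the pixels it covers is already known, and the data layout and names are illustrative only.

```python
import numpy as np

def average_spot_distances(depth_per_pixel, spot_pixels):
    """depth_per_pixel : 2-D array of per-pixel distance estimates (NaN where no signal light)
    spot_pixels        : dict mapping spot_id -> list of (row, col) pixels covered by that spot
    returns            : dict mapping spot_id -> averaged distance for that spot
    """
    averaged = {}
    for spot_id, pixels in spot_pixels.items():
        values = [depth_per_pixel[r, c] for r, c in pixels
                  if not np.isnan(depth_per_pixel[r, c])]
        if values:
            # Averaging suppresses the error caused by uneven spot energy across pixels.
            averaged[spot_id] = float(np.mean(values))
    return averaged

# Example: pixels A, B, F, G (here rows 0-1, cols 0-1) share one projected spot.
depth = np.full((2, 5), np.nan)
depth[0, 0], depth[0, 1], depth[1, 0], depth[1, 1] = 1.02, 0.98, 1.01, 0.99
print(average_spot_distances(depth, {0: [(0, 0), (0, 1), (1, 0), (1, 1)]}))  # approximately {0: 1.0}
```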
It can be understood that there is no strict requirement on the measurement order of the point light source and the surface light source; they only need to be measured alternately, and the order can be swapped.
It will be appreciated that some design considerations of the present application are described below by way of example:
1) Considering that indoor and outdoor ambient light generally contains little energy at 940 nm, interference from noise light sources can be reduced, and the iToF sensor still maintains relatively high quantum detection efficiency at 940 nm; therefore, a VCSEL with an emission wavelength of 940 nm can be used as the light source in the embodiment of the present application. Of course, lasers of other wavelengths, such as 845 nm or 1350 nm, are also fully suitable for the scheme; it is only necessary to select an iToF sensor sensitive to the corresponding wavelength.
2) The performance of current general-purpose iToF sensors can satisfy the following conditions: when the dot matrix light source measures a 5 m scene, the integration time required to output one frame of depth information is less than 1 ms; when the surface light source outputs the corresponding full-resolution scene intensity information, the integration time is no more than 2 ms. Therefore, the scheme can meet a 30 fps frame-rate output (the measurement time per frame is required to be no more than 33 ms).
3) When ranging with the dot array light source, the number of laser points emitted by the VCSEL may be hundreds or thousands. For example, current algorithms can demonstrate that fusing a high-resolution RGB image with a 144-point depth map can yield a dense depth map of at least 192 × 256 resolution. Generally, the VCSEL should not exceed 5000 points, because the size of the iToF sensor is limited and too many points cannot be distributed on the sensor independently; moreover, the fewer the VCSEL points, the lower the hardware power consumption.
4) The DOE (diffractive optical element) is formed by a micro-nano etching process into two-dimensionally distributed diffraction units; each diffraction unit can have a specific shape, a specific refractive index and the like, so as to finely regulate the phase distribution of the laser wavefront. Generally, the effect achieved is to replicate the incident light pattern into a certain number of copies, for example 3 × 3 copies in this embodiment. In theory, any number of copies can be used in the scheme, e.g. 1 × 3, 3 × 3, 5 × 5, etc. The more copies, the fewer real VCSEL light-emitting points are needed and the smaller the VCSEL area. However, when the number of copies is too large, much of the light energy efficiency is lost, so a certain balance between energy efficiency and area is required. This is why this embodiment is described taking 3 × 3 as an example.
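The trade-off just described can be illustrated with a small sketch: more DOE copies mean more projected points from fewer physical emitters, but less optical power per spot. The per-copy efficiency value below is a hypothetical illustration, not a figure from the patent.

```python
def doe_tradeoff(physical_dots, copies_x, copies_y, eta=0.9):
    """Return (projected point count, relative optical power per projected spot).

    eta is an assumed overall DOE diffraction efficiency (illustrative only).
    """
    copies = copies_x * copies_y
    projected = physical_dots * copies          # e.g. 10*10 dots * (3*3) copies = 900 points
    power_per_spot = eta / copies               # the same total power is shared by more spots
    return projected, power_per_spot

for cx, cy in [(1, 1), (1, 3), (3, 3), (5, 5)]:
    points, p = doe_tradeoff(100, cx, cy)
    print(f"{cx}x{cy} copies -> {points} points, ~{p:.1%} of the source power per spot")
```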
5) The physical resolution of the iToF sensor chip in the present embodiment may be various forms, such as 240 × 180, 320 × 240, 640 × 480, and the like, and is not particularly limited.
Optionally, in the embodiment of the present application, a dot matrix laser may be used in cooperation with a surface (flood) laser, combining the advantage of the high lateral resolution of the surface light source with the advantage of the locally high signal-to-noise ratio of the dot matrix light source to restore high-resolution depth map information. The hardware portion may contain three core elements: 1) an iToF sensor; 2) an area array (flood) light source; 3) a dot matrix light source. The dot matrix light source of the embodiment of the present application adopts DOE replication. In fact, if the module size is not a concern, or if it is desired to further increase the system energy efficiency, it is also contemplated that no DOE element is used. Fig. 7E is a schematic diagram of a single camera module including a DOE element in the embodiment of the present application, and fig. 7F is a schematic diagram of a single camera module without a DOE element. However, in the configuration without a DOE (fig. 7F), if more dot matrix light points are needed, the VCSEL needs a larger number of emitters when it initially emits the dot matrix light source.
603. And fusing the intensity map and the depth map to obtain a target depth map.
Optionally, the fusing the intensity map and the depth map to obtain a target depth map may include: performing grayscale sampling on the intensity map to obtain a grayscale image; performing, according to the grayscale image and the depth map, image pixel alignment, weighted filtering, denoising, gradient-based edge-enhanced color interpolation and sampling processing on the 3D image to obtain a processed image; and performing image filtering and data smoothing according to the grayscale image and the intensity map to obtain the target depth map.
Optionally, the resolution of the target depth map is greater than the first threshold.
For example, the intensity map is down-sampled to a grayscale image, the two frames (the intensity map and the depth map) are aligned at the pixel level, and weighted filtering is performed to filter out image noise and obtain a smoother 3D image; gradient-based edge-enhanced color interpolation is then performed, the 3D image is up-sampled, and image filtering and smoothing are performed together with the intensity map to obtain the target depth map.
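The embodiment notes later that many fusion algorithms are possible; purely as an illustrative sketch of one intensity-guided upsampling step in the spirit of the pipeline above, the snippet below weights each sparse depth sample by spatial proximity and intensity similarity (a joint-bilateral-style choice). The parameter values and names are assumptions, not taken from the patent.

```python
import numpy as np

def fuse_intensity_and_depth(intensity, sparse_depth, sigma_space=2.0, sigma_intensity=10.0):
    """intensity    : HxW float array, high-resolution intensity (grayscale) image
    sparse_depth    : HxW float array, depth at the spot pixels, NaN elsewhere
    returns         : HxW dense depth map at the intensity resolution
    """
    h, w = intensity.shape
    ys, xs = np.nonzero(~np.isnan(sparse_depth))       # locations of measured depth points
    samples = sparse_depth[ys, xs]
    dense = np.zeros((h, w), dtype=np.float64)

    for y in range(h):
        for x in range(w):
            # Spatial weight: prefer nearby depth samples.
            d2 = (ys - y) ** 2 + (xs - x) ** 2
            w_space = np.exp(-d2 / (2.0 * sigma_space ** 2))
            # Range weight: prefer samples with similar intensity, which keeps depth edges
            # aligned with intensity edges.
            di = intensity[ys, xs] - intensity[y, x]
            w_int = np.exp(-(di ** 2) / (2.0 * sigma_intensity ** 2))
            weights = w_space * w_int
            dense[y, x] = np.sum(weights * samples) / (np.sum(weights) + 1e-12)
    return dense
```

With suitable parameters this yields a dense, edge-aligned depth map at the intensity-map resolution, which is the role of the fusion step illustrated in fig. 7G.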
For example, fig. 7G is a schematic diagram of fusing the high-resolution intensity map and the low-resolution depth map to generate a high-resolution depth map in the embodiment of the present application.
It can be understood that, when the high-resolution intensity map of the first frame is combined with the low-resolution depth map of the second frame, the high-resolution intensity map can provide scene detail information; for example, the table, chair, wall, floor, etc. in the scene can be distinguished using a machine learning algorithm, and relative distances can be preliminarily given. With the low-resolution depth map, accurate distance information of the corresponding scene can be determined. By fusing the two images, a high-resolution depth map with accurate depth information can be output. There are multiple approaches to the specific fusion algorithm, which are not described in detail here.
In the embodiment of the present application, the method is applied to a single camera module, where the single camera module includes a time-of-flight ToF receiving lens module and a ToF transmitting lens module, and the method includes: emitting a first light source through the ToF emission lens module; acquiring a second light source returned by the object from the first light source through the ToF receiving lens module, and calculating according to the second light source to obtain an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold; and fusing the intensity map and the depth map to obtain a target depth map. By adopting a single iToF module, the purpose of outputting a depth map with low power consumption and high resolution is achieved. That is, the high-resolution intensity map and the low-resolution depth map can be acquired in a time-sharing manner and finally fused into a high-resolution depth map.
Only one iToF camera is needed to complete the work, and the calibration step between an RGB camera and the iToF camera in the production process is no longer needed. Compared with ranging using only a surface light source, the point light source can increase the effective measurement distance. In general, the intensity of a light source decays with distance in an inverse-square relationship. Therefore, compared with a surface light source (floodlight), with the total power unchanged, the energy of a spot light source is more concentrated on the points to be measured, so the local signal-to-noise ratio is higher and the effective depth-measurement distance can be more than doubled.
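A back-of-the-envelope sketch of this concentration argument is given below; the power, spot count and pixel count are illustrative assumptions, not measured values from the patent.

```python
def relative_spot_signal(total_power_w, n_spots, flood_pixels):
    """Ratio of the signal per illuminated spot (dot source) to the signal per pixel
    (flood source), assuming the same total optical power in both cases."""
    per_spot = total_power_w / n_spots          # dot source: power shared by a few spots
    per_pixel = total_power_w / flood_pixels    # flood source: power spread over every pixel
    return per_spot / per_pixel

# e.g. 900 projected spots vs. a 240x180 flood-illuminated sensor at the same total power:
print(relative_spot_signal(0.3, 900, 240 * 180))   # ~48x more signal per measured point

# Since the returned signal also falls off roughly as 1/d^2, a higher per-spot signal budget
# translates into a longer usable measurement range.
```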
As shown in fig. 8A, which is a schematic view of an embodiment of a single camera module according to an embodiment of the present invention, the single camera module includes a time-of-flight ToF receiving lens module and a ToF transmitting lens module, and the single camera module includes:
an emitting module 801, configured to emit a first light source through the ToF emitting lens module;
an acquisition module 802, configured to acquire, through the ToF receiving lens module, a second light source returned by an object from the first light source, and obtain, according to the second light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold by calculation;
and the processing module 803 is configured to fuse the intensity map and the depth map to obtain a target depth map.
Optionally, the ToF emission lens module includes a surface light source emission lens and a point light source emission lens;
the emitting module 801 is specifically configured to emit a first area array light source through the area light source emitting lens; emitting a first point array light source through the point light source emitting lens;
an acquisition module 802, specifically configured to acquire, through the ToF receiving lens module, a second area array light source returned by the object from the first area array light source and a second dot array light source returned by the object from the first dot array light source, and to calculate, according to the second area array light source and the second dot array light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold.
Optionally, the acquisition module 802 is specifically configured to calculate an intensity map with a resolution greater than a first threshold according to the second area array light source, and calculate a depth map with a resolution less than a second threshold according to the second dot array light source.
Optionally, the ToF emission lens module includes a surface light source emission lens;
the emitting module 801 is specifically configured to emit a third array light source through the surface light source emitting lens;
an acquisition module 802, specifically configured to acquire, through the ToF receiving lens module, a fourth array light source returned by the object from the third array light source; and calculating to obtain an intensity map with the resolution ratio larger than a first threshold value and a depth map with the resolution ratio smaller than a second threshold value according to the fourth array light source.
Optionally, the ToF emission lens module further includes a diffractive optical element;
the diffractive optical element is used for duplicating the first lattice light source.
Optionally, the processing module 803 is specifically configured to perform grayscale image sampling on the intensity map to obtain a grayscale image; according to the gray level image and the depth image, carrying out image pixel alignment, weighted filtering, denoising, edge enhancement color interpolation based on gradient and sampling processing on the 3D image to obtain a processing image; and carrying out image filtering and data smoothing according to the gray level image and the intensity image to obtain a target depth image.
Optionally, the resolution of the target depth map is greater than the first threshold.
Optionally, an embodiment of the present application further provides a single camera module, where the single camera module includes a time-of-flight ToF receiving lens module and a ToF transmitting lens module, and the single camera module includes:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for performing the method as described in the above method embodiments.
Fig. 8B is a schematic view of an embodiment of a shooting device in an embodiment of the present invention; the shooting device may include the single camera module shown in fig. 8A.
Fig. 9 is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention, which may include the following:
fig. 9 is a block diagram illustrating a partial structure of a mobile phone related to a terminal device provided in an embodiment of the present invention. Referring to fig. 9, the handset includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, display unit 940, sensor 950, audio circuit 990, wireless fidelity (WiFi) module 970, processor 980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 9 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 9:
the RF circuit 910 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then processing the received downlink information to the processor 980; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 may execute various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (e.g., a user's operation on or near the touch panel 931 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a preset program. Alternatively, the touch panel 931 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 980, and can receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 930 may include other input devices 932 in addition to the touch panel 931. In particular, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 940 may include a Display panel 941, and optionally, the Display panel 941 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 931 may cover the display panel 941, and when the touch panel 931 detects a touch operation on or near the touch panel 931, the touch panel transmits the touch operation to the processor 980 to determine the type of the touch event, and then the processor 980 provides a corresponding visual output on the display panel 941 according to the type of the touch event. Although in fig. 9, the touch panel 931 and the display panel 941 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 941 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 941 and/or backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
Audio circuitry 960, speaker 961, microphone 962 may provide an audio interface between a user and a cell phone. The audio circuit 960 may transmit the electrical signal converted from the received audio data to the speaker 961, and convert the electrical signal into a sound signal for output by the speaker 961; on the other hand, the microphone 962 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 960, and outputs the audio data to the processor 980 for processing, and then transmits the audio data to, for example, another mobile phone through the RF circuit 910, or outputs the audio data to the memory 920 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 9 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 980 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. Alternatively, processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 980.
The handset also includes a power supply 990 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 980 via a power management system, so as to manage charging, discharging, and power consumption via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the invention, the method is applied to a single camera module, and the single camera module comprises a time-of-flight (ToF) receiving lens module and a ToF transmitting lens module;
a processor 980 for emitting a first light source through the ToF emitting lens module; acquiring a second light source returned by the object from the first light source through the ToF receiving lens module, and calculating according to the second light source to obtain an intensity map with the resolution being greater than a first threshold value and a depth map with the resolution being less than a second threshold value; and fusing the intensity map and the depth map to obtain a target depth map.
Optionally, the ToF emission lens module includes a surface light source emission lens and a point light source emission lens;
a processor 980, specifically configured to emit a first area array light source through the area light source emission lens; emit a first dot array light source through the point light source emission lens; collect, through the ToF receiving lens module, a second area array light source returned by the object from the first area array light source and a second dot array light source returned by the object from the first dot array light source; and calculate, according to the second area array light source and the second dot array light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold.
The processor 980 is specifically configured to calculate an intensity map with a resolution greater than a first threshold according to the second area array light source, and calculate a depth map with a resolution less than a second threshold according to the second dot array light source.
Optionally, the ToF emission lens module includes a surface light source emission lens;
a processor 980, specifically configured to emit a third array light source through the surface light source emission lens; collecting a fourth array light source returned by the object from the third array light source through the ToF receiving lens module; and calculating to obtain an intensity map with the resolution ratio larger than a first threshold value and a depth map with the resolution ratio smaller than a second threshold value according to the fourth array light source.
Optionally, the ToF emission lens module further includes a diffractive optical element;
the diffractive optical element is used for duplicating the first lattice light source.
Optionally, the processor 980 is specifically configured to perform grayscale image sampling on the intensity map to obtain a grayscale image; according to the gray level image and the depth image, carrying out image pixel alignment, weighted filtering, denoising, edge enhancement color interpolation based on gradient and sampling processing on the 3D image to obtain a processing image; and carrying out image filtering and data smoothing according to the gray level image and the intensity image to obtain a target depth image.
Optionally, the resolution of the target depth map is greater than the first threshold.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. An image fusion method, applied to a single camera module, wherein the single camera module comprises a time-of-flight (ToF) receiving lens module and a ToF emission lens module, and the method comprises:
emitting a first light source through the ToF emission lens module;
collecting, by the ToF receiving lens module, a second light source returned by an object from the first light source, and calculating, according to the second light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold;
and fusing the intensity map and the depth map to obtain a target depth map.
2. The method of claim 1, wherein the ToF emission lens module comprises an area light source emission lens and a point light source emission lens;
the emitting a first light source through the ToF emission lens module comprises:
emitting a first area array light source through the area light source emission lens;
emitting a first point array light source through the point light source emission lens;
the collecting, by the ToF receiving lens module, a second light source returned by the object from the first light source, and calculating, according to the second light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold, includes:
collecting, by the ToF receiving lens module, a second area array light source returned by the object from the first area array light source and a second point array light source returned by the object from the first point array light source;
and calculating, according to the second area array light source and the second point array light source, an intensity map with a resolution greater than the first threshold and a depth map with a resolution less than the second threshold.
3. The method of claim 2, wherein the calculating, according to the second area array light source and the second point array light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold comprises:
calculating, according to the second area array light source, an intensity map with a resolution greater than the first threshold, and calculating, according to the second point array light source, a depth map with a resolution less than the second threshold.
4. The method of claim 1, wherein the ToF emission lens module comprises an area light source emission lens;
the emitting a first light source through the ToF emission lens module comprises:
emitting a third area array light source through the area light source emission lens;
the collecting, by the ToF receiving lens module, a second light source returned by the object from the first light source, and calculating, according to the second light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold, includes:
collecting, by the ToF receiving lens module, a fourth area array light source returned by the object from the third area array light source;
and calculating, according to the fourth area array light source, an intensity map with a resolution greater than the first threshold and a depth map with a resolution less than the second threshold.
5. The method according to claim 2 or 3, wherein the ToF emission lens module further comprises a diffractive optical element;
the diffractive optical element is configured to replicate the first point array light source.
6. The method according to any one of claims 1-4, wherein said fusing the intensity map and the depth map to obtain a target depth map comprises:
performing grayscale sampling on the intensity map to obtain a grayscale image;
performing, on the 3D image and according to the grayscale image and the depth map, image pixel alignment, weighted filtering, denoising, gradient-based edge-enhancing color interpolation, and sampling to obtain a processed image;
and performing image filtering and data smoothing according to the grayscale image and the intensity map to obtain the target depth map.
7. The method of any of claims 1-4, wherein the resolution of the target depth map is greater than the first threshold.
8. A single camera module, wherein the single camera module comprises a time-of-flight (ToF) receiving lens module and a ToF emission lens module, and the single camera module further comprises:
an emission module, configured to emit a first light source through the ToF emission lens module;
a collection module, configured to collect, by the ToF receiving lens module, a second light source returned by an object from the first light source, and calculate, according to the second light source, an intensity map with a resolution greater than a first threshold and a depth map with a resolution less than a second threshold;
and a processing module, configured to fuse the intensity map and the depth map to obtain a target depth map.
9. A single camera module, wherein the single camera module comprises a time-of-flight (ToF) receiving lens module and a ToF emission lens module, and the single camera module further comprises:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for performing the method of any one of claims 1-7.
10. A computer-readable storage medium comprising instructions that, when executed on a processor, cause the processor to perform the method of any one of claims 1-7.
11. A shooting device, comprising the single camera module according to claim 8 or 9.
CN202110181961.7A 2021-02-08 2021-02-08 Image fusion method, single camera module, shooting device and storage medium Pending CN112950694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110181961.7A CN112950694A (en) 2021-02-08 2021-02-08 Image fusion method, single camera module, shooting device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110181961.7A CN112950694A (en) 2021-02-08 2021-02-08 Image fusion method, single camera module, shooting device and storage medium

Publications (1)

Publication Number Publication Date
CN112950694A true CN112950694A (en) 2021-06-11

Family

ID=76245305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110181961.7A Pending CN112950694A (en) 2021-02-08 2021-02-08 Image fusion method, single camera module, shooting device and storage medium

Country Status (1)

Country Link
CN (1) CN112950694A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015178575A1 (en) * 2014-05-21 2015-11-26 주식회사 더에스 Apparatus for acquiring three-dimensional time of flight image
CN106772430A (en) * 2016-12-30 2017-05-31 南京理工大学 The single pixel photon counting 3-D imaging system and method approached based on multiresolution wavelet
CN110536067A (en) * 2019-09-04 2019-12-03 Oppo广东移动通信有限公司 Image processing method, device, terminal device and computer readable storage medium
CN111239729A (en) * 2020-01-17 2020-06-05 西安交通大学 Speckle and floodlight projection fused ToF depth sensor and distance measuring method thereof
CN111366941A (en) * 2020-04-20 2020-07-03 深圳奥比中光科技有限公司 TOF depth measuring device and method
CN111678457A (en) * 2020-05-08 2020-09-18 西安交通大学 ToF device under OLED transparent screen and distance measuring method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUTONG ZHONG, YU WANG*, YAN PIAO: "Depth image interpolation algorithm based on confidence map", PROCEEDINGS OF SPIE, pages 1 - 8 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4156085A4 (en) * 2021-08-06 2023-04-26 Shenzhen Goodix Technology Co., Ltd. Depth image collection apparatus, depth image fusion method and terminal device
US11928802B2 (en) 2021-08-06 2024-03-12 Shenzhen GOODIX Technology Co., Ltd. Apparatus for acquiring depth image, method for fusing depth images, and terminal device
CN113658089A (en) * 2021-09-09 2021-11-16 南开大学 Double-data-stream fusion object identification method based on depth camera
CN113542534A (en) * 2021-09-17 2021-10-22 珠海视熙科技有限公司 TOF camera control method and device and storage medium
CN114302057A (en) * 2021-12-24 2022-04-08 维沃移动通信有限公司 Image parameter determination method and device, electronic equipment and storage medium
CN115294107A (en) * 2022-09-29 2022-11-04 江苏三通科技有限公司 Diode pin surface oxidation detection method based on image recognition
CN115294107B (en) * 2022-09-29 2022-12-27 江苏三通科技有限公司 Diode pin surface oxidation detection method based on image recognition

Similar Documents

Publication Publication Date Title
CN112950694A (en) Image fusion method, single camera module, shooting device and storage medium
CN105190426B (en) Time-of-flight sensor binning
US10841174B1 (en) Electronic device with intuitive control interface
KR102497683B1 (en) Method, device, device and storage medium for controlling multiple virtual characters
WO2021120403A1 (en) Depth measurement device and method
EP3410391A1 (en) Image blurring method, electronic device and computer readable storage medium
US10564765B2 (en) Terminal and method of controlling therefor
CN108965666B (en) Mobile terminal and image shooting method
CN109068043A (en) A kind of image imaging method and device of mobile terminal
CN108271012A (en) A kind of acquisition methods of depth information, device and mobile terminal
CN111311757B (en) Scene synthesis method and device, storage medium and mobile terminal
KR102633468B1 (en) Method and device for displaying hotspot maps, and computer devices and readable storage media
WO2021129776A1 (en) Imaging processing method, and electronic device
US11450296B2 (en) Fade-in user interface display based on finger distance or hand proximity
CN110113528A (en) A kind of parameter acquiring method and terminal device
CN108833903A (en) Structured light projection mould group, depth camera and terminal
CN106851119B (en) Picture generation method and equipment and mobile terminal
CN107888829B (en) Focusing method of mobile terminal, mobile terminal and storage medium
CN107782250A (en) A kind of depth information measuring method, device and mobile terminal
CN109471119A (en) A kind of method and terminal device controlling power consumption
CN112181138B (en) Self-adaptive intelligent head and hand VR system and method
CN110536067B (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
WO2023098583A1 (en) Rendering method and related device thereof
US20230005227A1 (en) Electronic device and method for offering virtual reality service
CN108550182A (en) A kind of three-dimensional modeling method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination