CN109447049A - Light source quantitative design method and stereoscopic vision system - Google Patents

Light source quantitative design method and stereoscopic vision system

Info

Publication number
CN109447049A
CN109447049A CN201811629354.7A CN201811629354A CN 109447049 A CN 201811629354 A
Authority
CN
China
Prior art keywords
image
light source
face
dot matrix
hot spot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811629354.7A
Other languages
Chinese (zh)
Other versions
CN109447049B (en)
Inventor
彭莎 (Peng Sha)
刘关松 (Liu Guansong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Howei Technology (Wuhan) Co Ltd
Original Assignee
Howei Technology (Wuhan) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Howei Technology (Wuhan) Co Ltd
Priority to CN201811629354.7A
Publication of CN109447049A
Application granted
Publication of CN109447049B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a light source quantitative design method and a stereoscopic vision system. First, N depth images corresponding to N groups of facial images are computed from the N groups of facial images. The faces in the N depth images are then aligned to obtain normalization parameters, and the normalization parameters are used to obtain a mean depth image of the faces in the N depth images. A depth gradient image is derived from the mean depth image, and the distribution density of the dot-matrix light spots emitted by the light source is constrained according to the depth gradient image, so that the distribution density of the dot-matrix spots is better matched to the face, thereby improving the accuracy with which the stereoscopic vision system measures facial depth. In addition, the illumination range of the dot-matrix spots is constrained by the size of the mean depth image, which reduces the distribution density of spots outside the face region and thus reduces energy consumption.

Description

Light source quantitative design method and stereoscopic vision system
Technical field
The present invention relates to the technical field of visual image processing, and in particular to a quantitative design method for a light source and a stereoscopic vision system.
Background art
Binocular stereo vision (Binocular Stereo Vision) is an important form of machine vision. Based on the parallax principle, an imaging device acquires two images of a scene from different positions, and the three-dimensional geometric information of objects is obtained by computing the positional deviation between corresponding points in the two images. Binocular stereo vision fuses the raw images obtained by the two "eyes" and observes the differences between them, from which a clear sense of depth is obtained. Correspondences between features are established, the imaging points of the same physical point in space are matched across the different images, and a disparity image is obtained by computing these differences.
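For a rectified binocular pair, the parallax principle reduces to the standard pinhole relation Z = f·B/d, where Z is depth, f the focal length in pixels, B the baseline between the two cameras, and d the disparity. A minimal sketch of this relation follows; the default focal length and baseline are illustrative values only, not parameters taken from this disclosure.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=600.0, baseline_m=0.05):
    """Convert a disparity map (in pixels) to a depth map (in metres) for a
    rectified stereo pair, using Z = f * B / d. The default focal length and
    baseline are illustrative, not values from this disclosure."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0                      # zero disparity = no reliable match
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```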
According to the light source used, binocular stereo vision can be divided into active stereo vision (which actively provides a light source) and passive stereo vision (which relies on ambient light). Active stereo vision adds texture to the scene by means of a light source that projects a pseudo-random pattern of dot-matrix light spots. Compared with passive stereo vision, it compensates for scenes with repetitive or weak texture: the pseudo-random pattern enhances the texture of the scene, reduces the complexity of the stereo matching algorithm, and improves the accuracy of the depth image.
Existing pseudo-random-pattern light sources are dot-matrix lasers with uniform brightness and uniform spot density. They are suited to providing texture information for active stereo matching and ranging in uncertain scenes, where the dot-matrix laser must emit a pseudo-random pattern and therefore consumes considerable energy. In face recognition applications, however, and especially in face recognition on mobile devices, the power of the dot-matrix laser, or in other words the density of the dot-matrix spots, should be designed quantitatively in order to balance the energy consumption of the dot-matrix laser against the accuracy of face recognition.
Summary of the invention
The purpose of the present invention is to provide a quantitative design method for a light source and a stereoscopic vision system that improve the accuracy of face recognition while reducing the energy consumption of the stereoscopic vision system.
In order to achieve the above object, the present invention provides a quantitative design method for a light source, for quantitatively designing the light source of a stereoscopic vision system, wherein the light source emits dot-matrix light spots to illuminate a face, and a plurality of camera modules photograph the face actively illuminated by the light source to obtain a group of facial images of the face from different shooting angles. The method comprises:
providing N groups of facial images and separately computing N depth images corresponding to the N groups of facial images, wherein N >= 1 and each group of facial images contains at least two facial images of the same face;
aligning the faces in the N depth images to obtain a mean depth image;
obtaining a depth gradient image from the mean depth image;
quantitatively designing, according to the depth gradient image, the distribution density of the dot-matrix light spots emitted by the light source (an illustrative sketch of these steps is given below).
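The following is a minimal sketch of the above pipeline from aligned depth images to a spot-density map, intended only as a reading aid; np.gradient stands in for the unspecified gradient operator, and the normalization of the density map is an illustrative choice rather than a requirement of the method.

```python
import numpy as np

def spot_density_from_depth_images(aligned_depth_images):
    """Sketch of the claimed pipeline: mean depth image -> depth gradient
    image -> spot-density map positively correlated with the gradient."""
    stack = np.stack([np.asarray(d, dtype=np.float64) for d in aligned_depth_images])
    mean_depth = stack.mean(axis=0)                  # mean depth image (average face)
    gy, gx = np.gradient(mean_depth)                 # stand-in for the gradient operator
    gradient_image = np.hypot(gx, gy)                # depth gradient image
    density = gradient_image / (gradient_image.max() + 1e-9)   # higher gradient -> denser spots
    return mean_depth, gradient_image, density
```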
Optionally, the distribution density of the dot-matrix light spots is positively correlated with the gradient values in the depth gradient image.
Optionally, the step of aligning the faces in the N depth images comprises:
selecting, from each group of facial images, the facial image taken at the same shooting angle and denoting it as an image to be transformed, and marking k key pixels of each image to be transformed, wherein k is greater than or equal to 4;
obtaining, from the k key pixels of each image to be transformed, the normalization parameters corresponding to that image;
normalizing the N depth images using the normalization parameters, so that the faces in the depth images are aligned.
Optionally, the k key pixels are pixels on the contours of the two eyes, the nose and the mouth of the face.
Optionally, the normalization parameters include one or more of a scaling coefficient, a translation coefficient and a rotation coefficient.
Optionally, the depth value of each pixel of the mean depth image is the average of the depth values of the corresponding pixels of the N depth images after normalization.
Optionally, the mean depth image is convolved with a gradient operator to obtain the depth gradient image.
Optionally, after the distribution density of the dot-matrix light spots emitted by the light source is quantitatively designed, the illumination range of the dot-matrix light spots emitted by the light source is further constrained according to the size of the mean depth image.
The present invention also provides a stereoscopic vision system comprising a light source and a plurality of camera modules, wherein the light source emits dot-matrix light spots to illuminate a face, and the plurality of camera modules photograph the face actively illuminated by the light source to obtain a group of facial images of the face from different shooting angles, and wherein the distribution density of the dot-matrix light spots is designed quantitatively using the above quantitative design method for a light source.
Optionally, the distribution density of the dot-matrix light spots is positively correlated with the gradient values in a depth gradient image.
In the quantitative design method for a light source and the stereoscopic vision system provided by the present invention, N depth images corresponding to N groups of facial images are first computed from the N groups of facial images. The faces in the N facial images are then aligned to obtain normalization parameters, and the normalization parameters are used to obtain a mean depth image of the faces in the N depth images. A depth gradient image is obtained from the mean depth image, and the distribution density of the dot-matrix light spots emitted by the light source is constrained according to the depth gradient image, so that the distribution density of the spots is better matched to the face, which improves the accuracy with which the stereoscopic vision system measures facial depth. The illumination range of the dot-matrix light spots is also constrained by the size of the mean depth image, which reduces the distribution density of spots outside the face region and thus reduces energy consumption.
Description of the drawings
Fig. 1 is a flowchart of the light source quantitative design method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a depth gradient image provided by an embodiment of the present invention.
Detailed description of the embodiments
Specific embodiments of the present invention are described in more detail below with reference to the accompanying schematic drawings. The advantages and features of the invention will become clearer from the following description and the claims. It should be noted that the drawings are greatly simplified and not drawn to precise scale, and serve only to illustrate the embodiments of the invention conveniently and clearly.
As shown in Fig. 1, the present invention provides a quantitative design method for a light source, for quantitatively designing the light source of a stereoscopic vision system, wherein the light source emits dot-matrix light spots to illuminate a face, and a plurality of camera modules photograph the face actively illuminated by the light source to obtain a group of facial images of the face from different shooting angles. The method comprises:
S1: providing N groups of facial images and separately computing N depth images corresponding to the N groups of facial images, wherein N >= 1 and each group of facial images contains at least two facial images of the same face;
S2: aligning the faces in the N depth images to obtain an average face, and obtaining the mean depth image of the average face;
S3: obtaining a depth gradient image from the mean depth image;
S4: quantitatively designing, according to the depth gradient image, the distribution density of the dot-matrix light spots emitted by the light source.
In the present embodiment, the stereoscopic vision system is a binocular active stereo vision system and is applied to the face recognition function of a mobile device (such as a mobile phone or a tablet computer). The stereoscopic vision system has a plurality of camera modules that each photograph the same face so as to obtain a group of facial images of the face from different shooting angles; during shooting, the face is actively illuminated by the light source of the stereoscopic vision system. Optionally, the light source of the stereoscopic vision system may be a dot-matrix laser that forms the dot-matrix light spots; the dot-matrix spots may be infrared spots, and the camera modules may be infrared camera modules.
On this binocular active stereo vision system, step S1 is performed first: N (N >= 1) groups of facial images are acquired from a face sample library (captured by the stereoscopic vision system). In the present embodiment, each group contains two facial images of the same face taken from different viewing angles, that is, each group contains a pair of facial images of the same face. The depth image corresponding to each group of facial images is then computed separately, yielding N depth images in total.
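The disclosure does not specify how the depth image of each group is computed; any standard binocular depth-estimation routine may be used. As one hedged illustration, the sketch below applies OpenCV's semi-global block matcher to a rectified left/right facial image pair; the matcher parameters and the calibration values are assumptions of this sketch, not values from the disclosure.

```python
import cv2
import numpy as np

def depth_from_stereo_pair(left_gray, right_gray, focal_px, baseline):
    """Illustrative step-S1 routine: disparity via OpenCV SGBM on a rectified
    pair, then depth via Z = f * B / d. Parameter values are illustrative."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,   # must be a multiple of 16
                                    blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline / disparity[valid]
    return depth
```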
Further, because the facial images of different groups in the face sample library may differ in size, the faces themselves may differ in size, and the positions of the facial features may also differ, the facial images are normalized by coordinate transformation in order to eliminate differences in the position, size and orientation of the face region across the facial images of different groups, so that all facial images are normalized to one unified standard. This unified standard is the average face.
Specifically, step S2 is performed: the faces in the N depth images are aligned to obtain the average face, that is, the mean depth image. Specifically, the facial image taken at the same shooting angle is selected from each group of facial images and denoted as an image to be transformed; for example, the facial image taken from the left view, or the facial image taken from the right view, is selected from each group, so that there are N images to be transformed. The k key pixels of each image to be transformed are then marked, where k is greater than or equal to 4 and the k key pixels are the pixels on the contours of the two eyes, the nose and the mouth of the face in the image to be transformed; that is, apart from the ears, the contour of each facial organ should have at least one key pixel. Of course, multiple key pixels may be distributed on the contour of each organ to improve alignment accuracy.
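The disclosure does not prescribe how the k key pixels are marked; manual annotation is one option. As another hedged option, an off-the-shelf landmark detector such as dlib's 68-point shape predictor yields points on the eye, nose and mouth contours; the model file name below is the standard dlib distribution name and is an external dependency, not part of this disclosure.

```python
import dlib
import numpy as np

# External models (assumptions of this sketch, not defined by the disclosure).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def key_pixels(gray_image):
    """Return landmark pixels on the eye, nose and mouth contours of the first
    detected face (satisfying k >= 4); ears are excluded, as in the method."""
    faces = detector(gray_image, 1)
    if not faces:
        return np.empty((0, 2), dtype=int)
    shape = predictor(gray_image, faces[0])
    # 68-point convention: indices 36-47 eyes, 27-35 nose, 48-67 mouth
    idx = list(range(36, 48)) + list(range(27, 36)) + list(range(48, 68))
    return np.array([(shape.part(i).x, shape.part(i).y) for i in idx])
```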
Optionally, in the present embodiment the N images to be transformed are aligned by the following method. First, the maximum and minimum of the abscissas and the maximum and minimum of the ordinates of the k key pixels of each image to be transformed are obtained, giving a scaling coefficient along the abscissa direction and a scaling coefficient along the ordinate direction. Then, the mean abscissa and the mean ordinate of the k key pixels of each image to be transformed are obtained, giving a translation coefficient along the abscissa direction and a translation coefficient along the ordinate direction. The scaling and translation coefficients along the abscissa direction and the scaling and translation coefficients along the ordinate direction constitute the normalization parameters. Finally, the N depth images are normalized using the normalization parameters so that the faces in the depth images are aligned. After this alignment step, all depth images have the same size, and the positions of the k key pixels in each depth image coincide with the positions of the k key pixels of the average face. Of course, the normalization parameters may also include a rotation coefficient, and the normalization may be a rigid transformation, an affine transformation or the like; the present invention is not limited in this respect.
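A minimal sketch of the scaling-and-translation normalization just described, assuming the key pixels of an image to be transformed and of the average face are given as (x, y) arrays; the rotation coefficient and the more general rigid or affine fits mentioned above are omitted for brevity.

```python
import numpy as np
import cv2

def normalization_parameters(key_pts, target_pts):
    """Scaling coefficients from the key-pixel extents (max - min per axis)
    and translation coefficients from the key-pixel means, mapping an image
    onto the average-face coordinate frame (rotation omitted in this sketch)."""
    span = key_pts.max(axis=0) - key_pts.min(axis=0)
    target_span = target_pts.max(axis=0) - target_pts.min(axis=0)
    scale = target_span / span                                    # (sx, sy)
    translation = target_pts.mean(axis=0) - scale * key_pts.mean(axis=0)
    return scale, translation

def normalize_depth_image(depth, scale, translation, out_size):
    """Apply the per-axis scale and translation to a depth image so that the
    faces in all depth images are aligned; out_size is (width, height)."""
    sx, sy = scale
    tx, ty = translation
    affine = np.float32([[sx, 0.0, tx],
                         [0.0, sy, ty]])
    return cv2.warpAffine(depth.astype(np.float32), affine, out_size)
```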
Further, averaging the corresponding pixels of the N depth images yields the average face. In the present embodiment, the depth values of the corresponding pixels of the N normalized depth images are averaged to obtain the mean depth image of the average face; that is, the depth value of each pixel of the mean depth image is the average of the depth values of the N depth images at the corresponding pixel. The mean depth image can be expressed by the following formula:
$\bar{I}(p) = \frac{1}{N}\sum_{s=1}^{N} I_s(p)$
where $\bar{I}(p)$ is the depth distribution of the mean depth image and $I_s(p)$ is the depth distribution of the s-th depth image at pixel p.
After the mean depth image is obtained, step S3 is performed: a gradient operator can be used to convolve the mean depth image to obtain the depth gradient image of the average face, as shown in Fig. 2. Step S4 is then performed: the distribution density of the dot-matrix light spots emitted by the light source is designed quantitatively according to the depth gradient image. In the present embodiment, the distribution density of the dot-matrix spots is positively correlated with the gradient values in the depth gradient image; that is, the higher the gradient value at a position in the depth gradient image, the higher the spot density set there, and the lower the gradient value, the lower the spot density set there. Reducing the spot density in regions with low gradient values reduces energy consumption, while increasing the spot density in regions with high gradient values improves recognition accuracy.
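One hedged way to realize a spot layout whose density is positively correlated with the gradient is to sample spot positions with probability proportional to the gradient magnitude, as sketched below; the total spot budget n_spots is an illustrative design parameter, not a value given in this disclosure.

```python
import numpy as np

def sample_spot_positions(gradient_image, n_spots=2000, rng=None):
    """Draw dot-matrix spot positions with probability proportional to the
    depth gradient magnitude, so high-gradient regions receive a higher spot
    density and low-gradient regions a lower one."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.clip(np.asarray(gradient_image, dtype=np.float64), 0.0, None).ravel()
    probs = weights / weights.sum()
    flat_idx = rng.choice(weights.size, size=n_spots, replace=False, p=probs)
    rows, cols = np.unravel_index(flat_idx, gradient_image.shape)
    return np.stack([cols, rows], axis=1)        # (x, y) spot coordinates
```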
Optionally, in the present embodiment the computed mean depth image can also be used to determine an effective recognition region. After the distribution density of the dot-matrix spots emitted by the light source of the stereoscopic vision system has been quantitatively designed, the illumination range of the dot-matrix spots emitted by the light source can also be quantitatively designed according to the effective recognition region (the region occupied by the face), preventing the dot-matrix spots emitted by the light source from becoming ineffective spots.
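A minimal sketch of constraining the illumination range: the effective recognition region is approximated here by thresholding the mean depth image (an assumption of this sketch), and spot positions falling outside that region are discarded.

```python
import numpy as np

def restrict_to_face_region(spot_xy, mean_depth, depth_threshold=0.0):
    """Keep only spots whose (x, y) position lies inside the effective
    recognition region, approximated as pixels of the mean depth image with
    depth greater than depth_threshold."""
    mask = mean_depth > depth_threshold
    x, y = spot_xy[:, 0], spot_xy[:, 1]
    inside = mask[y, x]                  # mask lookup at each spot position
    return spot_xy[inside]
```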
Accordingly, the present embodiment further provides a stereoscopic vision system comprising a light source and a plurality of camera modules, wherein the light source emits dot-matrix light spots to illuminate a face, and the plurality of camera modules photograph the face actively illuminated by the light source to obtain a group of facial images of the face from different shooting angles, and wherein the distribution density of the dot-matrix spots is designed quantitatively using the above quantitative design method for a light source. In the present embodiment, the distribution density of the dot-matrix spots is positively correlated with the gradient values in the depth gradient image, and the illumination range of the light source can also be determined from the mean depth image; that is, the region illuminated by the dot-matrix spots emitted by the light source is kept as far as possible within the range of the average face (the spots are projected onto the effective recognition region). This prevents the dot-matrix spots emitted by the light source from illuminating regions outside the face as ineffective spots when the stereoscopic vision system is in use, which would waste energy without improving recognition accuracy.
In summary, in the quantitative design method for a light source and the stereoscopic vision system provided by the embodiments of the present invention, N depth images corresponding to N groups of facial images are first computed from the N groups of facial images. The faces in the N facial images are then aligned to obtain normalization parameters, and the normalization parameters are used to obtain a mean depth image of the faces in the N depth images. A depth gradient image is obtained from the mean depth image, and the distribution density of the dot-matrix light spots emitted by the light source is constrained according to the depth gradient image, so that the distribution density of the spots is better matched to the face, improving the accuracy with which the stereoscopic vision system measures facial depth. The illumination range of the dot-matrix spots is also constrained by the size of the mean depth image, which reduces the distribution density of spots outside the face region and thus reduces energy consumption.
The above are only preferred embodiments of the present invention and do not limit the present invention in any way. Any equivalent replacement or modification of the technical solutions and technical content disclosed by the invention, made by any person skilled in the art without departing from the scope of the technical solutions of the present invention, still falls within the protection scope of the present invention.

Claims (10)

1. A quantitative design method for a light source, for quantitatively designing the light source of a stereoscopic vision system, wherein the light source emits dot-matrix light spots to illuminate a face and a plurality of camera modules photograph the face actively illuminated by the light source to obtain a group of facial images of the face from different shooting angles, the method comprising:
providing N groups of facial images and separately computing N depth images corresponding to the N groups of facial images, wherein N >= 1 and each group of facial images contains at least two facial images of the same face;
aligning the faces in the N depth images to obtain a mean depth image;
obtaining a depth gradient image from the mean depth image; and
quantitatively designing, according to the depth gradient image, the distribution density of the dot-matrix light spots emitted by the light source.
2. The quantitative design method for a light source according to claim 1, wherein the distribution density of the dot-matrix light spots is positively correlated with the gradient values in the depth gradient image.
3. The quantitative design method for a light source according to claim 1, wherein the step of aligning the faces in the N depth images comprises:
selecting, from each group of facial images, the facial image taken at the same shooting angle and denoting it as an image to be transformed, and marking k key pixels of each image to be transformed, wherein k is greater than or equal to 4;
obtaining, from the k key pixels of each image to be transformed, the normalization parameters corresponding to that image; and
normalizing the N depth images using the normalization parameters, so that the faces in the depth images are aligned.
4. The quantitative design method for a light source according to claim 3, wherein the k key pixels are pixels on the contours of the two eyes, the nose and the mouth of the face.
5. The quantitative design method for a light source according to claim 3, wherein the normalization parameters include one or more of a scaling coefficient, a translation coefficient and a rotation coefficient.
6. The quantitative design method for a light source according to claim 3, wherein the depth value of each pixel of the mean depth image is the average of the depth values of the corresponding pixels of the N depth images after normalization.
7. The quantitative design method for a light source according to claim 1, wherein the mean depth image is convolved with a gradient operator to obtain the depth gradient image.
8. The quantitative design method for a light source according to any one of claims 1-7, wherein after the distribution density of the dot-matrix light spots emitted by the light source is quantitatively designed, the illumination range of the dot-matrix light spots emitted by the light source is further constrained according to the size of the mean depth image.
9. A stereoscopic vision system comprising a light source and a plurality of camera modules, wherein the light source emits dot-matrix light spots to illuminate a face and the plurality of camera modules photograph the face actively illuminated by the light source to obtain a group of facial images of the face from different shooting angles, and wherein the distribution density of the dot-matrix light spots is designed quantitatively using the quantitative design method for a light source according to any one of claims 1-8.
10. The stereoscopic vision system according to claim 9, wherein the distribution density of the dot-matrix light spots is positively correlated with the gradient values in a depth gradient image.
CN201811629354.7A 2018-12-28 2018-12-28 Light source quantitative design method and stereoscopic vision system Active CN109447049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811629354.7A CN109447049B (en) 2018-12-28 2018-12-28 Light source quantitative design method and stereoscopic vision system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811629354.7A CN109447049B (en) 2018-12-28 2018-12-28 Light source quantitative design method and stereoscopic vision system

Publications (2)

Publication Number Publication Date
CN109447049A true CN109447049A (en) 2019-03-08
CN109447049B CN109447049B (en) 2020-07-31

Family

ID=65538627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811629354.7A Active CN109447049B (en) 2018-12-28 2018-12-28 Light source quantitative design method and stereoscopic vision system

Country Status (1)

Country Link
CN (1) CN109447049B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866497A (en) * 2010-06-18 2010-10-20 北京交通大学 Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system
CN103035026A (en) * 2012-11-24 2013-04-10 浙江大学 Maxim intensity projection method based on enhanced visual perception
US20140232837A1 (en) * 2013-02-19 2014-08-21 Korea Institute Of Science And Technology Multi-view 3d image display apparatus using modified common viewing zone
CN105023010A (en) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face living body detection method and system
CN106570899A (en) * 2015-10-08 2017-04-19 腾讯科技(深圳)有限公司 Target object detection method and device
CN107622526A (en) * 2017-10-19 2018-01-23 张津瑞 A kind of method that 3-D scanning modeling is carried out based on mobile phone facial recognition component

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105370A (en) * 2019-12-09 2020-05-05 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111105370B (en) * 2019-12-09 2023-10-20 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN109447049B (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN107277491B (en) Generate the method and corresponding medium of the depth map of image
CN106600686A (en) Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
CN103971408B (en) Three-dimensional facial model generating system and method
CN102665086B (en) Method for obtaining parallax by using region-based local stereo matching
CN109767474A (en) A kind of more mesh camera calibration method, apparatus and storage medium
CN104599284B (en) Three-dimensional facial reconstruction method based on various visual angles mobile phone auto heterodyne image
CN104424640B (en) The method and apparatus for carrying out blurring treatment to image
US20120275667A1 (en) Calibration for stereoscopic capture system
CN107945234A (en) A kind of definite method and device of stereo camera external parameter
CN107230225A (en) The method and apparatus of three-dimensional reconstruction
CN101729920B (en) Method for displaying stereoscopic video with free visual angles
CN105654547B (en) Three-dimensional rebuilding method
JP2008535116A (en) Method and apparatus for three-dimensional rendering
CN112926464B (en) Face living body detection method and device
CN106778660B (en) A kind of human face posture bearing calibration and device
CN109461206A (en) A kind of the face three-dimensional reconstruction apparatus and method of multi-view stereo vision
CN110189294A (en) RGB-D image significance detection method based on depth Analysis on confidence
CN106068646A (en) Degree of depth drawing generating method, device and non-transience computer-readable medium
CN106934828A (en) Depth image processing method and depth image processing system
CN109724537B (en) Binocular three-dimensional imaging method and system
WO2014008320A1 (en) Systems and methods for capture and display of flex-focus panoramas
CN110458952A (en) A kind of three-dimensional rebuilding method and device based on trinocular vision
CN108881717A (en) A kind of Depth Imaging method and system
CN108648228A (en) A kind of binocular infrared human body dimension measurement method and system
CN113936050B (en) Speckle image generation method, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant