CN115272441A - Unstructured high-resolution panoramic depth solving method and sensing device - Google Patents

Unstructured high-resolution panoramic depth solving method and sensing device

Info

Publication number
CN115272441A
CN115272441A
Authority
CN
China
Prior art keywords
depth
camera
global
panoramic
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210668078.5A
Other languages
Chinese (zh)
Inventor
刘威
邵航
袁肖赟
高坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze Delta Region Institute of Tsinghua University Zhejiang
Original Assignee
Zhejiang Future Technology Institute (jiaxing)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Future Technology Institute (Jiaxing)
Priority to CN202210668078.5A
Publication of CN115272441A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/38 - Registration of image sequences
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 - Stereo camera calibration
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an unstructured high-resolution panoramic depth solving method and a sensing device. The method comprises the following steps: acquiring a global image sequence; obtaining a global depth image sequence from a large-scene monocular depth estimation network model; performing feature detection on adjacent images in the global depth image sequence; performing image registration according to the detected feature points to obtain deformation matrices for adjacent images, and fusing and stitching the global depth images to obtain a 360-degree panoramic fused depth map; and improving the resolution of the 360-degree panoramic fused depth map to obtain a hundred-million-pixel-level panoramic depth perception result. The unstructured high-resolution panoramic depth solving method and sensing device have a simple structure, small size and good depth perception performance, and overcome the defects of existing three-dimensional visual perception technology.

Description

Unstructured high-resolution panoramic depth solving method and sensing device
Technical Field
The application relates to the technical field of computational imaging and deep learning, in particular to an unstructured high-resolution panoramic depth solving method and a sensing device.
Background
At the sensor level, existing large-scene depth perception generally takes one of two forms: a monocular camera combined with lidar, or a multi-view visual sensor. The lidar-plus-monocular approach offers good robustness and stable performance, but the combined equipment is expensive to build. The multi-view sensor approach requires the scene to be observed from different viewpoints with a certain parallax angle between them, so such systems are usually bulky. For large-scene panoramic depth perception, existing methods mostly combine multiple binocular (stereo) camera pairs; because of the large baseline distances involved, such systems are difficult to calibrate and assemble, and they struggle to meet current three-dimensional visual perception requirements. A large-scene depth perception hardware scheme and algorithm with a simpler structure, small volume and good depth perception performance is therefore needed.
Disclosure of Invention
Therefore, the application provides an unstructured high-resolution panoramic depth solving method and a sensing device, so as to solve the problems of complex structure, large volume and poor depth sensing performance in the prior art.
In order to achieve the above object, the present application provides the following technical solutions:
in a first aspect, an unstructured high-resolution panoramic depth solving method includes:
acquiring a global image sequence;
obtaining a global depth image sequence from a large-scene monocular depth estimation network model;
performing feature detection on adjacent images in the global depth image sequence;
performing image registration according to the detected feature points to obtain deformation matrices for adjacent images, and fusing and stitching the global depth images to obtain a 360-degree panoramic fused depth map;
and improving the resolution of the 360-degree panoramic fused depth map to obtain a hundred-million-pixel-level panoramic depth perception result.
Preferably, the large-scene monocular depth estimation network model is trained as follows:
performing visual calibration on the data acquisition equipment;
collecting natural scene data and dividing the data into a training set, a test set and a verification set;
performing binocular depth calculation based on the binocular structure parameters of a convergent binocular camera and the stereo image pair, to obtain a first dense depth map under the binocular camera view angle and a second dense depth map under the single-camera view angle;
processing all collected data so that each data group i yields the corresponding single-camera image I_i and its corresponding depth map D_i, and dividing all data pairs (I_i, D_i) into the training data set, the test data set and the verification data set;
and training the monocular depth network model, testing on the test data set, and stopping training when the depth solving error falls below a threshold α, to obtain the large-scene monocular depth estimation network model N.
Preferably, the binocular depth calculation includes distortion correction, stereo correction, disparity solution, and depth calculation.
Preferably, when the data acquisition equipment is visually calibrated, a three-camera calibration is performed with a checkerboard based on Zhang Zhengyou's calibration method.
Preferably, the image registration is performed using a random sample consensus (RANSAC) algorithm.
Preferably, the global image sequence is acquired by using an external synchronization trigger or a software time stamp synchronization method.
Preferably, the global image sequence comprises 9 global images and 18 local camera images.
Preferably, the resolution enhancement of the 360-degree panoramic fused depth map fuses the 18 local camera images into the 360-degree panoramic fused depth map using a cross-scale image fusion network.
In a second aspect, an unstructured high-resolution panoramic depth sensing device comprises a fixing support, wherein the fixing support is an annular fixing support; camera arrays are evenly arranged around the annular fixing support; there are 9 camera arrays, and each camera array comprises an array fixing support, a first local camera, a second local camera and a global camera; the first local camera, the second local camera and the global camera are mounted on the array fixing support, and the global camera is arranged between the first local camera and the second local camera.
Preferably, the horizontal viewing angle of the global camera is 60 degrees or more.
Compared with the prior art, the method has the following beneficial effects:
the application provides an unstructured high-resolution panoramic depth solving method and a sensing device, wherein the method comprises the following steps: acquiring a global image sequence; obtaining a global depth image sequence according to the large scene monocular depth estimation network model; carrying out feature detection on adjacent images in the global depth image sequence; according to the detected feature points, carrying out image registration to obtain a deformation matrix of adjacent images, and carrying out fusion splicing on the overall depth image to obtain a 360-degree panoramic fusion depth image; the resolution ratio of the 360-degree panoramic fusion depth map is improved, and a hundred million-pixel-level panoramic depth perception result is obtained; the unstructured high-resolution panoramic depth solving method and the sensing device are simple in structure, small in size and good in depth sensing performance, and defects of an existing three-dimensional visual sensing technology are overcome.
Drawings
To illustrate the prior art and the present application more intuitively, several exemplary drawings are given below. It should be understood that the specific shapes, configurations and illustrations in the drawings are not, in general, to be construed as limiting the practice of the present application; for example, based on the technical concepts disclosed in the present application and the exemplary drawings, it is within the ability of those skilled in the art to make routine adjustments or further optimizations concerning the addition or removal of certain units (components), specific shapes, positional relationships, connection manners, dimensional ratios, and the like.
Fig. 1 is a schematic structural diagram of an unstructured high-resolution panoramic depth sensing device provided in the present application;
FIG. 2 is a flow chart of an unstructured high-resolution panoramic depth solution method provided herein;
fig. 3 is a schematic structural diagram of a monocular depth perception training data acquisition device provided in the present application.
Description of reference numerals:
101. an array of cameras; 102. an array fixing support; 103. a first local camera; 104. a global camera; 105. a second partial photographing camera; 201. a training data acquisition fixing support; 202. a first camera; 203. a second camera; 204. a third camera.
Detailed Description
The present application will be described in further detail below with reference to specific embodiments in conjunction with the accompanying drawings.
In the description of the present application: "plurality" means two or more unless otherwise specified. The terms "first", "second", "third", and the like in this application are intended to distinguish one referenced item from another without having a special meaning in technical connotation (e.g., should not be construed as emphasizing a degree or order of importance, etc.). The terms "comprising," "including," "having," and the like, are intended to be inclusive and mean "not limited to" (some elements, components, materials, steps, etc.).
In the present application, terms such as "upper", "lower", "left", "right", "middle", and the like are generally used for easy visual understanding with reference to the drawings, and are not intended to absolutely limit the positional relationship in an actual product. Changes in these relative positional relationships are also considered to be within the scope of the present disclosure without departing from the technical concepts disclosed in the present disclosure.
As monocular depth perception algorithms based on deep learning have been verified and applied successfully in the autonomous driving field, data-driven monocular depth perception has gradually attracted attention from researchers in other vision fields. Monocular depth perception has the advantages of a simple structure and convenient setup, and since panoramic imaging hardware is mostly arranged as a ring of monocular cameras, the two fit together well. However, deep learning methods require large amounts of data for training, so for panoramic depth perception based on a monocular array, a data acquisition scheme must be designed according to the characteristics of the hardware structure. To this end, the application provides an unstructured high-resolution panoramic depth solving method and a sensing device.
Referring to fig. 1, in order to realize 360-degree, hundred-million-pixel-level panoramic imaging, the present application first provides an unstructured high-resolution panoramic depth sensing device. The whole device adopts a ring-array design (fig. 1, left), with 9 camera arrays 101 arranged evenly around the ring. Each camera array contains three imaging cameras (fig. 1, right) and comprises an array fixing support 102, a first local camera 103, a second local camera 105 and a global camera 104. The global camera 104 differs from the first local camera 103 and the second local camera 105 in the field of view of its lens: the field of view of the global camera 104 is larger than that of the first local camera 103 and the second local camera 105, and the views captured by the first local camera 103 and the second local camera 105 should be covered by the global camera 104. The panoramic depth perception hardware has the following characteristics. First, the global cameras of neighbouring camera arrays must have a certain overlap in viewing angle; with 9 camera arrays, the horizontal field of view of a single global camera theoretically needs to reach at least 40 degrees, and to ensure that the overlap region is large enough and convenient for subsequent algorithm development, the horizontal field of view of a single global camera should reach 60 degrees or more. Second, for a single camera array, the vertical field of view of the global camera should be at least 2 times the vertical field of view of the local cameras. The "unstructured" property of this application is mainly embodied in that the angles of the local cameras can be configured arbitrarily. The structure shown in fig. 1 includes 9 global cameras and 18 local cameras. Further, the image resolution of all cameras is required to be 12 megapixels or more.
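By way of illustration only (not part of the application itself), the following minimal Python sketch expresses the hardware constraints just described as a configuration check; the field-of-view and resolution numbers in the example are assumed, hypothetical values:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CameraArray:
    """One of the 9 units on the ring: a wide global camera flanked by two local cameras."""
    global_h_fov: float   # horizontal field of view of the global camera, degrees
    global_v_fov: float   # vertical field of view of the global camera, degrees
    local_v_fov: float    # vertical field of view of each local camera, degrees
    resolution_mp: float  # sensor resolution in megapixels

def check_ring(arrays: List[CameraArray]) -> List[str]:
    """Verify the constraints stated above for a ring of camera arrays."""
    issues = []
    n = len(arrays)
    if n != 9:
        issues.append(f"expected 9 camera arrays, got {n}")
    min_h_fov = 360.0 / max(n, 1)  # neighbouring global views must overlap
    for i, a in enumerate(arrays):
        if a.global_h_fov < 60.0:
            issues.append(f"array {i}: global horizontal FOV {a.global_h_fov} < 60 deg")
        if a.global_h_fov < min_h_fov:
            issues.append(f"array {i}: no overlap with neighbours ({a.global_h_fov} < {min_h_fov:.1f} deg)")
        if a.global_v_fov < 2.0 * a.local_v_fov:
            issues.append(f"array {i}: global vertical FOV must be >= 2x local vertical FOV")
        if a.resolution_mp < 12.0:
            issues.append(f"array {i}: resolution {a.resolution_mp} MP < 12 MP")
    return issues

# Example: 9 identical arrays with assumed (hypothetical) values.
ring = [CameraArray(global_h_fov=70, global_v_fov=50, local_v_fov=20, resolution_mp=12.3) for _ in range(9)]
print(check_ring(ring))  # an empty list means all constraints are satisfied
```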
Referring to fig. 2, the present application provides an unstructured high-resolution panoramic depth solving method, including:
s1: acquiring a global image sequence;
specifically, the acquisition device shown in fig. 1 is set up, and data acquisition is performed by using an external synchronization trigger or software timestamp synchronization method. At a certain time stamp, 9 sets of array camera data are obtained, including 9 global images (G)1、G2,,,G9) And 18 partial camera images
Figure BDA0003693708150000051
Figure BDA0003693708150000052
Wherein G isi
Figure BDA0003693708150000053
Is an image on a set of array cameras.
S2: obtaining a global depth image sequence from the large-scene monocular depth estimation network model;
Specifically, the large-scene monocular depth estimation network model N performs depth solving on each of the 9 global images, giving the global depth image sequence (D_1, D_2, …, D_9).
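As a non-authoritative illustration of this step, the following sketch assumes the model N is a PyTorch module; how the model is loaded, its expected input size and its output layout (one depth channel) are assumptions, not details fixed by the application:

```python
import numpy as np
import torch

def estimate_global_depths(model: torch.nn.Module, global_images: list) -> list:
    """Run the large-scene monocular depth network N over the 9 global images G_1..G_9.

    global_images: list of HxWx3 uint8 RGB arrays; returns a list of HxW float32 depth maps.
    """
    model.eval()
    depths = []
    with torch.no_grad():
        for img in global_images:
            # Normalize to [0, 1] and convert to a 1x3xHxW tensor.
            x = torch.from_numpy(img.astype(np.float32) / 255.0).permute(2, 0, 1).unsqueeze(0)
            d = model(x)  # assumed to return a 1x1xHxW depth prediction
            depths.append(d.squeeze().cpu().numpy())
    return depths

# depth_sequence = estimate_global_depths(N, [G1, ..., G9])  # N and G1..G9 are placeholders
```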
Referring to fig. 3, in order to train the large-scene monocular depth estimation network model N, the present application provides a monocular depth perception training data acquisition device. This hardware is used only to train the depth perception model for the global cameras of fig. 1; it does not need to be deployed in the actual scene during actual use, and its purpose is to provide ground truth for training the deep-learning-based monocular depth perception network. The monocular depth perception training data acquisition device comprises: a training data acquisition fixing support 201, a convergent binocular camera (a two-camera combination consisting of a first camera 202 and a third camera 204), and a single second camera 203. The training data acquisition device has the following characteristics: the three cameras adopt a convergent structure; the installation position and model of the second camera 203 are the same as those of the global camera in fig. 1; the horizontal and vertical fields of view of the first camera 202 and the third camera 204 must be larger than those of the second camera 203; the second camera 203 serves as the global camera, and its imaging range lies within the overlap region of the first camera 202 and the third camera 204. The first camera 202 and the third camera 204 form a standard binocular camera pair.
The large-scene monocular depth estimation network model N is obtained by training through the following method:
s21: carrying out visual calibration on the data acquisition equipment;
specifically, the training data acquisition equipment in fig. 3 is set up, and the data acquisition equipment in fig. 3 is subjected to visual calibration, for example, three-camera calibration is performed by using checkerboard based on Zhang Zhengyou method, so as to obtain camera internal parameters K of the first camera 202, the second camera 203 and the third camera 2041、K2、K3And obtains the external parameters T of the second camera 203 and the third camera 204 relative to the first camera 20221(including the rotation matrix R21And a translation matrix t21) And T31(including the rotation matrix R31And a translation matrix t31)。
S22: the first camera 202, the second camera 203 and the third camera 204 synchronously acquire natural scene data, and the data are divided into a training set, a test set and a verification set;
Specifically, the acquisition hardware is deployed in multiple dynamic large scenes, such as squares, school playgrounds and traffic hubs, keeping scene diversity as rich as possible. The first camera 202, the second camera 203 and the third camera 204 are captured synchronously via an external trigger or software timestamps; no fewer than 50,000 sets of natural scene data are required (one set comprises 3 images, one each from the first camera 202, the second camera 203 and the third camera 204), and the data are divided into a training set, a test set and a verification set.
S23: based on the binocular structure parameters of the first camera 202 and the third camera 204 (intrinsics K_1, K_3 and extrinsics T_31), binocular depth calculation is performed on the stereo image pair (I_1, I_3) formed by the first camera 202 and the third camera 204, obtaining a first dense depth map D_1 under the view angle of the first camera 202; and based on the intrinsic and extrinsic parameters of the first camera 202 and the second camera 203 (intrinsics K_1, K_2 and extrinsics T_21), a second dense depth map D_2 under the view angle of the second camera 203 is calculated.
Specifically, the binocular depth calculation includes distortion correction, stereo rectification, disparity solving and depth calculation.
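A minimal sketch of these four steps using OpenCV is given below for illustration; the matcher parameters (disparity range, block size) are assumed values rather than settings prescribed by the application, and the inputs are grayscale frames from the first and third cameras:

```python
import cv2
import numpy as np

def binocular_depth(img_l, img_r, K1, d1, K3, d3, R31, t31):
    """Distortion correction + stereo rectification + disparity solving + depth calculation
    for the stereo pair formed by the first and third cameras (grayscale inputs)."""
    size = img_l.shape[1], img_l.shape[0]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K3, d3, size, R31, t31)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K3, d3, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map1x, map1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map2x, map2y, cv2.INTER_LINEAR)

    # Semi-global block matching; disparity range and block size are illustrative choices.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0  # SGBM outputs fixed-point x16

    # depth = f * baseline / disparity (focal length from the rectified projection matrix).
    f = P1[0, 0]
    baseline = float(np.linalg.norm(t31))
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = np.where(disp > 0, f * baseline / disp, 0.0)
    return depth  # dense depth map D_1 under the first camera's rectified view
```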
S24: all collected data are processed according to S23, so that each data group i yields the corresponding second-camera image I_i^2 and its corresponding depth map D_i^2; the data pairs (I_i^2, D_i^2) are the data actually required by the network. All data pairs (I_i^2, D_i^2) are divided into three groups: 90% are used to construct the training data set, 5% the test data set and 5% the verification data set.
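For illustration, a minimal sketch of the 90%/5%/5% division (random shuffling with a fixed seed is an assumption; the application does not state how the split is drawn):

```python
import random

def split_pairs(pairs, seed=0):
    """Split (image, depth) pairs into 90% train / 5% test / 5% validation."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train, n_test = int(0.90 * n), int(0.05 * n)
    return pairs[:n_train], pairs[n_train:n_train + n_test], pairs[n_train + n_test:]
```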
S25: based on S24, the monocular depth network model is trained and tested on the test data set; training stops when the depth solving error on the test set falls below a threshold α, yielding the large-scene monocular depth estimation network model N.
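A hedged sketch of such a training loop is shown below; the L1 loss, Adam optimizer and mean-absolute-error test metric are generic placeholder choices, not the ones prescribed by the application:

```python
import torch

def train_until_threshold(model, train_loader, test_loader, alpha, max_epochs=100, lr=1e-4):
    """Train the monocular depth network and stop once the mean test depth error < alpha."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):
        model.train()
        for image, depth_gt in train_loader:  # pairs (I_i^2, D_i^2)
            opt.zero_grad()
            pred = model(image)
            loss = torch.nn.functional.l1_loss(pred, depth_gt)  # placeholder loss
            loss.backward()
            opt.step()

        model.eval()
        with torch.no_grad():
            errors = [torch.mean(torch.abs(model(img) - d)).item() for img, d in test_loader]
        test_error = sum(errors) / len(errors)
        print(f"epoch {epoch}: mean test depth error = {test_error:.4f}")
        if test_error < alpha:
            break
    return model
```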
S3: carrying out feature detection on adjacent images in the global depth image sequence;
Specifically, for the group of global depth images acquired at the same timestamp, feature detection is performed on adjacent images in the global depth image sequence using a feature detection algorithm.
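The application does not name a particular detector; as one possible illustration, ORB features computed with OpenCV (a sketch only, not the prescribed algorithm):

```python
import cv2

def detect_features(images):
    """Detect keypoints and descriptors on each image of the global (depth) image sequence.

    images: list of 8-bit arrays; depth maps can be normalized to 8-bit before detection.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    results = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        results.append((keypoints, descriptors))
    return results
```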
S4: according to the detected feature points, performing image registration to obtain deformation matrices for adjacent images, and fusing and stitching the global depth images to obtain a 360-degree panoramic fused depth map;
Specifically, for a pair of adjacent images in the global depth image sequence, image registration is performed on the detected feature points using RANSAC (random sample consensus), yielding the deformation matrix TG of the adjacent pair; the adjacent images can then be stitched using this deformation matrix. In this way the deformation matrices TG_21, TG_32, …, TG_98 between each pair of adjacent images in the global depth image sequence are obtained, and the 9 global images are stitched with these deformation matrices to obtain a 360-degree panoramic stitched depth map Dg.
S5: improving the resolution of the 360-degree panoramic fused depth map to obtain a hundred-million-pixel-level panoramic depth perception result.
Specifically, a super-resolution method is used to raise the resolution of Dg, obtaining the hundred-million-pixel-level panoramic depth perception result D_G.
More specifically, the 18 local camera images are fused into Dg using a cross-scale image fusion network (for example, the CrossNet end-to-end super-resolution fusion method) to obtain the hundred-million-pixel-level panoramic result D_G.
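The CrossNet network itself is not reproduced here. Purely as a structural sketch of the cross-scale fusion idea, the following stand-in upsamples Dg bicubically and pastes in warped high-resolution local reference maps; the homographies into panorama coordinates are assumed known from the registration step, the local references are assumed to be single-channel maps in the same modality as Dg, and a learned fusion network would replace the naive paste:

```python
import cv2
import numpy as np

def fuse_local_views(Dg, local_refs, local_homographies, scale=4):
    """Upsample the panoramic map Dg and fuse high-resolution local reference maps
    warped into panorama coordinates. A crude stand-in for a learned cross-scale fusion network."""
    H, W = Dg.shape[:2]
    target = (W * scale, H * scale)
    DG = cv2.resize(Dg, target, interpolation=cv2.INTER_CUBIC)  # low-frequency panoramic base
    for ref, H_pano in zip(local_refs, local_homographies):
        # H_pano maps local-camera pixels into the upsampled panorama (assumed known).
        warped = cv2.warpPerspective(ref, H_pano, target)
        mask = cv2.warpPerspective(np.ones(ref.shape[:2], np.float32), H_pano, target) > 0.5
        DG[mask] = warped[mask]  # replace covered regions with high-resolution detail
    return DG
```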
All the technical features of the above embodiments can be arbitrarily combined (as long as there is no contradiction between the combinations of the technical features), and for brevity of description, all the possible combinations of the technical features in the above embodiments are not described; these examples, which are not explicitly described, should be considered to be within the scope of the present description.
The present application has been described in considerable detail with reference to certain embodiments and examples thereof. It should be understood that several conventional adaptations or further innovations of these specific embodiments may also be made based on the technical idea of the present application; however, such conventional modifications and further innovations can also fall into the scope of the claims of the present application as long as they do not depart from the technical idea of the present application.

Claims (10)

1. An unstructured high-resolution panoramic depth solving method, characterized by comprising the following steps:
acquiring a global image sequence;
obtaining a global depth image sequence from a large-scene monocular depth estimation network model;
performing feature detection on adjacent images in the global depth image sequence;
performing image registration according to the detected feature points to obtain deformation matrices for adjacent images, and fusing and stitching the global depth images to obtain a 360-degree panoramic fused depth map;
and improving the resolution of the 360-degree panoramic fused depth map to obtain a hundred-million-pixel-level panoramic depth perception result.
2. The unstructured high-resolution panoramic depth solving method according to claim 1, wherein the large-scene monocular depth estimation network model is trained as follows:
performing visual calibration on the data acquisition equipment;
collecting natural scene data and dividing the data into a training set, a test set and a verification set;
performing binocular depth calculation based on the binocular structure parameters of a convergent binocular camera and the stereo image pair, to obtain a first dense depth map under the binocular camera view angle and a second dense depth map under the single-camera view angle;
processing all collected data so that each data group i yields the corresponding single-camera image I_i and its corresponding depth map D_i, and dividing all data pairs (I_i, D_i) into the training data set, the test data set and the verification data set;
and training the monocular depth network model, testing on the test data set, and stopping training when the depth solving error falls below a threshold α, to obtain the large-scene monocular depth estimation network model N.
3. The unstructured high-resolution panoramic depth solving method according to claim 2, wherein the binocular depth calculation includes distortion correction, stereo rectification, disparity solving and depth calculation.
4. The unstructured high-resolution panoramic depth solving method according to claim 2, wherein the visual calibration of the data acquisition equipment is a three-camera calibration performed with a checkerboard based on Zhang Zhengyou's calibration method.
5. The unstructured high-resolution panoramic depth solving method according to claim 1, wherein the image registration is performed using a random sample consensus (RANSAC) algorithm.
6. The unstructured high-resolution panoramic depth solving method according to claim 1, wherein the global image sequence is acquired using an external synchronization trigger or a software timestamp synchronization method.
7. The unstructured high-resolution panoramic depth solving method according to claim 1, wherein the global image sequence comprises 9 global images and 18 local camera images.
8. The unstructured high-resolution panoramic depth solving method according to claim 6, wherein the resolution enhancement of the 360-degree panoramic fused depth map fuses the 18 local camera images into the 360-degree panoramic fused depth map using a cross-scale image fusion network.
9. An unstructured high-resolution panoramic depth sensing device, characterized by comprising a fixing support, wherein the fixing support is an annular fixing support; camera arrays are evenly arranged around the annular fixing support; there are 9 camera arrays, and each camera array comprises an array fixing support, a first local camera, a second local camera and a global camera; the first local camera, the second local camera and the global camera are mounted on the array fixing support, and the global camera is arranged between the first local camera and the second local camera.
10. The unstructured high-resolution panoramic depth sensing device according to claim 9, wherein the horizontal viewing angle of the global camera is 60 degrees or more.
CN202210668078.5A 2022-06-14 2022-06-14 Unstructured high-resolution panoramic depth solving method and sensing device Pending CN115272441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210668078.5A CN115272441A (en) 2022-06-14 2022-06-14 Unstructured high-resolution panoramic depth solving method and sensing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210668078.5A CN115272441A (en) 2022-06-14 2022-06-14 Unstructured high-resolution panoramic depth solving method and sensing device

Publications (1)

Publication Number Publication Date
CN115272441A true CN115272441A (en) 2022-11-01

Family

ID=83759639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210668078.5A Pending CN115272441A (en) 2022-06-14 2022-06-14 Unstructured high-resolution panoramic depth solving method and sensing device

Country Status (1)

Country Link
CN (1) CN115272441A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116980540A (en) * 2023-07-27 2023-10-31 湖北空间智能技术有限公司 Low-illumination image processing method and device for pod and panoramic pod system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240208

Address after: 314001 9F, No.705, Asia Pacific Road, Nanhu District, Jiaxing City, Zhejiang Province

Applicant after: ZHEJIANG YANGTZE DELTA REGION INSTITUTE OF TSINGHUA University

Country or region after: China

Address before: No.152 Huixin Road, Nanhu District, Jiaxing City, Zhejiang Province 314000

Applicant before: ZHEJIANG FUTURE TECHNOLOGY INSTITUTE (JIAXING)

Country or region before: China