CN113593008A - True 3D image significant reconstruction method under complex scene - Google Patents


Info

Publication number: CN113593008A
Authority: CN (China)
Prior art keywords: image, parallax, parallax image, scene, complex
Prior art date: 2021-07-06
Legal status: Granted
Application number: CN202110765330.XA
Other languages: Chinese (zh)
Other versions: CN113593008B
Inventors: 李小伟 (Li Xiaowei), 李颖 (Li Ying), 任芷晴 (Ren Zhiqing)
Current Assignee: Sichuan University
Original Assignee: Sichuan University
Priority date: 2021-07-06
Filing date: 2021-07-06
Publication date: 2021-11-02
Application filed by Sichuan University (2021-07-06)
Priority to CN202110765330.XA
Publication of CN113593008A (2021-11-02)
Application granted; publication of CN113593008B (2023-07-07)
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention provides a salient reconstruction method for true 3D images in complex scenes. A 3D scene in object space is photographed with a camera array to obtain a parallax image sequence, and a spatio-temporal consistency saliency detection algorithm generates the spatio-temporal saliency map of each image in the sequence. The gray values of each saliency map are normalized to obtain the pixel mapping weight of each parallax image. Filtering the parallax images of the 3D object through the corresponding pixel mapping weights removes the interference of complex background pixels with the imaging of the 3D salient target and yields a micro-image array of the 3D salient target. Finally, a 3D image of the salient target is reconstructed with a lens array.

Description

True 3D image significant reconstruction method under complex scene
1. Technical Field
The invention relates to computational integral imaging reconstruction technology, and in particular to a salient reconstruction method for true 3D images in complex scenes.
2. Background Art
Integral imaging display offers full parallax, true color, no stereoscopic visual fatigue, an ultrathin form factor, and undemanding display environment requirements, and has become one of the hot spots of current glasses-free 3D display research. It comprises two reciprocal processes: capture and reproduction. In the capture process, a camera array photographs the object-space scene, producing micro-image arrays of the scene from different angles. In the reconstruction process, the obtained micro-image array is passed through a microlens array with the same parameters to reconstruct a 3D image in the viewing space; the reconstructed image can be viewed from any direction within a limited viewing angle.
Traditional computational integral imaging algorithms reconstruct 3D images well in simple scenes: the surrounding simple background elements interfere little with the reconstruction, so the 3D reconstruction effect is ideal. However, when the object-space scene of the 3D image is complex, the complex scene and the 3D salient target cross-talk with each other during reconstruction, and the reconstruction of the 3D image suffers. In complex scenes, 3D image reconstruction therefore needs to eliminate the crosstalk that complex-scene pixels impose on the 3D image reconstruction of the salient target. At present, no corresponding method exists for the salient reconstruction of true 3D images in complex scenes.
3. Summary of the Invention
To solve these problems, the invention provides a salient reconstruction method for true 3D images in complex scenes. The method obtains a parallax image sequence by photographing a 3D scene in object space with a camera array, then generates the spatio-temporal saliency map of each image in the sequence with a spatio-temporal consistency saliency detection algorithm, and normalizes the gray value of each saliency map to obtain the pixel mapping weight of each parallax image. The parallax images of the 3D object in the complex scene can thus be filtered through the corresponding pixel mapping weights to remove the interference of complex background pixels with the imaging of the 3D salient target, yielding a micro-image array of the 3D salient target. Finally, a 3D image of the salient target is reconstructed with a lens array.
The method comprises the following four steps.
First, a parallax image sequence of the object-space scene is acquired and denoted I_k (k = 1, 2, …, M×N).
Second, a spatio-temporal consistency saliency detection algorithm is used to obtain the saliency map of each image in the parallax sequence I_k, and the gray values of each saliency map are normalized to obtain the pixel mapping weight of each parallax image, denoted Mask_k(x, y) (k = 1, 2, …, M×N).
Third, the interference of complex background pixels with the imaging of the 3D salient target is filtered from the parallax images of the 3D object in the complex scene through the corresponding pixel mapping weights, i.e., Mask_k(x, y) * I_k; the M×N background-filtered parallax maps obtained by this point-wise multiplication are used to generate the micro-image array, denoted Res(x, y).
Fourth, Res(x, y) is reconstructed by the microlens array to obtain a 3D image of the salient target.
The first step acquires the parallax image sequence of the object-space scene. First, a camera array is built containing M×N cameras, where M is the number of cameras in the vertical direction and N the number in the horizontal direction, and adjacent cameras are separated by a distance d. The camera array photographs the object-space scene, yielding a sequence of M×N disparity maps denoted I_k (k = 1, 2, …, M×N).
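For illustration only, the following minimal Python sketch loads such a captured parallax sequence from disk; the file pattern and the row-major naming of the views are assumptions, not part of the method.

```python
import glob

import cv2
import numpy as np

def load_parallax_sequence(pattern="views/view_*.png"):
    """Load the M*N parallax images I_k as one array (hypothetical file layout)."""
    paths = sorted(glob.glob(pattern))  # assumed row-major camera order
    frames = [cv2.imread(p, cv2.IMREAD_COLOR) for p in paths]
    return np.stack(frames)  # shape: (M*N, H, W, 3), BGR
```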
In the second step, a spatio-temporal consistency saliency detection algorithm is used to obtain the saliency map of each image in the parallax sequence I_k, and the gray values of each saliency map are normalized to obtain the pixel mapping weight of each parallax image. The sequence of M×N disparity maps obtained in the first step is treated as a segment of video, each disparity map being equivalent to one frame, so a spatio-temporal saliency detection algorithm is introduced to obtain the saliency map of each frame, i.e., of each disparity map. Each frame of the video is over-segmented into superpixels by the simple linear iterative clustering (SLIC) algorithm; let Y_k^n denote the n-th superpixel of frame S_k. From the pixels of the k-th frame S_k, its edge probability map B_k(x, y) is obtained, and the optical flow between adjacent frames is calculated. Let F_k denote the optical flow of frame S_k; the gradient magnitude M_k of the optical flow can be calculated from F_k:

$$M_k(x,y) = \left\lVert \nabla F_k(x,y) \right\rVert \qquad (1)$$
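As a hedged illustration of equation (1), the sketch below computes a dense optical flow between two frames and the gradient magnitude of the flow field; Farneback flow is an assumed stand-in, since the patent does not name a specific flow estimator.

```python
import cv2
import numpy as np

def flow_gradient_magnitude(prev_bgr, curr_bgr):
    """Eq. (1) sketch: M_k(x, y) = ||grad F_k(x, y)||."""
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_g = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_g, curr_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)  # dense flow F_k
    dux, duy = np.gradient(flow[..., 0])  # Jacobian entries of the flow field
    dvx, dvy = np.gradient(flow[..., 1])
    return np.sqrt(dux**2 + duy**2 + dvx**2 + dvy**2)
```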
The edge value of each superpixel is calculated as the average of the ten largest pixel edge probabilities within it, thereby generating a superpixel edge map B_k^n. Similarly, M_k is used to calculate the superpixel optical-flow gradient map M_k^n, and the spatio-temporal edge probability P_k^n can be calculated by the following formula:

$$P_k^n = B_k^n \cdot M_k^n \qquad (2)$$
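The superpixel-level quantities of equation (2) might be computed as in the sketch below, assuming SLIC from scikit-image and an externally supplied per-pixel edge probability map; the superpixel count is an arbitrary choice.

```python
import numpy as np
from skimage.segmentation import slic

def spatiotemporal_edge_probability(frame_rgb, pixel_edges, flow_grad_mag, n_segments=400):
    """Eq. (2) sketch: P_k^n = B_k^n * M_k^n per superpixel Y_k^n."""
    labels = slic(frame_rgb, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    B = np.zeros(n)  # superpixel edge map B_k^n
    M = np.zeros(n)  # superpixel optical-flow gradient map M_k^n
    for i in range(n):
        inside = labels == i
        vals = np.sort(pixel_edges[inside])
        B[i] = vals[-10:].mean()             # average of the ten largest edge probabilities
        M[i] = flow_grad_mag[inside].mean()  # superpixel optical-flow gradient value
    return labels, B * M                     # spatio-temporal edge probability P_k^n
```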
Equation (2) obtains the spatio-temporal edge probability by combining spatial edges and motion-boundary edges. However, the generated spatio-temporal edge map only suggests the foreground location of the salient target. To highlight the foreground location, a geodesic distance algorithm is used to compute a target probability map. The geodesic distance D_g(v_1, v_2, G) represents the distance between nodes v_1 and v_2 in a graph G, i.e., the ω-minimizing integral over all paths connecting v_1 and v_2, where ω denotes the weight function. It satisfies:

$$D_g(v_1, v_2, G) = \min_{C_{v_1,v_2}} \int_0^1 \bigl|\, \omega \cdot \dot{C}_{v_1,v_2}(t) \,\bigr| \, dt \qquad (3)$$

where C_{v_1,v_2} denotes a path between nodes v_1 and v_2.
For each frame S_k, a weighted graph G_k = {V_k, ε_k} is created, in which the superpixels Y_k serve as the nodes V_k and the connections between adjacent nodes serve as the edges ε_k. The weight ω_k^{(m,n)} between adjacent superpixels Y_k^m and Y_k^n can then be calculated by:

$$\omega_k^{(m,n)} = \bigl| P_k^m - P_k^n \bigr| \qquad (4)$$

where P_k^m and P_k^n denote the spatio-temporal boundary probabilities of Y_k^m and Y_k^n, respectively. The foreground probability P_k^F(Y_k^n) can then be calculated from the minimum geodesic distance by the following formula:

$$P_k^F(Y_k^n) = \min_{q \in Q_k} D_g\bigl(Y_k^n, q, G_k\bigr) \qquad (5)$$

where Q_k denotes the superpixels along the four boundaries of frame S_k. The salient target can be detected from the obtained foreground probability map P_k^F. The gray values of the M×N detected saliency maps are normalized to obtain the pixel mapping weight of each parallax image, denoted Mask_k(x, y) (k = 1, 2, …, M×N).
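A possible realization of equations (3) through (5): build the superpixel adjacency graph with weights |P_k^m - P_k^n| and take, for every superpixel, the shortest-path (geodesic) distance to the border superpixels Q_k. The Dijkstra routine from SciPy is an assumed implementation detail, not something the patent specifies.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def foreground_probability(labels, P):
    """Eqs. (3)-(5) sketch: minimum geodesic distance from each superpixel
    to the four frame borders, over the adjacency graph weighted by Eq. (4)."""
    n = labels.max() + 1
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        for m, q in zip(a.ravel(), b.ravel()):  # 4-neighbour pixel pairs
            if m != q:
                pairs.add((min(m, q), max(m, q)))
    rows, cols, wts = [], [], []
    for m, q in pairs:
        w = abs(P[m] - P[q])  # Eq. (4): |P_k^m - P_k^n|
        rows += [m, q]; cols += [q, m]; wts += [w, w]
    graph = csr_matrix((wts, (rows, cols)), shape=(n, n))
    border = np.unique(np.concatenate(
        [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))  # Q_k
    dist = dijkstra(graph, directed=False, indices=border)  # geodesic distances, Eq. (3)
    return dist.min(axis=0)  # Eq. (5): foreground probability per superpixel
```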
The third step filters the interference of complex background pixels with 3D salient-target imaging from the parallax images of the 3D object in the complex scene through the corresponding pixel mapping weights, i.e., Mask_k(x, y) * I_k. The M×N parallax maps obtained by this point-wise multiplication have the complex background filtered out and retain only the salient target; from them, the micro-image array of the 3D salient target is generated and denoted Res(x, y).
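The masking and micro-image-array step could look like the sketch below; the normalization of the saliency maps to [0, 1] and the simple tiling of the masked views into a mosaic are illustrative assumptions, since the patent does not spell out its elemental-image mapping.

```python
import numpy as np

def masked_micro_image_array(views, saliency_maps, M, N):
    """Step-3 sketch: Mask_k(x, y) * I_k point-wise, then tile the M*N
    background-filtered views into one mosaic Res(x, y) (assumed layout)."""
    sal = saliency_maps.astype(np.float32)
    lo = sal.min(axis=(1, 2), keepdims=True)
    hi = sal.max(axis=(1, 2), keepdims=True)
    masks = (sal - lo) / (hi - lo + 1e-8)  # normalized weights Mask_k(x, y)
    masked = views.astype(np.float32) * masks[..., None]  # broadcast over color channels
    k, h, w, c = masked.shape
    assert k == M * N
    res = (masked.reshape(M, N, h, w, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(M * h, N * w, c))  # Res(x, y)
    return res.astype(np.uint8)
```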
In the fourth step, Res(x, y) is reconstructed by the microlens array to obtain a 3D image of the salient target. The lens array reconstructs the rays of object space, and the 3D image is formed in the reconstruction space by the intersection of a large number of rays at each 3D image point. The method provides a sound approach for reconstructing salient targets against complex backgrounds.
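The fourth step is optical, performed by a physical lens array; as a rough computational analogue only, the shift-and-average refocusing sketch below forms a reconstructed plane from the masked views. The per-view pixel shift for the chosen reconstruction depth is an assumed input, and this synthetic-aperture scheme is a named substitute, not the patent's optical process.

```python
import numpy as np

def refocus(masked_views, M, N, pixel_shift):
    """Computational stand-in for optical reconstruction: shift each view in
    proportion to its camera grid position and average the stack."""
    k, h, w, c = masked_views.shape
    acc = np.zeros((h, w, c), dtype=np.float64)
    for idx in range(k):
        m, n = divmod(idx, N)  # camera grid position (row, column)
        dy = int(round((m - (M - 1) / 2) * pixel_shift))
        dx = int(round((n - (N - 1) / 2) * pixel_shift))
        acc += np.roll(masked_views[idx].astype(np.float64), (dy, dx), axis=(0, 1))
    return (acc / k).astype(np.uint8)
```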
4. Description of the Drawings
The foregoing aspects and advantages of the invention will become further apparent and more readily appreciated from the following detailed description of the embodiments, taken in conjunction with the accompanying drawings of which:
Fig. 1 is a flowchart of the salient reconstruction method for true 3D images in complex scenes according to an embodiment of the present application.
Fig. 2 is a 3D object space scene according to an embodiment of the present application.
FIG. 3 is a flow chart of a spatiotemporal consistency saliency detection algorithm according to an embodiment of the present application.
FIG. 4 is a diagram of a micro image array Res (x, y) generated according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a 3D image reconstruction apparatus according to an embodiment of the present application.
The reference numbers in the figures are: 1, 3D salient target; 2, complex background; 3, camera array; 4, micro-image array; 5, optical diffuser screen; 6, lens array; 7, 2D display panel.
It should be understood that the above-described figures are merely schematic and are not drawn to scale.
5. Detailed Description of the Embodiments
To facilitate understanding of the present application, an exemplary embodiment of the salient reconstruction method for true 3D images in complex scenes according to the present invention is described in detail below. It should be noted, however, that the embodiments described below are exemplary, are intended only to further illustrate the invention, and should not be construed as limiting its scope.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
As used herein, the directional terms "vertical," "horizontal," and the like are used for purposes of illustration only and are not intended to be limiting.
The following describes in detail a method for reconstructing a true 3D image under a complex scene proposed in the present application with reference to the embodiments and drawings disclosed in the present application.
Fig. 1 illustrates a salient reconstruction method for true 3D images in complex scenes, which includes the following steps.
Step S100: acquire the parallax image sequence of the object-space scene, denoted I_k (k = 1, 2, …, M×N).
Step S101: obtain the saliency map of each image in the parallax sequence I_k with a spatio-temporal consistency saliency detection algorithm, and normalize the gray value of each saliency map to obtain the pixel mapping weight of each parallax image, denoted Mask_k(x, y) (k = 1, 2, …, M×N).
Step S102: filter the interference of complex background pixels with 3D salient-target imaging from the parallax images of the 3D object in the complex scene through the corresponding pixel mapping weights, i.e., Mask_k(x, y) * I_k; the M×N background-filtered parallax maps obtained by this point-wise multiplication are used to generate the micro-image array, denoted Res(x, y).
Step S103: reconstruct Res(x, y) with the microlens array to obtain a 3D image of the salient target.
In one embodiment, the first step acquires the parallax image sequence of the object-space scene. First, a camera array is built; the number of cameras M×N may be 50×50, with equal numbers of cameras in the vertical and horizontal directions, and the spacing d between adjacent cameras may be 100 mm. The camera array photographs the object-space scene, yielding a sequence of 50×50 disparity maps. In one embodiment, the 3D salient target 1 is a Rubik's cube and a car.
In one embodiment, the second step obtains the saliency map of each image in the parallax sequence I_k with the spatio-temporal consistency saliency detection algorithm and normalizes the gray values of each saliency map to obtain the pixel mapping weight of each parallax image. The sequence of 50×50 disparity maps obtained in the first step is treated as a video, each disparity map being equivalent to one frame, so a spatio-temporal saliency detection algorithm is introduced to obtain the saliency map of each frame, i.e., of each disparity map. Each frame of the video is over-segmented into superpixels by the simple linear iterative clustering algorithm; let Y_k^n denote the superpixels of frame S_k (k = 1, 2, …, 50×50). For k = 1, the edge probability map B_1(x, y) of the first frame S_1 is obtained from its pixels. Let F_1 denote the optical flow of frame S_1; the gradient magnitude M_1 of the optical flow can be calculated from F_1:

$$M_1(x,y) = \left\lVert \nabla F_1(x,y) \right\rVert \qquad (1)$$

The edge value of each superpixel is calculated as the average of the ten largest pixel edge probabilities within it, thereby generating the superpixel edge map B_1^n. Similarly, M_1 is used to calculate the superpixel optical-flow gradient map M_1^n, and the spatio-temporal edge probability P_1^n can be calculated by the following formula:

$$P_1^n = B_1^n \cdot M_1^n \qquad (2)$$

To highlight the foreground location, a geodesic distance algorithm is used to compute a target probability map. The geodesic distance D_g(v_1, v_2, G) represents the distance between nodes v_1 and v_2 in graph G, i.e., the ω-minimizing integral over all paths connecting v_1 and v_2, where ω denotes the weight function. It satisfies:

$$D_g(v_1, v_2, G) = \min_{C_{v_1,v_2}} \int_0^1 \bigl|\, \omega \cdot \dot{C}_{v_1,v_2}(t) \,\bigr| \, dt \qquad (3)$$

where C_{v_1,v_2} denotes a path between nodes v_1 and v_2.

For the first frame S_1, a weighted graph G_1 = {V_1, ε_1} is created, in which the superpixels Y_1 serve as the nodes V_1 and the connections between adjacent nodes serve as the edges ε_1. The weight ω_1^{(m,n)} between adjacent superpixels Y_1^m and Y_1^n can be calculated by:

$$\omega_1^{(m,n)} = \bigl| P_1^m - P_1^n \bigr| \qquad (4)$$

where P_1^m and P_1^n denote the spatio-temporal boundary probabilities of Y_1^m and Y_1^n, respectively. The foreground probability P_1^F(Y_1^n) can then be calculated from the minimum geodesic distance by the following formula:

$$P_1^F(Y_1^n) = \min_{q \in Q_1} D_g\bigl(Y_1^n, q, G_1\bigr) \qquad (5)$$

where Q_1 denotes the superpixels along the four boundaries of frame S_1. The salient target of the first frame S_1 can be detected from the obtained foreground probability map P_1^F. The process is repeated for k = 2, 3, …, 50×50, yielding 50×50 detected saliency maps, whose gray values are normalized to obtain the pixel mapping weight of each parallax image, denoted Mask_k(x, y) (k = 1, 2, …, 50×50).
In one embodiment, the third step filters the interference of complex background pixels with 3D salient-target imaging from the parallax images of the 3D object in the complex scene through the corresponding pixel mapping weights, i.e., Mask_k(x, y) * I_k, where * denotes point-wise multiplication. After the point-wise multiplication, 50×50 disparity maps are obtained in which the complex background is filtered out and only the salient target remains; from these disparity maps a micro-image array of the 3D salient target is generated, denoted Res(x, y). Fig. 4 shows a schematic diagram of the micro-image array Res(x, y) according to an embodiment of the present application.
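Tying the sketches above together for the 50×50 example (hypothetical glue code; edge_probability_map stands for any per-pixel edge detector, which the patent does not name):

```python
import numpy as np

views = load_parallax_sequence("views/view_*.png")  # 2500 parallax images I_k
saliency = []
for k in range(len(views)):
    prev = views[k - 1] if k > 0 else views[k]       # previous "frame" of the sequence
    Mk = flow_gradient_magnitude(prev, views[k])     # Eq. (1)
    Bk = edge_probability_map(views[k])              # assumed external edge detector
    labels, P = spatiotemporal_edge_probability(views[k], Bk, Mk)  # Eq. (2)
    fg = foreground_probability(labels, P)           # Eqs. (3)-(5)
    saliency.append(fg[labels])                      # back-project to the pixel grid
res = masked_micro_image_array(views, np.stack(saliency), 50, 50)  # Res(x, y)
```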
In one embodiment, the fourth step reconstructs Res(x, y) with the microlens array to obtain a 3D image of the salient target. According to the principle of reversibility of the optical path, light rays emitted by the pixels of each elemental image pass through the lens array, retrace the optical path, and converge, so that a 3D image of the salient target is reconstructed within a certain depth about the central plane. Fig. 5 shows a schematic diagram of the 3D image reconstruction device according to an embodiment of the application.

Claims (2)

1. A salient reconstruction method for true 3D images in complex scenes, characterized in that a parallax image sequence is obtained by photographing a 3D scene in object space with a camera array; spatio-temporal saliency maps of the parallax image sequence are generated with a spatio-temporal consistency saliency detection algorithm; the gray value of each saliency map is normalized to obtain the pixel mapping weight of each parallax image; the interference of complex background pixels with the imaging of the 3D salient target is filtered from the parallax images of the 3D object in the complex scene through the corresponding pixel mapping weights, giving a micro-image array of the 3D salient target; and finally a 3D image of the salient target is reconstructed with a lens array; the method comprises the following four steps: first, acquiring a parallax image sequence of the object-space scene, denoted I_k (k = 1, 2, …, M×N); second, obtaining the saliency map of each image in the parallax sequence I_k with a spatio-temporal consistency saliency detection algorithm, and normalizing the gray value of each saliency map to obtain the pixel mapping weight of each parallax image, denoted Mask_k(x, y) (k = 1, 2, …, M×N); third, filtering the interference of complex background pixels with 3D salient-target imaging from the parallax images through the corresponding pixel mapping weights, i.e., Mask_k(x, y) * I_k, the M×N background-filtered parallax maps obtained by this point-wise multiplication being used to generate the micro-image array, denoted Res(x, y); fourth, reconstructing Res(x, y) with the microlens array to obtain a 3D image of the salient target.
2. The method according to claim 1, characterized in that in the third step the interference of complex background pixels with 3D salient-target imaging is filtered from the parallax images of the 3D object in the complex scene through the corresponding pixel mapping weights, i.e., Mask_k(x, y) * I_k; the M×N parallax maps obtained after this point-wise multiplication have the complex background filtered out and retain only the salient target, and the micro-image array of the 3D salient target generated from these M×N parallax maps is denoted Res(x, y).
CN202110765330.XA, filed 2021-07-06 (priority date 2021-07-06): True 3D image significant reconstruction method under complex scene. Status: Active. Granted as CN113593008B.

Priority Applications (1)

CN202110765330.XA (priority date 2021-07-06, filing date 2021-07-06): True 3D image significant reconstruction method under complex scene; granted as CN113593008B.

Applications Claiming Priority (1)

CN202110765330.XA (priority date 2021-07-06, filing date 2021-07-06): True 3D image significant reconstruction method under complex scene; granted as CN113593008B.

Publications (2)

CN113593008A: 2021-11-02
CN113593008B: 2023-07-07

Family

ID=78246038

Family Applications (1)

CN202110765330.XA (Active, priority and filing date 2021-07-06): True 3D image significant reconstruction method under complex scene

Country Status (1)

CN: CN113593008B


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060221030A1 (en) * 2005-03-30 2006-10-05 Ming-Chia Shih Displaying method and image display device
CN104504671A (en) * 2014-12-12 2015-04-08 浙江大学 Method for generating virtual-real fusion image for stereo display
CN107016707A (en) * 2017-04-13 2017-08-04 四川大学 A kind of integration imaging super large three-dimensional scenic shooting image bearing calibration
CN107204010A (en) * 2017-04-28 2017-09-26 中国科学院计算技术研究所 A kind of monocular image depth estimation method and system
CN107945268A (en) * 2017-12-15 2018-04-20 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN111383340A (en) * 2018-12-28 2020-07-07 成都皓图智能科技有限责任公司 Background filtering method, device and system based on 3D image
CN110111357A (en) * 2019-04-03 2019-08-09 天津大学 A kind of saliency detection method
CN110879478A (en) * 2019-11-28 2020-03-13 四川大学 Integrated imaging 3D display device based on compound lens array
CN111583279A (en) * 2020-05-12 2020-08-25 重庆理工大学 Super-pixel image segmentation method based on PCBA

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAO-MIN LIU et al., "The rebuilding 3D of the contact surface of electrical apparatus", Proceedings, International Conference on Machine Learning and Cybernetics, p. 2045. *
ZHEN Yanming (甄琰明), "Research and optimization of OSEM-based PET 3D image reconstruction algorithms" (基于OSEM的PET 3D图像重建算法研究及优化), CNKI Outstanding Master's Theses Full-text Database, Medicine & Health Sciences, no. 02, pp. 060-61. *

Also Published As

CN113593008B (en): 2023-07-07

Similar Documents

Publication Publication Date Title
CN108986136B (en) Binocular scene flow determination method and system based on semantic segmentation
Häne et al. Real-time direct dense matching on fisheye images using plane-sweeping stereo
CN105023275B (en) Super-resolution optical field acquisition device and its three-dimensional rebuilding method
US8928737B2 (en) System and method for three dimensional imaging
CN111343367B (en) Billion-pixel virtual reality video acquisition device, system and method
WO2017176484A1 (en) Efficient determination of optical flow between images
CN107767339B (en) Binocular stereo image splicing method
CN101729920B (en) Method for displaying stereoscopic video with free visual angles
CN107798704B (en) Real-time image superposition method and device for augmented reality
CN105635808B (en) A kind of video-splicing method based on bayesian theory
CN113436130B (en) Intelligent sensing system and device for unstructured light field
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
Angot et al. A 2D to 3D video and image conversion technique based on a bilateral filter
US20230245277A1 (en) Image restoration method and device
CN113593008B (en) True 3D image significant reconstruction method under complex scene
EP3229106A1 (en) Efficient determination of optical flow between images
Kim et al. Light field angular super-resolution using convolutional neural network with residual network
TW201911239A (en) Method and apparatus for generating three-dimensional panoramic video
CN114663599A (en) Human body surface reconstruction method and system based on multiple views
TW201426634A (en) Target image generation utilizing a functional based on functions of information from other images
CN108460747B (en) Sub-aperture synthesis unblocking method of light field camera
JP2013200840A (en) Video processing device, video processing method, video processing program, and video display device
CN109379577B (en) Video generation method, device and equipment of virtual viewpoint
Satyawan et al. Scene flow from stereo fisheye images
TW201322732A (en) Method for adjusting moving depths of video

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant