CN109559271A - Method and apparatus for optimizing a depth image - Google Patents
Method and apparatus for optimizing a depth image
- Publication number
- CN109559271A (application CN201710883474.9A)
- Authority
- CN
- China
- Prior art keywords
- depth image
- optimization
- input
- image
- camera posture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
  - G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
    - G06T3/067—Reshaping or unfolding 3D tree structures onto 2D planes
  - G06T3/08—Projecting images onto non-planar surfaces, e.g. geodetic screens
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/00—Image enhancement or restoration
  - G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/00—Image analysis
  - G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/00—Indexing scheme for image analysis or image enhancement
  - G06T2207/20—Special algorithmic details
    - G06T2207/20212—Image combination
      - G06T2207/20221—Image fusion; Image merging
Abstract
The present invention relates to a method and apparatus for optimizing a depth image. The method for optimizing a depth image includes: an input step of inputting a plurality of color images of a scene and the corresponding depth images; a camera pose estimation step of estimating camera poses based on the input depth images as initial camera poses; a camera pose optimization step of optimizing the initial camera poses based on the plurality of color images to obtain optimized camera poses; a three-dimensional model construction step of constructing a three-dimensional model based on the plurality of color images and the obtained optimized camera poses; a projection step of projecting the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera poses to generate a projected depth image corresponding to each color image; and a depth image optimization step of fusing the generated projected depth images with the corresponding input depth images into optimized depth images. A more accurate depth image can be obtained according to the method of the present invention.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to a method and apparatus for optimizing a depth image.
Background art

With the popularity of consumer-grade RGB-D cameras, such as Microsoft's Kinect and ASUS's Xtion PRO LIVE, three-dimensional reconstruction has begun to be widely applied. An RGB-D camera can capture a color image and a depth image simultaneously. However, owing to limitations of hardware cost and device size, the depth images acquired by an RGB-D camera can contain considerable noise and cannot meet the demands of many frontier applications, such as autonomous navigation.

When an RGB-D camera is used to scan a complex scene or object, the acquired depth images can contain many "black holes". As shown in Figs. 1A-1D, Fig. 1A and Fig. 1C are color images captured by the camera, and Fig. 1B and Fig. 1D are the acquired depth images. The regions circled with ellipses in Fig. 1B and Fig. 1D are black holes; no depth information was collected at the points inside these regions, i.e., their depth values are 0, so they appear as "black holes". To address the "black hole" problem, many depth image optimization methods have been proposed. In general, the mainstream approach is to enhance the depth image with two-dimensional local geometric constraints derived from the registered color image. In the joint bilateral upsampling (JBU) method, depth and color continuity are jointly used as the local geometric constraint. Global optimization approaches such as Markov random fields (MRF) instead use image gradients, color segmentation, edge saliency, non-local means, and the like as local constraints.

In summary, the accuracy of current depth image enhancement algorithms still falls short of the requirements. Therefore, a method and apparatus for optimizing a depth image are needed to solve the above problems.
Summary of the invention
A brief summary of the present invention is given below in order to provide a basic understanding of certain aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical parts of the invention, nor to limit the scope of the invention. Its sole purpose is to present certain concepts in a simplified form as a prelude to the more detailed description that follows.

A primary object of the present invention is to provide a method for optimizing a depth image, comprising: an input step of inputting a plurality of color images of a scene and the corresponding depth images; a camera pose estimation step of estimating camera poses based on the input depth images as initial camera poses; a camera pose optimization step of optimizing the initial camera poses based on the plurality of color images to obtain optimized camera poses; a three-dimensional model construction step of constructing a three-dimensional model based on the plurality of color images and the obtained optimized camera poses; a projection step of projecting the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera poses to generate a projected depth image corresponding to each color image; and a depth image optimization step of fusing the generated projected depth images with the corresponding input depth images into optimized depth images.
According to an aspect of the present invention, there is provided an apparatus for optimizing a depth image, comprising: an input unit configured to input a plurality of color images of a scene and the corresponding depth images; a camera pose estimation unit configured to estimate camera poses based on the input depth images as initial camera poses; a camera pose optimization unit configured to optimize the initial camera poses based on the plurality of color images to obtain optimized camera poses; a three-dimensional model construction unit configured to construct a three-dimensional model based on the plurality of color images and the obtained optimized camera poses; a projection unit configured to project the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera poses to generate a projected depth image corresponding to each color image; and a depth image optimization unit configured to fuse the generated projected depth images with the corresponding input depth images into optimized depth images.

In addition, embodiments of the present invention also provide computer programs for implementing the above method.

In addition, embodiments of the present invention also provide a computer program product in at least the form of a computer-readable medium, on which computer program code for implementing the above method is recorded.
These and other advantages of the present invention will become more apparent from the following detailed description of preferred embodiments of the present invention taken in conjunction with the accompanying drawings.
Brief description of the drawings

The above and other objects, features and advantages of the present invention will be more readily understood from the following description of embodiments of the invention taken in conjunction with the accompanying drawings. The components in the drawings are only intended to illustrate the principles of the present invention. In the drawings, identical or similar technical features or components are denoted by identical or similar reference signs.

Fig. 1A and Fig. 1C show color images captured by a camera, and Fig. 1B and Fig. 1D show the acquired depth images;

Fig. 2 shows a flowchart of an exemplary process of a method 200 for optimizing a depth image according to an embodiment of the present invention;

Fig. 3 shows a schematic diagram of optimizing camera poses using Bundler;

Fig. 4 shows a schematic diagram of projecting a three-dimensional model into a two-dimensional coordinate space to generate projected depth images;

Fig. 5 shows a flowchart of iteratively optimizing a depth image;

Fig. 6 is a block diagram showing an exemplary configuration of an apparatus 600 for optimizing a depth image according to another embodiment of the present invention; and

Fig. 7 is an exemplary block diagram showing a computing device that can be used to implement the method and apparatus for optimizing a depth image of the present invention.
Detailed description of embodiments

Exemplary embodiments of the present invention are described below in conjunction with the accompanying drawings. For the sake of clarity and conciseness, not all features of an actual implementation are described in this specification. It should be understood, however, that in developing any such actual embodiment, many implementation-specific decisions must be made in order to achieve the developer's specific goals, for example, compliance with system-related and business-related constraints, and that these constraints may vary from one implementation to another. Moreover, it should be understood that, although such development work might be complex and time-consuming, it is merely a routine task for those skilled in the art having the benefit of this disclosure.

It should also be noted here that, in order to avoid obscuring the present invention with unnecessary details, only the device structures and/or processing steps closely related to the solution according to the present invention are illustrated in the drawings, while other details of little relevance to the present invention are omitted.
The present invention proposes a method and apparatus for optimizing a depth image using three-dimensional global information based on registered color images.

Traditional depth image optimization algorithms that use local two-dimensional information still cannot meet the demands of current three-dimensional modeling and reconstruction algorithms. The method for optimizing a depth image according to the present invention makes full use of the information of both the depth images and the color images when estimating the camera poses. By first constructing a 3D model and then computing projected depth images from it, three-dimensional global information is obtained, so that more accurate optimized depth images can be produced.

The method and apparatus for optimizing a depth image according to embodiments of the present invention are described in detail below with reference to the accompanying drawings. The discussion proceeds in the following order:

1. Method for optimizing a depth image
2. Apparatus for optimizing a depth image
3. Computing device for implementing the method and apparatus of the present application
[1. Method for optimizing a depth image]

Fig. 2 shows a flowchart of an exemplary process of the method 200 for optimizing a depth image according to an embodiment of the present invention. The process of the method 200 is described below in conjunction with Fig. 2.
First, in step S202, a plurality of color images of a scene and the corresponding depth images are input.

Then, in step S204, camera poses are estimated based on the input depth images as initial camera poses.

In one example, the camera poses are estimated based on the input depth images by using KinFu. KinFu, whose full name is KinectFusion, is commonly used to build 3D models of objects; given the depth maps of an object as input, it can output both the camera pose of each depth map and a 3D model of the object.

Then, in step S206, the initial camera poses are optimized based on the plurality of color images to obtain the optimized camera poses.

In one example, the initial camera poses are optimized by using Bundler. Fig. 3 shows a schematic diagram of optimizing camera poses using Bundler. The Bundler algorithm is commonly used to estimate the camera poses of images; given color images as input, it can compute the camera pose corresponding to each color image.

Those skilled in the art will understand that the three-dimensional reconstruction methods are not limited to the above KinFu and Bundler; any similar method can implement steps S204 and S206.
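A KinectFusion-style pose estimator first back-projects each incoming depth map into a camera-space point cloud and then aligns it against the accumulated model (typically with ICP). The sketch below shows only the back-projection half of that, under an assumed pinhole model with intrinsics fx, fy, cx, cy and the convention that a depth of 0 marks a "black hole" pixel; it is an illustration, not the KinFu implementation itself.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (0 = no measurement) into an
    N x 3 camera-space point cloud via the pinhole model."""
    h, w = depth.shape
    # u indexes columns, v indexes rows (meshgrid default 'xy' indexing)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop "black hole" pixels
```

A pose estimator would then register consecutive clouds produced this way to obtain the initial camera poses of step S204.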
Then, in step S208, a three-dimensional model is constructed based on the plurality of color images and the obtained optimized camera poses.

In one example, the three-dimensional model is constructed using a multi-view stereo method, for example the PMVS (patch-based multi-view stereo) algorithm.

In step S210, the constructed three-dimensional model is projected into a two-dimensional coordinate space based on the optimized camera poses to generate the projected depth image corresponding to each color image.

Specifically, after the 3D model has been obtained, a perspective projection can be performed, using each optimized camera pose as the viewpoint, to obtain a new projected depth image in the two-dimensional space.

Fig. 4 shows a schematic diagram of projecting the three-dimensional model into a two-dimensional coordinate space to generate the projected depth images. In Fig. 4, the projected depth images can be generated based on the camera poses and the three-dimensional model. Cam#1 denotes the first camera pose, and so on; details are not repeated here.

Finally, in step S212, the generated projected depth images and the corresponding input depth images are fused into the optimized depth images.

That is, the projected depth images are used to optimize the input depth images.
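The perspective projection of step S210 can be sketched as a minimal z-buffer splat: world-space model points are transformed by the optimized camera pose (here assumed to be a 4x4 world-to-camera matrix, a convention the patent does not spell out) and projected through pinhole intrinsics, keeping the nearest depth per pixel. This is an illustrative reading, not the patent's exact procedure.

```python
import numpy as np

def project_to_depth(points, pose_w2c, fx, fy, cx, cy, h, w):
    """Render an N x 3 world-space point set into an h x w depth
    image (0 where no point projects) using a z-buffer."""
    ones = np.ones((points.shape[0], 1))
    cam = (pose_w2c @ np.hstack([points, ones]).T).T[:, :3]
    cam = cam[cam[:, 2] > 0]                     # keep points in front of camera
    u = np.round(cam[:, 0] * fx / cam[:, 2] + cx).astype(int)
    v = np.round(cam[:, 1] * fy / cam[:, 2] + cy).astype(int)
    depth = np.zeros((h, w))
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], cam[inside, 2]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:  # z-buffer: keep nearest
            depth[vi, ui] = zi
    return depth
```

Running this once per optimized camera pose yields one projected depth image per color image, as in Fig. 4.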
Preferably, the depth image can be optimized iteratively. When the difference between the optimized depth image obtained in step S212 and the input depth image is greater than a predetermined threshold, the optimized depth image can be used as the new input depth image, and the above steps S204, S206, S208, S210 and S212 are executed iteratively.

Fig. 5 shows a flowchart of iteratively optimizing the depth image. In Fig. 5, the bold rectangular boxes denote actions and the parallelogram boxes denote data items. The input data consist of the color images and the depth images.

In the iteration shown in Fig. 5, the depth image obtained after each optimization pass is used as the new input depth image data while the color images remain unchanged, and the steps of estimating the camera poses, optimizing the camera poses, constructing the three-dimensional model, projecting the depth images and optimizing the depth images are executed iteratively.

The termination condition of the iteration is that the difference between the depth images obtained in two successive iterations is less than the predetermined threshold.

Through the process shown in Fig. 5, the depth image can be optimized iteratively.
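The iteration of Fig. 5 can be outlined as a loop over the pipeline stages. In the sketch below, `estimate_poses`, `refine_poses`, `build_model`, `render_depths` and `fuse` are hypothetical placeholder names standing in for the KinFu, Bundler, model-construction, projection and fusion stages, and the mean absolute difference between successive depth images is only one possible choice of convergence measure; none of these specifics come from the patent.

```python
import numpy as np

def optimize_depths(colors, depths, threshold, stages, max_iters=10):
    """Iterate the pipeline of Fig. 5 until successive depth images
    differ by less than `threshold` (mean absolute difference)."""
    current = [d.astype(np.float64) for d in depths]
    for _ in range(max_iters):
        poses = stages["refine_poses"](colors, stages["estimate_poses"](current))
        model = stages["build_model"](colors, poses)
        projected = stages["render_depths"](model, poses)
        fused = [stages["fuse"](d, p) for d, p in zip(current, projected)]
        diff = np.mean([np.abs(f - c).mean() for f, c in zip(fused, current)])
        current = fused                      # optimized depths become the new input
        if diff < threshold:                 # termination condition of Fig. 5
            break
    return current
```

The color images are passed through unchanged on every pass, matching the description above.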
To obtain the optimized depth image, in one example, the weighted sum of the squared differences between the optimized depth image and, respectively, the input depth image and the projected depth image is used as the objective function; this objective function is evaluated over each pixel of the image, and the optimized depth image is obtained by minimizing it.

Specifically, the following objective function based on a least-squares constraint is proposed:

E(D*) = Σ_j [ α_j · (D*_j − D^in_j)² + (D*_j − D^proj_j)² ]

where j denotes the j-th pixel, D*_j denotes the optimized depth image, D^in_j is the input depth image, D^proj_j denotes the projected depth image, and α_j denotes the weight parameter of the j-th pixel of the input depth image.

α_j can be calculated as the ratio of the numbers of pixels having valid depth values in the neighborhoods of the j-th pixel of the input depth image and of the projected depth image. The optimized depth image is obtained by minimizing this constraint function E(D*).

If the gap between the optimized depth image and the input depth image is greater than a given threshold, the optimized depth image can be used as the new input depth image and the iterative computation continued, so that the depth image is optimized iteratively.
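Under one reading of the weighted least-squares constraint described above, the objective decouples per pixel: setting the derivative of α_j·(d − d_in)² + (d − d_proj)² to zero gives the closed-form minimizer d = (α_j·d_in + d_proj) / (α_j + 1). The NumPy sketch below assumes this reading, computes α_j from counts of valid (non-zero) depths in a small window, and keeps the single available measurement where only one source is valid; the window radius and the zero-as-invalid convention are assumptions, not taken from the patent.

```python
import numpy as np

def valid_neighbor_count(depth, radius=2):
    """Count pixels with a valid (non-zero) depth in a (2r+1)^2 window."""
    valid = (depth > 0).astype(np.float64)
    padded = np.pad(valid, radius, mode="constant")
    count = np.zeros_like(valid)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            count += padded[dy:dy + valid.shape[0], dx:dx + valid.shape[1]]
    return count

def fuse_depth(d_in, d_proj, radius=2, eps=1e-6):
    """Fuse input and projected depth by minimizing, per pixel,
    alpha_j*(d - d_in)^2 + (d - d_proj)^2."""
    n_in = valid_neighbor_count(d_in, radius)
    n_proj = valid_neighbor_count(d_proj, radius)
    alpha = n_in / (n_proj + eps)               # weight of the input term
    fused = (alpha * d_in + d_proj) / (alpha + 1.0)
    # Where only one source has a measurement, keep that measurement.
    fused = np.where((d_in > 0) & (d_proj <= 0), d_in, fused)
    fused = np.where((d_proj > 0) & (d_in <= 0), d_proj, fused)
    return fused
```

On a "black hole" pixel (d_in = 0) the projected depth fills the hole directly, which is the behavior the method is after.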
[2. Apparatus for optimizing a depth image]

Fig. 6 is a block diagram showing an exemplary configuration of an apparatus 600 for optimizing a depth image according to another embodiment of the present invention.

As shown in Fig. 6, the apparatus 600 for optimizing a depth image includes an input unit 602, a camera pose estimation unit 604, a camera pose optimization unit 606, a three-dimensional model construction unit 608, a projection unit 610 and a depth image optimization unit 612.

The input unit 602 is configured to input a plurality of color images of a scene and the corresponding depth images.

The camera pose estimation unit 604 is configured to estimate camera poses based on the input depth images as initial camera poses.

The camera pose optimization unit 606 is configured to optimize the initial camera poses based on the plurality of color images to obtain the optimized camera poses.

The three-dimensional model construction unit 608 is configured to construct a three-dimensional model based on the plurality of color images and the obtained optimized camera poses.

The projection unit 610 is configured to project the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera poses to generate the projected depth image corresponding to each color image.

The depth image optimization unit 612 is configured to fuse the generated projected depth images with the corresponding input depth images into the optimized depth images.

In one example, the apparatus 600 for optimizing a depth image further includes an iteration unit (not shown), configured to: when the difference between the optimized depth image obtained by the depth image optimization unit and the corresponding input depth image is greater than a predetermined threshold, use the optimized depth image as the new input depth image to iteratively execute the processing of the camera pose estimation unit, the camera pose optimization unit, the three-dimensional model construction unit, the projection unit and the depth image optimization unit.
The depth image optimization unit 612 is further configured to: use as the objective function the weighted sum of the squared differences between the optimized depth image and, respectively, the input depth image and the projected depth image, evaluate the objective function over each pixel of the image, and obtain the optimized depth image by minimizing the objective function.

Specifically, the objective function is:

E(D*) = Σ_j [ α_j · (D*_j − D^in_j)² + (D*_j − D^proj_j)² ]

where D*_j denotes the optimized depth image, D^in_j denotes the input depth image, D^proj_j denotes the projected depth image, j denotes the j-th pixel, and α_j is the weight parameter of the j-th pixel.

The weight parameter is calculated according to the ratio of the numbers of pixels having valid depth values around the j-th pixel of the input depth image and of the projected depth image.
The camera pose estimation unit 604 is further configured to estimate the camera poses based on the input depth images by using KinFu.

The camera pose optimization unit 606 is further configured to optimize the initial camera poses by using Bundler.

The three-dimensional model construction unit 608 is further configured to construct the three-dimensional model using PMVS.

For details of the operations and functions of the various parts of the apparatus 600 for optimizing a depth image, reference may be made to the embodiments of the method for optimizing a depth image of the present invention described in conjunction with Figs. 1-5, which are not described in detail here.

It should be noted that the structure of the apparatus 600 for optimizing a depth image and of its constituent units shown in Fig. 6 is merely exemplary, and those skilled in the art can modify the structural block diagram shown in Fig. 6 as needed.

The method and apparatus for optimizing a depth image according to the present invention make full use of the information of both the depth images and the color images when estimating the camera poses. By first constructing a 3D model and then computing the projected depth images, three-dimensional global information is obtained, so that more accurate optimized depth images can be produced.
[3. Computing device for implementing the method and apparatus of the present application]

The basic principles of the present invention have been described above in conjunction with specific embodiments. However, it should be pointed out that, as will be understood by those of ordinary skill in the art, all or any of the steps or components of the method and apparatus of the present invention can be implemented in hardware, firmware, software or a combination thereof, in any computing device (including processors, storage media, etc.) or network of computing devices; this can be accomplished by those of ordinary skill in the art using their basic programming skills after having read the description of the present invention.

Therefore, the object of the present invention can also be achieved by running a program or a set of programs on any computing device. The computing device may be a well-known general-purpose device. Hence, the object of the present invention can also be achieved merely by providing a program product containing program code implementing the method or apparatus. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention. Obviously, the storage medium can be any well-known storage medium or any storage medium developed in the future.

In the case where the embodiments of the present invention are implemented by software and/or firmware, a program constituting the software is installed from a storage medium or a network onto a computer having a dedicated hardware structure, for example the general-purpose computer 700 shown in Fig. 7, which, when installed with various programs, is capable of performing various functions and the like.

In Fig. 7, a central processing unit (CPU) 701 executes various processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. Data required when the CPU 701 executes various processing and the like are also stored in the RAM 703 as needed. The CPU 701, the ROM 702 and the RAM 703 are linked to each other via a bus 704. An input/output interface 705 is also linked to the bus 704.

The following components are linked to the input/output interface 705: an input section 706 (including a keyboard, a mouse, etc.), an output section 707 (including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a loudspeaker, etc.), a storage section 708 (including a hard disk, etc.), and a communication section 709 (including a network interface card such as a LAN card, a modem, etc.). The communication section 709 performs communication processing via a network such as the Internet. A drive 710 can also be linked to the input/output interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage section 708 as needed.

In the case where the above-described series of processing is implemented by software, a program constituting the software is installed from a network such as the Internet or from a storage medium such as the removable medium 711.

Those skilled in the art will understand that such a storage medium is not limited to the removable medium 711 shown in Fig. 7 in which the program is stored and which is distributed separately from the device to provide the program to the user. Examples of the removable medium 711 include a magnetic disk (including a floppy disk (registered trademark)), an optical disc (including a compact disc read-only memory (CD-ROM) and a digital versatile disc (DVD)), a magneto-optical disc (including a MiniDisc (MD) (registered trademark)) and a semiconductor memory. Alternatively, the storage medium may be the ROM 702, a hard disk included in the storage section 708 or the like, in which the program is stored and which is distributed to the user together with the device containing it.

The present invention also proposes a program product storing machine-readable instruction codes. When the instruction codes are read and executed by a machine, the above-described method according to the embodiments of the present invention can be performed.

Correspondingly, the storage medium carrying the above-described program product storing the machine-readable instruction codes is also included in the disclosure of the present invention. The storage medium includes, but is not limited to, a floppy disk, an optical disc, a magneto-optical disc, a memory card, a memory stick and the like.
It should be appreciated by those skilled in the art that what is enumerated here is exemplary, and the present invention is not limited thereto.

In this specification, expressions such as "first", "second" and "n-th" are used to distinguish the described features literally, so as to describe the present invention clearly. Therefore, they should not be regarded as having any limiting meaning.

As an example, each step of the above method and all the modules and/or units of the above device can be implemented as software, firmware, hardware or a combination thereof, as part of the corresponding device. The specific means or manners that can be used when the constituent modules and units of the above apparatus are configured by software, firmware, hardware or a combination thereof are well known to those skilled in the art and are not described in detail here.

As an example, in the case of implementation by software or firmware, a program constituting the software can be installed from a storage medium or a network onto a computer having a dedicated hardware structure (for example the general-purpose computer 700 shown in Fig. 7), which, when installed with various programs, is capable of performing various functions and the like.

In the above description of specific embodiments of the present invention, features described and/or shown for one embodiment can be used in one or more other embodiments in the same or a similar manner, can be combined with features in other embodiments, or can replace features in other embodiments.

It should be emphasized that the term "comprises/comprising", as used herein, refers to the presence of a feature, an element, a step or a component, but does not exclude the presence or addition of one or more other features, elements, steps or components.

In addition, the method of the present invention is not limited to being executed in the time sequence described in the specification; it can also be executed in another time sequence, in parallel or independently. Therefore, the execution order of the method described in this specification does not limit the technical scope of the present invention.

It should be understood that various changes, replacements and transformations can be made to the present invention and its advantages without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present invention is not limited to the specific embodiments of the processes, devices, means, methods and steps described in the specification. From the disclosure of the present invention, one of ordinary skill in the art will readily understand that, according to the present invention, existing or to-be-developed processes, devices, means, methods or steps that perform substantially the same functions as, or achieve substantially the same results as, the corresponding embodiments herein can be used. Accordingly, the appended claims are intended to include such processes, devices, means, methods or steps within their scope.
Based on the above description, it can be seen that at least the following technical solutions are disclosed:
Note 1. A method for optimizing a depth image, comprising:
an input step of inputting a plurality of color images of a scene and corresponding depth images;
a camera pose estimation step of estimating a camera pose based on the input depth images as an initial camera pose;
a camera pose optimization step of optimizing the initial camera pose based on the plurality of color images to obtain an optimized camera pose;
a three-dimensional model construction step of constructing a three-dimensional model based on the plurality of color images and the obtained optimized camera pose;
a projection step of projecting the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera pose to generate a projection depth image corresponding to each color image; and
a depth image optimization step of merging the generated projection depth image with the corresponding input depth image to obtain an optimized depth image.
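The sequence of steps in Note 1 can be sketched as a small driver function. The five callables below are placeholders for concrete techniques (e.g. KINFU-style pose estimation, Bundler-style refinement, PMVS-style reconstruction, z-buffer projection, and the merge of Note 3); they are assumptions of this sketch, not part of the claimed method itself:

```python
import numpy as np

def depth_optimization_pipeline(color_images, depth_images,
                                estimate_pose, refine_pose,
                                build_model, project_model, merge):
    """Driver for the claimed steps; the callables are placeholders."""
    # Camera pose estimation step: initial poses from the input depth images.
    initial_poses = estimate_pose(depth_images)
    # Camera pose optimization step: refine the poses using the color images.
    poses = refine_pose(initial_poses, color_images)
    # Three-dimensional model construction step.
    model = build_model(color_images, poses)
    optimized = []
    for depth, pose in zip(depth_images, poses):
        # Projection step: render the model into each camera's view.
        projected = project_model(model, pose)
        # Depth image optimization step: merge projected and input depth.
        optimized.append(merge(projected, depth))
    return optimized
```

Any concrete system would substitute real implementations for the placeholders; the sketch only fixes the data flow between the steps.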
Note 2. The method according to Note 1, further comprising:
when the difference between the optimized depth image obtained in the depth image optimization step and the corresponding input depth image is greater than a predetermined threshold, iteratively performing the camera pose estimation step, the camera pose optimization step, the three-dimensional model construction step, the projection step and the depth image optimization step, using the optimized depth image as a new input depth image.
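The feedback loop of Note 2 can be expressed as re-running the whole pipeline with the optimized depth as the new input while the change stays above the threshold. In this sketch, `run_pipeline` stands in for the five steps of Note 1, and the mean-absolute-difference criterion and the iteration cap are assumptions, not fixed by the note:

```python
import numpy as np

def iterate_depth_optimization(depth, run_pipeline,
                               threshold=1e-3, max_iter=10):
    """Feed the optimized depth image back in as a new input depth image
    until the change falls below the predetermined threshold."""
    for _ in range(max_iter):
        new_depth = run_pipeline(depth)
        # Stop once the optimized and input depth images differ little enough.
        if np.mean(np.abs(new_depth - depth)) <= threshold:
            return new_depth
        depth = new_depth
    return depth
```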
Note 3. The method according to Note 1 or 2, wherein the depth image optimization step comprises:
taking, as an optimization function, the weighted sum of the squared differences between the optimized depth image and, respectively, the input depth image and the projection depth image; evaluating the optimization function for each pixel of the image; and obtaining the optimized depth image by minimizing the optimization function.
Note 4. The method according to Note 3, wherein the optimization function is:
E(D) = Σ_j [ α_j (D_j − P_j)² + (1 − α_j) (D_j − I_j)² ]
where P_j denotes the projection depth image, I_j denotes the input depth image, D_j denotes the optimized depth image, j denotes the j-th pixel, and α_j is the weight parameter of the j-th pixel.
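Assuming the optimization function weights the projected term by α_j and the input term by 1 − α_j, each pixel's term is an independent quadratic in D_j, so setting its derivative to zero gives the closed-form minimizer D_j = α_j P_j + (1 − α_j) I_j, a per-pixel convex combination of the two depth images. A minimal sketch under that assumption:

```python
import numpy as np

def merge_depth(input_depth, projected_depth, alpha):
    # Minimizer of a*(d - p)^2 + (1 - a)*(d - i)^2 per pixel:
    # 2*a*(d - p) + 2*(1 - a)*(d - i) = 0  =>  d = a*p + (1 - a)*i.
    return alpha * projected_depth + (1.0 - alpha) * input_depth

i = np.array([[1.0, 2.0], [3.0, 4.0]])   # input depth image
p = np.array([[1.2, 2.2], [2.8, 4.0]])   # projection depth image
d = merge_depth(i, p, np.full((2, 2), 0.5))
```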
Note 5. The method according to Note 4, wherein the weight parameter is calculated according to the ratio of the numbers of pixels having depth values around the j-th pixel in the input depth image and the projection depth image.
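Note 5 can be read as: around each pixel, count the neighbors carrying a valid depth value in the projection depth image versus the input depth image, and derive the weight from that ratio. A sketch under stated assumptions (the 5×5 window, zero as the "missing depth" marker, and α_j as the projected image's share of valid neighbors are all choices of this sketch, not fixed by the note):

```python
import numpy as np

def weight_map(input_depth, projected_depth, radius=2):
    """Per-pixel weight from the ratio of valid-depth neighbor counts."""
    h, w = input_depth.shape
    # Pad validity masks so the window never runs off the image.
    valid_in = np.pad((input_depth > 0).astype(float), radius)
    valid_pr = np.pad((projected_depth > 0).astype(float), radius)
    alpha = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Count neighbors with depth values in each image's window.
            n_in = valid_in[y:y + 2 * radius + 1, x:x + 2 * radius + 1].sum()
            n_pr = valid_pr[y:y + 2 * radius + 1, x:x + 2 * radius + 1].sum()
            total = n_in + n_pr
            alpha[y, x] = n_pr / total if total > 0 else 0.5
    return alpha
```

With this choice, regions where the projection is dense get pulled toward the projected depth, while holes in the projection fall back to the input depth.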
Note 6. The method according to Note 1 or 2, wherein the camera pose is estimated based on the input depth images by using KINFU.
Note 7. The method according to Note 1 or 2, wherein the initial camera pose is optimized by using Bundler.
Note 8. The method according to Note 1 or 2, wherein the three-dimensional model is constructed using PMVS.
Note 9. A device for optimizing a depth image, comprising:
an input unit configured to input a plurality of color images of a scene and corresponding depth images;
a camera pose estimation unit configured to estimate a camera pose based on the input depth images as an initial camera pose;
a camera pose optimization unit configured to optimize the initial camera pose based on the plurality of color images to obtain an optimized camera pose;
a three-dimensional model construction unit configured to construct a three-dimensional model based on the plurality of color images and the obtained optimized camera pose;
a projection unit configured to project the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera pose to generate a projection depth image corresponding to each color image; and
a depth image optimization unit configured to merge the generated projection depth image with the corresponding input depth image to obtain an optimized depth image.
Note 10. The device according to Note 9, further comprising an iteration unit configured to:
when the difference between the optimized depth image obtained by the depth image optimization unit and the corresponding input depth image is greater than a predetermined threshold, iteratively perform the processing of the camera pose estimation unit, the camera pose optimization unit, the three-dimensional model construction unit, the projection unit and the depth image optimization unit, using the optimized depth image as a new input depth image.
Note 11. The device according to Note 9 or 10, wherein the depth image optimization unit is further configured to:
take, as an optimization function, the weighted sum of the squared differences between the optimized depth image and, respectively, the input depth image and the projection depth image; evaluate the optimization function for each pixel of the image; and obtain the optimized depth image by minimizing the optimization function.
Note 12. The device according to Note 11, wherein the optimization function is:
E(D) = Σ_j [ α_j (D_j − P_j)² + (1 − α_j) (D_j − I_j)² ]
where P_j denotes the projection depth image, I_j denotes the input depth image, D_j denotes the optimized depth image, j denotes the j-th pixel, and α_j is the weight parameter of the j-th pixel.
Note 13. The device according to Note 12, wherein the weight parameter is calculated according to the ratio of the numbers of pixels having depth values around the j-th pixel in the input depth image and the projection depth image.
Note 14. The device according to Note 9 or 10, wherein the camera pose estimation unit is further configured to estimate the camera pose based on the input depth images by using KINFU.
Note 15. The device according to Note 9 or 10, wherein the camera pose optimization unit is further configured to optimize the initial camera pose by using Bundler.
Note 16. The device according to Note 9 or 10, wherein the three-dimensional model construction unit is further configured to construct the three-dimensional model using PMVS.
Note 17. A non-transitory computer-readable storage medium storing a program which, when executed by a computer, causes the computer to perform the method according to any one of Notes 1 to 8.
Claims (10)
1. A method for optimizing a depth image, comprising:
an input step of inputting a plurality of color images of a scene and corresponding depth images;
a camera pose estimation step of estimating a camera pose based on the input depth images as an initial camera pose;
a camera pose optimization step of optimizing the initial camera pose based on the plurality of color images to obtain an optimized camera pose;
a three-dimensional model construction step of constructing a three-dimensional model based on the plurality of color images and the obtained optimized camera pose;
a projection step of projecting the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera pose to generate a projection depth image corresponding to each color image; and
a depth image optimization step of merging the generated projection depth image with the corresponding input depth image to obtain an optimized depth image.
2. The method according to claim 1, further comprising:
when the difference between the optimized depth image obtained in the depth image optimization step and the corresponding input depth image is greater than a predetermined threshold, iteratively performing the camera pose estimation step, the camera pose optimization step, the three-dimensional model construction step, the projection step and the depth image optimization step, using the optimized depth image as a new input depth image.
3. The method according to claim 1 or 2, wherein the depth image optimization step comprises:
taking, as an optimization function, the weighted sum of the squared differences between the optimized depth image and, respectively, the input depth image and the projection depth image; evaluating the optimization function for each pixel of the image; and obtaining the optimized depth image by minimizing the optimization function.
4. The method according to claim 3, wherein the optimization function is:
E(D) = Σ_j [ α_j (D_j − P_j)² + (1 − α_j) (D_j − I_j)² ]
where P_j denotes the projection depth image, I_j denotes the input depth image, D_j denotes the optimized depth image, j denotes the j-th pixel, and α_j is the weight parameter of the j-th pixel.
5. The method according to claim 4, wherein the weight parameter is calculated according to the ratio of the numbers of pixels having depth values around the j-th pixel in the input depth image and the projection depth image.
6. The method according to claim 1 or 2, wherein the camera pose is estimated based on the input depth images by using KINFU.
7. The method according to claim 1 or 2, wherein the initial camera pose is optimized by using Bundler.
8. The method according to claim 1 or 2, wherein the three-dimensional model is constructed using PMVS.
9. A device for optimizing a depth image, comprising:
an input unit configured to input a plurality of color images of a scene and corresponding depth images;
a camera pose estimation unit configured to estimate a camera pose based on the input depth images as an initial camera pose;
a camera pose optimization unit configured to optimize the initial camera pose based on the plurality of color images to obtain an optimized camera pose;
a three-dimensional model construction unit configured to construct a three-dimensional model based on the plurality of color images and the obtained optimized camera pose;
a projection unit configured to project the constructed three-dimensional model into a two-dimensional coordinate space based on the optimized camera pose to generate a projection depth image corresponding to each color image; and
a depth image optimization unit configured to merge the generated projection depth image with the corresponding input depth image to obtain an optimized depth image.
10. A non-transitory computer-readable storage medium storing a program which, when executed by a computer, causes the computer to perform the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710883474.9A CN109559271B (en) | 2017-09-26 | 2017-09-26 | Method and device for optimizing depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109559271A true CN109559271A (en) | 2019-04-02 |
CN109559271B CN109559271B (en) | 2023-02-28 |
Family
ID=65862968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710883474.9A Active CN109559271B (en) | 2017-09-26 | 2017-09-26 | Method and device for optimizing depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109559271B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010043738A1 (en) * | 2000-03-07 | 2001-11-22 | Sawhney Harpreet Singh | Method of pose estimation and model refinement for video representation of a three dimensional scene |
US20120306876A1 (en) * | 2011-06-06 | 2012-12-06 | Microsoft Corporation | Generating computer models of 3d objects |
CN103150544A (en) * | 2011-08-30 | 2013-06-12 | 精工爱普生株式会社 | Method and apparatus for object pose estimation |
CN103914874A (en) * | 2014-04-08 | 2014-07-09 | 中山大学 | Compact SFM three-dimensional reconstruction method without feature extraction |
US20150109416A1 (en) * | 2013-10-23 | 2015-04-23 | Google Inc. | Depth map generation |
CN105164726A (en) * | 2013-01-24 | 2015-12-16 | 微软技术许可有限责任公司 | Camera pose estimation for 3d reconstruction |
US20160171703A1 (en) * | 2013-07-09 | 2016-06-16 | Samsung Electronics Co., Ltd. | Camera pose estimation apparatus and method |
WO2017007166A1 (en) * | 2015-07-08 | 2017-01-12 | 고려대학교 산학협력단 | Projected image generation method and device, and method for mapping image pixels and depth values |
US20170046868A1 (en) * | 2015-08-14 | 2017-02-16 | Samsung Electronics Co., Ltd. | Method and apparatus for constructing three dimensional model of object |
US20170091996A1 (en) * | 2015-09-25 | 2017-03-30 | Magic Leap, Inc. | Methods and Systems for Detecting and Combining Structural Features in 3D Reconstruction |
CN106934827A (en) * | 2015-12-31 | 2017-07-07 | 杭州华为数字技术有限公司 | The method for reconstructing and device of three-dimensional scenic |
US20170200317A1 (en) * | 2016-01-12 | 2017-07-13 | Siemens Healthcare Gmbh | Perspective representation of a virtual scene component |
Non-Patent Citations (2)
Title |
---|
LIU Xingming et al.: "Research on three-dimensional reconstruction technology based on computer vision", Journal of Shenzhen Institute of Information Technology *
LI Xinxin et al.: "Three-dimensional face modeling and validation of its effectiveness in cross-pose face matching", Journal of Computer Applications *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113875219A (en) * | 2019-08-27 | 2021-12-31 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN113875219B (en) * | 2019-08-27 | 2023-08-15 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109559271B (en) | 2023-02-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10803546B2 (en) | Systems and methods for unsupervised learning of geometry from images using depth-normal consistency | |
US8655052B2 (en) | Methodology for 3D scene reconstruction from 2D image sequences | |
US20190251401A1 (en) | Image composites using a generative adversarial neural network | |
CN109859296A | Training method, server and storage medium for an SMPL parameter prediction model | |
Popa et al. | Globally consistent space‐time reconstruction | |
CN104616286B | Fast semi-automatic multi-view depth restoration method | |
CN111340867A (en) | Depth estimation method and device for image frame, electronic equipment and storage medium | |
EP3367334B1 (en) | Depth estimation method and depth estimation apparatus of multi-view images | |
US20030206652A1 (en) | Depth map creation through hypothesis blending in a bayesian framework | |
US10025754B2 (en) | Linear FE system solver with dynamic multi-grip precision | |
Adato et al. | A polar representation of motion and implications for optical flow | |
CN113140034A (en) | Room layout-based panoramic new view generation method, device, equipment and medium | |
CN111868738A (en) | Cross-equipment monitoring computer vision system | |
CN115797561A (en) | Three-dimensional reconstruction method, device and readable storage medium | |
CN116797768A (en) | Method and device for reducing reality of panoramic image | |
CN107680073A | Method and apparatus for geometric reconstruction of an object | |
CN111127649A (en) | Method and device for constructing three-dimensional block model and server | |
Sun et al. | Sequential fusion of multi-view video frames for 3D scene generation | |
Li et al. | Topology-change-aware volumetric fusion for dynamic scene reconstruction | |
CN104183002A (en) | Three-dimensional model change method and device | |
CN109559271A | Method and apparatus for optimizing a depth image | |
Gois et al. | Generalized hermitian radial basis functions implicits from polygonal mesh constraints | |
CN115349140A (en) | Efficient positioning based on multiple feature types | |
CN115375847B (en) | Material recovery method, three-dimensional model generation method and model training method | |
CN116863078A (en) | Three-dimensional human body model reconstruction method, three-dimensional human body model reconstruction device, electronic equipment and readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||