CN102724528A - Depth map generation device - Google Patents
- Publication number
- CN102724528A CN102724528A CN2012101577017A CN201210157701A CN102724528A CN 102724528 A CN102724528 A CN 102724528A CN 2012101577017 A CN2012101577017 A CN 2012101577017A CN 201210157701 A CN201210157701 A CN 201210157701A CN 102724528 A CN102724528 A CN 102724528A
- Authority
- CN
- China
- Prior art keywords
- depth
- data
- gray
- neighbor
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
- Image Generation (AREA)
Abstract
The invention provides a depth map generation device comprising: an input module for setting a video synchronization signal, user-variable input parameters, and multi-channel image data; a grayscale conversion module for computing the grayscale data of each pixel; a minimum depth path module for generating a minimum depth path from the grayscale data; a depth refining module for obtaining refined depth data and smoothed depth data from the minimum depth path; a user-variable setting module for updating the smoothed depth data according to the user-variable input parameters to generate final depth data; a synchronization signal processing module for performing output synchronization processing on the video synchronization signal; and an output module for setting output parameters, where the output parameters include the final depth data of each pixel and the updated video synchronization signal. The device according to embodiments of the invention computes quickly and can perform direct real-time conversion.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a depth map generation device.
Background technology
Stereoscopic video offers a lifelike, immersive visual effect and has applications in many fields such as advertising, film and television, gaming, and sports broadcasting; it has particularly broad application prospects in stereoscopic television. In the production of stereoscopic video content, planar (2D) video usually needs to be converted into stereoscopic (3D) video.
Usually a depth map is generated first, and stereoscopic video images for the required viewpoints are then generated from the depth map. In the prior art there are three main approaches to depth map generation: manual drawing, semi-automatic generation, and fully automatic generation. Manual drawing relies on human operators to segment the image into regions and then assign depth values to the different regions; this method depends heavily on manual participation, and its labor cost and time consumption are high. Semi-automatic generation combines manual participation with computer assistance to draw the depth map, currently mainly through machine-learning-based and image-segmentation-based algorithms; these methods have high computational complexity, poor real-time performance, and still relatively high manual assistance costs. Fully automatic generation relies mainly on computer calculation, generating the depth map through image understanding; achieving real-time computation places high demands on the hardware configuration, gives poor device portability, and complicates the interface to household devices such as televisions or set-top boxes.
Summary of the invention
The present invention aims to solve at least one of the technical problems described above. To this end, an object of the invention is to propose a depth map generation device that computes quickly and is capable of direct real-time conversion.
To achieve these goals, the depth map generation device according to the present invention comprises: an input module for setting input parameters, wherein said input parameters comprise a video synchronization signal, user-variable input parameters, and multi-channel image data; a grayscale conversion module for computing the grayscale data of each pixel from said multi-channel image data; a minimum depth path module for generating the minimum depth path of said each pixel from said grayscale data of said each pixel; a depth refining module for refining said minimum depth path value of said each pixel to obtain the refined depth data of said each pixel and generate the smoothed depth data of said each pixel; a user-variable setting module for updating said smoothed depth data of said each pixel according to said user-variable input parameters to generate the final depth data of said each pixel; a synchronization signal processing module for performing output synchronization processing on said video synchronization signal at various resolutions to obtain an updated video synchronization signal; and an output module for setting output parameters, wherein said output parameters comprise said final depth data of said each pixel and said updated video synchronization signal.
The depth map generation device according to embodiments of the invention has the following advantages:
1. The device is small, highly integrated, and strongly hardware-compatible; it can be deployed on various hardware platforms such as FPGAs and dedicated chips.
2. The device is a hardware-level signal processing implementation that generates depth maps in real time from a single planar video stream and can produce output according to user settings; it offers strong parallel computing capability, fast conversion, a short production cycle, and low production cost.
In one embodiment of the invention, the device further comprises on-chip memory, said on-chip memory comprising a gray-value line data cache and a depth-value line data cache.
In one embodiment of the invention, said user-variable input parameters comprise a depth adjustment signal.
In one embodiment of the invention, the input multi-channel image data are 32-bit RGB image data or 32-bit YUV image data.
In one embodiment of the invention, said grayscale conversion module obtains said grayscale data by integer-weighting the channel values when said multi-channel image data are in RGB format, and takes the luminance channel value as the grayscale data when said multi-channel image data are in YUV format.
In one embodiment of the invention, said grayscale conversion module computes the grayscale data of said each pixel according to the following formulas: when said multi-channel image data are in RGB format, gray = {[(G + R) + (4R + 2B)] + 8G} >> 4, where R, G, and B denote the red-, green-, and blue-channel components of the image data value of said multi-channel image data, >> 4 denotes a right shift by four bits, and gray denotes the grayscale data of the current pixel; and when said multi-channel image data are in YUV format, gray = Y, where Y denotes the luminance-channel component of the image data value of said multi-channel image data.
In one embodiment of the invention, said minimum depth path module is configured to: compute the gray-difference ratios between the current pixel and its neighbors in each direction; compute the depth path data between said current pixel and said neighbors in each direction from said gray-difference ratios; and compute said minimum depth path from said depth path data of said directions, wherein said neighbors in each direction comprise the neighbor directly above, the neighbor directly to the left, and the upper-right neighbor, and said directions comprise directly above, directly left, and upper right.
In one embodiment of the invention, the gray-difference ratios between the current pixel and its neighbors in each direction are computed as:
gray-top = (gray(i,j) > (gray(i,j-1) + 1)) ? (gray(i,j) - gray(i,j-1)) : d;
gray-left = (gray(i,j) > (gray(i-1,j) + 1)) ? (gray(i,j) - gray(i-1,j)) : d;
gray-top-right = (gray(i,j) > (gray(i+1,j-1) + 1)) ? (gray(i,j) - gray(i+1,j-1)) : d;
wherein gray-top is the gray-difference ratio between the current pixel and the neighbor directly above, gray(i,j) is the gray value of the current pixel, gray(i,j-1) is the gray value of the neighbor directly above, and d is the grayscale restraint value; gray-left is the gray-difference ratio between the current pixel and the neighbor directly to the left, and gray(i-1,j) is the gray value of that neighbor; gray-top-right is the gray-difference ratio between the current pixel and the upper-right neighbor, and gray(i+1,j-1) is the gray value of that neighbor.
In one embodiment of the invention, the depth path data between the current pixel and its neighbors in each direction are computed as:
Depth-top = Depth(i,j-1) + gray-top;
Depth-left = Depth(i-1,j) + gray-left;
Depth-top-right = Depth(i+1,j-1) + gray-top-right;
wherein Depth-top is the depth path data between the current pixel and the neighbor directly above, and Depth(i,j-1) is the depth value of that neighbor; Depth-left is the depth path data between the current pixel and the neighbor directly to the left, and Depth(i-1,j) is its depth value; Depth-top-right is the depth path data between the current pixel and the upper-right neighbor, and Depth(i+1,j-1) is its depth value.
In one embodiment of the invention, said minimum depth path is computed as: Depth-min = min(Depth-top, Depth-left, Depth-top-right), wherein Depth-min is said minimum depth path of the current pixel and min takes the smallest of the compared values.
In one embodiment of the invention, said depth refining module comprises: a depth data computing unit for refining said minimum depth path value to obtain said refined depth data; and a depth data judging unit for truncating the high-frequency components of said refined depth data to obtain said smoothed depth data.
In one embodiment of the invention, said refined depth data and said smoothed depth data are computed respectively as: Depth-min-final = (Depth-min >> 4) + (Depth-min >> 4) + (Depth-min >> 4); Depth-final = (Depth-min-final < t) ? Depth-min-final : 255, wherein Depth-min-final is said refined depth data of the current pixel, Depth-final is said smoothed depth data of the current pixel, and t is the smoothing threshold.
In one embodiment of the invention, said user-variable setting module comprises a depth data scaling unit for setting multiple levels of depth data according to a user input variable, updating said smoothed depth data according to user input-variable feedback, and taking the updated smoothed-depth value as said output depth data.
In one embodiment of the invention, said synchronization signal processing module comprises a frame synchronization signal computing unit, a line synchronization signal computing unit, a data valid signal computing unit, and a field synchronization signal computing unit.
The user-adjustable multi-resolution depth map generation device proposed by the present invention can directly convert planar video in real time at the hardware signal level to generate the corresponding depth map. It is compatible with multiple resolution formats of up to 2047 pixels per row, including picture formats such as 1920x1080, 1024x768, 720x480, and 1440x900, and can compute depth data according to user-set feedback. The device is small, strongly compatible in its hardware implementation, and low cost; it meets the demand for real-time stereoscopic content production on a variety of hardware platforms and enables efficient conversion of planar video into stereoscopic video.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from it, or will be learned through practice of the invention.
Description of drawings
The above and additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of a depth map generation device according to an embodiment of the invention; and
Fig. 2 is a schematic diagram of depth path computation according to an embodiment of the invention.
Embodiment
Embodiments of the invention are described in detail below, with examples shown in the drawings, where identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, intended only to explain the invention, and are not to be construed as limiting it. On the contrary, the embodiments of the invention include all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
A depth map generation device according to an embodiment of the invention is described below with reference to the drawings.
Fig. 1 is a block diagram of a depth map generation device according to an embodiment of the invention. As shown in Fig. 1, the device comprises: an input module 100, on-chip memory 200, a grayscale conversion module 300, a minimum depth path module 400, a depth refining module 500, a user-variable setting module 600, a synchronization signal processing module 700, and an output module 800. Specifically:
In one embodiment of the invention, the grayscale data of each pixel is computed by the following formulas:
When the multi-channel image data are in RGB format, gray = {[(G + R) + (4R + 2B)] + 8G} >> 4, where R, G, and B denote the red-, green-, and blue-channel components of the image data value, >> 4 denotes a right shift by four bits, and gray denotes the grayscale data of the current pixel; and
When the multi-channel image data are in YUV format, gray = Y, where Y denotes the luminance-channel component of the image data value.
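As a rough illustration of this conversion, a minimal software sketch of the two formulas (the device itself is hardware; the function names here are illustrative, not from the patent):

```python
def rgb_to_gray(r, g, b):
    """Integer-weighted grayscale per the formula
    gray = {[(G + R) + (4R + 2B)] + 8G} >> 4, i.e. (5R + 9G + 2B) / 16."""
    return ((g + r) + (4 * r + 2 * b) + 8 * g) >> 4

def yuv_to_gray(y):
    """For YUV input, the luminance channel is taken directly."""
    return y
```

For pure white, rgb_to_gray(255, 255, 255) yields 255, since the integer weights 5 + 9 + 2 sum to the divisor 16.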
The minimum depth path module 400 generates the minimum depth path of each pixel from the grayscale data obtained by the grayscale conversion module 300. With reference to Fig. 2, the process of obtaining the minimum depth path comprises the following steps:
Step A. Compute the gray-difference ratios between the current pixel and the neighbor directly above, the neighbor directly to the left, and the upper-right neighbor.
In one embodiment of the invention, the gray-difference ratios between the current pixel and its neighbors in each direction are computed as:
gray-top = (gray(i,j) > (gray(i,j-1) + 1)) ? (gray(i,j) - gray(i,j-1)) : d;
gray-left = (gray(i,j) > (gray(i-1,j) + 1)) ? (gray(i,j) - gray(i-1,j)) : d;
gray-top-right = (gray(i,j) > (gray(i+1,j-1) + 1)) ? (gray(i,j) - gray(i+1,j-1)) : d.
Wherein gray-top is the gray-difference ratio between the current pixel and the neighbor directly above, gray(i,j) is the gray value of the current pixel, gray(i,j-1) is the gray value of the neighbor directly above, and d is the grayscale restraint value; optionally 1 < d < 3, and preferably d = 2. gray-left is the gray-difference ratio between the current pixel and the neighbor directly to the left, and gray(i-1,j) is the gray value of that neighbor; gray-top-right is the gray-difference ratio between the current pixel and the upper-right neighbor, and gray(i+1,j-1) is the gray value of that neighbor.
Step B. From the gray-difference ratios between the current pixel and the neighbor directly above, the neighbor directly to the left, and the upper-right neighbor, compute the depth path data between the current pixel and each of those neighbors.
In one embodiment of the invention, the depth path data between the current pixel and its neighbors in each direction are computed as:
Depth-top = Depth(i,j-1) + gray-top;
Depth-left = Depth(i-1,j) + gray-left;
Depth-top-right = Depth(i+1,j-1) + gray-top-right.
Wherein Depth-top is the depth path data between the current pixel and the neighbor directly above, and Depth(i,j-1) is the depth value of that neighbor; Depth-left is the depth path data between the current pixel and the neighbor directly to the left, and Depth(i-1,j) is its depth value; Depth-top-right is the depth path data between the current pixel and the upper-right neighbor, and Depth(i+1,j-1) is its depth value.
Step C. From the depth path data between the current pixel and the neighbor directly above, the neighbor directly to the left, and the upper-right neighbor, compute the minimum depth path.
In one embodiment of the invention, the minimum depth path is computed as:
Depth-min = min(Depth-top, Depth-left, Depth-top-right).
Wherein Depth-min is the minimum depth path of the current pixel and min takes the smallest of the compared values.
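Steps A through C can be sketched for a single pixel as follows (a software model of the hardware datapath; the function names and the way neighbor values are passed in are illustrative assumptions):

```python
def gray_diff(cur, nbr, d=2):
    # Step A: if the current gray value exceeds the neighbor's by more
    # than 1, use the actual difference; otherwise fall back to the
    # grayscale restraint value d (1 < d < 3, preferably 2).
    return (cur - nbr) if cur > nbr + 1 else d

def min_depth_path(gray_cur, gray_nbrs, depth_nbrs, d=2):
    # gray_nbrs / depth_nbrs hold values for the neighbors directly
    # above, directly left, and upper right, in that order.
    # Step B: depth path data = neighbor depth + gray-difference ratio.
    paths = [dep + gray_diff(gray_cur, g, d)
             for g, dep in zip(gray_nbrs, depth_nbrs)]
    # Step C: the minimum depth path is the smallest of the three.
    return min(paths)
```

For example, a current pixel of gray 100 with neighbor grays (90, 95, 105) and neighbor depths (10, 12, 8) yields path values 20, 17, and 10, so the minimum depth path is 10.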
The depth refining module 500 comprises a depth data computing unit 510 (not shown) and a depth data judging unit 520 (not shown). The depth data computing unit 510 refines the minimum depth path value to obtain the refined depth data; the depth data judging unit 520 truncates the high-frequency components of the refined depth data to obtain the smoothed depth data.
In one embodiment of the invention, the refined depth data and the smoothed depth data are computed respectively as:
Depth-min-final = (Depth-min >> 4) + (Depth-min >> 4) + (Depth-min >> 4);
Depth-final = (Depth-min-final < t) ? Depth-min-final : 255.
Wherein Depth-min-final is the refined depth data of the current pixel, Depth-final is the smoothed depth data of the current pixel, and t is the smoothing threshold; optionally 251 < t < 261, and preferably t = 256. Through these formulas the bit width of the depth data is controlled and the high-frequency components are truncated, ultimately confining the depth data to the range 0-255.
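A software sketch of the refining and smoothing step, mirroring the two formulas (the function name is illustrative):

```python
def refine_and_smooth(depth_min, t=256):
    # Refining: three copies of (Depth-min >> 4) summed, i.e.
    # 3 * (Depth-min >> 4), rescaling the accumulated path value.
    depth_min_final = (depth_min >> 4) + (depth_min >> 4) + (depth_min >> 4)
    # Smoothing: values at or above the threshold t are clipped to 255,
    # truncating high-frequency components and keeping the result in 0-255.
    return depth_min_final if depth_min_final < t else 255
```

With the preferred t = 256, an accumulated path value of 160 refines to 30 and passes through unchanged, while a very large value such as 4096 is clipped to 255.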
The user-variable setting module 600 updates the smoothed depth data of each pixel according to the user-variable input parameters to generate the final depth data of each pixel. Specifically, the module 600 comprises a depth data scaling unit, which sets multiple levels of depth data according to a user input variable (for example levels 0 to N-1, N levels in total), updates the smoothed depth data according to user input-variable feedback (i.e., the user selects a particular level), and takes the updated smoothed-depth value as the output depth data.
Specifically, in one embodiment of the invention, a user input variable of 0 to 7 selects one of 8 output levels, and the depth data scaling unit multiplies the smoothed depth data by the corresponding coefficient to produce the output depth data of that level. Given the binary nature of hardware arithmetic, the coefficients between levels can take the form of powers of 1/2: level 0 outputs a depth value of 0, level 1 outputs 0.5^7 x (Depth-final), level 2 outputs 0.5^6 x (Depth-final), level 3 outputs 0.5^5 x (Depth-final), and so on, up to level 7 which outputs 0.5^1 x (Depth-final). In sum, when the user provides an input, the depth data is updated according to the user-selected level; when there is no user input at run time, a default intermediate level (for example level 3) can be set as the output.
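Under the assumption that level k (for k = 1..7) scales by 0.5^(8-k) — an exponent pattern reconstructed from the levels listed in the text — the scaling unit can be modeled with binary right shifts:

```python
def scale_depth(depth_final, level):
    # Level 0 outputs 0; level k (1..7) multiplies by 0.5**(8 - k),
    # realized as a right shift for hardware economy.
    # The exponents for the upper levels are an assumption extrapolated
    # from levels 1-3 of the description, not stated verbatim there.
    if level == 0:
        return 0
    return depth_final >> (8 - level)
```

For example, a smoothed depth of 255 at level 7 is shifted right once to 127, while level 1 shifts it right seven times to 1.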
The synchronization signal processing module 700 performs output synchronization processing on the video synchronization signal at various resolutions to obtain the updated video synchronization signal. The module 700 comprises a frame synchronization signal computing unit, a line synchronization signal computing unit, a data valid signal computing unit, and a field synchronization signal computing unit, which compute the output periods of the frame synchronization, line synchronization, data valid, and field synchronization signals according to the number of delay cycles introduced by the depth data computation, keeping them synchronized with the output period of the depth data; the result is the updated, synchronized frame synchronization, line synchronization, data valid, and field synchronization signals.
Finally, the output module 800 sets the output parameters, which comprise the final depth data of each pixel and the updated video synchronization signal. The final depth data are provided by the user-variable setting module 600 as 8-bit values in the range 0-255; the updated video synchronization signal, provided by the synchronization signal processing module 700, comprises the updated frame synchronization, line synchronization, data valid, and field synchronization signals. The output module is the device's external signal output interface, enabling direct interconnection and communication at the hardware signal level.
The depth map generation device according to embodiments of the invention has the following advantages:
1. The device is small, highly integrated, and strongly hardware-compatible; it can be deployed on various hardware platforms such as FPGAs and dedicated chips.
2. The device is a hardware-level signal processing implementation that generates depth maps in real time from a single planar video stream and can produce output according to user settings; it offers strong parallel computing capability, fast conversion, a short production cycle, and low production cost.
The user-adjustable multi-resolution depth map generation device proposed by the present invention can directly convert planar video in real time at the hardware signal level to generate the corresponding depth map. It is compatible with multiple resolution formats of up to 2047 pixels per row, including picture formats such as 1920x1080, 1024x768, 720x480, and 1440x900, and can compute depth data according to user-set feedback. The device is small, strongly compatible in its hardware implementation, and low cost; it meets the demand for real-time stereoscopic content production on a variety of hardware platforms and enables efficient conversion of planar video into stereoscopic video.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.
Claims (14)
1. A depth map generation device, characterized in that it comprises:
An input module for setting input parameters, wherein said input parameters comprise a video synchronization signal, user-variable input parameters, and multi-channel image data;
A grayscale conversion module for computing the grayscale data of each pixel from said multi-channel image data;
A minimum depth path module for generating the minimum depth path of said each pixel from said grayscale data of said each pixel;
A depth refining module for refining said minimum depth path value of said each pixel to obtain the refined depth data of said each pixel and generate the smoothed depth data of said each pixel;
A user-variable setting module for updating said smoothed depth data of said each pixel according to said user-variable input parameters to generate the final depth data of said each pixel;
A synchronization signal processing module for performing output synchronization processing on said video synchronization signal at various resolutions to obtain an updated video synchronization signal; and
An output module for setting output parameters, wherein said output parameters comprise said final depth data of said each pixel and said updated video synchronization signal.
2. The depth map generation device of claim 1, characterized in that it further comprises on-chip memory, said on-chip memory comprising a gray-value line data cache and a depth-value line data cache.
3. The depth map generation device of claim 1, characterized in that said user-variable input parameters comprise a depth adjustment signal.
4. The depth map generation device of claim 1, characterized in that the input multi-channel image data are 32-bit RGB image data or 32-bit YUV image data.
5. The depth map generation device of claim 1, characterized in that said grayscale conversion module obtains said grayscale data by integer-weighting the channel values when said multi-channel image data are in RGB format, and takes the luminance channel value as the grayscale data when said multi-channel image data are in YUV format.
6. The depth map generation device of claim 5, characterized in that said grayscale conversion module computes the grayscale data of said each pixel according to the following formulas:
When said multi-channel image data are in RGB format, gray = {[(G + R) + (4R + 2B)] + 8G} >> 4, where R, G, and B denote the red-, green-, and blue-channel components of the image data value of said multi-channel image data, >> 4 denotes a right shift by four bits, and gray denotes the grayscale data of the current pixel; and
When said multi-channel image data are in YUV format, gray = Y, where Y denotes the luminance-channel component of the image data value of said multi-channel image data.
7. The depth map generation device of claim 1, characterized in that said minimum depth path module is configured to:
Compute the gray-difference ratios between the current pixel and its neighbors in each direction;
Compute the depth path data between said current pixel and said neighbors in each direction from said gray-difference ratios; and
Compute said minimum depth path from said depth path data of said directions,
Wherein said neighbors in each direction comprise the neighbor directly above, the neighbor directly to the left, and the upper-right neighbor, and said directions comprise directly above, directly left, and upper right.
8. The depth map generation device of claim 7, characterized in that the gray-difference ratios between the current pixel and its neighbors in each direction are computed as:
gray-top = (gray(i,j) > (gray(i,j-1) + 1)) ? (gray(i,j) - gray(i,j-1)) : d;
gray-left = (gray(i,j) > (gray(i-1,j) + 1)) ? (gray(i,j) - gray(i-1,j)) : d;
gray-top-right = (gray(i,j) > (gray(i+1,j-1) + 1)) ? (gray(i,j) - gray(i+1,j-1)) : d;
Wherein gray-top is the gray-difference ratio between the current pixel and the neighbor directly above, gray(i,j) is the gray value of the current pixel, gray(i,j-1) is the gray value of the neighbor directly above, and d is the grayscale restraint value; gray-left is the gray-difference ratio between the current pixel and the neighbor directly to the left, and gray(i-1,j) is the gray value of that neighbor; gray-top-right is the gray-difference ratio between the current pixel and the upper-right neighbor, and gray(i+1,j-1) is the gray value of that neighbor.
9. The depth map generation device of claim 8, characterized in that the depth path data between the current pixel and its neighbors in each direction are computed as:
Depth-top = Depth(i,j-1) + gray-top;
Depth-left = Depth(i-1,j) + gray-left;
Depth-top-right = Depth(i+1,j-1) + gray-top-right;
Wherein Depth-top is the depth path data between the current pixel and the neighbor directly above, and Depth(i,j-1) is the depth value of that neighbor; Depth-left is the depth path data between the current pixel and the neighbor directly to the left, and Depth(i-1,j) is its depth value; Depth-top-right is the depth path data between the current pixel and the upper-right neighbor, and Depth(i+1,j-1) is its depth value.
10. The depth map generation device of claim 9, wherein the minimum depth path is calculated as:

Depth-min = min(Depth-top, Depth-left, Depth-top-right),

wherein Depth-min is the minimum depth path of the current pixel, and min denotes taking the smallest of the compared values.
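Claims 9 and 10 together form a per-pixel recurrence: each direction's gray difference is added to that neighbor's already-computed depth value, and the smallest accumulated path wins. A Python sketch (the layout and initialization of the depth array are assumptions; the claims only define the recurrence itself):

```python
def min_depth_path(depth, i, j, gray_top, gray_left, gray_top_right):
    """Per claims 9-10: add each direction's gray-scale difference to that
    neighbor's depth value, then keep the smallest resulting path."""
    depth_top = depth[i][j - 1] + gray_top          # directly-above neighbor
    depth_left = depth[i - 1][j] + gray_left        # left neighbor
    depth_top_right = depth[i + 1][j - 1] + gray_top_right  # upper-right neighbor
    return min(depth_top, depth_left, depth_top_right)
```

Because only the above, left, and upper-right neighbors are consulted, a single raster-order pass over the frame suffices, which fits the real-time, streaming-hardware emphasis of the abstract.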
11. The depth map generation device of claim 1, wherein the depth refining module comprises:
a depth data computing unit, configured to refine the minimum depth path value to obtain the refined depth data; and
a depth data judging unit, configured to truncate the high-frequency components of the refined depth data to obtain the smoothed depth data.
12. The depth map generation device of claim 11, wherein the refined depth data and the smoothed depth data are calculated, respectively, as:

Depth-min-final = (Depth-min >> 4) + (Depth-min >> 4) + (Depth-min >> 4);
Depth-final = (Depth-min-final < t) ? Depth-min-final : 255,

wherein Depth-min-final is the refined depth data of the current pixel, Depth-final is the smoothed depth data of the current pixel, and t is the smoothing threshold.
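The refinement in claim 12 is three identical right-shifts summed, i.e. a hardware-friendly scaling of the minimum-depth path by roughly 3/16 with no multiplier, after which any value at or above the smoothing threshold t is clamped to 255. A Python sketch (reading 255 as an 8-bit far-plane code is an inference from the formula, not stated in the claim):

```python
def refine_depth(depth_min, t):
    """Claim 12: scale the minimum-depth path by 3/16 using shift-and-add,
    then truncate high-frequency outliers at the smoothing threshold t."""
    depth_min_final = (depth_min >> 4) + (depth_min >> 4) + (depth_min >> 4)
    depth_final = depth_min_final if depth_min_final < t else 255
    return depth_min_final, depth_final
```

Shift-and-add scaling is a common FPGA idiom: it replaces a multiplier with two adders, at the cost of truncating the low four bits in each term.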
13. The depth map generation device of claim 1, wherein the user-variable setting module comprises a depth data scaling unit configured to set multiple depth levels according to the user input variables, update the smoothed depth data according to the user input variable feedback, and take the updated smoothed depth value as the output depth data.
14. The depth map generation device of claim 1, wherein the synchronizing signal processing module comprises a frame synchronizing signal computing unit, a line synchronizing signal computing unit, a data valid signal computing unit, and a field synchronizing signal computing unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210157701.7A CN102724528B (en) | 2012-05-18 | 2012-05-18 | Depth map generation device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210157701.7A CN102724528B (en) | 2012-05-18 | 2012-05-18 | Depth map generation device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102724528A true CN102724528A (en) | 2012-10-10 |
CN102724528B CN102724528B (en) | 2015-01-14 |
Family
ID=46950172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210157701.7A Expired - Fee Related CN102724528B (en) | 2012-05-18 | 2012-05-18 | Depth map generation device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102724528B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101605270A (en) * | 2009-07-16 | 2009-12-16 | 清华大学 | Generate the method and apparatus of depth map |
CN101630408A (en) * | 2009-08-14 | 2010-01-20 | 清华大学 | Depth map treatment method and device |
CN101720480A (en) * | 2007-07-03 | 2010-06-02 | 皇家飞利浦电子股份有限公司 | Computing a depth map |
CN101729919A (en) * | 2009-10-30 | 2010-06-09 | 无锡景象数字技术有限公司 | System for full-automatically converting planar video into stereoscopic video based on FPGA |
US20100265316A1 (en) * | 2009-04-16 | 2010-10-21 | Primesense Ltd. | Three-dimensional mapping and imaging |
CN101938669A (en) * | 2010-09-13 | 2011-01-05 | 福州瑞芯微电子有限公司 | Self-adaptive video converting system for converting 2D into 3D |
CN102026012A (en) * | 2010-11-26 | 2011-04-20 | 清华大学 | Generation method and device of depth map through three-dimensional conversion to planar video |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103945204A (en) * | 2014-04-01 | 2014-07-23 | 青岛海信电器股份有限公司 | Image signal processing method and device |
CN103945204B (en) * | 2014-04-01 | 2016-03-23 | 青岛海信电器股份有限公司 | A kind of image-signal processing method and device |
CN109087235A (en) * | 2017-05-25 | 2018-12-25 | 钰立微电子股份有限公司 | Image processor and relevant picture system |
CN109087235B (en) * | 2017-05-25 | 2023-09-15 | 钰立微电子股份有限公司 | Image processor and related image system |
CN109375887A (en) * | 2017-08-03 | 2019-02-22 | 富泰华工业(深圳)有限公司 | Electronic equipment and video signal frame aligning method |
CN109375887B (en) * | 2017-08-03 | 2022-04-26 | 富泰华工业(深圳)有限公司 | Electronic equipment and video frame arrangement method |
Also Published As
Publication number | Publication date |
---|---|
CN102724528B (en) | 2015-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101789223B (en) | Apparatus for generating over-drive values and method thereof | |
US9153032B2 (en) | Conversion method and apparatus with depth map generation | |
JP5594477B2 (en) | Image display device, image display method, and program | |
CN102741879B (en) | Method for generating depth maps from monocular images and systems using the same | |
US9041773B2 (en) | Conversion of 2-dimensional image data into 3-dimensional image data | |
US10192517B2 (en) | Method of adapting a source image content to a target display | |
JP2010200213A5 (en) | ||
CN104506872B (en) | A kind of method and device of converting plane video into stereoscopic video | |
CN1957603A (en) | Video signal transformation device, and video display device | |
US20180270400A1 (en) | Liquid crystal display device and image processing method for same | |
CN102447925A (en) | Method and device for synthesizing virtual viewpoint image | |
CN113496685A (en) | Display brightness adjusting method and related device | |
CN102724528A (en) | Depth map generation device | |
CN102026012B (en) | Generation method and device of depth map through three-dimensional conversion to planar video | |
CN103248910B (en) | Three-dimensional imaging system and image reproducing method thereof | |
CN102137267A (en) | Algorithm for transforming two-dimensional (2D) character scene into three-dimensional (3D) character scene | |
CN102647602A (en) | System for converting 2D (two-dimensional) video into 3D (three-dimensional) video on basis of GPU (Graphics Processing Unit) | |
KR101731113B1 (en) | 2d-3d image conversion method and stereoscopic image display using the same | |
TWI409717B (en) | Image transformation mtehod adapted to computer programming product and image display device | |
CN111161685A (en) | Virtual reality display equipment and control method thereof | |
CN112967365B (en) | Depth map generation method based on user perception optimization | |
CN101938661A (en) | Method and device for processing image and video image frame | |
CN112700485B (en) | Image depth information extraction method | |
KR101497933B1 (en) | System and Method for compositing various images using clustering technique | |
CN102271271A (en) | Multi-viewpoint video generation device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20150114 |