CN102868899A - Method for processing three-dimensional image - Google Patents
Method for processing three-dimensional image
- Publication number
- CN102868899A (application numbers CN2012103278166A, CN201210327816A)
- Authority
- CN
- China
- Prior art keywords
- picture element
- depth map
- information
- background
- dimensional image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a method for processing a three-dimensional image. The method converts a first image of a first viewing angle into a plurality of second images of a plurality of second viewing angles, and comprises the following steps: providing a depth map corresponding to the first image, wherein the depth map comprises a plurality of gray-scale pixels and each gray-scale pixel consists of a plurality of sub-pixels; reducing the capacity of the depth map to obtain a reduced depth map, in which a single sub-pixel represents each gray-scale pixel; transmitting the first image and the reduced depth map to a decoder; and calculating the second images in the decoder. The method increases the compression ratio of the information, reduces the amount of information transmitted, and also improves the three-dimensional (3D) display quality of the picture.
Description
Technical field
The present invention relates to an image processing method, and more particularly to a method for processing three-dimensional images.
Background technology
In glasses-free three-dimensional (3D) display, 3D video coding is the key technology for transmitting multi-view pictures. Compared with conventional single-view two-dimensional (2D) video coding, Multi-view Video Coding (MVC) involves a far larger amount of data and higher computational complexity. To avoid the huge transmission volume that would result from sending multi-view pictures as many separate 2D images, MVC exploits the correlation between different views and thus achieves a better compression ratio than coding each view individually. Distributed Video Coding (DVC) has also been proposed to move the complex computation to the decoding end, thereby improving coding efficiency.
To reduce the amount of multi-view picture data to be transmitted, U.S. Patent No. 5,929,859 proposes a pixel-shift device based on parallactic depth, which synthesizes the pictures of other viewing angles from the picture of a single viewing angle, eliminating the need to transmit those other pictures. Although this approach reduces the amount of picture data transmitted, the synthesized pictures of the other viewing angles contain "black holes" (local regions with no image content). To solve this problem, interpolation from surrounding pixels is generally used to fill the black holes; however, such interpolation cannot achieve a satisfactory stereoscopic display effect.
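The hole problem can be seen in a minimal one-dimensional sketch of depth-dependent pixel shifting (an illustration of the general technique, not of the patented device itself):

```python
HOLE = None

def shift_scanline(colors, disparities):
    """Warp one scanline to a neighboring view.

    Each source pixel x moves to x + disparity(x); target positions that
    receive no source pixel stay HOLE (a local region with no image).
    Nearer pixels (larger disparity) occlude farther ones.
    """
    out = [HOLE] * len(colors)
    best = [None] * len(colors)  # disparity of the pixel currently at each slot
    for x, (c, d) in enumerate(zip(colors, disparities)):
        tx = x + d
        if 0 <= tx < len(out) and (best[tx] is None or d > best[tx]):
            out[tx], best[tx] = c, d
    return out

# the foreground pixel (disparity 2) slides right, exposing a hole behind it
colors      = ['bg', 'fg', 'bg', 'bg']
disparities = [0, 2, 0, 0]
shifted = shift_scanline(colors, disparities)
# → ['bg', HOLE, 'bg', 'fg']  : position 1 became a black hole
```

Interpolating `HOLE` positions from their neighbors is the conventional fix the text criticizes; the invention instead records the true background, as described below.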
As can be seen from the above, achieving a display effect beyond what conventional view-synthesis coding provides requires increasing the transmission volume of multi-view pictures. How to balance coding efficiency against good display quality is therefore one of the pressing problems to be overcome at the present stage.
In view of this, it is necessary to improve the prior art so as to overcome the poor display quality and huge transmission volume caused by homography-based conversion.
Summary of the invention
To overcome the deficiencies of the prior art, the object of the present invention is to provide a three-dimensional image processing method with good display quality and a smaller transmission volume.
To achieve the above object, the technical solution of the present invention is a three-dimensional image processing method for converting a first image of a first viewing angle into a plurality of second images of a plurality of second viewing angles, comprising the following steps:
providing a depth map corresponding to the first image, wherein the depth map has a plurality of gray-scale pixels, and each gray-scale pixel consists of a plurality of sub-pixels;
reducing the capacity of the depth map to obtain a reduced depth map, in which a single sub-pixel represents each gray-scale pixel;
transmitting the first image and the reduced depth map to a decoder;
and calculating the second images in the decoder.
In the method, the gray-scale value of each sub-pixel of the reduced depth map represents the depth displacement value of the corresponding pixel of the first image.
Each gray-scale pixel of the depth map consists of R, G and B sub-pixels, and the gray-scale values of the R, G and B sub-pixels within each gray-scale pixel are identical.
The capacity of the reduced depth map is one third of the capacity of the depth map.
The method further comprises providing a background information map, which represents the background of the local regions in the second images that have no image content.
The background information map has a plurality of pixel vectors, and each pixel vector represents the position and color of one pixel in the background.
Each pixel vector is represented by two adjacent pixels in the background information map.
The two adjacent pixels carry a first information amount and a second information amount, respectively; the first information amount stores the position of the background pixel, and the second information amount stores its color.
The first information amount and the second information amount are both 24 bits: the horizontal position of the background pixel occupies 11 bits of the first information amount, the vertical position occupies 11 bits of the first information amount, and the red, green and blue gray-scale values of the pixel's color each occupy 8 bits of the second information amount.
Transmitting the first image and the reduced depth map to the decoder further comprises transmitting the background information map to the decoder.
With the above technical solution, the present invention reduces the capacity of the depth map by representing the depth displacement value of an entire pixel with the gray-scale value of a single sub-pixel, cutting the amount of information required to transmit the depth map to one third of the original and improving the compression ratio, thereby reducing the transmission volume. In addition, the present invention uses the background information contained in the background information map to fill the black holes (local regions with no image content) that appear in the synthesized pictures of the other viewing angles in the prior art, thereby improving the 3D display quality of the picture.
Description of drawings
The present invention is described in further detail below with reference to the drawings and specific embodiments:
Fig. 1 is a schematic diagram of the three-dimensional image processing architecture of a preferred embodiment of the present invention;
Fig. 2 is a flow chart of the three-dimensional image processing method of the preferred embodiment of the present invention;
Fig. 3 is a schematic diagram of the first image and the depth map of this preferred embodiment;
Fig. 4 is a schematic diagram of the first image, the reduced depth map and a second image of this preferred embodiment;
Fig. 5 is a flow chart of the three-dimensional image processing method of another preferred embodiment of the present invention;
Fig. 6 is a schematic diagram of the background information map of this preferred embodiment;
Fig. 7 is a schematic diagram of the first image, the reduced depth map, the background information map and a second image of this preferred embodiment.
Embodiment
The preferred embodiments of the three-dimensional image processing method of the present invention are described in detail below with reference to the accompanying drawings. Please refer to Fig. 1, a schematic diagram of the three-dimensional image processing architecture of a preferred embodiment of the present invention. The method of this embodiment converts a first image 10 of a first viewing angle into a plurality of second images 20 of a plurality of second viewing angles by means of an encoder 120 and a decoder 140. Specifically, the first viewing angle can be the left view or the right view: if the first viewing angle is the left view, the first image 10 is the left-eye image; if it is the right view, the first image 10 is the right-eye image. In practice, the encoder 120 receives the left-eye and right-eye images simultaneously, processes them to generate a reduced depth map (described in detail below), and transmits either the left-eye or the right-eye image together with the reduced depth map to the decoder 140, which then calculates the second images 20 of the second viewing angles. By symmetry, the steps for converting the left-eye image into the second images 20 are identical to those for converting the right-eye image, so in the following the first viewing angle stands for either the left or the right view, and the first image 10 stands for either the left-eye or the right-eye image.
The steps of the three-dimensional image processing method of this embodiment are described in detail below with reference to Figs. 2 to 4. Please refer to Figs. 2 and 3: Fig. 2 is the flow chart of the method of the preferred embodiment, and Fig. 3 is a schematic diagram of the first image 10 and the depth map of this embodiment. The method starts at step S10, in which a depth map 101 corresponding to the first image 10 is provided. As shown in Fig. 3, the first image 10 is actually in color and contains objects at different distances, while the depth map 101 is a gray-scale image calculated by the encoder 120 from the relative distances of the objects in the first image 10. The depth map 101 has a plurality of gray-scale pixels P, and each gray-scale pixel P consists of a plurality of sub-pixels SP. In this embodiment, each gray-scale pixel P of the depth map 101 consists of three sub-pixels SP, one each for red, green and blue (RGB), and the gray-scale values of the RGB sub-pixels SP within each gray-scale pixel P are identical.
For instance, the flower in the depth map 101 is nearest to the viewer and is therefore represented by a white pixel, i.e. the gray-scale values of its RGB sub-pixels SP are 255, 255, 255. The sky is farthest away and is therefore represented by a black pixel, i.e. gray-scale values 0, 0, 0. The tree is in between and is therefore represented by a gray pixel, i.e. gray-scale values 126, 126, 126.
Please refer to Figs. 2 and 4; Fig. 4 is a schematic diagram of the first image 10, the reduced depth map 102 and a second image 20 of this preferred embodiment. In step S20, the capacity of the depth map 101 is reduced to obtain a reduced depth map 102, in which a single sub-pixel (an R, G or B sub-pixel SP) represents each gray-scale pixel P. Simply put, because the gray-scale values of the RGB sub-pixels SP within each gray-scale pixel P of the depth map 101 are identical, a single R, G or B sub-pixel SP suffices to represent its gray-scale pixel P. Furthermore, the gray-scale value of each sub-pixel SP of the reduced depth map 102 represents the depth displacement value of the corresponding pixel of the first image 10. For example, the depth of the content of the first image 10 can be divided into 255 levels with displacements from -127 to +127. Through step S20, the encoder 120 reduces the capacity of the depth map 101 to one third of its original size; that is, the capacity of the reduced depth map 102 is one third of the capacity of the depth map 101.
In step S30, the first image 10 and the reduced depth map 102 are transmitted to the decoder 140. Because the capacity of the reduced depth map 102 is one third of that of the depth map 101, the amount of information transmitted in this embodiment, compared with transmitting the first image 10 together with the full depth map 101, is reduced to (1 + 1/3)/2 = 66.67%, achieving the invention's goal of reducing the transmission volume.
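Steps S20 and S30 can be sketched as follows. This is a minimal illustration, assuming the depth map is stored as rows of (R, G, B) tuples whose three gray-scale values are identical, as the text specifies:

```python
def reduce_depth_map(depth_map):
    """Step S20: keep a single sub-pixel per gray-scale pixel.

    depth_map: list of rows, each row a list of (r, g, b) tuples with
    identical values per pixel; the result keeps one value per pixel,
    i.e. one third of the original capacity.
    """
    for row in depth_map:
        for r, g, b in row:
            assert r == g == b  # property guaranteed by the encoder
    return [[rgb[0] for rgb in row] for row in depth_map]

def restore_depth_map(reduced):
    """Decoder side: replicate the single sub-pixel back into R, G, B."""
    return [[(v, v, v) for v in row] for row in reduced]

depth = [[(255, 255, 255), (126, 126, 126)],   # flower (near), tree (middle)
         [(0, 0, 0), (126, 126, 126)]]          # sky (far), tree (middle)
reduced = reduce_depth_map(depth)
assert restore_depth_map(reduced) == depth      # the reduction is lossless

# Transmission-volume ratio of step S30: first image (1 unit) plus reduced
# depth map (1/3 unit), versus first image plus full depth map (2 units):
ratio = (1 + 1/3) / 2   # ≈ 0.6667, i.e. 66.67 % of the original volume
```

The reduction is lossless precisely because the three sub-pixels are redundant; any single channel carries the full depth information.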
In step S40, the second images 20 are calculated in the decoder 140. Specifically, the decoder 140 restores the reduced depth map 102 into the depth map 101 and then calculates the second images 20 of the other second viewing angles (which are actually in color) by well-known techniques such as homography and a shift table. As mentioned above, the second images 20 likewise contain local regions 201 with no image content. To fill these local regions 201, another preferred embodiment of the three-dimensional image processing method of the present invention is described in detail below with reference to the drawings.
Please refer to Fig. 5, the flow chart of the three-dimensional image processing method of another preferred embodiment of the present invention. As before, the method of this embodiment converts a first image 10 of a first viewing angle into a plurality of second images 20 of a plurality of second viewing angles.
Please also refer to Fig. 3. In step S10', a depth map 101 corresponding to the first image 10 is provided. Step S10' is identical to step S10 of the previous embodiment, so its detailed description is not repeated here.
Please also refer to Fig. 6, a schematic diagram of the background information map of this preferred embodiment. In step S20', a background information map 104 is provided, which represents the background of the local regions 201 in the second images 20 that have no image content. Specifically, when the depth map is calculated from the left-eye and right-eye images in step S10', the maximum displacement of the objects in the frame can be obtained; from this maximum displacement and the corresponding background, the positions and background content of the local regions 201 can be derived.
In particular, the background information map 104 has a plurality of pixel vectors PV, and each pixel vector PV represents the position and color of one pixel in the background. Each pixel vector PV is represented by two adjacent pixels P1, P2 in the background information map 104.
The data format of a pixel vector PV is summarized in the following table:

| Pixel | Content | Bits |
|---|---|---|
| P1 (first information amount) | horizontal position X | 11 |
| P1 (first information amount) | vertical position Y | 11 |
| P1 (first information amount) | unused | 2 |
| P2 (second information amount) | red gray-scale value | 8 |
| P2 (second information amount) | green gray-scale value | 8 |
| P2 (second information amount) | blue gray-scale value | 8 |
The two adjacent pixels P1 and P2 carry a first information amount and a second information amount, respectively: the first information amount stores the position (X, Y) of the background pixel, and the second information amount stores its color (R, G, B).
Taking a common Full HD frame (1920×1080) as an example, the first and second information amounts of the pixels P1, P2 of the background information map 104 are both 24 bits. The horizontal position X (1 to 1920) of the background pixel occupies 11 bits of the first information amount of pixel P1, since 2^11 = 2048. The vertical position Y (1 to 1080) likewise occupies 11 bits of the first information amount of pixel P1. The remaining 2 bits are unused. In addition, the red, green and blue gray-scale values of the background pixel's color each occupy 8 bits of the second information amount of pixel P2.
If a pixel of the local region 201 with no image content is located at (1, 1) of the Full HD frame, i.e. the top-left corner, the binary code of the horizontal position X is 0 and the binary code of the vertical position Y is 0, so the pixel P1 of the background information map 104 actually appears black (0, 0, 0); the adjacent pixel P2 carries the color of the background. If such a pixel is located at (2, 1), the binary code of the horizontal position X is 1 (placed at the 13th bit of P1, i.e. setting a bit of its G sub-pixel), the binary code of the vertical position Y is 0, and the pixel P1 actually appears green (0, 16, 0). Consequently, the background information map 104 itself looks like colored noise with no apparent pattern.
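The packing can be sketched as follows. The exact bit layout is an assumption (X−1 in bits 12–22 and Y−1 in bits 1–11 of P1's 24-bit value), chosen because it reproduces the worked examples above; the patent does not fix the layout beyond the bit widths:

```python
def pack_pixel_vector(x, y, r, g, b):
    """Pack one background pixel into two adjacent 24-bit RGB pixels P1, P2.

    Assumed layout: X-1 occupies bits 12-22 and Y-1 occupies bits 1-11 of
    P1; bits 0 and 23 are unused. P2 stores the color directly.
    """
    assert 1 <= x <= 2048 and 1 <= y <= 2048   # 11 bits each
    v = ((x - 1) << 12) | ((y - 1) << 1)       # first information amount (position)
    p1 = ((v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF)
    p2 = (r, g, b)                             # second information amount (color)
    return p1, p2

def unpack_pixel_vector(p1, p2):
    v = (p1[0] << 16) | (p1[1] << 8) | p1[2]
    x = ((v >> 12) & 0x7FF) + 1
    y = ((v >> 1) & 0x7FF) + 1
    return (x, y), p2

# Worked examples from the text:
# a hole pixel at (1, 1) gives a black P1: (0, 0, 0)
# a hole pixel at (2, 1) gives a green P1: (0, 16, 0)
```

Under this layout, X = 2 sets the 13th bit of the 24-bit value, which falls in the G sub-pixel and yields the green (0, 16, 0) of the example.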
Please also refer to Fig. 5. In step S30', the capacity of the depth map 101 is reduced to obtain a reduced depth map 102, in which a single sub-pixel represents each gray-scale pixel. Step S30' is identical to step S20 of the previous embodiment, so its detailed description is not repeated here.
Because the local regions 201 occupy only a small fraction of the total image area, the information saved by reducing the original depth map 101 to the reduced depth map 102 (i.e. 2/3 of the original map) can be used to record the background information map 104. Since the background information map 104 stores the position and color of each single pixel of the local regions 201 using two pixels P1, P2, it can in fact record local regions 201 covering up to 1/3 of the original frame, which is ample capacity.
Please also refer to Fig. 7, a schematic diagram of the first image 10, the reduced depth map 102, the background information map 104 and a second image 20' of this preferred embodiment. In step S40', the first image 10, the reduced depth map 102 and the background information map 104 are transmitted to the decoder 140.
In step S50', the second images 20 are calculated in the decoder 140. Step S50' is identical to step S40 of the previous embodiment, so its detailed description is not repeated here.
As shown in Fig. 7, in step S60' the black holes are filled: in the decoder 140, the background information map 104 is used to fill the local regions 201 of the second images 20, yielding repaired second images 20'. With the black holes filled by the correct background content, the second images 20' improve the quality of viewing the 3D image.
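Step S60' can be sketched as follows, a minimal illustration in which the background information map is modeled simply as a list of ((x, y), (r, g, b)) entries and the synthesized view as a coordinate-to-color mapping (both representations are assumptions for illustration):

```python
HOLE = None  # marks pixels the view synthesis could not fill

def fill_holes(second_image, background_entries):
    """Fill the holes of a synthesized view from the background entries.

    second_image: dict (x, y) -> (r, g, b) or HOLE; coordinates 1-based.
    Only hole pixels are overwritten; synthesized content is kept.
    """
    for (x, y), color in background_entries:
        if second_image.get((x, y)) is HOLE:
            second_image[(x, y)] = color
    return second_image

view = {(1, 1): (10, 10, 10), (2, 1): HOLE}
repaired = fill_holes(view, [((2, 1), (0, 128, 255))])
# the hole at (2, 1) now carries the recorded background color
```

Because the entries record the true background rather than interpolated neighbors, the repaired view avoids the smearing artifacts of the interpolation approach criticized in the background section.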
In summary, the present invention reduces the capacity of the depth map by representing the depth displacement value of an entire pixel with the gray-scale value of a single sub-pixel, cutting the amount of information required to transmit the depth map 101 to one third of the original and improving the compression ratio, thereby reducing the transmission volume. In addition, the present invention uses the background information contained in the background information map 104 to fill the black holes in the synthesized pictures of the other viewing angles, thereby improving the 3D display quality.
Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. A person of ordinary skill in the art can make various changes and modifications without departing from the spirit and scope of the present invention, so the scope of protection of the present invention shall be defined by the appended claims.
Claims (10)
1. A three-dimensional image processing method for converting a first image of a first viewing angle into a plurality of second images of a plurality of second viewing angles, characterized in that it comprises the following steps:
providing a depth map corresponding to the first image, wherein the depth map has a plurality of gray-scale pixels, and each gray-scale pixel consists of a plurality of sub-pixels;
reducing the capacity of the depth map to obtain a reduced depth map, in which a single sub-pixel represents each gray-scale pixel;
transmitting the first image and the reduced depth map to a decoder;
and calculating the second images in the decoder.
2. The three-dimensional image processing method according to claim 1, characterized in that the gray-scale value of each sub-pixel of the reduced depth map represents the depth displacement value of the corresponding pixel of the first image.
3. The three-dimensional image processing method according to claim 1, characterized in that each gray-scale pixel of the depth map consists of R, G and B sub-pixels, and the gray-scale values of the R, G and B sub-pixels within each gray-scale pixel are identical.
4. The three-dimensional image processing method according to claim 3, characterized in that the capacity of the reduced depth map is one third of the capacity of the depth map.
5. The three-dimensional image processing method according to claim 1, characterized in that it further comprises providing a background information map, which represents the background of the local regions in the second images that have no image content.
6. The three-dimensional image processing method according to claim 5, characterized in that the background information map has a plurality of pixel vectors, and each pixel vector represents the position and color of one pixel in the background.
7. The three-dimensional image processing method according to claim 6, characterized in that each pixel vector is represented by two adjacent pixels in the background information map.
8. The three-dimensional image processing method according to claim 7, characterized in that the two adjacent pixels carry a first information amount and a second information amount, respectively, the first information amount storing the position of the background pixel and the second information amount storing its color.
9. The three-dimensional image processing method according to claim 8, characterized in that the first information amount and the second information amount are both 24 bits, the horizontal position of the background pixel occupies 11 bits of the first information amount, the vertical position of the background pixel occupies 11 bits of the first information amount, and the red, green and blue gray-scale values of the background pixel's color each occupy 8 bits of the second information amount.
10. The three-dimensional image processing method according to claim 5, characterized in that transmitting the first image and the reduced depth map to the decoder further comprises transmitting the background information map to the decoder.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012103278166A CN102868899A (en) | 2012-09-06 | 2012-09-06 | Method for processing three-dimensional image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102868899A true CN102868899A (en) | 2013-01-09 |
Family
ID=47447462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012103278166A Pending CN102868899A (en) | 2012-09-06 | 2012-09-06 | Method for processing three-dimensional image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102868899A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5929859A (en) * | 1995-12-19 | 1999-07-27 | U.S. Philips Corporation | Parallactic depth-dependent pixel shifts |
CN101312494A (en) * | 2007-05-21 | 2008-11-26 | 华为技术有限公司 | Method for computing camera response curve and synthesizing image with large dynamic range and apparatus therefor |
WO2009081335A1 (en) * | 2007-12-20 | 2009-07-02 | Koninklijke Philips Electronics N.V. | Image encoding method for stereoscopic rendering |
CN102495878A (en) * | 2011-12-05 | 2012-06-13 | 深圳市中钞科信金融科技有限公司 | File and method for storing machine vision detection result |
CN102523454A (en) * | 2012-01-02 | 2012-06-27 | 西安电子科技大学 | Method for utilizing 3D (three dimensional) dictionary to eliminate block effect in 3D display system |
- 2012-09-06: application CN2012103278166A filed in China; published as CN102868899A (pending)
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103517059A (en) * | 2013-07-11 | 2014-01-15 | 福建华映显示科技有限公司 | Method for storing contents of 3D image |
CN103517059B (en) * | 2013-07-11 | 2015-05-13 | 福建华映显示科技有限公司 | Method for storing contents of 3D image |
CN104519335A (en) * | 2013-10-02 | 2015-04-15 | 惟成科技有限公司 | Method, device and system for resizing original depth frame |
JP2015073272A (en) * | 2013-10-02 | 2015-04-16 | 惟成科技有限公司 | Method, device and system for resizing and restoring original depth frame |
EP2858360A1 (en) * | 2013-10-02 | 2015-04-08 | National Cheng Kung University | Method, device and system for packing color frame and original depth frame |
KR20150039570A (en) * | 2013-10-02 | 2015-04-10 | 웰추즈 테크놀로지 코., 엘티디 | Method, device and system for resizing and restoring original depth frame |
CN104519336A (en) * | 2013-10-02 | 2015-04-15 | 惟成科技有限公司 | Method, device and system for restoring resized depth frame |
CN104519289A (en) * | 2013-10-02 | 2015-04-15 | 惟成科技有限公司 | Unpacking method, device and system for packing picture frame |
EP2858361A1 (en) * | 2013-10-02 | 2015-04-08 | National Cheng Kung University | Method, device and system for restoring resized depth frame into original depth frame |
EP2858362A1 (en) * | 2013-10-02 | 2015-04-08 | National Cheng Kung University | Method, device and system for resizing original depth frame into resized depth frame |
EP2858359A1 (en) * | 2013-10-02 | 2015-04-08 | National Cheng Kung University | Unpacking method, unpacking device and unpacking system of packed frame |
KR101652583B1 (en) * | 2013-10-02 | 2016-08-30 | 내셔날 쳉쿵 유니버시티 | Method, device and system for resizing and restoring original depth frame |
KR20160102953A (en) * | 2013-10-02 | 2016-08-31 | 내셔날 쳉쿵 유니버시티 | Method, device and system for resizing and restoring original depth frame |
KR101684834B1 (en) | 2013-10-02 | 2016-12-08 | 내셔날 쳉쿵 유니버시티 | Method, device and system for resizing and restoring original depth frame |
US9521428B2 (en) | 2013-10-02 | 2016-12-13 | National Cheng Kung University | Method, device and system for resizing original depth frame into resized depth frame |
US9529825B2 (en) | 2013-10-02 | 2016-12-27 | National Cheng Kung University | Method, device and system for restoring resized depth frame into original depth frame |
CN104519289B (en) * | 2013-10-02 | 2018-06-22 | 杨家辉 | Unpacking method, device and system for packing picture frame |
CN109257588A (en) * | 2018-09-30 | 2019-01-22 | Oppo广东移动通信有限公司 | A kind of data transmission method, terminal, server and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102868899A (en) | Method for processing three-dimensional image | |
CN100496121C (en) | Image signal processing method of the interactive multi-view video system | |
US9185432B2 (en) | Method and system for encoding a 3D image signal, encoded 3D image signal, method and system for decoding a 3D image signal | |
KR20210028240A (en) | Display processing circuit | |
US10448030B2 (en) | Content adaptive light field compression | |
US20220368883A1 (en) | System and method for generating light field images | |
CN102325259A (en) | Method and device for synthesizing virtual viewpoints in multi-viewpoint video | |
TWI413405B (en) | Method and system for displaying 2d and 3d images simultaneously | |
CN101312540A (en) | Virtual visual point synthesizing method based on depth and block information | |
CN103828359A (en) | Representation and coding of multi-view images using tapestry encoding | |
TW201310973A (en) | System and method of handling data frames for stereoscopic display | |
KR20110058844A (en) | Method and system for encoding a 3d video signal, encoder for encoding a 3-d video signal, encoded 3d video signal, method and system for decoding a 3d video signal, decoder for decoding a 3d video signal | |
US20110074924A1 (en) | Video signal with depth information | |
CN103561255B (en) | A kind of Nakedness-yet stereoscopic display method | |
CN100596210C (en) | Method for extracting parallax of stereoscopic image based on sub-pixel | |
TW202127870A (en) | Multi-view 3d display screen and multi-view 3d display device | |
CN104506871B (en) | A kind of 3D video fast encoding methods based on HEVC | |
CN102164291B (en) | Method and display system for simultaneously displaying two-dimensional (2D) image and three-dimensional (3D) image | |
US20130050183A1 (en) | System and Method of Rendering Stereoscopic Images | |
CN103369336A (en) | Apparatus and method | |
US20130088485A1 (en) | Method of storing or transmitting auto-stereoscopic images | |
CN103379350B (en) | Virtual viewpoint image post-processing method | |
JP2010220186A (en) | Video converter, video output device, video conversion system, video image, recording medium and video conversion method | |
CN104038726A (en) | Method for achieving naked-eye 3D video conference | |
TWI508523B (en) | Method for processing three-dimensional images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20130109 |