CN112015357A - Method for making 3D stereograph and product thereof - Google Patents


Info

Publication number
CN112015357A
CN112015357A (application CN202010804166.4A; granted publication CN112015357B)
Authority
CN
China
Prior art keywords
picture
depth
stereograph
pictures
black
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010804166.4A
Other languages
Chinese (zh)
Other versions
CN112015357B (en)
Inventor
张靖
张佳鹏
高中宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaoxing Fast Real Electronic Technology Co ltd
Original Assignee
Shaoxing Fast Real Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaoxing Fast Real Electronic Technology Co ltd filed Critical Shaoxing Fast Real Electronic Technology Co ltd
Priority to CN202010804166.4A priority Critical patent/CN112015357B/en
Publication of CN112015357A publication Critical patent/CN112015357A/en
Application granted granted Critical
Publication of CN112015357B publication Critical patent/CN112015357B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/12Digital output to print unit, e.g. line printer, chain printer
    • G06F3/1201Dedicated interfaces to print systems
    • G06F3/1202Dedicated interfaces to print systems specifically adapted to achieve a particular effect
    • G06F3/1211Improving printing performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/12Digital output to print unit, e.g. line printer, chain printer
    • G06F3/1201Dedicated interfaces to print systems
    • G06F3/1223Dedicated interfaces to print systems specifically adapted to use a particular technique
    • G06F3/1237Print job management
    • G06F3/1242Image or content composition onto a page
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images


Abstract

The invention discloses a method for making a 3D stereograph and a product thereof. The method comprises the following steps: S1, obtaining the depth information of a photo; S2, bisecting the depth range to obtain, for each designated depth value, the pixel values whose depth falls within a certain range of that value, and generating a gray-scale picture; S3, filtering the original photo with the gray-scale picture as a mask to obtain a depth-of-field picture; S4, setting a different displacement for each picture according to its depth value; S5, combining the displaced depth-of-field pictures into a combined picture; S6, acquiring combined pictures at different positions; S7, generating a color 3D stereograph; S8, generating a black-and-white 3D stereograph; and S9, generating a mixed 3D stereograph. By overlaying a black-and-white 3D stereograph on the surface of the color 3D stereograph and using the high-precision black-and-white picture to outline the scene, the invention provides a better visual experience.

Description

Method for making 3D stereograph and product thereof
Technical Field
The invention relates to the technical field of stereographs, in particular to a method for making a 3D stereograph and a product thereof.
Background
A 3D stereograph exploits the parallax between a person's two eyes and the principle of optical refraction so that, on a flat surface, the viewer directly sees a picture that conveys the spatial relationships of the objects in it: up and down, left and right, front and back. Objects can appear to protrude from the picture or recede into it; the effect is vivid and lifelike and delivers a strong visual impact. Existing 3D stereographs are either black-and-white or color: black-and-white ones have high precision but a poor display effect, while color ones have poor precision and are not sharp enough.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method for making a 3D stereograph and a product thereof.
To this end, the invention provides the following technical scheme:
a method for manufacturing a 3D stereograph comprises the following steps:
s1, obtaining the depth information of the photo;
S2, bisecting the depth range to obtain a plurality of designated depth values; for each designated depth value, taking a certain range around it, obtaining the pixel values whose depth falls within that range and generating a gray-scale picture; repeating this for every designated depth value to obtain a plurality of gray-scale pictures;
s3, filtering the original picture by taking the gray picture as a mask to obtain a depth of field picture;
S4, setting different displacements for the pictures with different depth values: with a maximum displacement d, the displacements of the pictures, ordered by depth value from large to small, are d/n×1, d/n×2, …, d/n×n;
s5, combining the depth photos at different positions into a combined picture;
s6, acquiring combined pictures at different positions;
and S7, processing the combined picture to generate a color 3D stereoscopic picture.
Preferably, the specific steps of step S1 are: S11a, acquiring the original depth information of the photo, comprising the following steps: S111a, creating an image source CGImageSource from the input photo;
s112a, copying auxiliary data from an image source;
S113a, extracting the orientation information of the image source using the CGImageSourceCopyPropertiesAtIndex method;
s114a, creating an AVDepthData through the acquired auxiliary data and the direction information;
s115a, converting AVDepthData into original depth information, and finally obtaining a floating point type depth information array containing the depth information of each pixel position;
s12a, normalizing the original depth information to obtain depth information, and the method comprises the following steps: s121a, traversing each value in the depth information array obtained in S1;
s122a, finding out a maximum value max and a minimum value min;
S123a, with m1 the original depth information value of a pixel position, the final depth value of that pixel position is m2 = (m1 - min)/(max - min).
Preferably, the specific steps of step S1 are: s11b, reading binary data of the photo;
S12b, extracting the MPImage2 image stored in the photo, the MPImage2 image being the depth map of the photo;
s13b. depth values m2 for each pixel position are extracted from the depth picture.
Preferably, the specific steps of step S2 are: s21, obtaining pixel values corresponding to all depth values in the picture;
S22, given a depth value m, take the coordinate [m, 1] and shift it by t to the left and to the right to obtain [m-t, 1] and [m+t, 1]; through [m-t, 1] draw a straight line L1 with slope s1, which intersects the X axis at [m-t-1/s1, 0]; through [m+t, 1] draw a straight line L2 with slope s2, where s2 and s1 are opposite numbers, which intersects the X axis at [m+t+1/(-s2), 0]; if m-t-1/s1 < 0, the depth values within a certain range of the given depth value m are 0 to m+t+1/(-s2); if m-t-1/s1 > 0, they are m-t-1/s1 to m+t+1/(-s2);
S23, dividing the depth information between 0 and 1 into n equal parts to obtain the designated depth values 0, 1/n, 2/n, …, (n-1)/n, 1;
s24, traversing the specified depth value obtained in the step S23 to serve as the depth value m given in the step S22, and obtaining the depth value within a certain range of each specified depth value in the step S23;
s25, acquiring pixel values corresponding to depth values in a certain range of the specified depth values, and processing the pixel values into a gray picture to obtain n gray pictures;
preferably, the specific steps of step S3 are: s31, traversing each gray picture obtained in S2;
s32, acquiring CIImage of the original photo;
s33, acquiring CIImage of the gray level picture acquired in S31 as a mask;
s34, mixing the original image and the mask by using a CIBlendWithMask filtering rule to generate a new CIImage;
S35, generating pictures from the newly generated CIImage using a CIContext, obtaining n pictures with different designated depth values, i.e. n pictures with different depths of field;
Preferably, the specific steps of step S5 are: S51, arranging the pictures obtained in step S3 in order of depth value, larger depth values first and smaller depth values after;
s52, moving the picture according to different displacements set in the step S4;
and S53, synthesizing the moved pictures to form a combined picture.
Preferably, the specific steps of step S6 are: S61, taking the maximum displacements that the left and right viewing angles can reach as -j and j;
S62, taking (i-1)/2 positions on the left, with displacements -j/((i-1)/2)×((i-1)/2), -j/((i-1)/2)×((i-1)/2-1), …, -j/((i-1)/2)×1;
S63, taking (i-1)/2 positions on the right, with displacements j/((i-1)/2)×1, j/((i-1)/2)×2, …, j/((i-1)/2)×((i-1)/2);
S64, taking each value from steps S62 and S63 as the maximum displacement d of step S4 and generating, according to step S5, i combined pictures at different positions.
Preferably, the specific steps of step S7 are:
S71a, compressing the i pictures of step S6 so that their pixel width equals the number of gratings;
S72a, splicing together the first columns of pixels of the compressed i pictures; with p[n] denoting the nth column of pixels, the spliced sequence is p1[1], p2[1], p3[1], …, pi[1];
S73a, splicing the second columns of pixels of the i pictures, giving p1[2], p2[2], p3[2], …, pi[2];
S74a, the finally spliced 3D stereograph picture is the sequence p1[1], p2[1], p3[1], …, pi[1], p1[2], p2[2], p3[2], …, pi[2], …, p1[i], p2[i], p3[i], …, pi[i].
Preferably, the specific steps of step S7 are:
S71b, splicing the i combined pictures: take the first column of pixels of the first picture, the second column of the second picture, the third column of the third picture, and so on up to the ith column of the ith picture; with p[n] denoting the nth column of pixels, the spliced sequence is p1[1], p2[2], p3[3], p4[4], …, pi[i];
S72b, continuing with the (i+1)th column of the first picture, the (i+2)th column of the second picture, and so on, giving p1[i+1], p2[i+2], …, pi[2i];
S73b, with the pixel width of the picture being w, the finally spliced 3D stereograph picture is p1[1], p2[2], …, pi[i], p1[i+1], p2[i+2], …, pi[2i], …, p1[w/i×(i-1)], p2[w/i×(i-1)+1], …, pi[w].
Preferably, the method further comprises the following step: S8, generating a black-and-white waveform 3D stereograph.
preferably, the specific steps of step S8 are:
s81a, converting the picture obtained in the step S7 into a gray scale image;
S82a, traversing the pixel value of each point of the gray-scale image, an integer between 0 and 255, and converting it into a value p between 0 and 31, where the values 8p to 8p+7 are all converted to p; each original single-pixel point is then replaced with a column of 32 pixels filled from top to bottom, the bottom p pixels filled black and the remaining 32-p pixels filled white;
S83a, traversing and processing every pixel value as in step S82a finally generates the black-and-white waveform 3D stereograph.
Preferably, the specific steps of step S8 are:
s81b, converting the picture obtained in the step S7 into a gray scale image;
S82b, printing the gray-scale picture directly with a black-and-white printer to generate the black-and-white waveform 3D stereograph.
Preferably, the method further comprises the following step: S9, generating a color and black-and-white mixed 3D stereograph.
Preferably, the specific steps of step S9 are:
S91a, subtracting the gray-scale information from the picture information of the color 3D stereograph obtained in step S7, printing a picture of the specified size with a color printer, and printing an alignment mark at a suitable position on the picture;
S92a, extracting the gray-scale information of the color 3D stereograph obtained in step S7 to generate a picture, generating a black-and-white waveform 3D stereograph from it according to step S8, printing that black-and-white 3D picture at the same size with a black-and-white printer, and printing an alignment mark at a suitable position on the picture;
S93a, placing the black-and-white 3D picture in front of the color 3D picture and aligning the two by the marks to form the mixed 3D stereograph.
Preferably, the specific steps of step S9 are: S91b, subtracting the high-frequency information from the picture information of the color 3D stereograph obtained in step S7, printing a picture of the specified size with a color printer, and printing an alignment mark at a suitable position on the picture;
S92b, extracting the high-frequency information of the color 3D stereograph obtained in step S7 to generate a picture, generating a black-and-white waveform 3D stereograph from it according to step S8, printing that black-and-white 3D picture at the same size with a black-and-white printer, and printing an alignment mark at a suitable position on the picture;
S93b, placing the black-and-white 3D picture in front of the color 3D picture and aligning the two by the marks to form the mixed 3D stereograph.
A stereograph, characterized in that it comprises the 3D stereograph made by the above method and a grating covering the 3D stereograph.
Compared with the prior art, the invention has the following beneficial effects: the black-and-white picture is combined with the color picture, and the black-and-white picture outlines the color blocks of the color picture; since black-and-white printing has high precision, the scenery in the picture is clearer, the display effect is better, and a better visual experience is provided.
Detailed Description
A method for manufacturing a 3D stereograph comprises the following steps:
S1, acquiring the depth information of the photo to obtain the depth value and pixel value of each pixel position. The depth information is the distance information stored in the photo: the closer a point is to the photographing device, the larger the depth value of its pixel position; the farther away, the smaller the depth value. The specific steps are as follows: S11a, obtaining the original depth information of the photo, comprising: S111a, creating an image source CGImageSource from the input photo;
s112a, copying auxiliary data from an image source;
S113a, extracting the orientation information of the image source using the CGImageSourceCopyPropertiesAtIndex method;
s114a, creating an AVDepthData through the acquired auxiliary data and the direction information;
s115a, converting the AVDepthData into depth information, and finally obtaining a floating point type depth information array containing the depth information of each pixel position;
S12a, normalizing the depth information. The depth information obtained in S11a does not fall entirely within the range 0 to 1; for the convenience of subsequent operations it is normalized to lie between 0 and 1. The specific steps are:
s121a, traversing each value in the depth information array obtained in S1;
s122a, finding out a maximum value max and a minimum value min;
S123a, with m1 the original depth information value of a pixel position, the final depth value of that pixel position is m2 = (m1 - min)/(max - min).
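The normalization in steps S121a to S123a is ordinary min-max scaling. As a minimal sketch (the patent's pipeline works on an AVDepthData array in an iOS context; this plain-Python function is an illustrative stand-in, not the patent's code):

```python
def normalize_depth(depths):
    """Min-max normalize raw depth values into [0, 1] (steps S121a-S123a)."""
    mn = min(depths)  # S122a: find the minimum value min
    mx = max(depths)  # S122a: find the maximum value max
    # S123a: m2 = (m1 - min) / (max - min) for every pixel position
    return [(m1 - mn) / (mx - mn) for m1 in depths]
```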
In another mode of step S1: S11b, reading the binary data of the photo;
S12b, extracting the MPImage2 image stored in the photo, the MPImage2 image being the depth map of the photo;
S13b, extracting the depth value m2 of each pixel position from the depth map; the m2 obtained in this way already lies between 0 and 1.
S2, bisecting the depth range to obtain a plurality of designated depth values; for each designated depth value, taking a certain range around it, obtaining the pixel values whose depth falls within that range and generating a gray-scale picture; repeating this for every designated depth value to obtain a plurality of gray-scale pictures to be used as masks. The specific steps are as follows:
s21, obtaining pixel values corresponding to all depth values in the picture;
S22, given a depth value m, take the coordinate [m, 1] and shift it by t to the left and to the right to obtain [m-t, 1] and [m+t, 1]; through [m-t, 1] draw a straight line L1 with slope s1, which intersects the X axis at [m-t-1/s1, 0]; through [m+t, 1] draw a straight line L2 with slope s2, where s2 and s1 are opposite numbers, which intersects the X axis at [m+t+1/(-s2), 0]; if m-t-1/s1 < 0, the depth values within a certain range of the given depth value m are 0 to m+t+1/(-s2); if m-t-1/s1 > 0, they are m-t-1/s1 to m+t+1/(-s2);
S23, dividing the depth information between 0 and 1 into n equal parts to obtain the designated depth values 0, 1/n, 2/n, …, (n-1)/n, 1;
s24, traversing the specified depth value obtained in the step S23 to serve as the depth value m given in the step S22, and obtaining the depth value within a certain range of each specified depth value in the step S23;
and S25, acquiring pixel values corresponding to depth values in a certain range of the specified depth values, and processing the pixel values into a gray picture to obtain n gray pictures.
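The steps above can be sketched in plain Python: `designated_depths` implements S23 and `depth_range` the line construction of S22 (using s2 = -s1, so 1/(-s2) = 1/s1). This is an illustrative reading of the patent text, not code from the patent:

```python
def designated_depths(n):
    """S23: divide the depth interval [0, 1] into n equal parts,
    giving the designated depth values 0, 1/n, 2/n, ..., 1."""
    return [k / n for k in range(n + 1)]

def depth_range(m, t, s1):
    """S22: lines with slopes s1 and -s1 through [m-t, 1] and [m+t, 1]
    cross the X axis at the endpoints of the depth range around m;
    the lower endpoint is clamped at 0."""
    lo = m - t - 1.0 / s1  # intersection of L1 with the X axis
    hi = m + t + 1.0 / s1  # since s2 = -s1, m + t + 1/(-s2) = m + t + 1/s1
    return (max(lo, 0.0), hi)
```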
S3, filtering the original photo with the gray-scale pictures as masks to obtain depth-of-field pictures. A gray-scale picture composed of the pixel values whose depth lies within a certain range of a given depth value reflects which objects are contained at that depth of field; filtering the original photo with the gray-scale pictures obtained in S2 therefore separates out the objects at different depths of field. The specific steps are: S31, traversing each gray-scale picture obtained in S25;
s32, acquiring CIImage of the original photo;
s33, acquiring CIImage of the gray level picture acquired in S31 as a mask;
s34, mixing the original image and the mask by using a CIBlendWithMask filtering rule to generate a new CIImage;
the CIBlendWithMask filtering rule is: taking out the part of the original image with the mask display gray value not being 0 (not being black), wherein the transparency is consistent with the gray value of the mask, and the transparency of the black part is deleted to be 0;
S35, generating pictures from the newly generated CIImage using a CIContext, obtaining n pictures with different designated depth values, i.e. n pictures with different depths of field.
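The masking behavior described for S34 can be mimicked on a flat pixel list; the sketch below is a hypothetical stand-in for the Core Image filter (which operates on full images and also accepts a background), pairing each kept pixel with an alpha derived from the mask:

```python
def blend_with_mask(original, mask):
    """Rough analogue of S34: for each pixel, a mask gray value g in 0..255
    becomes an alpha of g/255; g == 0 (black) makes the pixel fully transparent."""
    return [(pix, g / 255.0) for pix, g in zip(original, mask)]
```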
S4, setting different displacements for the pictures with different depth values. Pictures with different depth values should be moved by different displacements and finally combined to produce the 3D effect; the displacements of the different depth layers are set according to a linear rule. The closer a depth of field is to the camera, the larger the displacement; the farther, the smaller. With a maximum displacement d, the displacements of the pictures, ordered by depth value from large to small, are d/n×1, d/n×2, …, d/n×n;
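The displacement rule of S4 in a one-line sketch (illustrative only):

```python
def layer_displacements(d, n):
    """S4: with maximum displacement d and n depth layers, the layer
    displacements are d/n*1, d/n*2, ..., d/n*n."""
    return [d / n * k for k in range(1, n + 1)]
```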
s5, combining the depth photos at different positions into a combined picture, which comprises the following specific steps:
S51, arranging the pictures obtained in step S3 in order of depth value, larger depth values first and smaller depth values after;
s52, moving the picture according to different displacements set in the step S4;
and S53, synthesizing the moved pictures to form a combined picture.
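Steps S51 to S53 amount to shifting each layer horizontally and painting it onto a canvas in the prescribed depth order. A sketch on one scanline, with `None` standing for a transparent pixel (this representation is an assumption for illustration, not the patent's data format):

```python
def composite(layers, shifts, width):
    """S51-S53: layers are given in the order of S51 (larger depth value first);
    each layer is shifted by its displacement and painted over the canvas."""
    canvas = [None] * width
    for row, dx in zip(layers, shifts):
        for x, pix in enumerate(row):
            nx = x + dx
            if pix is not None and 0 <= nx < width:
                canvas[nx] = pix  # later layers overwrite earlier ones
    return canvas
```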
S6, acquiring combined pictures at different positions. Pictures at different viewing-angle positions are required to produce the 3D stereoscopic effect; S5 produces one combined picture, i.e. the combined picture at one viewing-angle position, so several combined pictures are needed for several viewing angles. The specific steps are:
S61, taking the maximum displacements that the left and right viewing angles can reach as -j and j;
S62, taking (i-1)/2 positions on the left, with displacements -j/((i-1)/2)×((i-1)/2), -j/((i-1)/2)×((i-1)/2-1), …, -j/((i-1)/2)×1;
S63, taking (i-1)/2 positions on the right, with displacements j/((i-1)/2)×1, j/((i-1)/2)×2, …, j/((i-1)/2)×((i-1)/2);
S64, taking each value from steps S62 and S63 as the maximum displacement d of step S4 and generating, according to step S5, i combined pictures at different positions.
Wherein i is an odd number greater than 1 and less than 17.
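Reading the displacement lists of S62 and S63 as symmetric about the center view (an assumption, since the printed formulas are partly garbled), the viewing positions can be computed as:

```python
def viewpoint_displacements(j, i):
    """S61-S63: i is odd; (i-1)/2 positions on each side of the center view,
    spaced j/((i-1)/2) apart, reaching the extreme displacements -j and j."""
    half = (i - 1) // 2
    step = j / half
    left = [-step * k for k in range(half, 0, -1)]
    right = [step * k for k in range(1, half + 1)]
    return left + right  # each value is then used as d in step S4 (per S64)
```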
S7, processing the combined picture to generate a color 3D stereograph;
the specific steps of step S7 are:
S71a, compressing the i pictures of S6 so that their pixel width equals the number of gratings; at 40 gratings per inch, a 10-inch picture has 400 gratings, so the picture width is compressed to 400 pixels;
S72a, splicing together the first columns of pixels of the compressed i pictures; with p[n] denoting the nth column of pixels, the spliced sequence is p1[1], p2[1], p3[1], …, pi[1];
S73a, splicing the second columns of pixels of the i pictures, giving p1[2], p2[2], p3[2], …, pi[2];
S74a, the finally spliced 3D stereograph picture is the sequence p1[1], p2[1], p3[1], …, pi[1], p1[2], p2[2], p3[2], …, pi[2], …, p1[i], p2[i], p3[i], …, pi[i].
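Steps S72a to S74a interleave one column from each compressed picture in turn. A sketch with each picture represented as a list of columns (an illustrative data layout, not the patent's):

```python
def interlace(pictures):
    """S72a-S74a: output p1[1], p2[1], ..., pi[1], p1[2], p2[2], ..., pi[2], ...
    where pk[n] is column n of picture k."""
    out = []
    for n in range(len(pictures[0])):  # for every column index
        for pic in pictures:           # take that column from each picture in turn
            out.append(pic[n])
    return out
```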
An alternative specific step of step S7 is:
S71b, splicing the i combined pictures: take the first column of pixels of the first picture, the second column of the second picture, the third column of the third picture, and so on up to the ith column of the ith picture; with p[n] denoting the nth column of pixels, the spliced sequence is p1[1], p2[2], p3[3], p4[4], …, pi[i];
S72b, continuing with the (i+1)th column of the first picture, the (i+2)th column of the second picture, and so on, giving p1[i+1], p2[i+2], …, pi[2i];
S73b, with the pixel width of the picture being w, the finally spliced 3D stereograph picture is p1[1], p2[2], …, pi[i], p1[i+1], p2[i+2], …, pi[2i], …, p1[w/i×(i-1)], p2[w/i×(i-1)+1], …, pi[w].
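The alternative splicing of S71b to S73b keeps the output at the original width w: output column c simply comes from picture (c mod i). A sketch under that reading (an interpretation of the garbled index expressions, not the patent's code):

```python
def interlace_staggered(pictures):
    """S71b-S73b: with i pictures of width w, output column c (0-based) is
    column c of picture c % i, i.e. p1[1], p2[2], ..., pi[i], p1[i+1], ..."""
    i = len(pictures)
    w = len(pictures[0])
    return [pictures[c % i][c] for c in range(w)]
```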
The method is adopted to manufacture the color 3D stereograph.
S8, generating the black-and-white waveform 3D stereograph. The printing precision of color pictures is lower than that of black-and-white pictures: a black-and-white picture is printed with more dots per inch and therefore gives a better visual experience. The black-and-white picture is printed as a waveform image, using only black and white dots, which is more precise than printing a gray-scale image. The specific steps of step S8 are:
s81a, converting the picture obtained in the step S7 into a gray scale image;
S82a, traversing the pixel value of each point of the gray-scale image, an integer between 0 and 255, and converting it into a value p between 0 and 31, where the values 8p to 8p+7 are all converted to p; each original single-pixel point is then replaced with a column of 32 pixels filled from top to bottom, the bottom p pixels filled black and the remaining 32-p pixels filled white;
S83a, traversing and processing every pixel value according to step S82a finally generates the black-and-white waveform 3D stereograph.
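The waveform conversion of S82a in a sketch; "B" and "W" stand for black and white pixels, and the column is listed top to bottom (the character representation is an assumption for illustration):

```python
def waveform_column(gray):
    """S82a: map a gray value 0..255 to p = gray // 8 (values 8p..8p+7 -> p),
    then replace the single pixel with a 32-pixel column whose bottom p pixels
    are black and whose top 32 - p pixels are white."""
    p = gray // 8                         # p is in 0..31
    return ["W"] * (32 - p) + ["B"] * p   # listed top to bottom
```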
An alternative specific step of step S8 is:
s81b, converting the picture obtained in the step S7 into a gray scale image;
S82b, printing the gray-scale picture directly with a black-and-white printer to generate the black-and-white waveform 3D stereograph.
The black-white 3D stereograph is manufactured by adopting the method.
S9, generating a 3D stereograph in which color and black-and-white are mixed. S7 generates a color 3D stereograph, printed with a color printer; S8 generates a black-and-white 3D stereograph, printed with a black-and-white laser printer. A stereograph printed with a black-and-white laser printer is more precise than one printed with a color printer: the black-and-white laser printer has a precision of 2540 dots per inch, while the color printer has 600 dots per inch. Combining the advantages of the two, i.e. the high precision of black-and-white printing and the color reproduction of color printing, yields a high-precision color 3D stereograph. The specific steps of step S9 are as follows:
S91a, subtracting the gray-scale information from the picture information of the color 3D stereograph obtained in step S7, printing a picture of the specified size with a color printer, and printing an alignment mark at a suitable position on the picture;
S92a, extracting the gray-scale information of the color 3D stereograph obtained in step S7 to generate a picture, generating a black-and-white waveform 3D stereograph from it according to step S8, printing that black-and-white 3D picture at the same size with a black-and-white printer, and printing an alignment mark at a suitable position on the picture;
S93a, placing the black-and-white 3D picture in front of the color 3D picture and aligning the two by the marks to form the mixed 3D stereograph.
An alternative specific step of step S9 is:
S91b, subtracting the high-frequency information from the picture information of the color 3D stereograph obtained in step S7 and printing a picture of the specified size with a color printer, i.e. printing only the low-frequency information of the picture, and printing an alignment mark at a suitable position on the picture;
S92b, extracting the high-frequency information of the color 3D stereograph obtained in step S7 to generate a picture, generating a black-and-white waveform 3D stereograph from it according to step S8, printing that black-and-white 3D picture at the same size with a black-and-white printer, and printing an alignment mark at a suitable position on the picture;
S93b, placing the black-and-white 3D picture in front of the color 3D picture and aligning the two by the marks to form the mixed 3D stereograph.
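The high/low-frequency separation of S91b and S92b is not pinned to a specific filter in the text; in a sketch, a simple moving average can serve as the low-pass (the filter choice is an assumption, not the patent's):

```python
def split_frequencies(row, radius=1):
    """S91b/S92b sketch: low-frequency content (to be printed in color) via a
    moving average; high-frequency detail (printed in black and white) is the
    residual, so low + high reconstructs the original scanline."""
    low = []
    for x in range(len(row)):
        window = row[max(0, x - radius): x + radius + 1]
        low.append(sum(window) / len(window))
    high = [v - l for v, l in zip(row, low)]
    return low, high
```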
The method is adopted to manufacture the color and black and white mixed 3D stereograph.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (16)

1. A method for making a 3D stereograph, characterized by comprising the following steps:
s1, obtaining the depth information of the photo;
S2, bisecting the depth range to obtain a plurality of designated depth values; for each designated depth value, taking a certain range around it, obtaining the pixel values whose depth falls within that range and generating a gray-scale picture; repeating this for every designated depth value to obtain a plurality of gray-scale pictures;
s3, filtering the original picture by taking the gray picture as a mask to obtain a depth of field picture;
S4, setting different displacements for the pictures with different depth values: with a maximum displacement d, the displacements of the pictures, ordered by depth value from large to small, are d/n×1, d/n×2, …, d/n×n;
s5, combining the depth photos at different positions into a combined picture;
s6, acquiring combined pictures at different positions;
and S7, processing the combined picture to generate a color 3D stereoscopic picture.
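The displacement rule of step S4 can be sketched in a few lines of Python (a minimal illustration; the function name `layer_displacements` and the list representation are assumptions, not part of the claims):

```python
def layer_displacements(d, n):
    """Displacement for each of n depth layers, ordered from largest depth
    (farthest) to smallest (nearest), per step S4: d/n * 1 ... d/n * n."""
    return [d / n * k for k in range(1, n + 1)]
```

The nearest layer receives the full displacement d, which is what creates the parallax between layers when the combined pictures are later viewed from different positions.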
2. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S1 are: s11a, acquiring the original depth information of the photo, comprising the following steps: s111a, creating an image source CGImageSource from the input photo;
s112a, copying auxiliary data from an image source;
s113a, extracting the direction information of the image source by using the CGImageSourceCopyPropertiesAtIndex method;
s114a, creating an AVDepthData through the acquired auxiliary data and the direction information;
s115a, converting AVDepthData into original depth information, and finally obtaining a floating point type depth information array containing the depth information of each pixel position;
s12a, normalizing the original depth information to obtain the depth information, comprising the following steps: s121a, traversing each value in the depth information array obtained in step S1;
s122a, finding out a maximum value max and a minimum value min;
s123a, where the original depth information value of a pixel position is m1, the final depth value of that pixel position is m2 = (m1 − min)/(max − min).
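The normalization in steps S121a–S123a is a min-max rescaling to [0, 1]; a short Python sketch (the guard for a flat depth map is an added assumption, not stated in the claim):

```python
def normalize_depth(raw):
    """Min-max normalize raw depth values per steps S121a-S123a:
    m2 = (m1 - min) / (max - min)."""
    lo, hi = min(raw), max(raw)
    if hi == lo:                      # flat depth map: avoid division by zero
        return [0.0] * len(raw)
    return [(m1 - lo) / (hi - lo) for m1 in raw]
```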
3. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S1 are: s11b, reading binary data of the photo;
s12b, extracting the MPImage2 image stored inside the photo, the MPImage2 image being the depth picture of the photo;
s13b, extracting the depth value m2 of each pixel position from the depth picture.
4. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S2 are: s21, obtaining pixel values corresponding to all depth values in the picture;
s22, given a depth value m, take the coordinate [m, 1] and displace it by t to the left and to the right to obtain [m − t, 1] and [m + t, 1]; through [m − t, 1] draw a straight line L1 with slope s1, whose intersection with the X axis is [m − t − 1/s1, 0]; through [m + t, 1] draw a straight line L2 with slope s2, the values of s2 and s1 being opposite numbers, whose intersection with the X axis is [m + t + 1/(−s2), 0]; if m − t − 1/s1 < 0, the depth values within the range of the given depth value m are 0 to m + t + 1/(−s2); if m − t − 1/s1 > 0, they are m − t − 1/s1 to m + t + 1/(−s2);
s23, dividing the depth information between 0 and 1 into n parts averagely to obtain designated depth values of 0, 1/n × 1, 1/n × 2 … 1/n × (n-1) and 1;
s24, traversing the specified depth value obtained in the step S23 to serve as the depth value m given in the step S22, and obtaining the depth value within a certain range of each specified depth value in the step S23;
and S25, acquiring pixel values corresponding to depth values in a certain range of the specified depth values, and processing the pixel values into a gray picture to obtain n gray pictures.
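The range construction of step S22 reduces to a simple interval: since s2 = −s1, the term 1/(−s2) equals 1/s1, so the range around m is [m − t − 1/s1, m + t + 1/s1] with the lower bound clamped at 0. A hedged Python sketch (function name and tuple return are illustrative assumptions):

```python
def depth_range(m, t, s1):
    """Interval of depth values attributed to a target depth m, per step S22.

    A line of slope s1 through [m - t, 1] meets the X axis at m - t - 1/s1;
    its mirror (slope s2 = -s1) through [m + t, 1] reaches m + t + 1/s1.
    The lower bound is clamped at 0 when it would go negative.
    """
    low = m - t - 1.0 / s1
    high = m + t + 1.0 / s1      # since s2 = -s1, 1/(-s2) == 1/s1
    return (max(low, 0.0), high)
```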
5. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S3 are: s31, traversing each gray picture obtained in S2;
s32, acquiring CIImage of the original photo;
s33, acquiring CIImage of the gray level picture acquired in S31 as a mask;
s34, mixing the original image and the mask by using a CIBlendWithMask filtering rule to generate a new CIImage;
and S35, generating pictures from the newly generated CIImage by using a CIContext, obtaining n pictures with different specified depth values, namely n pictures with different depths of field.
6. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S5 are: s51, arranging the pictures obtained in step S3 in order of their depth values, with larger depth values first and smaller depth values after;
s52, moving the picture according to different displacements set in the step S4;
and S53, synthesizing the moved pictures to form a combined picture.
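Steps S51–S53 are a back-to-front composite: far layers are painted first, then nearer layers are shifted and painted over them. A one-dimensional Python sketch, where `None` marks a transparent pixel outside a layer's mask (this transparency model and the function name are assumptions made for illustration):

```python
def composite_layers(layers, shifts, width):
    """Composite depth-of-field layers into one combined pixel row.

    `layers` are ordered far to near; each is a list of pixels where None
    marks a transparent pixel. Each layer is shifted by its displacement,
    then nearer layers are painted over farther ones (steps S51-S53).
    """
    out = [None] * width
    for layer, shift in zip(layers, shifts):   # far first, near last
        for x, px in enumerate(layer):
            tx = x + shift
            if px is not None and 0 <= tx < width:
                out[tx] = px                   # nearer layer overwrites
    return out
```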
7. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S6 are: s61, taking −j and j as the maximum displacement values reachable by the left and right viewing angles;
s62, taking (i − 1)/2 positions on the left, the displacement of each position being −j/((i − 1)/2) × ((i − 1)/2), −j/((i − 1)/2) × ((i − 1)/2 − 1) … −j/((i − 1)/2) × 1;
s63, taking (i − 1)/2 positions on the right, the displacement of each position being j/((i − 1)/2) × 1, j/((i − 1)/2) × 2 … j/((i − 1)/2) × ((i − 1)/2);
and S64, taking each value in steps S62 and S63 as the maximum displacement d of step S4, and generating i combined pictures at different positions according to step S5.
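The viewpoint spacing of steps S61–S64 can be sketched as follows. The sketch assumes i is odd and that the unshifted centre position (displacement 0) counts as one of the i positions, which the claim leaves implicit; the function name is an illustrative assumption:

```python
def viewpoint_displacements(j, i):
    """Maximum displacement d for each of the i viewpoints, per steps S61-S63:
    (i-1)/2 positions left of centre, the centre itself, and (i-1)/2 right,
    spanning [-j, j] in equal steps. Assumes i is odd."""
    half = (i - 1) // 2
    left = [-j / half * k for k in range(half, 0, -1)]
    right = [j / half * k for k in range(1, half + 1)]
    return left + [0.0] + right
```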
8. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S7 are:
s71a, compressing the i pictures from step S6 so that their pixel width equals the number of gratings;
s72a, splicing the first columns of pixels of the compressed i pictures together, with p[n] denoting the nth column of pixels; the spliced sequence is p1[1], p2[1], p3[1] … pi[1];
s73a, splicing the second columns of pixels of the i pictures; the spliced sequence is p1[2], p2[2], p3[2] … pi[2];
s74a, the sequence of the finally spliced 3D stereograph is p1[1], p2[1], p3[1] … pi[1], p1[2], p2[2], p3[2] … pi[2] … p1[w], p2[w], p3[w] … pi[w], where w is the compressed pixel width.
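The splicing order of steps S72a–S74a is a round-robin over the pictures for each column index. A Python sketch, representing each picture as a list of its columns (this representation and the function name are assumptions):

```python
def interleave_all_columns(pictures):
    """Interleaved column order per steps S72a-S74a: for each column index n,
    take column n of picture 1, 2, ..., i in turn.

    `pictures` is a list of i pictures, each a list of columns, already
    compressed so the column count matches the grating count."""
    width = len(pictures[0])
    out = []
    for n in range(width):
        for pic in pictures:
            out.append(pic[n])
    return out
```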
9. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S7 are:
s71b, splicing the i combined pictures directly: taking the first column of pixels from the first picture, the second column from the second picture, the third column from the third picture, and so on up to the ith column from the ith picture; with p[n] denoting the nth column of pixels, the spliced sequence is p1[1], p2[2], p3[3], p4[4] … pi[i];
s72b, continuing with the (i + 1)th column of pixels from the first picture, the (i + 2)th column from the second picture, and so on; the spliced sequence is p1[i + 1], p2[i + 2] … pi[2i];
s73b, with w the pixel width of the picture, the finally spliced 3D stereograph is p1[1], p2[2] … pi[i], p1[i + 1], p2[i + 2] … pi[2i] … p1[w − i + 1], p2[w − i + 2] … pi[w].
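Unlike claim 8, steps S71b–S73b never compress the pictures: output column c simply comes from picture (c mod i) at the same column index, so each combined picture contributes every i-th column. A Python sketch under the same list-of-columns assumption as before:

```python
def interleave_in_place(pictures):
    """Column selection per steps S71b-S73b: output column c (0-indexed)
    is taken from picture (c mod i) at column index c, so each combined
    picture contributes every i-th column without being compressed first."""
    i = len(pictures)
    width = len(pictures[0])
    return [pictures[c % i][c] for c in range(width)]
```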
10. The method for making the 3D stereograph according to claim 1, wherein: the method further comprises the following step: s8, generating a black-and-white waveform 3D stereograph.
11. The method for making a 3D stereograph according to claim 10, wherein: the specific steps of step S8 are:
s81a, converting the picture obtained in the step S7 into a gray scale image;
s82a, traversing the pixel value of each point of the gray-scale image, an integer between 0 and 255, and converting it to a value between 0 and 31, the values 8p to 8p + 7 being converted to p; replacing each original single-pixel point with a column of 32 pixels filled with color from top to bottom: for a point whose converted value is p, the bottom p pixels are filled black and the remaining 32 − p pixels are filled white;
s83a, traversing and processing every pixel value as in step S82a, finally generating the black-and-white waveform 3D stereograph.
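The per-pixel conversion of step S82a maps 0–255 down to 0–31 by integer division by 8, then expands each point into a 32-pixel column. A Python sketch (the string labels "white"/"black" and the function name are illustrative assumptions; note that under the claim's mapping, brighter pixels produce taller black bars):

```python
def waveform_column(gray):
    """Expand one grayscale pixel (0-255) into a 32-pixel column, top to
    bottom, per step S82a: values 8p..8p+7 map to p, the bottom p pixels
    are black and the remaining 32 - p pixels are white."""
    p = gray // 8                      # 0..255 -> 0..31
    return ["white"] * (32 - p) + ["black"] * p
```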
12. The method for making a 3D stereograph according to claim 10, wherein: the specific steps of step S8 are:
s81b, converting the picture obtained in the step S7 into a gray scale image;
s82b, directly printing the gray-scale picture with a black-and-white printer to generate the black-and-white waveform 3D stereograph.
13. The method for making a 3D stereograph according to claim 10, wherein: the method further comprises the following step: s9, generating a mixed color and black-and-white 3D stereograph.
14. The method for making a 3D stereograph according to claim 13, wherein: the specific steps of step S9 are:
s91a, subtracting the gray information from the picture information of the color 3D stereograph obtained in step S7, printing the result at a specified specification and size with a color printer, and printing an alignment mark at a suitable position on the picture;
s92a, extracting the gray information of the color 3D stereograph obtained in step S7 to generate a picture, generating a black-and-white waveform 3D stereograph from it according to step S8, printing the black-and-white 3D picture at the same specification and size with a black-and-white printer, and printing an alignment mark at a suitable position on the picture;
s93a, placing the black-and-white 3D picture in front of the color 3D picture and aligning the two according to the marks to form a mixed 3D stereograph.
15. The method for making the 3D stereograph according to claim 1, wherein: the specific steps of step S9 are: s91b, subtracting the high-frequency information from the picture information of the color 3D stereograph obtained in step S7, printing the result at a specified specification and size with a color printer, and printing an alignment mark at a suitable position on the picture;
s92b, extracting the high-frequency information of the color 3D stereograph obtained in step S7 to generate a picture, generating a black-and-white waveform 3D stereograph from it according to step S8, printing the black-and-white 3D picture at the same specification and size with a black-and-white printer, and printing an alignment mark at a suitable position on the picture;
s93b, placing the black-and-white 3D picture in front of the color 3D picture and aligning the two according to the marks to form a mixed 3D stereograph.
16. A stereograph, characterized in that it comprises a 3D stereograph made by the method according to any one of claims 1 to 15 and a grating overlaid on the 3D stereograph.
CN202010804166.4A 2020-08-12 2020-08-12 Manufacturing method of 3D stereograph and product thereof Active CN112015357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010804166.4A CN112015357B (en) 2020-08-12 2020-08-12 Manufacturing method of 3D stereograph and product thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010804166.4A CN112015357B (en) 2020-08-12 2020-08-12 Manufacturing method of 3D stereograph and product thereof

Publications (2)

Publication Number Publication Date
CN112015357A true CN112015357A (en) 2020-12-01
CN112015357B CN112015357B (en) 2023-05-05

Family

ID=73504523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010804166.4A Active CN112015357B (en) 2020-08-12 2020-08-12 Manufacturing method of 3D stereograph and product thereof

Country Status (1)

Country Link
CN (1) CN112015357B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1444184A (en) * 2002-03-12 2003-09-24 镇毓科技股份有限公司 Stereoimage making system and its method
CN1641702A (en) * 2004-01-13 2005-07-20 邓兴峰 Method for designing stereo image from planar image
CN102693552A (en) * 2011-03-24 2012-09-26 雷欧尼斯(北京)信息技术有限公司 Method and apparatus for converting two-dimensional mode of digital content into three-dimensonal mode
CN102798980A (en) * 2012-08-16 2012-11-28 林文友 Manufacturing method of 3D (three-dimensional) plane drawing board
WO2013024847A1 (en) * 2011-08-18 2013-02-21 シャープ株式会社 Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program
US20130063419A1 (en) * 2011-09-08 2013-03-14 Kyoung Ho Lim Stereoscopic image display device and method of displaying stereoscopic image
CN204955890U (en) * 2015-08-26 2016-01-13 东莞市富立信影像科技有限公司 Combination stereograph
WO2016010246A1 (en) * 2014-07-16 2016-01-21 삼성전자주식회사 3d image display device and method
CN105323573A (en) * 2014-07-16 2016-02-10 北京三星通信技术研究有限公司 Three-dimensional image display device and three-dimensional image display method
CN105791803A (en) * 2016-03-16 2016-07-20 深圳创维-Rgb电子有限公司 Display method and system capable of converting two-dimensional image into multi-viewpoint image
US20180018829A1 (en) * 2016-07-13 2018-01-18 Samsung Electronics Co., Ltd. Method and apparatus for processing three-dimensional (3d) image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孟祥钊 (MENG Xiangzhao): "Image Synthesis in Stereoscopic Printing" (立体印刷中图像合成), 《广东印刷》 (Guangdong Printing) *

Also Published As

Publication number Publication date
CN112015357B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
EP2553517B1 (en) Method for producing a three-dimensional image on the basis of calculated image rotations
CN105627992B Rapid high-accuracy non-contact surveying and mapping method for ancient buildings
US9992473B2 (en) Digital multi-dimensional image photon platform system and methods of use
CN100384220C (en) Video camera rating data collecting method and its rating plate
CN100390606C (en) Stereoscopic image producing method and stereoscopic image display device
US7570260B2 (en) Tiled view-maps for autostereoscopic interdigitation
DE602004008001T2 (en) Image display method and image display system
JP6278323B2 (en) Manufacturing method of autostereoscopic display
US20090219383A1 (en) Image depth augmentation system and method
KR20150056568A (en) Pixel mapping, arranging, and imaging for round and square-based micro lens arrays to achieve full volume 3d and multi-directional motion
GB2227334A (en) Three-dimensional display device
WO2004021151A2 (en) Multi-dimensional image system for digital image input and output
KR20160068758A (en) Pixel mapping and printing for micro lens arrays to achieve dual-axis activation of images
US20180288241A1 (en) Digital multi-dimensional image photon platform system and methods of use
CN101796849A (en) Be used to make the method for parallax barrier screen aligning screen
JP2006058091A (en) Three-dimensional image measuring device and method
JP2008304225A (en) Painting surface measuring apparatus and its measuring method
US20110058254A1 (en) Integral photography plastic sheet by special print
CN105931177B (en) Image acquisition processing device and method under specific environment
CN112015357A (en) Method for making 3D stereograph and product thereof
WO1994027198A1 (en) A system and a method for the reproduction of three-dimensional objects
JP2004012221A (en) Surveyed two-dimensional figure forming method and system of ruins legacy or the like
US8760368B2 (en) Three-dimensional display device, image producing device and image display system
CN104375275A (en) Method and device capable of simultaneously displaying 2D and 3D dynamic images
CN1517786A Digital sampling picture of tetra-dimensional stereoscopic picture and digital code synthesis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant