KR20130005148A - Depth adjusting device and depth adjusting method - Google Patents

Depth adjusting device and depth adjusting method

Info

Publication number
KR20130005148A
KR20130005148A (publication) · KR1020110066552A (application)
Authority
KR
South Korea
Prior art keywords
image
pixel
depth value
stereoscopic
depth
Prior art date
Application number
KR1020110066552A
Other languages
Korean (ko)
Inventor
임원길
Original Assignee
(주) 인디에스피
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주) 인디에스피 filed Critical (주) 인디에스피
Priority to KR1020110066552A
Publication of KR20130005148A

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/128 - Adjusting depth or disparity
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/302 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

PURPOSE: A depth adjusting device and depth adjusting method are provided that automatically produce a 3D image with a desired depth value by converting a source image, without requiring new 2D or 3D photography. CONSTITUTION: A control unit (300) selects either the left-eye image or the right-eye image as the reference first image; the unselected image becomes the second image. By comparing the first and second images, the control unit calculates a first depth value for a specific pixel or object of the second image. The control unit then produces a third image having a second depth value obtained by adjusting the first depth value, and displays the new stereoscopic image formed by the first and third images on the display unit (200). [Reference numerals] (200) Display unit; (210) Depth value control menu; (300) Control unit

Description

Depth Adjustment Device and Depth Adjustment Method {DEPTH ADJUSTING DEVICE AND DEPTH ADJUSTING METHOD}

The present invention relates to a depth adjusting device and a depth adjusting method that adjust the depth value of a three-dimensional stereoscopic image so that its stereoscopic effect can be controlled.

3D stereoscopic imaging technology has applications in fields such as information and communication, broadcasting, medicine, education and training, the military, games, animation, virtual reality, CAD, and industrial technology. The three-dimensional sense perceived by a person arises from a combination of factors: the change in the thickness of the eye's lens depending on the position of the observed object, the difference in angle between each eye and the object, the difference in position and shape of the images seen by the left and right eyes, and the parallax caused by the movement of the object.

Among these, binocular disparity, which arises because the two human eyes (or two or more cameras) are positioned side by side in the horizontal direction, is the most important factor in the perception of depth. Because of binocular disparity, each retina or camera receives a slightly different image of the same object; when the difference between these two images is transmitted to the brain, the brain fuses the two pieces of information precisely and the viewer perceives the original 3D depth.

A 3D stereoscopic image is composed of a left-eye image recognized by the left eye and a right-eye image recognized by the right eye. A 3D display apparatus expresses the stereoscopic effect of an image by using the parallax between the left-eye image and the right-eye image.

However, because the stereoscopic effect is produced by the difference in depth values between the left-eye and right-eye images, the same depth-value difference can yield a very different stereoscopic effect depending on the display screen size or the density of objects. Once a stereoscopic image has been produced, a non-expert user cannot adjust the depth value of the left-eye or right-eye image, so the 3D live-action footage must be re-shot or a new 2D-to-3D converted image must be generated.

For example, when a stereoscopic image optimized for a large theater screen is displayed on a mobile phone screen of 4 inches or less, the stereoscopic effect is noticeably degraded; the user cannot immediately adjust the stereoscopic effect, and the content provider must separately produce a stereoscopic image for the small screen.

In addition, when a 3D image captured with two or more cameras proves deficient in stereoscopic effect on an editing or display device, the existing stereoscopic image must be discarded and a new one shot with an increased depth value, so recovering the stereoscopic effect is very difficult.

Therefore, a depth adjusting device or depth adjusting method is needed that lets even a non-expert control the stereoscopic effect immediately, simply by setting a new depth value on the editing device or display device.

SUMMARY OF THE INVENTION The present invention has been made in view of the above necessity, and an object of the present invention is to provide a depth adjusting device and a depth adjusting method in which the stereoscopic effect is increased or decreased by the simple operation of entering a new depth value for the left-eye or right-eye image constituting the stereoscopic image.

In one embodiment, the depth adjusting device of the present invention includes: a display unit for displaying a stereoscopic image consisting of a left-eye image and a right-eye image; and a control unit that selects one of the left-eye image and the right-eye image as a first image serving as the reference image and the other as a second image, calculates a first depth value of a specific pixel or specific object of the second image by comparing the first image and the second image, generates a third image having a second depth value obtained by adjusting the first depth value, and displays a new stereoscopic image composed of the first image and the third image on the display unit.

In one embodiment, the depth adjusting apparatus of the present invention displays a stereoscopic image consisting of a left-eye image and a right-eye image and, when a stereoscopic increase or decrease command is input, newly calculates the depth value of each pixel or each object for one of the left-eye and right-eye images, thereby displaying a new stereoscopic image corresponding to the command.

In one embodiment, the depth adjusting method of the present invention compares the first image and the second image to recognize each object and calculates a depth value for each object. When the stereoscopic effect of the stereoscopic image composed of the first and second images needs adjustment, a third image having the new depth value entered in the depth adjustment menu is newly generated, and a new stereoscopic effect is expressed by the new stereoscopic image composed of the first image and the third image.

In one embodiment, according to the depth adjusting method of the present invention, the first image and the second image, selected from the left-eye image and the right-eye image, are obtained respectively; a pixel with a depth value of 0, meaning it occupies the same position in the left-eye and right-eye images, is recognized as the reference point for depth adjustment; a third image is generated in which the depth value of the reference point is kept at 0 while the depth values of pixels nearer or farther than the reference point are adjusted; and a new stereoscopic image composed of the first image and the third image is displayed.

In one embodiment, according to the depth adjusting method of the present invention, the first and second images selected from the left-eye and right-eye images are obtained respectively; a pixel whose depth value lies within a threshold is recognized as the reference point for depth adjustment; a third image is generated in which the depth value of the reference point is maintained while the depth values of pixels nearer or farther than the reference point are adjusted; and a new stereoscopic image composed of the first image and the third image is displayed.

In one embodiment, according to the depth adjusting method of the present invention, when the per-pixel characteristic values of a first pixel of the first image and a second pixel of the second image substantially match, the two pixels are recognized as the same pixel, belonging to the same object but having different depth values. The depth-value difference of the second pixel relative to the first pixel is calculated, yielding a depth value for each object of the second image relative to the first image; a third image is generated in which the per-object depth values of the second image are changed; and a new stereoscopic image composed of the first image and the third image is displayed.

According to an embodiment of the present disclosure, if the characteristic value of a first pixel belonging to the first image and the characteristic value of a second pixel belonging to the second image substantially coincide, the first and second pixels are recognized as the same pixel, belonging to the same object and spaced apart in the horizontal direction by a first depth value. A third pixel is then calculated by moving the second pixel so that it has a second depth value equal to the first depth value multiplied by the stereoscopic increase rate, and the first pixel and the third pixel are combined to display a new stereoscopic image.

In one embodiment, according to the depth adjusting method of the present invention, when a stereoscopic increase command is received, an object located nearer than the reference object is moved to the right, toward the viewer, and an object located farther than the reference object is moved to the left, away from the viewer. When a stereoscopic reduction command is received, an object located nearer than the reference object is moved to the left, toward the reference object, and an object located farther than the reference object is moved to the right, also toward the reference object.

With the present invention, there is no need to shoot images with different depth values at the camera stage; given only a source image, regardless of whether it was shot in 2D or 3D, the apparatus and method automatically generate a 3D image having the desired depth value by converting the source image.

On the other hand, when the 3D effect looks unnatural, a new 3D image can be generated by the simple operation of increasing or decreasing the depth value in the depth adjustment menu. For example, if a left-eye image and a right-eye image suited to a large theater screen are played back unchanged on a portable smart device with a small screen, the stereoscopic effect is markedly degraded. In that case, the user runs the app on the portable smart device, selects the stereoscopic increase menu, and enters the desired increase rate; even under a non-expert's direction, the new stereoscopic image is simply reproduced.

FIG. 1 illustrates a virtual embodiment for comparison with the present invention, in which a plurality of cameras photographing a stereoscopic image obtain left-eye or right-eye images having different depth values in order to adjust the stereoscopic effect.
FIG. 2 illustrates, as an example, a left-eye image serving as the reference first image.
FIG. 3 illustrates a second image, a right-eye image in which a specific object is spaced apart by a first depth value compared with the first image of FIG. 2.
FIG. 4 illustrates a third image in which, compared with the second image of FIG. 3, a near object is adjusted to be nearer and a far object farther.
FIG. 5 is a block diagram of the depth adjusting device of the present invention.
FIG. 6 shows the depth value profile of each pixel.

FIG. 1 illustrates a virtual embodiment for comparison with the present invention, in which a plurality of cameras photographing a stereoscopic image obtain a plurality of right-eye images having different depth values relative to the left-eye image, in order to adjust the stereoscopic effect.

Referring to FIG. 1, there are a first camera 11 photographing a first image 21, a second camera 12 spaced apart from the first camera 11 by a first separation distance, and a third camera 13 spaced apart from the first camera 11 by a second separation distance. The second camera 12 captures a second image 22 whose per-object depth values differ from those of the first image 21, and the third camera 13 captures a third image 23 whose depth values differ again from those of the second image 22.

The first image 21 photographed by the first camera 11 is the left-eye image. The second image 22 photographed by the second camera 12 is a right-eye image in which each object is spaced apart by a first depth value relative to the first image 21, and the third image 23 photographed by the third camera 13 is another right-eye image in which each object is spaced apart by a second depth value relative to the first image 21.

The left-eye and right-eye images photographed by each camera are stored as a stereoscopic image after image correction operations such as noise removal and color correction. Among the stereoscopic images stored on the recording medium, to express a stereoscopic effect proportional to the first depth value, the first image 21 is shown as the left-eye image and the second image 22, which has the first depth-value difference, is shown as the right-eye image. To express a stereoscopic effect proportional to the second depth value, the first image 21 is shown as the left-eye image and the third image 23 as the right-eye image.

As described above, if a plurality of cameras are installed, a plurality of images with various depth-value differences are photographed, and the desired right-eye image is selected, a stereoscopic image editor can express various degrees of stereoscopic effect. However, in such an embodiment, many images must be photographed to cover the various depth values, which requires installing a large number of cameras at the shooting site, and the image editing work becomes large and complicated. Above all, if the depth value changes abruptly between consecutive scenes, the stereoscopic image can look unnatural, and the sudden change in depth can cause pain or dizziness.

Compared with the comparative embodiment described above, the present invention does not need to photograph images with different depth values at the camera stage; given only a source image, regardless of whether it was shot in 2D or 3D, it provides an apparatus and method for automatically generating a 3D image with the desired depth value by conversion. Moreover, when the 3D effect looks unnatural, a new 3D image can be generated by the simple operation of increasing or decreasing the depth value in the depth adjustment menu.

Hereinafter, embodiments according to the present invention will be described in more detail with reference to the accompanying drawings. The sizes and shapes of the components shown in the drawings may be exaggerated for clarity and convenience. Terms defined in consideration of the configuration and operation of the present invention may vary according to the intention or custom of the user or operator, and definitions of these terms should be based on the content of this specification.

FIGS. 2 to 4 show an embodiment of the present invention. FIG. 2 illustrates, as an example, a left-eye image serving as the reference first image 100a. FIG. 3 illustrates a second image 100b, a right-eye image in which a specific object is spaced apart by a first depth value L1 compared with the first image 100a of FIG. 2. FIG. 4 illustrates a third image 100c in which, compared with the second image 100b of FIG. 3, the near object 102 is adjusted to be nearer and the far object 103 farther. The stereoscopic effect of the image pair of FIGS. 2 and 4 is greater, in proportion to the depth-value increase, than that of the image pair of FIGS. 2 and 3.

According to the present invention, each object is distinguished and recognized within the left-eye or right-eye image. For example, when the left-eye image of FIG. 2 is set as the first image 100a, the car object, which is the reference object 101 of the first image 100a, the eagle object, which is the near object 102 recognized as lying in front of the car, and the horizon object, which is the far object 103 recognized as lying behind the car, are distinguished from one another.

Identification of each object is performed by the control unit and is achieved by manual teaching or edge detecting. Manual teaching relies on movement of a cursor or operation of an input device such as a mouse or keyboard. Edge detecting automatically calculates the boundary points of each object, or the polygon or closed curve connecting those boundary points, from the change trend of the per-pixel characteristic values.
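The edge-detecting idea can be sketched in a few lines. This is an illustrative toy only, since the patent does not specify an algorithm; the function name and the jump threshold are hypothetical choices:

```python
def detect_boundary_points(feature_row, jump_threshold=30):
    """Return column indices where the per-pixel characteristic value
    (e.g. luminance) changes abruptly between neighbours: candidate
    boundary points of an object along one scan line."""
    return [x for x in range(1, len(feature_row))
            if abs(feature_row[x] - feature_row[x - 1]) > jump_threshold]

# A bright object (value 200) against a dark background (value 10):
row = [10, 10, 200, 200, 200, 10, 10]
print(detect_boundary_points(row))  # boundaries at columns 2 and 5
```

Connecting such per-line boundary points across rows would give the polygon or closed curve the text describes.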

As an example, if a mouse click or keyboard input occurs after the cursor has been moved to the boundary points 110a, 110b, 110c, 110d, 110e, 110f, 110g, and 110h of the horizon object, the pixel coordinates of those boundary points are input to the control unit, and the control unit recognizes the specific object (here, the horizon object) through recognition of the boundary points.

In an embodiment, the per-pixel characteristic value may include at least one of a contrast value, an RGB value, a saturation value, and a luminance value for each pixel.
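For instance, a luminance characteristic value can be derived from the per-pixel RGB value. The Rec. 601 weighting below is one common convention, not something the patent specifies:

```python
def luminance(r, g, b):
    """Per-pixel luminance characteristic value computed from an RGB
    value using Rec. 601 weights; contrast or saturation could serve
    as the characteristic value instead."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luminance(255, 255, 255))  # about 255.0 for pure white
```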

In one embodiment, the control unit reads the per-pixel characteristic values of the first image 100a, automatically recognizes positions where the characteristic value changes abruptly as boundary points of a specific object, and, based on the recognition of those boundary points, distinguishes and recognizes each object, including the reference object 101, the near object 102, and the far object 103.

Referring to FIG. 2, the control unit calculates the per-pixel characteristic values in the first image 100a and, by observing their changes, recognizes the boundary points of the horizon object (the far object 103), the car object (the reference object 101), and the eagle object (the near object 102).

The control unit recognizes the inner region of the polygon or closed curve connecting a set of boundary points as an independent object. For example, the inner region of the polygon or closed curve connecting the boundary points of the reference object 101 is recognized as the car object, and the inner region of the polygon or closed curve connecting the boundary points of the horizon object is recognized as the horizon object, the far object 103.

Next, the control unit compares the first image 100a of FIG. 2 with the second image 100b of FIG. 3 and calculates a depth value for each object. That is, when the stereoscopic effect of the existing stereoscopic image composed of the first image 100a and the second image 100b needs adjustment, the control unit generates a new third image 100c and expresses a new stereoscopic effect with the new stereoscopic image composed of the first image 100a and the third image 100c. The third image 100c is regenerated to have the new depth value selected in the depth adjustment menu.

Referring to FIGS. 2 to 4, there are shown the reference object 101, to which the reference point 120 of depth adjustment belongs, the near object 102, recognized as being located nearer than the reference point 120 or the reference object 101, and the far object 103, recognized as being located farther than the reference point 120 or the reference object 101. The reference object 101 is the car object, the near object 102 is the eagle object, and the far object 103 is the horizon object.

The reference point 120 for depth adjustment is preferably a pixel that causes no discomfort when the left-eye and right-eye images are fused in the brain, located where the viewer's gaze is most concentrated.

In one embodiment, the reference point 120 for depth adjustment is preferably selected as a pixel having a depth value of 0. A pixel with a depth value of 0 occupies the same position in the left-eye and right-eye images. Alternatively, a threshold such as a depth value of 3 may be set, and any pixel whose depth value lies within the threshold may be selected as the reference point 120. This avoids eye fatigue and allows the stereoscopic adjustment to proceed smoothly.
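A minimal sketch of this selection rule, assuming a per-pixel depth (disparity) list is already available; the function name and data layout are illustrative only:

```python
def pick_reference_pixels(depth_by_pixel, threshold=3):
    """Pixels whose disparity magnitude is within the threshold
    (|depth| <= 3 here) qualify as reference points 120: they sit at
    (nearly) the same position in the left-eye and right-eye images."""
    return [i for i, d in enumerate(depth_by_pixel) if abs(d) <= threshold]

print(pick_reference_pixels([0, 2, 10, -3, -10]))  # pixels 0, 1 and 3
```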

The control unit calculates the per-pixel characteristic values in the first image 100a of FIG. 2 and the second image 100b of FIG. 3. When a first pixel 131 of the first image 100a and a second pixel 132 of the second image 100b have substantially the same characteristic value, the control unit recognizes them as the same pixel, belonging to the same object and differing only in depth value. The control unit calculates the depth-value difference of the second pixel 132 relative to the first pixel 131, and thereby calculates the depth value of each object of the second image 100b relative to the first image 100a.
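The per-object depth value can be understood as the horizontal offset at which matching characteristic values are found. The sketch below searches one scan line outward from the original column; it is a toy stand-in for real stereo matching, and all names are hypothetical:

```python
def first_depth_value(row_first, row_second, x, tol=0):
    """Return the horizontal offset L1 of the pixel at column x of the
    first image, found by locating the substantially matching
    characteristic value in the second image (nearest offset wins).
    A positive offset means shifted right, i.e. nearer the viewer."""
    target = row_first[x]
    for dx in sorted(range(-len(row_second) + 1, len(row_second)), key=abs):
        x2 = x + dx
        if 0 <= x2 < len(row_second) and abs(row_second[x2] - target) <= tol:
            return dx
    return None  # no match: the pixel may be occluded in the second image

# The value 90 sits at column 2 on the left and column 4 on the right:
left = [10, 10, 90, 10, 10, 10]
right = [10, 10, 10, 10, 90, 10]
print(first_depth_value(left, right, 2))  # offset +2
```

A production matcher would compare windows of pixels rather than single values, since many pixels share a characteristic value.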

Referring to FIG. 5, when the depth adjustment menu is selected through the display unit of the depth adjusting device and a 10% stereoscopic increase command is chosen, the control unit increases the depth value of each object calculated in the second image 100b by 10% to obtain the third image 100c of FIG. 4. When the stereoscopic image composed of the first image 100a and the third image 100c is displayed on the display unit, the stereoscopic effect is 10% greater than that of the stereoscopic image composed of the first image 100a and the second image 100b.

This is explained below with specific values.

The controller distinguishes and recognizes each object in the first image 100a or the second image 100b by a method such as manual teaching or edge detection.

The control unit calculates the per-pixel characteristic values in the first image 100a and the second image 100b. For example, when the RGB value of a first pixel 131, an arbitrary pixel belonging to the first image 100a, substantially matches the RGB value of a second pixel 132, an arbitrary pixel belonging to the second image 100b, the first pixel 131 and the second pixel 132 are recognized as the same pixel, belonging to the same object (for example, the eagle object) and spaced apart in the horizontal direction by the first depth value L1. Suppose, for example, that the first depth value L1 is +10. Then the first pixel 131 of the first image 100a and the second pixel 132 of the second image 100b are corresponding pixels: they belong to the same object and have the same per-pixel characteristic value, but are spaced apart horizontally by +10, the first depth value L1.

When a 10% depth increase command is input through the depth adjustment menu, a third pixel 133 is calculated having a depth value of +11, the second depth value L2 obtained by increasing the first depth value L1 by 10%, and the specific object is moved in the horizontal direction accordingly.

In the case of the eagle object, located nearer than the reference object 101, a pixel having a depth value of +10 in the second image 100b is moved to the right so as to have a depth value of +11 in the third image 100c; as a result, the eagle object moves to the right and appears nearer.

In the case of the horizon object, located farther than the reference object 101, a pixel having a depth value of -10 in the second image 100b is moved horizontally so as to have a depth value of -11 in the third image 100c, and the horizon object moves to the left and appears farther away.

FIG. 6 shows the depth value profile of each pixel. The vertical axis represents pixel numbers and the horizontal axis represents depth values. Reference numeral C1 denotes the depth-value distribution of the first image 100a, the reference image; C2 denotes that of the second image 100b; and C3 denotes that of the third image 100c. The first depth value L1 of a specific pixel or specific object in the second image 100b is changed to the second depth value L2 in the third image 100c.

When a depth-value increase/decrease rate (for example, a 10% increase command) is input through the depth adjustment menu, a depth value of +10 in the second image 100b is changed to +11 in the third image 100c, and a depth value of -10 in the second image 100b is changed to -11 in the third image 100c.

That is, when a specific pixel or specific object of the second image 100b has the first depth value L1 and a depth-value increase/decrease rate is input through the depth adjustment menu, the second depth value L2, obtained by scaling the absolute value of the first depth value L1 in proportion to the rate, becomes the depth value of that pixel or object in the third image 100c.
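This rule reduces to scaling each depth value's magnitude while preserving its sign, so near objects stay near and far objects stay far. A one-line sketch with hypothetical names:

```python
def adjusted_depth(l1, rate_percent):
    """Second depth value L2 from first depth value L1 and the
    increase/decrease rate: the absolute value is scaled in
    proportion, and the sign (near = +, far = -) is preserved."""
    return l1 * (1 + rate_percent / 100.0)

print(adjusted_depth(+10, 10))   # +10 grows to +11: near object moves nearer
print(adjusted_depth(-10, 10))   # -10 grows to -11: far object moves farther
print(adjusted_depth(+10, -10))  # a reduction pulls +10 to +9, toward the reference
```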

Therefore, when a stereoscopic increase command is input through the depth adjustment menu, the third image 100c is regenerated so that the near object 102, located nearer than the reference object 101 in the first image 100a, appears even nearer, and the far object 103, located farther than the reference object 101, appears even farther.

When a stereoscopic reduction command is input through the depth adjustment menu, the third image 100c is regenerated so that the near object 102 and the far object 103 both appear closer to the reference object 101.

Although embodiments according to the present invention have been described above, they are merely exemplary, and those skilled in the art will understand that various modifications and equivalent embodiments are possible. Accordingly, the true scope of the present invention should be determined by the following claims.

100a ... first image 100b ... second image
100c ... third image 101 ... reference object
102 ... near object 103 ... far object
110a, 110b, 110c, 110d, 110e, 110f, 110g, 110h ... boundary points
120 ... reference point 131 ... first pixel
132 ... second pixel 133 ... third pixel
200 ... display unit 210 ... depth adjustment menu
300 ... control unit 400 ... depth adjusting device
L1 ... first depth value L2 ... second depth value

Claims (12)

A depth adjusting device comprising: a display unit configured to display a stereoscopic image including a left-eye image and a right-eye image; and
a control unit that selects one of the left-eye image and the right-eye image as a first image serving as the reference image and the other as a second image, calculates a first depth value of a specific pixel or specific object of the second image by comparing the first image and the second image, generates a third image having a second depth value obtained by adjusting the first depth value, and displays a new stereoscopic image composed of the first image and the third image on the display unit.
The device of claim 1, wherein,
when a mouse click or keyboard input occurs after the cursor has been moved to a boundary point of the specific object, pixel coordinates of the boundary point are input to the control unit, and the control unit distinguishes and recognizes the specific object through recognition of the boundary point and generates the third image by adjusting the depth value for each specific object.
The device of claim 1, wherein
the control unit
distinguishes and recognizes the specific object by edge detection, which automatically recognizes the boundary points of each object, or the polygon or closed curve connecting them, from the change trend of the per-pixel characteristic values, and generates the third image by adjusting the depth value for each specific object.
The device of claim 1, wherein
the control unit reads the per-pixel characteristic values of the first image, automatically recognizes positions where the characteristic value changes abruptly as boundary points of the specific object, and, through recognition of the boundary points, distinguishes and recognizes each object, including the reference object, the near object, and the far object.
The method of claim 1,
wherein the control unit:
selects a reference point for depth value adjustment in the first image or the second image;
distinguishes and recognizes a reference object to which the reference point belongs, a near object recognized as located in front of the reference point or the reference object, and a far object recognized as located behind the reference point or the reference object;
when a 3D increase command is received, generates the third image by moving the near object further forward and the far object further backward; and
when a 3D reduction command is received, generates the third image by moving the near object and the far object closer to the reference object.
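The increase/reduce behavior can be sketched as a scaling of per-object depth relative to the reference plane. This is an illustrative sketch, not the patent's method: it assumes depth is stored as a signed value with the reference object at 0 (positive = near, negative = far), and the function name is hypothetical.

```python
def adjust_depth(object_depths, scale):
    """Scale each object's signed depth relative to the reference plane
    (depth 0). scale > 1.0 models a 3D increase command: near objects
    (positive depth) move further forward and far objects (negative
    depth) move further back. scale < 1.0 models a 3D reduction command:
    both move closer to the reference object."""
    return {name: depth * scale for name, depth in object_depths.items()}
```

Note that the reference object's depth of 0 is unchanged by any scale, matching the claim's requirement that the reference point keep its depth value.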
A stereoscopic adjustment method comprising:
displaying a stereoscopic image consisting of a left-eye image and a right-eye image; and
when a 3D reduction command is input, calculating a new depth value for each pixel, or for each object, of one of the left-eye image and the right-eye image, and generating a new image corresponding to the reduction command.
A stereoscopic adjustment method comprising:
distinguishing and recognizing each object by comparing a first image and a second image;
calculating a depth value for each object; and
when the stereoscopic effect of a stereoscopic image comprising the first image and the second image needs to be adjusted, newly generating a third image having a new depth value input from a depth value adjustment menu, and expressing a new stereoscopic effect by a new stereoscopic image consisting of the first image and the third image.
A stereoscopic adjustment method comprising:
obtaining a first image and a second image respectively selected from a left-eye image and a right-eye image;
recognizing, as a reference point for depth value adjustment, a pixel whose depth value is 0, meaning it occupies the same position in the left-eye image and the right-eye image;
maintaining the depth value of the reference point at 0 and generating a third image in which the depth values of pixels nearer or farther than the reference point are adjusted; and
displaying a new stereoscopic image composed of the first image and the third image.
A stereoscopic adjustment method comprising:
obtaining a first image and a second image respectively selected from a left-eye image and a right-eye image;
recognizing a pixel whose depth value is within a threshold as a reference point for depth value adjustment;
maintaining the depth value of the reference point as it is and generating a third image in which the depth values of pixels nearer or farther than the reference point are adjusted; and
displaying a new stereoscopic image composed of the first image and the third image.
A stereoscopic adjustment method comprising:
computing a characteristic value for each pixel of a first image and a second image;
recognizing a first pixel of the first image and a second pixel of the second image whose characteristic values substantially coincide as the same pixel belonging to the same object, differing only in depth value;
calculating the difference in depth value of the second pixel with respect to the first pixel, thereby obtaining a depth value for each object of the second image relative to the first image;
generating a third image in which the depth value of each object of the second image is varied; and
displaying a new stereoscopic image composed of the first image and the third image.
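The step of matching pixels whose characteristic values substantially coincide can be sketched as a simple row-wise disparity search. This is illustrative only (real stereo matchers use block matching and cost aggregation); the function name and the intensity-based characteristic value are assumptions, not from the patent.

```python
import numpy as np

def row_disparity(first_row, second_row, max_d=16):
    """For each pixel of a first-image row, find the horizontal offset d
    at which the second image's characteristic value matches best; that
    offset is the pixel's depth (disparity) value."""
    w = len(first_row)
    disp = np.zeros(w, dtype=int)
    for x in range(w):
        best_cost, best_d = float("inf"), 0
        for d in range(max_d):
            if x - d < 0:
                break
            # Characteristic-value difference between candidate pixels
            cost = abs(float(first_row[x]) - float(second_row[x - d]))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

Grouping adjacent pixels with similar disparity then yields the per-object depth values the claim refers to.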
A stereoscopic adjustment method in which, when the characteristic value of a first pixel belonging to a first image and the characteristic value of a second pixel belonging to a second image are substantially identical, the first pixel and the second pixel are treated as the same pixel belonging to the same object, horizontally spaced apart by a first depth value; a third pixel is calculated by moving the second pixel so that it has a second depth value equal to the first depth value multiplied by a 3D increase rate; and the new stereoscopic image is displayed by combining the first pixel and the third pixel.
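Moving each second-image pixel so that its depth value is multiplied by a 3D increase rate amounts to a horizontal re-projection. A minimal sketch under stated assumptions (hypothetical function name, nearest-pixel rounding, no hole filling or occlusion handling):

```python
import numpy as np

def generate_third_image(second_image, disparity, rate, fill=0):
    """Shift each pixel of the second image horizontally so that its
    disparity (depth value) becomes disparity * rate. The extra shift
    per pixel is (rate - 1) * disparity; uncovered gaps stay `fill`."""
    h, w = second_image.shape
    third = np.full_like(second_image, fill)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(disparity[y, x] * (rate - 1.0)))
            if 0 <= nx < w:
                third[y, nx] = second_image[y, x]
    return third
```

With rate = 1.0 the third image equals the second image; rate > 1.0 exaggerates the stereo effect and rate < 1.0 reduces it.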
A stereoscopic adjustment method in which, when a 3D increase command is received, an object located nearer than the reference object is moved to the right, the nearer direction, and an object located farther than the reference object is moved to the left, the farther direction; and
when a 3D reduction command is received, an object located nearer than the reference object is moved to the left, toward the reference object, and an object located farther than the reference object is moved to the right, toward the reference object.
KR1020110066552A 2011-07-05 2011-07-05 Depth adjusting device and depth adjusting method KR20130005148A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110066552A KR20130005148A (en) 2011-07-05 2011-07-05 Depth adjusting device and depth adjusting method

Publications (1)

Publication Number Publication Date
KR20130005148A (en) 2013-01-15

Family

ID=47836548

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110066552A KR20130005148A (en) 2011-07-05 2011-07-05 Depth adjusting device and depth adjusting method

Country Status (1)

Country Link
KR (1) KR20130005148A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101509993B1 (en) * 2013-11-29 2015-04-16 영산대학교산학협력단 Method of estimating perceived depth according to difference of color
WO2015080322A1 (en) * 2013-11-29 2015-06-04 영산대학교 산학협력단 Method for estimating perceived depth through differences in color
KR20150141019A (en) * 2014-06-09 2015-12-17 삼성전자주식회사 Apparatas and method for using a depth information in an electronic device

Similar Documents

Publication Publication Date Title
EP2701390B1 (en) Apparatus for adjusting displayed picture, display apparatus and display method
US6175379B1 (en) Stereoscopic CG image generating apparatus and stereoscopic TV apparatus
TWI523488B (en) A method of processing parallax information comprised in a signal
TWI558164B (en) Method and apparatus for generating a signal for a display
US9307224B2 (en) GUI providing method, and display apparatus and 3D image providing system using the same
US8514225B2 (en) Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US9224232B2 (en) Stereoscopic image generation device, stereoscopic image display device, stereoscopic image adjustment method, program for causing computer to execute stereoscopic image adjustment method, and recording medium on which the program is recorded
US20110228051A1 (en) Stereoscopic Viewing Comfort Through Gaze Estimation
US20140028810A1 (en) Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
CN105894567B (en) Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
US9754379B2 (en) Method and system for determining parameters of an off-axis virtual camera
TWI504232B (en) Apparatus for rendering 3d images
WO2017141511A1 (en) Information processing apparatus, information processing system, information processing method, and program
US20120075291A1 (en) Display apparatus and method for processing image applied to the same
CN110915206A (en) Systems, methods, and software for generating a virtual three-dimensional image that appears to be projected in front of or above an electronic display
KR101270025B1 (en) Stereo Camera Appratus and Vergence Control Method thereof
US9082210B2 (en) Method and apparatus for adjusting image depth
US20120121163A1 (en) 3d display apparatus and method for extracting depth of 3d image thereof
US9918067B2 (en) Modifying fusion offset of current, next, second next sequential frames
TW201733351A (en) Three-dimensional auto-focusing method and the system thereof
Tseng et al. Automatically optimizing stereo camera system based on 3D cinematography principles
CN104168469A (en) Stereo preview apparatus and stereo preview method
KR20130005148A (en) Depth adjusting device and depth adjusting method
KR101121979B1 (en) Method and device for stereoscopic image conversion
Mangiat et al. Disparity remapping for handheld 3D video communications

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application