CN1945629A - Method for calculating image object size constance - Google Patents


Info

Publication number: CN1945629A
Application number: CN 200610113910
Authority: CN (China)
Prior art keywords: image, depth, straight line, size, calculating
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 须德, 吴爱民, 郎丛妍, 李兵
Assignee (current and original): Beijing Jiaotong University
Application filed by Beijing Jiaotong University
Priority: CN 200610113910, filed 2006-10-20
Publication: CN1945629A, published 2007-04-11

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for calculating the size constancy of image objects, belonging to the technical fields of computer vision, image understanding and pattern recognition. Perceptual constancy is the most important and most distinctive aspect of human perception of the world, and size constancy is one of its most important forms. Because the method fully simulates the size-constancy mechanism of the human visual system, it enables a computer to perceive, as a human does, the size constancy of each object in a single 2D image. The steps are: compute the median line of the image with a sky-detection technique; on the ground portion of the image, compute the parameters of the straight line, from the bottom edge of the image to the median line, along which depth changes fastest; compute the relative perceived depth of the midpoint of each image object; and finally compute the constancy size of each object.

Description

A method for calculating image object size constancy
Technical field
The present invention relates to a method for calculating image object size constancy, belonging to the technical fields of computer vision, image understanding and pattern recognition.
Background technology
According to geometrical optics, the retinal image of an object differs from the object itself: it changes continuously with the observer and the environment, varying almost all the time. Yet external objects look the same to us, with stable shape, size, color, lightness and spatial relations. For example, as the observer moves relative to a desk or the illumination changes, the retinal image of the desk changes greatly, but our perception of it remains essentially unchanged. This phenomenon is called perceptual constancy. Studies in visual psychology show that although the size of an object's retinal image keeps changing, its perceived size remains essentially constant; this phenomenon is called size constancy.
Perceptual constancy is the most important and most striking aspect of human perception of the world. It allows the human visual system to go beyond the incomplete, easily distorted, blurred, two-dimensional retinal image and build a rich, stable, generally correct, three-dimensional representation of the objective world, and it is of particular importance to the recognition of image objects. Because any object in the objective world can produce infinitely many two-dimensional image projections as the imaging viewpoint changes, recognizing the corresponding real-world object from a two-dimensional image is a one-to-many mathematical problem, and one of the hard problems of computer vision. The appeal of constancy theory is that, in the face of continuously varying stimuli, an object can be perceived stably and uniquely. Constancy theory therefore helps in particular to solve the problem of viewpoint invariance in object recognition.
Perceptual constancy mainly includes size constancy, shape constancy, brightness constancy and color constancy. Size is an important attribute for characterizing an object. In daily life, for example, a short person is more likely to be perceived as a child, and a tall person as an adult. Correctly perceiving the size of an object also has important biological significance: for many carnivores, a tiger cub may be a tasty meal, while a full-grown tiger is a killer. Automatically computing the constancy size of image objects is therefore undoubtedly crucial for image object recognition; this is precisely the significance and application of image object size constancy calculation.
Although visual psychology long ago revealed the computational theory of size constancy in the human visual system, for many years computer scientists did not apply this result to computer vision problems, so computers have never acquired the ability to perceive the size constancy of image objects. The present invention proposes a method for calculating image object size constancy, attempting to let a computer, like a human, perceive the relative size constancy of each object in a single two-dimensional image.
Summary of the invention
The object of the invention is achieved through the following technical solution. The method for calculating image object size constancy comprises the following steps:
(1) compute the median line of the image with a sky-detection technique;
(2) on the ground portion of the image, compute the straight line along which depth changes fastest, from the bottom edge of the image to the median line, and obtain its slope;
(3) compute the relative perceived depth of the midpoint of each image object;
(4) compute the constancy size of each image object.
In step (1) of the above method, the sky portion of an image (including ceilings) has good color consistency and a simple layout; exploiting this property, the median line L1 is computed with a sky-detection technique, separating the ground portion from the whole image. In step (2), the line L2 along which ground depth changes fastest is computed from two depth cues, linear perspective and texture gradient, and two methods for fusing the depth cues are proposed. In step (3), the intersection V(Vx, Vy) of L2 and L1 is the point of maximum perceived depth in the image, called the vanishing point; the intersection U(Ux, Uy) of L2 and the bottom edge of the image ground is the point of minimum perceived depth, called the near point U. The perceived depth of the near point U is denoted DU; its value equals the distance from the camera to the nearest imaged point of the scene, divided by the camera imaging coefficient B. The perceived depth over the ground portion varies as follows: from the near point U to the image median line, along the depth-fastest line L2, the depth value increases linearly, reaching its maximum at the vanishing point V, and all points on a straight line perpendicular to L2 have the same depth (an iso-depth line). For example, if a line L3 passes through a point P(m, n) and is perpendicular to L2, then all points on L3 have the same perceived depth as P, so the relative perceived depth of P can be expressed by the distance DU-L3 from the near point U to L3. In this way the relative perceived depth of every point on the image ground can be computed. In step (4), a formula for the perceived size of an image object is proposed: S = B × A × D, where S is the perceived size of the object, A is the imaging visual angle of the object, D is the perceived depth (also called perceived distance) of the object, i.e. the distance from the camera at which the human visual system perceives the object to lie at imaging time, and B is an imaging coefficient related to the eye (camera).
The technical effect of the invention is that the method fully simulates the mechanism by which the human visual system realizes size constancy. Another feature of the invention is that it strives to build the complex constancy computation model with simple mathematics, which is likewise consistent with the mechanism of the human visual system.
Description of drawings
Fig. 1 is a schematic flowchart of the method for calculating image object size constancy of the present invention;
Fig. 2 is a schematic diagram of the image-object perceived-depth calculation of the present invention.
Embodiment
The present invention is further described below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the input of the method for calculating image object size constancy is a single upright two-dimensional image; the output is the relative perceived size of each object in the image along one dimension in a specified direction (generally horizontal or vertical); the camera model is the pinhole imaging model. An upright image is one in which the image sky lies above the image median line and the image ground lies below it.
According to size-constancy theory, perceiving the relative size constancy of each object in the image requires correctly computing the imaging visual angle A and the relative perceived depth D of each image object. The imaging visual angle A can be represented by the one-dimensional size of the object in the image, i.e. by the number of pixels it covers along a given direction. For an image object with a given contour, a counting function completes this task easily. We assume the contours of all image objects are given manually; in our computations, the parameters of all image objects were obtained interactively with the Ginput(n) and Imcrop(I) functions provided by the MATLAB environment.
The remaining task is to compute the relative perceived depth D of each image object. Starting from the conclusions of visual psychology about human depth cues, we propose a simple and effective solution, whose principle is shown in Fig. 2. First, using the two depth cues of object height and aerial perspective, the median line L1 is computed with a sky-detection technique, separating the ground portion from the whole image. Second, on the ground portion, using the two depth cues of linear perspective and texture gradient, the line L2 along which depth changes fastest, from the bottom edge of the image to the median line, is computed. The intersection V(Vx, Vy) of L2 and L1 is the point of maximum perceived depth in the image, i.e. the vanishing point; the intersection U(Ux, Uy) of L2 and the bottom edge of the image ground is the point of minimum perceived depth, called the near point. Psychological studies show that, within a certain range, the human perception of image depth varies linearly: from the near point U to the image median line, along the depth-fastest line L2, the depth value increases linearly until it reaches its maximum at the vanishing point V. Finally, the relative perceived depth map of the image ground is computed. All points on a line perpendicular to L2 have the same depth: if a line L3 passes through a point P(m, n) and is perpendicular to L2, then all points on L3 have the same perceived depth as P, so the relative perceived depth of P can be expressed by the distance DU-L3 from the near point U to L3. In this way the relative perceived depth of every ground point can be computed automatically, yielding a dense relative perceived depth map.
Having obtained the imaging visual angle A and the relative perceived depth D of each object, the computer can carry out the relative size constancy calculation for image objects with the following formula:
S=B×A×D (1)
S is the perceived size of the object, A is the imaging visual angle of the object, and D is the perceived depth (also called perceived distance) of the object, i.e. the distance from the camera at which the human visual system perceives the object to lie at imaging time. B is an imaging coefficient related to the eye (camera); for a single image, the value of B is the same for all objects. The imaging visual angle A of an object can be represented by its one-dimensional size in the image.
The computation process of relative size constancy is shown in Fig. 1; its key steps are elaborated below.
1. Computing the median line L1
An outdoor depth image generally contains both a ground portion below and a sky portion above, and an indoor depth image likewise generally contains a floor portion below and a ceiling portion above. We refer to the sky portion of outdoor images and the ceiling portion of indoor images as the image sky, to the ground and floor portions as the image ground, and to the boundary between image sky and image ground as the median line. An image may also have no median line, in which case it contains only a ground portion.
The sky portion of an image (including ceilings) has good color consistency and a simple layout, so the sky can be separated with image segmentation techniques. Since the hue (H) component describes color in the way closest to human vision, the RGB space is first converted to HSI space. Because all images to be processed are upright, the upper half of the image must contain a sky region, so only the upper half is used to compute a one-dimensional color histogram. The H value corresponding to the histogram bin with the maximum count is the H value of the sky, denoted HSKY. To increase computing speed and avoid isolated ground pixels being mistaken for sky, the image is divided into 2×2 blocks, each taking as its H value the mean of its 4 pixels. Let W be any image block with H value HW; if |HSKY - HW| <= TI × HSKY, then block W belongs to the sky, where TI is a similarity threshold with an experimental value of 0.05. Sky classification is performed over the entire image; if the computed sky area is less than 5% of the image, we consider the image to contain no sky. The lowest sky block in each image column marks a point of the sky/ground boundary, and fitting these boundary points to a horizontal line by least squares yields the median line L1.
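A minimal sketch of this sky-detection step is given below, in Python with NumPy, purely for illustration; it is not the patent's implementation. The HSI hue computation, the 2×2 block averaging, the threshold TI = 0.05 and the 5% area test follow the text, while the function name, the 64-bin histogram and the boundary handling are our own assumptions.

```python
import numpy as np

def find_median_line(rgb, t_sim=0.05, min_sky_frac=0.05, bins=64):
    """Estimate the median line L1 of an upright RGB image (H x W x 3, uint8)."""
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Hue of the HSI model, in degrees (standard geometric conversion).
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - h, h)
    # Histogram only the upper half: an upright image keeps its sky there.
    hist, edges = np.histogram(h[: h.shape[0] // 2], bins=bins, range=(0, 360))
    k = int(np.argmax(hist))
    h_sky = 0.5 * (edges[k] + edges[k + 1])          # H value of the sky, HSKY
    # Mean hue of 2x2 blocks suppresses isolated false positives.
    H2, W2 = (h.shape[0] // 2) * 2, (h.shape[1] // 2) * 2
    blocks = h[:H2, :W2].reshape(H2 // 2, 2, W2 // 2, 2).mean(axis=(1, 3))
    sky = np.abs(h_sky - blocks) <= t_sim * h_sky    # |HSKY - HW| <= TI * HSKY
    if sky.mean() < min_sky_frac:
        return None                                  # under 5% sky: no sky
    # Lowest sky block per column marks the boundary; a least-squares fit to
    # a horizontal line is simply the mean boundary row.
    has_sky = sky.any(axis=0)
    lowest = sky.shape[0] - 1 - np.argmax(sky[::-1, :], axis=0)
    return 2.0 * lowest[has_sky].mean()              # median line row (pixels)
```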
When the image contains no sky, the median line generally degenerates to the top edge or one of the two side edges of the image; because all images are upright, the median line cannot appear at the bottom edge. In this case, the position of the median line is determined by the position of the vanishing point and the depth-fastest line L2. When the image contains sky, the image ground is the region bounded by the median line, the bottom edge and the two side edges; when it contains no sky, the image ground is the entire image.
2. Computing the line L2 along which ground depth changes fastest
From psychology it is known that the two depth cues of linear perspective and texture gradient can indicate the direction in which ground depth changes fastest. These two cues are effective only on the image ground, so the image support for computing the line L2 is restricted to the ground portion. Using the linear perspective cue alone, one line from the bottom edge of the image to the median line along which depth changes fastest can be computed; we call it the linear perspective line LP. Using the texture gradient cue alone, another such line can be computed; we call it the texture gradient line LT. The methods for computing LP and LT are introduced later; for now, assume both lines have been obtained. In general the two lines do not coincide, so a conflict inevitably arises when they jointly indicate the direction of fastest ground depth change. Since both lines are produced by least-squares fitting, it may be assumed that the larger a line's relative fitting error, the less accurate the depth-fastest direction it indicates. One way to resolve the conflict is therefore to combine the two lines linearly, weighted by their respective relative fitting errors, to solve for the depth-fastest line L2: the larger a line's relative fitting error, the smaller its combination weight. Concretely:
Let the relative fitting errors of the depth-fastest line L2, the linear perspective line LP and the texture gradient line LT be δ2, δP and δT respectively, and let the angles corresponding to their slopes be θ2, θP and θT, each taking values in [-π/2, π/2]. Then
θ2 = θP × δT/(δT + δP) + θT × δP/(δT + δP) (2)
δ2 = δP × δT/(δT + δP) + δT × δP/(δT + δP) (3)
Thus the line L2 is uniquely determined by its slope angle θ2 and the intersection point of LP and LT; the fusion is illustrated in the sketch below. The methods for computing LP and LT are introduced next.
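The error-weighted fusion of formulas (2) and (3) amounts to a few lines. The following sketch is illustrative, with hypothetical names; formula (3) is used in the reconstructed form given above.

```python
def fuse_depth_lines(theta_p, delta_p, theta_t, delta_t):
    """Combine the perspective and texture-gradient line angles; the line
    with the larger relative fitting error receives the smaller weight."""
    w_p = delta_t / (delta_t + delta_p)       # weight of the perspective line
    w_t = delta_p / (delta_t + delta_p)       # weight of the texture line
    theta2 = w_p * theta_p + w_t * theta_t    # formula (2)
    delta2 = w_p * delta_p + w_t * delta_t    # formula (3)
    return theta2, delta2

# E.g. theta_p = 0.30 rad with error 0.1, theta_t = 0.50 rad with error 0.3:
# the better-fitted perspective line dominates, giving theta2 = 0.35 rad.
print(fuse_depth_lines(0.30, 0.1, 0.50, 0.3))
```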
2.1 Solving for the linear perspective line LP
Parallel lines that extend into the distance in the objective world appear closer and closer together in the image plane, and may even converge. Such a group of lines is called converging lines, and their point of convergence is called the vanishing point. In an image, parallel lines indicate a flat surface, while converging lines indicate a surface extending into the distance. For outdoor images the linear perspective effect generally appears only on the image ground, but for indoor images it acts on both the ground and the ceiling portions. The depth rule of linear perspective is: the closer an object in the image is to the vanishing point, the greater its perceived depth, and vice versa. Moreover, the center line of the converging lines also indicates the direction in which perceived image depth changes fastest.
For each image, the Hough transform is first used to find the image point sets corresponding to the 10 longest straight lines; these point sets are then fitted to straight lines by least squares, yielding the equation, slope angle θ and relative fitting error δ of each line. Following the idea of formulas (2) and (3), linearly combining these 10 lines, weighted by their respective relative fitting errors, readily gives the slope angle θP, relative fitting error δP and equation of the linear perspective line LP. One plausible realization is sketched below.
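The following Python sketch assumes OpenCV. The text specifies only the Hough transform, the 10 longest lines and least-squares fitting; the Canny and Hough parameters, the edge-band approximation of each line's point set, the error definition and the inverse-error weighting across the 10 lines are all our assumptions.

```python
import cv2
import numpy as np

def perspective_line_angle(ground_gray, n_lines=10, band=2.0):
    """Estimate theta_P from the point sets of the longest Hough lines.
    ground_gray: uint8 grayscale image of the ground region."""
    edges = cv2.Canny(ground_gray, 50, 150)
    pts_y, pts_x = np.nonzero(edges)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=5)
    if segs is None:
        return None
    segs = segs[:, 0, :].astype(float)
    order = np.argsort(np.hypot(segs[:, 2] - segs[:, 0],
                                segs[:, 3] - segs[:, 1]))[::-1]
    thetas, deltas = [], []
    for x1, y1, x2, y2 in segs[order[:n_lines]]:
        # Edge pixels within `band` pixels of the segment's supporting line
        # approximate the "image point set" of that straight line.
        nx, ny = y2 - y1, x1 - x2                   # normal of the line
        dist = np.abs(nx * (pts_x - x1) + ny * (pts_y - y1)) \
               / (np.hypot(nx, ny) + 1e-12)
        px, py = pts_x[dist < band], pts_y[dist < band]
        if len(px) < 2 or np.ptp(px) == 0:
            continue                                # skip degenerate sets
        k, b = np.polyfit(px, py, 1)                # least-squares line fit
        thetas.append(np.arctan(k))
        deltas.append(np.std(py - (k * px + b)) / (np.std(py) + 1e-12))
    if not thetas:
        return None
    # Weight each line by the inverse of its relative fitting error, in the
    # spirit of formulas (2)-(3): worse fits contribute less to theta_P.
    w = 1.0 / (np.array(deltas) + 1e-12)
    return float(np.dot(w / w.sum(), thetas))
```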
2.2 Solving for the texture gradient line LT
From visual psychology it is known that the farther a surface is from the observer, the smaller its texture appears. The reason is that the closer a region is to the viewpoint, the fewer homogeneous objects a retinal (imaging-plane) region of the same area contains; that is, the image resolution is higher and the texture elements are larger. Inside an object region the differences in pixel luminance are small, so an object is generally perceived as a homogeneous region. This means that, statistically, the closer a region is to the viewpoint, the smaller the sum of pixel luminance differences over an image region of the same size should be. We therefore take the luminance difference at each pixel as its texture gradient and use it to solve for the texture gradient line LT. The computation proceeds as follows:
(1) Let I(m, n) be the luminance I = (R+G+B)/3 at any pixel of the image ground, and compute the luminance difference Idiff(m, n) at that point by formula (4) below. Z1 determines the computation range of each pixel's luminance difference; a value of 1, 2 or 3 is suitable.
Idiff(m, n) = (Σ_{i=-Z1..Z1} Σ_{j=-Z1..Z1} |I(m, n) - I(m+i, n+j)|) / (2Z1 + 1)^2 (4)
(2) Divide the image ground evenly into Z2 × Z2 blocks, with the numbers of blocks in the horizontal direction (rows) and the vertical direction (columns) being S and T respectively. The luminance difference Mdiff of a block is the sum of the pixel luminance differences Idiff of all points in the block. In each row (horizontal direction), find the block with the minimum Mdiff, denoting these blocks R1, R2, ..., RT-1, RT. Statistically, the blocks R1, R2, ..., RT-1, RT represent the region closest to the viewpoint in each row. Z2 should not be too large; a value of about 5 is suitable.
(3) Fitting the center coordinates of the blocks R1, R2, ..., RT-1, RT by least squares yields the slope angle θT, relative fitting error δT and equation of the texture gradient line LT. A sketch of these three steps follows.
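Steps (1) to (3) can be sketched as follows, assuming a grayscale image of the ground region as input. Z1 = 1 and Z2 = 5 follow the recommended values; the definition of the relative fitting error δT used here is an assumption, since the text does not spell one out.

```python
import numpy as np

def texture_gradient_line(ground_gray, z1=1, z2=5):
    """Fit L_T through the minimum-Mdiff block of each block row."""
    I = ground_gray.astype(np.float64)
    H, W = I.shape
    # Step (1): luminance difference Idiff over a (2*Z1+1)^2 neighbourhood,
    # formula (4), computed by shifting an edge-padded copy of the image.
    pad = np.pad(I, z1, mode='edge')
    idiff = np.zeros_like(I)
    for i in range(-z1, z1 + 1):
        for j in range(-z1, z1 + 1):
            idiff += np.abs(I - pad[z1 + i:z1 + i + H, z1 + j:z1 + j + W])
    idiff /= (2 * z1 + 1) ** 2
    # Step (2): Z2 x Z2 blocks; in each block row keep the block with the
    # minimal summed difference Mdiff (the closest-to-viewpoint region).
    bh, bw = H // z2, W // z2
    centers = []
    for r in range(z2):
        mdiff = [idiff[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].sum()
                 for c in range(z2)]
        c_min = int(np.argmin(mdiff))
        centers.append((c_min * bw + bw / 2.0, r * bh + bh / 2.0))
    # Step (3): least-squares line through the R_1..R_T block centers.
    xs, ys = np.array(centers).T
    k, b = np.polyfit(xs, ys, 1)
    delta_t = np.std(ys - (k * xs + b)) / (np.std(ys) + 1e-12)
    return np.arctan(k), delta_t, (k, b)
```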
3. Computing the perceived depth map of the image ground
As shown in Fig. 2, the minimum perceived depth, at the near point U, is denoted DU; its value equals the distance from the camera to the nearest imaged point of the scene, divided by the camera imaging coefficient B. The perceived depth over the ground portion varies as follows: from the near point U to the image median line, along the depth-fastest line L2, the depth value increases linearly, reaching its maximum at the vanishing point V, and all points on a straight line perpendicular to L2 have the same depth (an iso-depth line). Let P(m, n) be any pixel of the image ground, with row coordinate m and column coordinate n. The relative perceived depth DP at P(m, n) is solved as follows:
Let the slope of the depth-fastest line L2 be K2, and let the line L3 pass through P(m, n) perpendicular to L2, so that the slope of L3 is K3 = -1/K2. Writing image coordinates as X = n (column) and Y = m (row), the equation of L3 is:
X + K2Y - mK2 - n = 0 (5)
Let the distance from the near point U to the line L3 be DU-L3; then:
DU-L3 = |Ux + K2Uy - mK2 - n| / (1 + K2^2)^(1/2) (6)
The perceived depth DP at P(m, n) is then:
DP = DU + DU-L3 (7)
In general the perceived depth DU of the near point is hard to estimate; since it is much smaller than DU-L3, it is ignored in the experiments below and set to 0.
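With DU set to 0, equations (5) to (7) give the dense ground depth map in closed form. The short sketch below vectorizes them over all ground pixels; the coordinate convention X = n (column), Y = m (row) matches equation (5) above, and the function name is illustrative.

```python
import numpy as np

def ground_depth_map(shape, U, k2, d_u=0.0):
    """Relative perceived depth at every ground pixel P(m, n).
    shape: (rows, cols) of the ground region; U = (Ux, Uy) is the near
    point; k2 is the slope of L2; d_u is DU, set to 0 as in the text."""
    H, W = shape
    m, n = np.mgrid[0:H, 0:W]       # m: row (Y coordinate), n: column (X)
    ux, uy = U
    # Formula (6): distance from U to the iso-depth line L3 through P(m, n).
    d_u_l3 = np.abs(ux + k2 * uy - m * k2 - n) / np.sqrt(1.0 + k2 ** 2)
    return d_u + d_u_l3             # formula (7): DP = DU + DU-L3
```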
4. Computing the perceived size of image objects
The perceived size of each image object is computed with formula (1). Since only relative perceived size is computed, the value of B in formula (1) can be set to 1:
S = B × A × D = A × D (8)
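A toy example of formula (8), with hypothetical numbers: although the far object covers fewer pixels, multiplying its visual angle by its larger perceived depth gives it the larger constancy size.

```python
def relative_size(visual_angle_px, perceived_depth, b=1.0):
    """Formula (8): relative perceived size, with imaging coefficient B = 1."""
    return b * visual_angle_px * perceived_depth

# A near object covering 120 px at relative depth 50 versus a far object
# covering 40 px at relative depth 160: the far object is perceived larger.
print(relative_size(120, 50))    # 6000.0
print(relative_size(40, 160))    # 6400.0
```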
Other variations and modifications of the present invention will be apparent to those skilled in the art, and the invention is not limited to the described embodiment. Accordingly, any and all modifications, variations or equivalent transformations that fall within the true spirit of the present disclosure and the scope of its basic principles belong to the protection scope of the claims of the present invention.

Claims (7)

1. A method for calculating image object size constancy, characterized in that it comprises the following steps:
(1) compute the median line of the image with a sky-detection technique;
(2) on the ground portion of the image, compute the straight line along which depth changes fastest, from the bottom edge of the image to the median line, and obtain its slope;
(3) compute the relative perceived depth of the midpoint of each image object;
(4) compute the visually perceived size of each image object as the result of the size constancy calculation.
2. The method for calculating image object size constancy according to claim 1, characterized in that in step (1), the sky portion of the image (including ceilings) has good color consistency and a simple layout, and exploiting this property, the sky can be separated with image segmentation techniques.
3. The method for calculating image object size constancy according to claim 1, characterized in that in step (2), the line along which ground depth changes fastest is computed from two depth cues, linear perspective and texture gradient, and two methods for fusing the depth cues are proposed.
4. The method for calculating image object size constancy according to claims 1 and 3, characterized in that in step (2), when computing the depth-fastest line from the linear perspective depth cue, the Hough transform is first used to find the image point sets corresponding to the 10 longest straight lines; these point sets are then fitted to straight lines by least squares, yielding the equation, slope angle θ and relative fitting error δ of each line; finally, linearly combining these 10 lines, weighted by their respective relative fitting errors, gives the slope angle θP, relative fitting error δP and equation of the linear perspective line LP.
5. The method for calculating image object size constancy according to claims 1, 3 and 4, characterized in that in step (2), a method is proposed for computing the line along which ground depth changes fastest (the line L2) from the texture gradient cue, with the following key steps:
(1) Let I(m, n) be the luminance I = (R+G+B)/3 at any pixel of the image ground, and compute the luminance difference Idiff(m, n) at that point as:
Idiff(m, n) = (Σ_{i=-Z1..Z1} Σ_{j=-Z1..Z1} |I(m, n) - I(m+i, n+j)|) / (2Z1 + 1)^2
Z1 determines the computation range of each pixel's luminance difference; experiments show that a value of 1, 2 or 3 is suitable.
(2) Divide the image ground evenly into Z2 × Z2 blocks, with the numbers of blocks in the horizontal direction (rows) and the vertical direction (columns) being S and T respectively. The luminance difference Mdiff of a block is the sum of the pixel luminance differences Idiff of all points in the block. In each row (horizontal direction), find the block with the minimum Mdiff, denoting these blocks R1, R2, ..., RT-1, RT. Statistically, the blocks R1, R2, ..., RT-1, RT represent the region closest to the viewpoint in each row.
(3) Fitting the center coordinates of the blocks R1, R2, ..., RT-1, RT by least squares yields the slope angle θT, relative fitting error δT and equation of the texture gradient line LT.
6. The method for calculating image object size constancy according to claim 1, characterized in that in step (3), a method for computing the relative perceived depth of the midpoint of an image object is proposed. The intersection U(Ux, Uy) of the depth-fastest line L2 with the bottom edge of the image ground is the point of minimum perceived depth in the image, called the near point. The perceived depth of the near point U is denoted DU; its value equals the distance from the camera to the nearest imaged point of the scene, divided by the camera imaging coefficient B. Let P(m, n) be any pixel of the image ground; if the slope of the depth-fastest line L2 is K2 and the line L3 passes through P(m, n) perpendicular to L2, then the slope of L3 is K3 = -1/K2 and the equation of L3 is X + K2Y - mK2 - n = 0. If the distance from the near point U to L3 is DU-L3, then DU-L3 = |Ux + K2Uy - mK2 - n| / (1 + K2^2)^(1/2). The perceived depth DP at P(m, n) is then:
DP = DU + DU-L3
7. The method for calculating image object size constancy according to claim 1, characterized in that in step (4), a formula for the perceived size of an image object is proposed: S = B × A × D, where S is the perceived size of the object, A is the imaging visual angle of the object, D is the perceived depth (also called perceived distance) of the object, i.e. the distance from the camera at which the human visual system perceives the object to lie at imaging time, and B is an imaging coefficient related to the eye (camera); for a single image, the value of B is the same for all objects.
CN 200610113910 2006-10-20 2006-10-20 Method for calculating image object size constance Pending CN1945629A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200610113910 CN1945629A (en) 2006-10-20 2006-10-20 Method for calculating image object size constance

Publications (1)

Publication Number Publication Date
CN1945629A true CN1945629A (en) 2007-04-11

Family

ID=38045024

Country Status (1)

Country Link
CN (1) CN1945629A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706964B (en) * 2009-08-27 2011-11-23 北京交通大学 Color constancy calculating method and system based on derivative structure of image
CN102467743A (en) * 2010-11-09 2012-05-23 株式会社东芝 Image processing apparatus, image processing method, and computer program product thereof
CN103686139A (en) * 2013-12-20 2014-03-26 华为技术有限公司 Frame image conversion method, frame video conversion method and frame video conversion device
US9530212B2 (en) 2013-12-20 2016-12-27 Huawei Technologies Co., Ltd. Image frame conversion method and video frame conversion method and apparatus


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication