CN107767358A - Method and apparatus for determining the blur degree of an object in an image - Google Patents
Method and apparatus for determining the blur degree of an object in an image - Download PDF / Info
- Publication number
- CN107767358A CN107767358A CN201610709852.7A CN201610709852A CN107767358A CN 107767358 A CN107767358 A CN 107767358A CN 201610709852 A CN201610709852 A CN 201610709852A CN 107767358 A CN107767358 A CN 107767358A
- Authority
- CN
- China
- Prior art keywords
- key point
- blur degree
- object image
- feature value
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06T 5/73 — Image enhancement or restoration; deblurring, sharpening
- G06V 10/20 — Image or video recognition or understanding; image preprocessing
- G06V 40/168 — Recognition of human faces; feature extraction, face representation
- G06T 2207/10004 — Image acquisition modality: still image, photographic image
- G06T 2207/20081 — Special algorithmic details: training, learning
- G06T 2207/30168 — Subject of image: image quality inspection
- G06T 2207/30196 — Subject of image: human being, person
- G06T 2207/30201 — Subject of image: face
Abstract
This application provides a method and apparatus for determining the blur degree of an object in an image. The method includes: receiving an object image; locating key points in the object image, where a key point is a point defined at a specific location on the object's contour; extracting a key-point feature value at each key point; and determining the blur degree of the object in the object image based on the feature values extracted at the key points. The application improves the accuracy of determining the blur degree of an object in an image.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to a method and apparatus for determining the blur degree of an object in an image.
Background technology
During image acquisition, image quality may be degraded to varying degrees by the shooting environment, device noise, compression loss, and the transmission process. In the various image-based application scenarios, and in object recognition in particular, image quality directly affects the recognition result: it significantly influences the corner and edge features of the object region, and these features play an important role in recognition. Image quality has two aspects. One is the degree of deviation of the image from a reference image, i.e. fidelity. The other is a person's perception of the image's overall layout and local detail, such as aesthetics and blur.
In the prior art, it has been proposed to first determine the blur degree of an image, generate a deblurred image according to that blur degree, and then perform subsequent applications such as object recognition. Existing blur-determination schemes mostly apply statistical analysis to the frequency-domain characteristics of the image as a whole, estimating the blur degree from the high-, mid-, and low-frequency components of the image. These schemes perform no specific optimization for the object in the image, and their results are unsatisfactory.
Summary of the invention
One of the technical problems solved by the present invention is to improve the accuracy of determining the blur degree of an object in an image.
According to one embodiment of the present application, there is provided a blur-degree determination method, including:
receiving an object image;
locating key points in the object image, where a key point is a point defined at a specific location on the object's contour;
extracting a key-point feature value at each key point;
determining the blur degree of the object in the object image based on the feature values extracted at the key points.
According to one embodiment of the present application, there is provided a method for recognizing an object in an image, including:
determining the blur degree of the object in an object image based on the key-point feature values extracted at the key points in the object image;
performing object recognition on the object image after blur has been eliminated according to the blur-determination result.
According to one embodiment of the present application, there is provided a method for recognizing an attribute of an object in an image, including:
determining the blur degree of the object in an object image based on the key-point feature values extracted at the key points in the object image;
determining the object attribute in the object image after blur has been eliminated according to the blur-determination result.
According to one embodiment of the present application, there is provided an apparatus for determining the blur degree of an object in an image, including:
a memory for storing computer-readable program instructions;
a processor for executing the computer-readable program instructions stored in the memory, so as to: receive an object image; locate key points in the object image, where a key point is a point defined at a specific location on the object's contour; extract a key-point feature value at each key point; and determine the blur degree of the object in the object image based on the feature values extracted at the key points.
According to one embodiment of the present application, there is provided an apparatus for recognizing an object in an image, including:
a memory for storing computer-readable program instructions;
a processor for executing the computer-readable program instructions stored in the memory, so as to: determine the blur degree of the object in an object image based on the key-point feature values extracted at the key points in the object image; and perform object recognition on the object image after blur has been eliminated according to the blur-determination result.
According to one embodiment of the present application, there is provided an apparatus for recognizing an attribute of an object in an image, including:
a memory for storing computer-readable program instructions;
a processor for executing the computer-readable program instructions stored in the memory, so as to: determine the blur degree of the object in an object image based on the key-point feature values extracted at the key points in the object image; and determine the object attribute in the object image after blur has been eliminated according to the blur-determination result.
According to one embodiment of the present application, there is provided an apparatus for determining the blur degree of an object in an image, including:
an object-image receiving device for receiving an object image;
a key-point locating device for locating key points in the object image, where a key point is a point defined at a specific location on the object's contour;
a key-point feature extraction device for extracting a key-point feature value at each key point;
an object blur-degree determining device for determining the blur degree of the object in the object image based on the feature values extracted at the key points.
According to one embodiment of the present application, there is provided an apparatus for recognizing an object in an image, including:
a device for determining the blur degree of the object in an object image based on the key-point feature values extracted at the key points in the object image;
an object recognition device for performing object recognition on the object image after blur has been eliminated according to the blur-determination result.
According to one embodiment of the present application, there is provided an apparatus for recognizing an attribute of an object in an image, including:
a device for determining the blur degree of the object in an object image based on the key-point feature values extracted at the key points in the object image;
an object-attribute determining device for determining the object attribute in the object image after blur has been eliminated according to the blur-determination result.
In the embodiments of the present invention, it is observed that a photographer generally focuses on the object when taking a picture, so the background tends to be relatively blurred; it is therefore assumed that the contour is strongly correlated with the quality of the object image. For a face, for example, the contours of the face and of the facial features are strongly correlated with the quality of the face image. Based on this analysis, when estimating blur, the application locates the object's key points in the object image to be evaluated, extracts key-point feature values at those key points, and determines the blur degree of the object in the object image based on the extracted feature values. This avoids the prior-art problem of performing no specific optimization for the particular task of handling object images, and improves the accuracy of object blur determination.
Those of ordinary skill in the art will appreciate that although the following detailed description refers to the illustrated embodiments and accompanying drawings, the present invention is not limited to these embodiments. Rather, the scope of the present invention is broad, and is intended to be defined only by the appended claims.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a flow chart of an object blur-degree determination method according to one embodiment of the application.
Fig. 2 is a flow chart of pre-training an object blur-degree matching model according to one embodiment of the application.
Fig. 3 is a schematic diagram of key points defined at specific locations on the contours of a face according to one embodiment of the application.
Fig. 4 is a flow chart of a method for recognizing an object in an image according to one embodiment of the application.
Fig. 5 is a flow chart of a method for recognizing an attribute of an object in an image according to one embodiment of the application.
Fig. 6 is a hardware block diagram of an apparatus for determining the blur degree of an object in an image according to one embodiment of the application.
Fig. 7 is a hardware block diagram of an apparatus for recognizing an object in an image according to one embodiment of the application.
Fig. 8 is a hardware block diagram of an apparatus for recognizing an attribute of an object in an image according to one embodiment of the application.
Fig. 9 is a module block diagram of an apparatus for determining the blur degree of an object in an image according to one embodiment of the application.
Fig. 10 is a module block diagram of an apparatus for recognizing an object in an image according to one embodiment of the application.
Fig. 11 is a module block diagram of an apparatus for recognizing an attribute of an object in an image according to one embodiment of the application.
Embodiment
It should be mentioned that, before the exemplary embodiments are discussed in greater detail, some exemplary embodiments are described as processes depicted as flow charts or methods. Although a flow chart describes the operations as a sequential process, many of the operations may be performed in parallel or concurrently, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
The computer equipment mentioned herein includes user equipment and network equipment. The user equipment includes but is not limited to computers, smart phones, PDAs, etc.; the network equipment includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a super virtual computer composed of a group of loosely coupled computers. The computer equipment may implement the present invention by operating alone, or by accessing a network and interoperating with other computer equipment in the network. The network in which the computer equipment resides includes but is not limited to the Internet, wide area networks, metropolitan area networks, local area networks, VPNs, etc.
It should be noted that the user equipment, network equipment, networks, etc. are only examples; other existing or future computer equipment or networks, if applicable to the present invention, should also be included within the scope of the present invention and are incorporated herein by reference.
The methods discussed below (some of which are illustrated by flow charts) may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments that perform the necessary tasks may be stored in a machine-readable or computer-readable medium (such as a storage medium), and one or more processors may perform the necessary tasks.
The specific structural and functional details disclosed herein are merely representative and serve the purpose of describing exemplary embodiments of the present invention. The present invention may, however, be embodied in many alternative forms and should not be construed as limited only to the embodiments set forth herein.
It should be understood that although the terms "first", "second", etc. may be used herein to describe units, these units should not be limited by these terms. These terms are used only to distinguish one unit from another. For example, a first unit could be termed a second unit, and similarly a second unit could be termed a first unit, without departing from the scope of the exemplary embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It should be understood that when a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intervening units may be present. In contrast, when a unit is referred to as being "directly connected" or "directly coupled" to another unit, no intervening units are present. Other words used to describe the relationship between units should be interpreted in a like fashion (e.g. "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. As used herein, the singular forms "a" and "an" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises" and/or "comprising" used herein specify the presence of stated features, integers, steps, operations, units, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
It should also be mentioned that, in some alternative implementations, the functions/actions mentioned may occur in an order different from that indicated in the drawings. For example, two figures shown in succession may in fact be executed substantially concurrently, or may sometimes be executed in the reverse order, depending on the functions/actions involved.
The technical solution of the present application is described in further detail below with reference to the accompanying drawings.
Technical scheme is described in further detail below in conjunction with the accompanying drawings.
As analyzed above, existing methods mostly estimate the blur degree of an image using global features (spectrum analysis, texture analysis). Through research, it has been found that the high-frequency components of an object image are concentrated mainly at the object's contours (such as the contours of the face and of the facial features in a face image), while the remainder of the image is relatively smooth. Moreover, since the photographer generally focuses on the object when taking a picture, the background also tends to be blurred. In summary, it is believed that the object's contours are strongly correlated with the blur degree of the object image, while the information in the remainder can be disregarded. Therefore, the present invention trains a blur prediction model in advance on the key-point feature values extracted at key points on the contours of standard objects, and then uses this model, rather than spectrum analysis or texture analysis, to determine the object blur degree. This improves the specificity of object blur determination, and thus improves its accuracy.
Fig. 1 shows a method for determining the blur degree of an object in an image according to one embodiment of the application, including:
S110, receiving an object image;
S120, locating key points in the object image, where a key point is a point defined at a specific location on the object's contour;
S130, extracting a key-point feature value at each key point;
S140, determining the blur degree of the object in the object image based on the feature values extracted at the key points.
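The four steps can be sketched end to end in Python. The sketch below is a hedged illustration only: the patent later describes training a model that maps key-point features to a blur degree, whereas here a hand-picked variance-of-Laplacian feature and an assumed 1/(1+x) mapping stand in for that learned model, and all names are invented for the example.

```python
import numpy as np

def laplacian_var(patch: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response; low values indicate blur."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = patch.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (patch[i:i + 3, j:j + 3] * k).sum()
    return float(out.var())

def object_blur(image, keypoints, half=8):
    """Sketch of S120-S140: extract a feature at each located key point
    (S130) and pool the per-point features into one blur degree (S140).
    The 1/(1+x) mapping to (0, 1] is a placeholder for a trained model."""
    feats = [laplacian_var(image[y - half:y + half, x - half:x + half])
             for (y, x) in keypoints]
    sharpness = float(np.mean(feats))
    return 1.0 / (1.0 + sharpness / 100.0)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))      # stands in for S110's image
pts = [(16, 16), (32, 32), (48, 40)]          # stands in for S120's key points
score_sharp = object_blur(img, pts)

# A crude 5x5 box blur must raise the reported blur degree.
blurred = np.zeros_like(img)
for i in range(64):
    for j in range(64):
        blurred[i, j] = img[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3].mean()
score_blur = object_blur(blurred, pts)
assert score_blur > score_sharp
```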
The object in the present invention generally refers to an object with a stable contour. A stable contour means that, in an image, the object's inner and outer contours are stable and do not change with the viewing angle, the lighting of the image, or the object's pose. For a face, for example, the inner contours are the contours of the facial features and the outer contour is the outline of the face; these are not easily changed by shooting angle, lighting, or pose. For another example, for a license plate, the inner contours are the outer edges of the individual characters and the outer contour is the outer edge of the whole plate; these likewise do not easily change with shooting angle, lighting, or the curvature of the plate. An object image is an image containing the object, generally in electronic form; according to its source it may be obtained from a mobile-phone photo, a camera photo, a surveillance frame, a screenshot, or a scanned photo.
The object blur degree is the degree to which the object in the object image is blurred, generally taking a value between 0 and 1: a blur degree of 0 indicates that the object is entirely clear, and a blur degree of 1 indicates that the object is entirely blurred.
Steps S110-S140 are described in detail below.
Step S110: an object image is received.
As described above, the object image may be obtained from a mobile-phone photo, a camera photo, a surveillance frame, a screenshot, a scanned photo, and so on.
Step S120: key points are located in the object image.
A key point is a point defined at a specific location on the object's contour. The contours include the outer contour and the inner contours; for a face, the outer contour is the outline of the face and the inner contours are the contours of the facial features. Fig. 3 shows an example of key points defined at specific locations on the contours of a face according to one embodiment of the application.
In one embodiment of step S120, a key point may be defined by a position at a specific ratio along a contour line of the object. For example, point 301 in Fig. 3 is defined as the leftmost point of the right eyebrow in the face image, point 302 is defined as the point one sixth of the way from the left end of the right eyebrow, and point 303 is defined as the point one eighth of the way from the left end of the upper eyelid contour of the right eye. Step S120 can therefore be implemented by roughly identifying the object's contour lines in the object image with a known object (including face) alignment algorithm, and then locating the key points at the specific ratios, defined for the key points, along the identified contour lines.
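Under the assumption that an identified contour line is available as a polyline, locating "the point one sixth of the way from the left end" reduces to arc-length interpolation. The helper name and (x, y) array layout below are illustrative choices, not from the patent.

```python
import numpy as np

def point_at_fraction(contour: np.ndarray, frac: float) -> np.ndarray:
    """Return the point at `frac` (0..1) of the arc length of a contour
    polyline, e.g. frac=1/6 for 'one sixth from the left end'.
    `contour` is an (N, 2) array of (x, y) vertices."""
    seg = np.diff(contour, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    target = frac * cum[-1]
    i = int(np.searchsorted(cum, target, side="right")) - 1
    i = min(i, len(seg) - 1)                 # clamp for frac == 1.0
    t = (target - cum[i]) / seg_len[i] if seg_len[i] > 0 else 0.0
    return contour[i] + t * seg[i]

# A straight horizontal "eyebrow" from x=0 to x=60: 1/6 of the way is x=10.
brow = np.array([[0.0, 5.0], [30.0, 5.0], [60.0, 5.0]])
p = point_at_fraction(brow, 1.0 / 6.0)
assert np.allclose(p, [10.0, 5.0])
```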
In the above embodiment, the key points in the object image may be marked one by one, manually or by machine, according to a fixed rule; for example, the points at 1/6, 1/3, 1/2, and 2/3 of the way from the left end of the left eyebrow may each be specified as a location point at which a feature is to be extracted (these location points are the key points), and these then need to be measured and marked one by one. In another embodiment, to make key-point locating easier, step S120 may locate the key points in another way, which mainly includes the following processing steps:
performing convolution on the object image, or on a feature map obtained by convolving the object image;
performing a linear transformation on the result of convolving the object image or the feature map;
using the result of the linear transformation as the input of a three-dimensional morphable model, whose output is the key points.
The convolution above is performed by convolutional layer units, and the linear transformation by fully connected layer units; both are basic units of a deep learning network. A deep learning network is a special multilayer feed-forward neural network in which the response of a neuron depends only on a local region of the input signal; it is very widely used in image and video analysis. Convolutional layer units are the basic building blocks of a deep learning network and are commonly used in its front and middle portions; they apply multiple filters to the input signal to perform convolution and output multi-channel signals. Fully connected layer units are likewise basic building blocks of a deep learning network and are commonly used in its rear portion; they multiply the input vector by a weight matrix (a projection matrix), i.e. perform a linear transformation, to obtain an output vector. Since deep learning networks are mature, existing technology, they are not described further here.
In the convolution operation, multiple filters may perform convolution on different parts of the object image and output multi-channel signals; the signal of each channel expresses a feature of a different part of the object image, yielding a feature map of the object image. The convolution operation may then be applied again to this feature map, extracting further features of different parts on top of it to obtain a further feature map; this is well known in the deep learning field. Thus, performing convolution on the object image, or on a feature map obtained by convolving the object image, yields feature maps extracted from the object image at different levels: convolving the object image yields low-level feature maps, while convolving a feature map obtained from the object image yields higher-level feature maps, and together they express features of the object image at different levels.
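The stacking of convolutions described here can be made concrete with a minimal sketch: one filter turns the image into a low-level feature map, and convolving that feature map again yields a higher-level one. The filters, sizes, and values are toy assumptions, not the patent's network.

```python
import numpy as np

def conv2d_valid(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Single-channel 'valid' 2-D convolution (implemented, as in
    deep-learning frameworks, as cross-correlation)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

img = np.arange(36, dtype=np.float64).reshape(6, 6)   # a tiny ramp image
edge = np.array([[-1.0, 1.0]])           # first filter: horizontal gradient
low_level = conv2d_valid(img, edge)      # feature map from the image ...
smooth = np.full((2, 2), 0.25)
high_level = conv2d_valid(low_level, smooth)   # ... convolved once more
assert low_level.shape == (6, 5)
assert high_level.shape == (5, 4)
assert np.allclose(low_level, 1.0)       # the ramp has constant gradient 1
```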
The linear transformation may be performed by a fully connected layer unit as described above. A fully connected layer unit may take the result of the convolution operation as input and apply a linear transformation to the multi-channel signals output by the multiple filters. The features extracted by the individual filters of a convolutional layer unit may be abstract and hard to interpret; combined by the fully connected layer unit, they may become concrete, more interpretable quantities, such as the orthographic projection T and the shape principal-component coefficients α_i (i a natural number) of the object involved in the three-dimensional morphable model (3D morphable model, 3DMM) below.
A three-dimensional morphable model is a known model that can parametrically express the rigid and non-rigid geometric changes of a three-dimensional object. It generally expresses rigid transformation with rotation, translation, and orthographic projection, and expresses non-rigid deformation with principal component analysis (Principal Component Analysis, PCA).
The 3DMM is expressed as:

S = T · (m + Σᵢ αᵢ wᵢ), i = 1, …, N

where S is the shape output by the 3DMM (i.e. the sampling grid, the grid representing the positions, in the object image, of the located points at which features are to be extracted); m is the mean object (e.g. face) shape; wᵢ are the shape principal components of the 3DMM; T is a 2x4 matrix (orthographic projection) expressing the rigid transformation described above; αᵢ are the principal component coefficients of the object image, expressing the non-rigid deformation described above; and N is the number of principal components. In this model, m and wᵢ are known variables, while T and αᵢ are unknown parameters: T represents the rigid transformation of the object, and αᵢ represent its non-rigid deformation. S, m and wᵢ are matrices of equal dimensions, e.g. 32x32. The physical meaning of each variable and parameter in the formula is well known and is not repeated here. T and αᵢ are the inputs of the 3DMM (the result of the preceding linear transformation). Once the orthographic projection T representing the rigid deformation of the object image and the principal component coefficients αᵢ representing its non-rigid deformation are input into the 3DMM, the grid S formed by the positions of the feature-extraction points in the object image, with the rigid and non-rigid deformation eliminated, is obtained. In this embodiment, the convolutional layer unit and the fully connected layer unit cooperate to obtain the orthographic projection T and the principal component coefficients αᵢ of the object image; T and αᵢ are then input into the 3DMM, and what is obtained is exactly the sampling grid, i.e. the grid representing the positions, in the object image, of the located feature-extraction points.
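As an informal illustration only (not part of the claimed method), the 3DMM computation S = T · (m + Σᵢ αᵢ wᵢ) can be sketched in a few lines of NumPy; the array shapes and the homogeneous-coordinate handling of the 2x4 projection matrix T are assumptions made for the sketch:

```python
import numpy as np

def sampling_grid(T, alpha, m, w):
    """Sketch of the 3DMM output: S = T @ (m + sum_i alpha_i * w_i).

    T:     (2, 4) orthographic projection (rigid transformation)
    alpha: (N,)   principal component coefficients (non-rigid deformation)
    m:     (3, K) mean object shape, K three-dimensional points
    w:     (N, 3, K) shape principal components
    Returns S: (2, K) two-dimensional positions of the feature-extraction points.
    """
    shape3d = m + np.tensordot(alpha, w, axes=1)             # (3, K) deformed shape
    homog = np.vstack([shape3d, np.ones(shape3d.shape[1])])  # (4, K) homogeneous coords
    return T @ homog                                         # (2, K) sampling grid

# Toy example: identity-like projection, zero deformation coefficients.
m = np.zeros((3, 5))
w = np.random.randn(2, 3, 5)
T = np.hstack([np.eye(2), np.zeros((2, 2))])  # picks out the x, y coordinates
S = sampling_grid(T, np.zeros(2), m, w)
```

With zero coefficients the deformed shape equals the mean shape, so the toy grid is all zeros.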
The inventors of the present application are the first to propose the concept of a spatial transformation layer (STL) based on the 3DMM, i.e. 3DMM-STL, which combines the convolutional layer unit, the fully connected layer unit and the 3DMM. The convolutional layer unit and the fully connected layer unit are used to obtain the orthographic projection T representing the rigid deformation of the object image and the principal component coefficients αᵢ representing its non-rigid deformation; the 3DMM is then used to eliminate the rigid and non-rigid deformation: T and αᵢ are input into the 3DMM, and the position grid of the feature-extraction points in the object image, with the rigid and non-rigid deformation eliminated, is obtained, thereby eliminating the influence of the object's pose on the positioning of those points. There is no need to mark the feature-extraction points one by one according to some rule; instead, the object image is automatically subjected to convolution, then linear transformation, then deformation processing by the three-dimensional morphable model to obtain the feature-extraction points, and the features are then extracted at those points. This series of automated steps removes the burden of marking each feature-extraction point by hand. Owing to the nature of the three-dimensional morphable model itself, it is robust to pose (including orientation, shooting angle, degree of curl, etc.), i.e. it is affected very little by the pose of the object in the input object image; combined with the strong discriminative power contributed by the convolution and the linear transformation, this ensures distinctiveness between different objects, improves the robustness of the key point positioning result to the pose of the object, and improves the key point positioning accuracy.
Step S130: key point feature values are extracted at the key points.
A key point feature value is a value taken at or near a key point that can represent the feature of the object image at that key point. Generally, the key point feature value is expressed by the pixel values of pixels taken at or near the key point, because those pixel values directly express the characteristics that distinguish the key point from other positions in the object image.
In one embodiment, the pixel value of the pixel at the key point may be taken directly as the key point feature value. However, this may not accurately reflect the feature of the key point, because the feature at a key point is often embodied in the variation among the pixels near that point. For example, for a sharply varying contour, the pixel values of adjacent pixels at or near the contour may differ greatly, because one pixel may fall on the contour while the adjacent pixel falls outside it; whereas for a gently varying contour, the pixel values of adjacent pixels at or near the contour may differ very little. Therefore, in another embodiment, the pixel values of the pixels within a specific region near the key point may be taken as the key point feature value. For example, a circle of predetermined radius is drawn with the key point as its center, and the pixel values of all pixels within the circle together serve as the key point feature value of that key point.
Not every pixel in every direction relative to the key point reflects the feature of the key point equally well. For example, at the edge of a contour, the pixels lying along the normal to the tangent of the object contour at the key point may show the most pronounced pixel-value variation relative to the key point, and thus best embody the feature at or near the key point. Therefore, in a preferred embodiment of the application, step S130 includes: drawing the tangent of the object contour at the key point, the direction perpendicular to the tangent being the normal direction; and taking the pixel values of a predetermined number of pixels closest to the key point in the normal direction as the key point feature value extracted at the key point. For example, let the predetermined number be 11. For key point 304, located at the center of the lower contour of the mouth, the tangent of the lower mouth contour at this point is horizontal and the normal is vertical; then, in the normal direction of key point 304, the 5 contiguous pixels nearest the key point are taken upward and the 5 contiguous pixels nearest the key point are taken downward, and, together with the pixel at the key point itself, the pixel values of these pixels serve as the key point feature value extracted at the key point. Because this embodiment selects the pixel values of the predetermined number of pixels nearest the key point along the normal of the object contour on which it lies, these values better represent the difference between the vicinity of the key point and other positions in the image; they are more discriminative and improve the effect of blurriness identification.
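A minimal sketch of this normal-direction sampling, assuming a grayscale image stored as a NumPy array, integer pixel steps along a unit normal, and a key point far enough from the image border (all assumptions of the sketch, not details specified by the application):

```python
import numpy as np

def sample_along_normal(img, key_pt, normal, n_each_side=5):
    """Take the pixel at key_pt plus n_each_side pixels on either side
    along the (unit) normal direction: 2*n_each_side + 1 values total."""
    y0, x0 = key_pt
    ny, nx = normal
    vals = []
    for k in range(-n_each_side, n_each_side + 1):
        y = int(round(y0 + k * ny))
        x = int(round(x0 + k * nx))
        vals.append(img[y, x])
    return np.array(vals)

# Example: a vertical normal at the center of a horizontal contour,
# as for key point 304 on the lower mouth contour.
img = np.zeros((21, 21))
img[10:, :] = 1.0                                    # sharp horizontal contour
feats = sample_along_normal(img, (10, 10), (1, 0))   # normal runs down the rows
```

The 11 sampled values straddle the contour, so they capture exactly the sharp transition that distinguishes the key point from smooth regions.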
In a preferred embodiment of the application, step S130 includes: for each target image corresponding to the object image, taking, in the normal direction of the key point, the pixel values of the predetermined number of pixels closest to the key point as the key point feature value extracted at the key point, wherein the target images corresponding to the object image include: the object image itself, the object image after being enlarged or reduced, and/or the gradient image of the object image.
That is, the pixel values of the predetermined number of pixels closest to the key point along its normal direction are taken not only from the object image itself; the object image is also enlarged or reduced and the same pixels are taken, along the normal direction of the key point, from the scaled object image, and/or the gradient image of the object image is computed and the same pixels are taken, along the normal direction of the key point, from the gradient image. All of these extracted pixel values together serve as the key point feature value of the key point. For example, for point 304, 11 pixel values are taken as described above from the face image of Fig. 3 itself; the face image of Fig. 3 is enlarged to 2 times and another 11 pixel values are taken as described above; the face image of Fig. 3 is reduced to 1/2 and another 11 pixel values are taken as described above; and another 11 pixel values are taken as described above from the gradient image of the face image of Fig. 3. These 44 pixel values all serve as the key point feature value of key point 304.
The advantage of this is that enlarging or reducing the object image yields features at different scales, giving better descriptive power for blurriness and enhancing the effect of the blurriness judgment. In addition, taking features on the gradient image also strengthens the descriptive power for blurriness and enhances the effect of the blurriness judgment.
In one embodiment of the application, the key point feature values extracted at the key points take the form of a matrix, one dimension of which represents the pixel values of the predetermined number of pixels taken on each target image of each key point, the other dimension representing the respective target images of the respective key points.
In the example with 51 key points, 51 key points are located in the face image. On the face image itself, the face image enlarged to 2 times, the face image reduced to 1/2 and the gradient image of the face image, 11 pixel values are taken for each key point according to the method above, so 11 × 4 × 51 = 2244 pixel values are taken in total. For example, the pixel values of the 11 pixels taken for each key point on each target image (the face image itself, the face image enlarged to 2 times, the face image reduced to 1/2, the gradient image of the face image) are placed in one row of the matrix. Since there are 51 key points and each key point has 4 target images, the matrix has 51 × 4 = 204 rows and 11 columns, forming a 204 × 11 matrix. Compared with a long vector, the matrix form has the benefit of facilitating subsequent processing (determining the object blurriness in the object image based on the key point feature values extracted at the key points); in particular, in model learning, a matrix as input is more efficient than a long vector.
Step S140: the object blurriness in the object image is determined based on the key point feature values extracted at the key points.
In one embodiment, step S140 includes: inputting the key point feature values extracted at the key points into a pre-trained object blurriness matching model to obtain the object blurriness in the object image. The object blurriness matching model is a machine learning model that takes the key point feature values of an object image as input and outputs the object blurriness in the object image. The object blurriness matching model is pre-trained as follows, as shown in Fig. 2:
S210: based on each of a plurality of standard object images and each object blurriness in an object blurriness set, performing object image synthesis to obtain a training set of synthesized object images;
S220: locating sample key points in each synthesized object image;
S230: extracting sample key point feature values at the located sample key points;
S240: training the object blurriness matching model with the extracted sample key point feature values and the corresponding object blurriness as the known input and known output of the object blurriness matching model, respectively.
Steps S210-S240 are described below in turn.
Step S210: object image synthesis is performed based on each of the plurality of standard object images and each object blurriness in the object blurriness set, to obtain a training set of synthesized object images.
A standard object image may be regarded as an object image whose clarity meets a predetermined condition. A standard object image set is preset, containing a plurality of different standard object images, e.g. 1000 sufficiently clear head portraits obtained by photographing the faces of 1000 people. An object blurriness set is also preset, e.g. {0, 0.01, 0.02, 0.03, ..., 0.99, 1}.
Each standard object image in the standard object image set is combined with each object blurriness in the object blurriness set, and object image synthesis is performed for each combination. For example, when the standard object image set contains 1000 object images and the object blurriness set contains 101 blurriness values, the training set of synthesized object images contains 101 × 1000 = 101000 synthesized object images.
One method of object image synthesis includes: for each object blurriness in the object blurriness set, generating a corresponding point spread function, the intensity of which is determined by the object blurriness; filtering each of the plurality of standard object images with the generated point spread functions; and adding random noise to the filtered images, to obtain the training set of synthesized object images.
A point spread function is a function describing the resolving power of an optical system for a point source: after passing through any optical system, a point source forms an expanded image point due to diffraction. The application uses point spread functions of specific shapes to simulate Gaussian blur and motion blur, thereby generating blurred images as training samples; the function form is that of Gaussian blur or motion blur. Filtering an image with a point spread function and adding random noise to it belong to the prior art.
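A hedged sketch of this synthesis step using a Gaussian point spread function, with SciPy's `gaussian_filter` standing in for the PSF filtering; the mapping from the blurriness value to the Gaussian width is an arbitrary choice made for illustration, not a value given by the application:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_blurred(standard_img, blurriness, noise_std=0.01, rng=None):
    """Filter a standard (sharp) image with a Gaussian PSF whose intensity
    grows with the blurriness value, then add random noise."""
    rng = rng or np.random.default_rng(0)
    sigma = 5.0 * blurriness          # illustrative blurriness -> PSF width mapping
    blurred = gaussian_filter(standard_img, sigma=sigma)
    return blurred + rng.normal(0.0, noise_std, standard_img.shape)

# Tiny training set: every standard image combined with every blurriness level.
standards = [np.eye(16), np.ones((16, 16))]
levels = np.linspace(0.0, 1.0, 101)
training = [(synthesize_blurred(s, b), b) for s in standards for b in levels]
```

Each training pair holds a synthesized image (known input) and the blurriness used to make it (known output), mirroring the cross-product construction of step S210.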
Step S220: sample key points are located in each synthesized object image.
The method of locating sample key points in each synthesized object image is identical to the method of locating key points in the object image in step S120. In one embodiment, the object contour lines in the synthesized object image are roughly identified by a known object (including face) alignment algorithm, and the sample key points are then obtained by locating the positions defined at specific proportions on the identified contour lines. The sample key points here have the same definition as the key points in step S120, i.e. the definition of the specific proportional positions on the specific object contours should be the same.
In another embodiment, locating sample key points in each synthesized object image includes:
performing convolution on the synthesized object image, or on a feature map obtained by convolving the synthesized object image;
performing a linear transformation on the result of convolving the synthesized object image or the feature map;
taking the result of the linear transformation as the input of the three-dimensional morphable model, the output of the three-dimensional morphable model being the key points.
This method is similar to the earlier positioning of key points in the object image by convolution, linear transformation and the three-dimensional morphable model, and is therefore not repeated.
Step S230: sample key point feature values are extracted at the located sample key points.
Extracting sample key point feature values at the located sample key points is identical to, and kept consistent with, the method of extracting key point feature values at the key points in step S130. Suppose that, in the above process of extracting key point feature values at the key points, the pixel values of the 11 pixels nearest the key point in the normal direction of the object contour on which it lies are taken for each key point on the face image itself, the face image enlarged to 2 times, the face image reduced to 1/2 and the gradient image of the face image; then step S230 likewise takes, on the synthesized face image itself, the synthesized face image enlarged to 2 times, the synthesized face image reduced to 1/2 and the gradient image of the synthesized face image, the pixel values of the 11 pixels nearest each sample key point in the normal direction of the object contour on which it lies, as the extracted sample key point feature values.
Step S240: the object blurriness matching model is trained with the extracted sample key point feature values and the corresponding object blurriness as its known input and known output, respectively.
In one embodiment, the object blurriness matching model may use a deep convolutional network. A deep convolutional network generally comprises several convolution (conv) layers, pooling (pool) layers and fully connected (fc) layers. The network is formed by stacking these layers, and every layer has parameters to be solved: for a convolutional layer these parameters are called filters, and for a fully connected layer they are called the projection matrix. In one embodiment of the application a "conv-pool-conv-pool-fc" structure is used, but the numbers of convolutional, pooling and fully connected layers may be freely increased or decreased according to the actual accuracy and computation requirements. Since every layer of the deep convolutional network has parameters to be solved, these parameters need to be determined by training in advance.
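To make the layer stack concrete, here is a sketch that only traces tensor shapes through the "conv-pool-conv-pool-fc" structure; the filter sizes, strides and the treatment of the feature matrix as a one-channel input are arbitrary illustrative choices, not values given by the application:

```python
def conv2d_out(hw, k, stride=1):
    """Output spatial size of a valid (no-padding) convolution."""
    h, w = hw
    return ((h - k) // stride + 1, (w - k) // stride + 1)

def pool_out(hw, k=2):
    """Output spatial size of non-overlapping k x k pooling."""
    h, w = hw
    return (h // k, w // k)

# Trace a 204 x 11 key-point feature matrix (treated as a 1-channel "image")
# through conv(3x3) - pool(2x2) - conv(3x3) - pool(2x2) - fc.
hw = (204, 11)
hw = pool_out(conv2d_out(hw, 3))   # after the first conv-pool pair: (101, 4)
hw = pool_out(conv2d_out(hw, 3))   # after the second conv-pool pair: (49, 1)
fc_inputs = hw[0] * hw[1]          # the fc layer flattens this to 49 inputs
```

Tracing shapes like this shows how many inputs the final projection matrix of the fc layer must accept, which is what changes when layers are added or removed.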
Training the object blurriness matching model with the extracted sample key point feature values and the corresponding object blurriness as its known input and known output, respectively, includes: constructing an objective function with the extracted sample key point feature values as the independent variable and the corresponding object blurriness as the dependent variable, the parameters of each layer of the deep convolutional network being equivalent to the parameters of the objective function; and solving, by the back-propagation (BP) algorithm, for the parameters of the objective function that minimize it, thereby obtaining the parameters of each layer of the deep convolutional network. The method of constructing an objective function in model training and the back-propagation algorithm are prior art and are not repeated.
Once each parameter of the object blurriness matching model has been determined, the object blurriness matching model has been trained. At this point, in step S140, the key point feature values extracted at the key points are input into the pre-trained object blurriness matching model; the key point feature values are the independent variable and the object blurriness is the dependent variable, whereby the object blurriness in the object image is obtained.
As shown in Fig. 4, according to one embodiment of the application, there is also provided a method for recognizing an object in an image, including:
S410: determining the object blurriness in an object image based on key point feature values extracted at key points in the object image;
S420: performing object recognition in the object image after the blurriness is eliminated according to the blurriness determination result.
Since step S410 is performed by the method for determining the blurriness of an object in an image shown in Fig. 1, it is not repeated here.
Step S420: object recognition is performed in the object image after the blurriness is eliminated according to the blurriness determination result.
In one embodiment, step S420 includes:
performing blurriness elimination on the object image according to the blurriness determination result;
performing object recognition in the object image after the blurriness is eliminated.
In one embodiment, performing blurriness elimination on the object image according to the blurriness determination result specifically includes:
generating the object image after the blurriness is eliminated, according to the received object image and the determined object blurriness in the object image.
This process is equivalent to the inverse of the process of step S210, which obtains a blurred object image from a blur-free object image and an object blurriness, and can be realized with the prior art. For example, a point spread function is generated according to the object blurriness, the intensity of the point spread function being determined by the object blurriness; inverse filtering and denoising are performed on the object image using the generated point spread function, yielding the object image with the blurriness eliminated.
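The inverse process can be illustrated with a Wiener-style inverse filter in the Fourier domain. This is a generic prior-art sketch under the assumptions of a known Gaussian PSF, circular convolution, and an arbitrarily chosen regularization constant, not the application's specific implementation:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, normalized Gaussian point spread function on the given grid."""
    h, w = shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    g = np.exp(-(y[:, None]**2 + x[None, :]**2) / (2 * sigma**2))
    return g / g.sum()

def wiener_deblur(blurred, psf, k=1e-3):
    """Inverse-filter with regularization k to limit noise amplification."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H)**2 + k) * G
    return np.real(np.fft.ifft2(F))

# Blur a test image by circular convolution with the PSF, then deblur it.
img = np.zeros((32, 32)); img[12:20, 12:20] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deblur(blurred, psf)
```

The regularized division undoes the attenuation of the frequencies the PSF preserved, so the restored image is closer to the original than the blurred one.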
In one embodiment, performing object recognition in the object image after the blurriness is eliminated specifically includes:
locating key points in the object image after the blurriness is eliminated, and extracting the key point feature values at the key points, the key points being points defined at specific positions of the object contour;
locating key points in each object image standard sample of an object image standard sample set, and extracting the key point feature values at the key points, the key points being points defined at specific positions of the object contour;
generating an object recognition result based on the matching between the key point feature values extracted in the object image after the blurriness is eliminated and the key point feature values extracted from each object image standard sample.
Locating key points in the object image after the blurriness is eliminated and extracting the key point feature values at the key points are implemented essentially as in steps S120 and S130 of Fig. 1, and are therefore not repeated. The difference from steps S120 and S130 of Fig. 1 is that there the key points are located, and the key point feature values extracted, in the received object image whose blurriness has not been eliminated, whereas this step locates the key points, and extracts the key point feature values, in the object image after the blurriness is eliminated.
Locating key points in each object image standard sample of the object image standard sample set and extracting the key point feature values at the key points are likewise implemented essentially as in steps S120 and S130 of Fig. 1, and are not repeated. The difference from steps S120 and S130 of Fig. 1 is that there the key points are located, and the key point feature values extracted, in the received object image whose blurriness has not been eliminated, whereas this step does so in each object image standard sample of the object image standard sample set. The object image standard samples in the object image standard sample set are collected in advance, e.g. standard pictures of various objects, or standard pictures of different people such as identification photographs.
In one embodiment, generating the object recognition result based on the matching between the key point feature values extracted in the object image after the blurriness is eliminated and the key point feature values extracted from each object image standard sample includes:
in the case where the key point feature values extracted in the object image after the blurriness is eliminated are represented by a matrix and the key point feature values extracted from each object image standard sample are also represented by matrices, computing the difference matrix of each pair of matrices, the difference matrix being the matrix obtained by subtracting the elements at the same positions of the two matrices and placing the differences at the same positions. Then, the sum of squares of the elements of the difference matrix, or the root mean square of that sum, is computed. The object image standard sample with the smallest sum of squares or root mean square is the recognition result. Taking face recognition as an example, the person in the face image standard sample with the smallest sum of squares or root mean square is the person recognized.
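The matching rule above amounts to a nearest-neighbour search under squared error. A sketch, with random matrices standing in for the extracted 204 x 11 feature matrices:

```python
import numpy as np

def recognize(query_feats, sample_feats_list):
    """Return the index of the standard sample whose feature matrix has the
    smallest sum of squared element-wise differences from the query."""
    scores = [np.sum((query_feats - s) ** 2) for s in sample_feats_list]
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
samples = [rng.random((204, 11)) for _ in range(3)]
# A query that is a slightly perturbed copy of sample 1 should match sample 1.
query = samples[1] + 0.01 * rng.random((204, 11))
best = recognize(query, samples)
```

Because the difference matrix is reduced to a single scalar per sample, the comparison cost is linear in the number of standard samples.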
As shown in Fig. 5, according to one embodiment of the application, there is also provided a method for recognizing an attribute of an object in an image, including:
S410: determining the object blurriness in an object image based on key point feature values extracted at key points in the object image;
S520: determining the object attribute in the object image after the blurriness is eliminated according to the blurriness determination result.
An attribute is a property possessed by the object itself, e.g. animal species, sex, age, race (only for people), expression (crying, laughing), accessories (glasses, earrings, etc.).
Since step S410 is performed by the method for determining the blurriness of an object in an image shown in Fig. 1, it is not repeated here.
In one embodiment, step S520 includes:
performing blurriness elimination on the object image according to the determined object blurriness;
determining the object attribute in the object image after the blurriness is eliminated.
In one embodiment, performing blurriness elimination on the object image according to the determined object blurriness includes:
generating the object image after the blurriness is eliminated, according to the received object image and the determined object blurriness in the object image.
This process is equivalent to the inverse of the process of step S210, which obtains a blurred object image from a blur-free object image and an object blurriness, and can be realized with the prior art.
In one embodiment, determining the object attribute in the object image after the blurriness is eliminated includes:
locating key points in the object image after the blurriness is eliminated, and extracting the key point feature values at the key points, the key points being points defined at specific positions of the object contour;
determining the object attribute based on the key point feature values extracted in the object image after the blurriness is eliminated.
Locating key points in the object image after the blurriness is eliminated and extracting the key point feature values at the key points are implemented essentially as in steps S120 and S130 of Fig. 1, and are not repeated. The difference from steps S120 and S130 of Fig. 1 is that there the key points are located, and the key point feature values extracted, in the received object image whose blurriness has not been eliminated, whereas this step locates the key points, and extracts the key point feature values, in the object image after the blurriness is eliminated.
In one embodiment, determining the object attribute based on the key point feature values extracted in the object image after the blurriness is eliminated includes:
inputting the key point feature values extracted in the object image after the blurriness is eliminated into an object attribute recognition model, the object attribute recognition model being a machine learning model that takes the key point feature values extracted in an object image as input and outputs the attribute of the object in the object image.
In one embodiment, the object attribute recognition model is trained in the following manner:
locating key points in each object image standard sample of an object image standard sample set, and extracting the key point feature values at the key points, the key points being points defined at specific positions of the object contour;
training the object attribute recognition model with the key point feature values extracted in each object image standard sample of the object image standard sample set and the known object attributes in that object image standard sample as the known input and known output of the object attribute recognition model, respectively.
The object image standard samples in the object image standard sample set are selected in advance as samples of object images whose attributes can easily and unambiguously be identified by a person. If a person cannot easily and unambiguously identify some attribute, such as sex or worn accessories, the image is not taken as an object image standard sample. For example, for faces, faces of various sexes, ages, races and expressions and with different accessories are put into the object image standard sample set as object image standard samples, and each attribute of the persons in these object image standard samples must be easily and unambiguously identifiable by a person.
Since each attribute (e.g. sex, age, etc.) of each object image standard sample in the object image standard sample set is known in advance, the key point feature values extracted in each object image standard sample of the object image standard sample set and the known object attributes in that sample can be used as the known input and known output of the object attribute recognition model, respectively, to train the object attribute recognition model: the key point feature values are the independent variable, the known object attributes are the dependent variable, an objective function is constructed, the parameters of the object attribute recognition model being equivalent to the parameters of the objective function, and the back-propagation algorithm is used to solve for the parameters of the objective function that minimize it, thereby obtaining the parameters of the object attribute recognition model. The method of constructing an objective function in model training and the back-propagation algorithm are prior art and are not repeated.
Once each parameter of the object attribute recognition model has been determined, the object attribute recognition model has been trained. At this point, the key point feature values extracted in the object image after the blurriness is eliminated are input into the object attribute recognition model; the key point feature values are the independent variable and the object attribute is the dependent variable, whereby the object attribute in the object image is recognized.
As shown in Fig. 6, according to one embodiment of the application, there is also provided an apparatus 100 for determining the blurriness of an object in an image, including:
a memory 1001 for storing computer-readable program instructions; and
a processor 1002 for executing the computer-readable program instructions stored in the memory, to perform:
receiving an object image;
locating key points in the object image, the key points being points defined at specific positions of the object contour;
extracting key point feature values at the key points;
determining the object blurriness in the object image based on the key point feature values extracted at the key points.
In one embodiment, determining the object blurriness in the object image based on the key point feature values extracted at the key points includes:
inputting the key point feature values extracted at the key points into a pre-trained object blurriness matching model to obtain the object blurriness in the object image, the object blurriness matching model being a machine learning model that takes the key point feature values of an object image as input and outputs the object blurriness in the object image.
In one embodiment, the object fuzziness matching model is pre-trained as follows:
performing object image synthesis based on each of a plurality of standard object images and each object fuzziness in an object fuzziness set, to obtain a training set of synthetic object images;
locating sample key points in each synthetic object image;
extracting sample key point feature values at the located sample key points; and
training the object fuzziness matching model with the extracted sample key point feature values and the corresponding object fuzziness as the known inputs and known outputs of the model, respectively.
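The pre-training loop described above can be sketched as follows. The application does not fix a particular machine-learning model, so a plain nearest-neighbour regressor stands in for it here, and the feature vectors are toy values rather than real key point features:

```python
# A minimal sketch of the pre-training step: pairs of (key point feature
# vector, known object fuzziness) serve as known inputs and known outputs.
# The "model" here is a 1-NN regressor, an illustrative stand-in.

def train_fuzziness_model(samples):
    """samples: list of (key_point_feature_vector, fuzziness) pairs."""
    return list(samples)  # a 1-NN model simply memorises the training set

def predict_fuzziness(model, features):
    """Return the fuzziness of the training sample whose key point
    feature vector is closest to `features` (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda s: dist(s[0], features))[1]

training_set = [([0.9, 0.8], 0.0),   # sharp sample -> low fuzziness
                ([0.5, 0.4], 0.5),
                ([0.1, 0.2], 1.0)]   # heavily blurred sample
model = train_fuzziness_model(training_set)
print(predict_fuzziness(model, [0.85, 0.75]))  # → 0.0
```

A real implementation would replace the 1-NN stand-in with whatever trained model the embodiment uses, while keeping the same known-input/known-output contract.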
In one embodiment, locating key points in the object image includes:
performing convolution on the object image, or on a feature map obtained by convolving the object image;
applying a linear transformation to the result of the convolution of the object image or the feature map; and
taking the result of the linear transformation as the input of a three-dimensional deformable model, whose output is the key points.
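The three localisation steps above (convolution, linear transformation, deformable model) can be sketched structurally as below. The kernel, the linear-layer weights, and the deformable-model stub are toy placeholders, not trained components; in practice each stage would be learned:

```python
# Structural sketch: convolve the image, apply one linear layer, and feed
# the result to a deformable-model stage that emits key point coordinates.

def convolve2d(img, k):
    """Valid 2-D convolution (no padding, no kernel flip, as in CNNs)."""
    kh, kw = len(k), len(k[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def linear_transform(feat, weights):
    """Flatten the feature map and apply one linear layer."""
    flat = [v for row in feat for v in row]
    return [sum(w * v for w, v in zip(wrow, flat)) for wrow in weights]

def deformable_model(params):
    """Stub for the three-dimensional deformable model: interpret the
    transformed parameters as (x, y) key point coordinates."""
    return list(zip(params[0::2], params[1::2]))

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, -1]]                  # toy difference kernel
feat = convolve2d(image, kernel)            # 2x2 feature map
weights = [[1, 0, 0, 0], [0, 0, 0, 1],      # toy linear layer mapping the
           [0, 1, 0, 0], [0, 0, 1, 0]]      # feature map to 4 parameters
print(deformable_model(linear_transform(feat, weights)))
```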
In one embodiment, extracting a key point feature value at a key point includes:
drawing the tangent of the object contour at the key point, the direction perpendicular to the tangent being the normal direction; and
taking the pixel values of a predetermined number of pixels closest to the key point in its normal direction as the key point feature value extracted at that key point.
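A minimal sketch of this normal-direction sampling, under the assumption that the contour tangent at a key point can be approximated from its two neighbouring contour points:

```python
# Sample the pixel values of the n pixels nearest a key point along the
# normal to the object contour at that key point.

def normal_profile(img, key, prev_pt, next_pt, n):
    """img: 2-D list of pixel values; key/prev_pt/next_pt: (row, col)
    contour points; n: predetermined number of pixels to sample."""
    # tangent from the neighbouring contour points
    ty, tx = next_pt[0] - prev_pt[0], next_pt[1] - prev_pt[1]
    ny, nx = -tx, ty                        # normal = tangent rotated 90 deg
    norm = max(1e-9, (ny * ny + nx * nx) ** 0.5)
    ny, nx = ny / norm, nx / norm
    vals = []
    for step in range(n):                   # the n nearest pixels outward
        r = int(round(key[0] + step * ny))
        c = int(round(key[1] + step * nx))
        vals.append(img[r][c])
    return vals

img = [[10, 11, 12],
       [20, 21, 22],
       [30, 31, 32]]
# horizontal contour through row 1: the normal runs along the columns
print(normal_profile(img, (1, 1), (1, 0), (1, 2), 2))  # → [21, 11]
```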
In one embodiment, taking the pixel values of a predetermined number of pixels closest to the key point in its normal direction as the key point feature value extracted at that key point includes:
for each target image corresponding to the object image, taking, in the normal direction of the key point, the pixel values of the predetermined number of pixels closest to the key point as the key point feature value extracted at that key point, wherein the target images corresponding to the object image include: the object image itself, the object image after zooming in or out, and/or the gradient image of the object image.
In one embodiment, the key point feature values extracted at each key point take the form of a matrix, in which one dimension represents the pixel values of the predetermined number of pixels taken on each target image for each key point, and the other dimension represents the target images for each key point.
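The matrix layout just described can be illustrated as follows for a single key point: one row per target image (original, scaled, gradient), one column per sampled pixel. The target names and pixel values are toy examples:

```python
# Build the per-key-point feature matrix: rows index the target images,
# columns index the predetermined pixels sampled along the normal.

def feature_matrix(profiles_per_target):
    """profiles_per_target: dict target-name -> sampled pixel values.
    Returns (row_labels, matrix) with one row per target image."""
    labels = sorted(profiles_per_target)
    return labels, [profiles_per_target[t] for t in labels]

labels, m = feature_matrix({
    "original":  [21, 11],    # pixels sampled from the object image
    "halfscale": [20, 10],    # ... from a zoomed-out copy (toy values)
    "gradient":  [1, 2],      # ... from the gradient image (toy values)
})
print(labels)   # row order
print(m)        # 3 x 2 matrix: target images x sampled pixels
```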
In one embodiment, performing object image synthesis based on each of a plurality of standard object images and each object fuzziness in an object fuzziness set, to obtain a training set of synthetic object images, specifically includes:
generating, for each object fuzziness in the object fuzziness set, a corresponding point spread function, wherein the intensity of the point spread function is determined by the object fuzziness;
filtering each of the plurality of standard object images with each generated point spread function; and
adding random noise to the filtered images to obtain the training set of synthetic object images.
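The synthesis procedure above can be sketched as follows. A 1-D box point spread function whose width grows with the fuzziness is used purely for brevity; the real embodiment would use a 2-D point spread function of its choosing:

```python
# For each fuzziness level: build a PSF whose spread is determined by the
# fuzziness, filter each standard image with it, then add random noise.
import random

def point_spread_function(fuzziness, max_width=5):
    """Wider (stronger) PSF for higher fuzziness; width 1 means no blur."""
    width = 1 + 2 * int(fuzziness * (max_width // 2))
    return [1.0 / width] * width

def filter_signal(sig, psf):
    """Filter a 1-D signal with the PSF, clamping at the borders."""
    half = len(psf) // 2
    out = []
    for i in range(len(sig)):
        acc = 0.0
        for k, w in enumerate(psf):
            j = min(max(i + k - half, 0), len(sig) - 1)
            acc += w * sig[j]
        out.append(acc)
    return out

def synthesize(standards, fuzziness_set, noise=0.01, seed=0):
    rng = random.Random(seed)
    training = []
    for f in fuzziness_set:
        psf = point_spread_function(f)
        for img in standards:
            blurred = filter_signal(img, psf)
            noisy = [v + rng.uniform(-noise, noise) for v in blurred]
            training.append((noisy, f))  # (synthetic image, known fuzziness)
    return training

train = synthesize([[0, 0, 10, 0, 0]], [0.0, 1.0])
print(len(train))   # one standard image x two fuzziness levels → 2 samples
```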
In one embodiment, the object is a human face.
As shown in Fig. 7, according to an embodiment of the present application, an apparatus 500 for recognizing an object in an image is provided, including:
a memory 5001 for storing computer-readable program instructions; and
a processor 5002 for executing the computer-readable program instructions stored in the memory, so as to:
determine the object fuzziness in an object image based on the key point feature values extracted at each key point in the object image; and
perform object recognition in the object image after fuzziness elimination according to the fuzziness determination result.
In one embodiment, performing object recognition in the object image after fuzziness elimination according to the fuzziness determination result includes:
performing fuzziness elimination on the object image according to the fuzziness determination result; and
performing object recognition in the object image after fuzziness elimination.
In one embodiment, performing fuzziness elimination on the object image according to the fuzziness determination result includes:
generating the object image after fuzziness elimination according to the received object image and the determined object fuzziness in the object image.
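The application does not fix a particular elimination method; as an illustrative stand-in only, this sketch sharpens a 1-D signal by unsharp masking, scaling the correction by the estimated fuzziness:

```python
# Stand-in for fuzziness elimination: unsharp masking on a 1-D signal.
# The stronger the estimated fuzziness, the stronger the correction.

def box_blur(sig, width=3):
    half = width // 2
    out = []
    for i in range(len(sig)):
        window = [sig[min(max(i + k, 0), len(sig) - 1)]
                  for k in range(-half, half + 1)]
        out.append(sum(window) / width)
    return out

def eliminate_fuzziness(sig, fuzziness):
    """Add back (signal - blurred signal) scaled by the fuzziness."""
    blurred = box_blur(sig)
    return [v + fuzziness * (v - b) for v, b in zip(sig, blurred)]

sharp = eliminate_fuzziness([0.0, 0.0, 1.0, 0.0, 0.0], fuzziness=1.0)
print(sharp)  # the central peak is amplified relative to its neighbours
```

Deconvolution against the estimated point spread function (e.g. Wiener filtering) would be a more faithful counterpart to the PSF-based synthesis, at the cost of a longer example.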
In one embodiment, performing object recognition in the object image after fuzziness elimination includes:
locating key points in the object image after fuzziness elimination and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour;
locating key points in each standard object image sample of a standard object image sample set and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
generating an object recognition result based on the matching between the key point feature values extracted from the object image after fuzziness elimination and the key point feature values extracted from each standard object image sample.
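The matching step above can be sketched minimally as follows; Euclidean distance stands in for whatever matching measure the embodiment actually uses, and the gallery identities and feature vectors are invented toy values:

```python
# Match the key point feature vector of the deblurred image against the
# feature vectors of the standard object image samples.

def match_object(features, standards):
    """standards: dict identity -> key point feature vector.
    Returns the identity of the closest standard sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(standards, key=lambda name: dist(standards[name], features))

gallery = {"alice": [0.9, 0.1, 0.4],   # toy standard-sample features
           "bob":   [0.2, 0.8, 0.6]}
print(match_object([0.88, 0.15, 0.35], gallery))  # → alice
```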
As shown in Fig. 8, according to an embodiment of the present application, an apparatus 600 for identifying an object attribute in an image is further provided, including:
a memory 6001 for storing computer-readable program instructions; and
a processor 6002 for executing the computer-readable program instructions stored in the memory, so as to:
determine the object fuzziness in an object image based on the key point feature values extracted at each key point in the object image; and
determine the object attribute in the object image after fuzziness elimination according to the fuzziness determination result.
In one embodiment, determining the object attribute in the object image after fuzziness elimination according to the fuzziness determination result includes:
performing fuzziness elimination on the object image according to the determined object fuzziness; and
determining the object attribute in the object image after fuzziness elimination.
In one embodiment, performing fuzziness elimination on the object image according to the determined object fuzziness includes:
generating the object image after fuzziness elimination according to the received object image and the determined object fuzziness in the object image.
In one embodiment, determining the object attribute in the object image after fuzziness elimination includes:
locating key points in the object image after fuzziness elimination and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
determining the object attribute according to the key point feature values extracted from the object image after fuzziness elimination.
In one embodiment, determining the object attribute according to the key point feature values extracted from the object image after fuzziness elimination includes:
inputting the key point feature values extracted from the object image after fuzziness elimination into an object attribute identification model, the object attribute identification model being a machine learning model that takes the key point feature values extracted from an object image as input and outputs the attribute of the object in the object image.
In one embodiment, the object attribute identification model is trained as follows:
locating key points in each standard object image sample of a standard object image sample set and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
training the object attribute identification model with the key point feature values extracted from each standard object image sample in the set and the known object attribute in that sample as the known inputs and known outputs of the model, respectively.
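The attribute-model training just described can be sketched with a nearest-centroid classifier standing in for the unspecified machine-learning model; the attribute labels and feature vectors are invented toy values:

```python
# Train on (key point feature vector, known attribute) pairs from the
# standard sample set; classify a new feature vector by nearest centroid.

def train_attribute_model(samples):
    """samples: list of (feature_vector, attribute).
    Returns per-attribute centroids of the feature vectors."""
    sums, counts = {}, {}
    for feats, attr in samples:
        acc = sums.setdefault(attr, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[attr] = counts.get(attr, 0) + 1
    return {a: [v / counts[a] for v in acc] for a, acc in sums.items()}

def predict_attribute(model, feats):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda a: dist(model[a], feats))

model = train_attribute_model([([0.1, 0.2], "young"),
                               ([0.2, 0.1], "young"),
                               ([0.8, 0.9], "old")])
print(predict_attribute(model, [0.15, 0.12]))  # → young
```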
As shown in Fig. 9, according to an embodiment of the present application, an apparatus 100 for determining object fuzziness in an image is further provided, including:
an object image receiving device 110 for receiving an object image;
a key point positioning device 120 for locating key points in the object image, wherein a key point is a point defined at a specific position on the object contour;
a key point feature extraction device 130 for extracting key point feature values at the key points; and
an object fuzziness determination device 140 for determining the object fuzziness in the object image based on the key point feature values extracted at each key point.
In one embodiment, the object fuzziness determination device 140 is further configured to:
input the key point feature values extracted at each key point into a pre-trained object fuzziness matching model to obtain the object fuzziness in the object image, wherein the object fuzziness matching model is a machine learning model that takes the key point feature values of an input object image and outputs the object fuzziness in that image.
In one embodiment, the object fuzziness matching model is pre-trained as follows:
performing object image synthesis based on each of a plurality of standard object images and each object fuzziness in an object fuzziness set, to obtain a training set of synthetic object images;
locating sample key points in each synthetic object image;
extracting sample key point feature values at the located sample key points; and
training the object fuzziness matching model with the extracted sample key point feature values and the corresponding object fuzziness as the known inputs and known outputs of the model, respectively.
In one embodiment, the key point positioning device 120 is further configured to:
perform convolution on the object image, or on a feature map obtained by convolving the object image;
apply a linear transformation to the result of the convolution of the object image or the feature map; and
take the result of the linear transformation as the input of a three-dimensional deformable model, whose output is the key points.
In one embodiment, the key point feature extraction device 130 is further configured to:
draw the tangent of the object contour at the key point, the direction perpendicular to the tangent being the normal direction; and
take the pixel values of a predetermined number of pixels closest to the key point in its normal direction as the key point feature value extracted at that key point.
In one embodiment, taking the pixel values of a predetermined number of pixels closest to the key point in its normal direction as the key point feature value extracted at that key point specifically includes:
for each target image corresponding to the object image, taking, in the normal direction of the key point, the pixel values of the predetermined number of pixels closest to the key point as the key point feature value extracted at that key point, wherein the target images corresponding to the object image include: the object image itself, the object image after zooming in or out, and/or the gradient image of the object image.
In one embodiment, the key point feature values extracted at each key point take the form of a matrix, in which one dimension represents the pixel values of the predetermined number of pixels taken on each target image for each key point, and the other dimension represents the target images for each key point.
In one embodiment, performing object image synthesis based on each of a plurality of standard object images and each object fuzziness in an object fuzziness set, to obtain a training set of synthetic object images, specifically includes:
generating, for each object fuzziness in the object fuzziness set, a corresponding point spread function, wherein the intensity of the point spread function is determined by the object fuzziness;
filtering each of the plurality of standard object images with each generated point spread function; and
adding random noise to the filtered images to obtain the training set of synthetic object images.
In one embodiment, the object is a human face.
As shown in Fig. 10, according to an embodiment of the present application, an apparatus 500 for recognizing an object in an image is further provided, including:
an apparatus 100 for determining object fuzziness in an image, configured to determine the object fuzziness in an object image based on the key point feature values extracted at each key point in the object image; and
an object recognition device 520 for performing object recognition in the object image after fuzziness elimination according to the fuzziness determination result.
In one embodiment, the object recognition device 520 is further configured to:
perform fuzziness elimination on the object image according to the fuzziness determination result; and
perform object recognition in the object image after fuzziness elimination.
In one embodiment, performing fuzziness elimination on the object image according to the fuzziness determination result specifically includes:
generating the object image after fuzziness elimination according to the received object image and the determined object fuzziness in the object image.
In one embodiment, performing object recognition in the object image after fuzziness elimination specifically includes:
locating key points in the object image after fuzziness elimination and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour;
locating key points in each standard object image sample of a standard object image sample set and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
generating an object recognition result based on the matching between the key point feature values extracted from the object image after fuzziness elimination and the key point feature values extracted from each standard object image sample.
As shown in Fig. 11, according to an embodiment of the present application, an apparatus 600 for identifying an object attribute in an image is provided, including:
an apparatus 100 for determining object fuzziness in an image, configured to determine the object fuzziness in an object image based on the key point feature values extracted at each key point in the object image; and
an object attribute determination device 620 for determining the object attribute in the object image after fuzziness elimination according to the fuzziness determination result.
In one embodiment, the object attribute determination device 620 is further configured to:
perform fuzziness elimination on the object image according to the determined object fuzziness; and
determine the object attribute in the object image after fuzziness elimination.
In one embodiment, performing fuzziness elimination on the object image according to the determined object fuzziness includes:
generating the object image after fuzziness elimination according to the received object image and the determined object fuzziness in the object image.
In one embodiment, determining the object attribute in the object image after fuzziness elimination includes:
locating key points in the object image after fuzziness elimination and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
determining the object attribute according to the key point feature values extracted from the object image after fuzziness elimination.
In one embodiment, determining the object attribute according to the key point feature values extracted from the object image after fuzziness elimination includes:
inputting the key point feature values extracted from the object image after fuzziness elimination into an object attribute identification model, the object attribute identification model being a machine learning model that takes the key point feature values extracted from an object image as input and outputs the attribute of the object in the object image.
In one embodiment, the object attribute identification model is trained as follows:
locating key points in each standard object image sample of a standard object image sample set and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
training the object attribute identification model with the key point feature values extracted from each standard object image sample in the set and the known object attribute in that sample as the known inputs and known outputs of the model, respectively.
It should be noted that the present invention may be implemented in software and/or a combination of software and hardware; for example, it may be implemented using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to realize the steps or functions described above. Likewise, the software program of the present invention (including related data structures) may be stored in a computer-readable recording medium, for example, a RAM memory, a magnetic or optical drive, a floppy disk, or a similar device. In addition, some steps or functions of the present invention may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform each step or function.
In addition, part of the present invention may be implemented as a computer program product, such as computer program instructions that, when executed by a computer, may invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. The program instructions invoking the method of the present invention may be stored in a fixed or removable recording medium, and/or transmitted through broadcast or a data stream in another signal-bearing medium, and/or stored in the working memory of a computer device operating according to the program instructions. Here, an apparatus according to one embodiment of the present invention includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to operate according to the method and/or technical solution based on the aforementioned embodiments of the present invention.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, the embodiments should be regarded in every respect as exemplary and non-restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of equivalency of the claims be embraced in the present invention. Any reference sign in a claim should not be construed as limiting the claim involved. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or devices stated in a system claim may also be realized by a single unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any specific order.
Claims (43)
1. A method for determining object fuzziness in an image, characterized by comprising:
receiving an object image;
locating key points in the object image, wherein a key point is a point defined at a specific position on the object contour;
extracting key point feature values at the key points; and
determining the object fuzziness in the object image based on the key point feature values extracted at each key point.
2. The method according to claim 1, characterized in that the step of determining the object fuzziness in the object image based on the key point feature values extracted at each key point comprises:
inputting the key point feature values extracted at each key point into a pre-trained object fuzziness matching model to obtain the object fuzziness in the object image, wherein the object fuzziness matching model is a machine learning model that takes the key point feature values of an input object image and outputs the object fuzziness in that image.
3. The method according to claim 2, characterized in that the object fuzziness matching model is pre-trained as follows:
performing object image synthesis based on each of a plurality of standard object images and each object fuzziness in an object fuzziness set, to obtain a training set of synthetic object images;
locating sample key points in each synthetic object image;
extracting sample key point feature values at the located sample key points; and
training the object fuzziness matching model with the extracted sample key point feature values and the corresponding object fuzziness as the known inputs and known outputs of the model, respectively.
4. The method according to claim 1, characterized in that the step of locating key points in the object image comprises:
performing convolution on the object image, or on a feature map obtained by convolving the object image;
applying a linear transformation to the result of the convolution of the object image or the feature map; and
taking the result of the linear transformation as the input of a three-dimensional deformable model, whose output is the key points.
5. The method according to claim 1, characterized in that the step of extracting a key point feature value at a key point comprises:
drawing the tangent of the object contour at the key point, the direction perpendicular to the tangent being the normal direction; and
taking the pixel values of a predetermined number of pixels closest to the key point in its normal direction as the key point feature value extracted at that key point.
6. The method according to claim 5, characterized in that the step of taking the pixel values of a predetermined number of pixels closest to the key point in its normal direction as the key point feature value extracted at that key point comprises:
for each target image corresponding to the object image, taking, in the normal direction of the key point, the pixel values of the predetermined number of pixels closest to the key point as the key point feature value extracted at that key point, wherein the target images corresponding to the object image include: the object image itself, the object image after zooming in or out, and/or the gradient image of the object image.
7. The method according to claim 6, characterized in that the key point feature values extracted at each key point take the form of a matrix, in which one dimension represents the pixel values of the predetermined number of pixels taken on each target image for each key point, and the other dimension represents the target images for each key point.
8. The method according to claim 3, characterized in that performing object image synthesis based on each of a plurality of standard object images and each object fuzziness in an object fuzziness set, to obtain a training set of synthetic object images, specifically comprises:
generating, for each object fuzziness in the object fuzziness set, a corresponding point spread function, wherein the intensity of the point spread function is determined by the object fuzziness;
filtering each of the plurality of standard object images with each generated point spread function; and
adding random noise to the filtered images to obtain the training set of synthetic object images.
9. The method according to claim 1, characterized in that the object is a human face.
10. A method for recognizing an object in an image, characterized by comprising:
determining the object fuzziness in an object image based on the key point feature values extracted at each key point in the object image; and
performing object recognition in the object image after fuzziness elimination according to the fuzziness determination result.
11. The method according to claim 10, characterized in that the step of performing object recognition in the object image after fuzziness elimination according to the fuzziness determination result comprises:
performing fuzziness elimination on the object image according to the fuzziness determination result; and
performing object recognition in the object image after fuzziness elimination.
12. The method according to claim 10, characterized in that the step of determining the object fuzziness in the object image is performed according to the method of any one of claims 2-9.
13. The method according to claim 11, characterized in that performing fuzziness elimination on the object image according to the fuzziness determination result comprises:
generating the object image after fuzziness elimination according to the received object image and the determined object fuzziness in the object image.
14. The method according to any one of claims 11-13, characterized in that performing object recognition in the object image after fuzziness elimination comprises:
locating key points in the object image after fuzziness elimination and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour;
locating key points in each standard object image sample of a standard object image sample set and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
generating an object recognition result based on the matching between the key point feature values extracted from the object image after fuzziness elimination and the key point feature values extracted from each standard object image sample.
15. A method for identifying an object attribute in an image, characterized by comprising:
determining the object fuzziness in an object image based on the key point feature values extracted at each key point in the object image; and
determining the object attribute in the object image after fuzziness elimination according to the fuzziness determination result.
16. The method according to claim 15, characterized in that determining the object attribute in the object image after fuzziness elimination according to the fuzziness determination result comprises:
performing fuzziness elimination on the object image according to the determined object fuzziness; and
determining the object attribute in the object image after fuzziness elimination.
17. The method according to claim 16, characterized in that performing fuzziness elimination on the object image according to the determined object fuzziness comprises:
generating the object image after fuzziness elimination according to the received object image and the determined object fuzziness in the object image.
18. The method according to claim 16, characterized in that determining the object attribute in the object image after fuzziness elimination comprises:
locating key points in the object image after fuzziness elimination and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
determining the object attribute according to the key point feature values extracted from the object image after fuzziness elimination.
19. The method according to claim 18, characterized in that determining the object attribute according to the key point feature values extracted from the object image after fuzziness elimination comprises:
inputting the key point feature values extracted from the object image after fuzziness elimination into an object attribute identification model, the object attribute identification model being a machine learning model that takes the key point feature values extracted from an object image as input and outputs the attribute of the object in the object image.
20. The method according to claim 19, characterized in that the object attribute identification model is trained as follows:
locating key points in each standard object image sample of a standard object image sample set and extracting the key point feature values at the key points, wherein a key point is a point defined at a specific position on the object contour; and
training the object attribute identification model with the key point feature values extracted from each standard object image sample in the set and the known object attribute in that sample as the known inputs and known outputs of the model, respectively.
21. An apparatus for determining object fuzziness in an image, characterized by comprising:
a memory for storing computer-readable program instructions; and
a processor for executing the computer-readable program instructions stored in the memory, so as to:
receive an object image;
locate key points in the object image, wherein a key point is a point defined at a specific position on the object contour;
extract key point feature values at the key points; and
determine the object fuzziness in the object image based on the key point feature values extracted at each key point.
22. The apparatus according to claim 21, characterized in that determining the object fuzziness in the object image based on the key point feature values extracted at each key point comprises:
inputting the key point feature values extracted at each key point into a pre-trained object fuzziness matching model to obtain the object fuzziness in the object image, wherein the object fuzziness matching model is a machine learning model that takes the key point feature values of an input object image and outputs the object fuzziness in that image.
23. The apparatus according to claim 22, characterized in that the object fuzziness matching model is pre-trained as follows:
performing object image synthesis based on each of a plurality of standard object images and each object fuzziness in an object fuzziness set, to obtain a training set of synthetic object images;
locating sample key points in each synthetic object image;
extracting sample key point feature values at the located sample key points; and
training the object fuzziness matching model with the extracted sample key point feature values and the corresponding object fuzziness as the known inputs and known outputs of the model, respectively.
24. device according to claim 21, it is characterised in that key point is positioned from the subject image, including:
The characteristic pattern that convolution obtains, which is carried out, to the subject image or to the subject image carries out convolution;
Linear transformation will be carried out to the result of subject image or characteristic pattern convolution;
Input using the result of the linear transformation as three-dimensional deformation model, the output of three-dimensional deformation model is key point.
25. The device according to claim 21, wherein extracting a key point characteristic value at the key point comprises:
drawing, at the key point, the tangent of the object contour on which the key point lies, the direction perpendicular to the tangent being the normal direction;
taking the pixel values of a predetermined number of pixels closest to the key point in the normal direction of the key point as the key point characteristic value extracted at the key point.
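The tangent-and-normal sampling of claim 25 can be sketched as follows. The claim does not say how the tangent is obtained; this sketch estimates it from the key point's two contour neighbours, which is one common assumption.

```python
import numpy as np

def sample_along_normal(image, keypoint, prev_pt, next_pt, n_pixels=5):
    """Estimate the contour tangent at a key point from its contour
    neighbours, then return the `n_pixels` pixel values nearest the key
    point along the perpendicular (normal) direction."""
    tangent = np.array(next_pt, float) - np.array(prev_pt, float)
    tangent /= np.linalg.norm(tangent)
    normal = np.array([-tangent[1], tangent[0]])  # 90-degree rotation
    # Offsets 0, +1, -1, +2, -2, ... pick the pixels closest to the key point.
    offsets = [0]
    d = 1
    while len(offsets) < n_pixels:
        offsets += [d, -d]
        d += 1
    values = []
    for o in offsets[:n_pixels]:
        y, x = np.round(np.array(keypoint, float) + o * normal).astype(int)
        values.append(float(image[y, x]))
    return values
```

Coordinates are (row, column); rounding to the nearest pixel stands in for any interpolation scheme the patent might intend.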
26. The device according to claim 25, wherein taking the pixel values of the predetermined number of pixels closest to the key point in the normal direction of the key point as the key point characteristic value extracted at the key point comprises:
for each target image corresponding to the subject image, taking, in the normal direction of the key point, the pixel values of the predetermined number of pixels closest to the key point as the key point characteristic value extracted at the key point, wherein the target images corresponding to the subject image include: the subject image, the subject image after zooming in or out, and/or a gradient image of the subject image.
27. The device according to claim 26, wherein the key point characteristic values extracted at each key point take the form of a matrix, wherein one dimension of the matrix represents the pixel values of the predetermined number of pixels taken from each target image at each key point, and the other dimension represents the respective target images at each key point.
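Claims 26–27 together describe, per key point, a matrix with one axis over target images (original, rescaled copies, gradient image) and the other over the sampled normal-direction pixels. A minimal sketch, assuming a fixed normal vector is already known for the key point:

```python
import numpy as np

def build_feature_matrix(target_images, keypoint, normal, n_pixels=5):
    """Stack the normal-direction pixel samples from each target image into
    a matrix of shape (n_target_images, n_pixels): rows index the target
    images, columns index the sampled pixels nearest the key point."""
    offsets = [0]
    d = 1
    while len(offsets) < n_pixels:
        offsets += [d, -d]
        d += 1
    rows = []
    for img in target_images:
        row = []
        for o in offsets[:n_pixels]:
            y, x = np.round(np.asarray(keypoint, float)
                            + o * np.asarray(normal, float)).astype(int)
            row.append(float(img[y, x]))
        rows.append(row)
    return np.array(rows)
```

The matrix layout makes the multi-scale and gradient views of the same key point directly comparable across images of different blur levels.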
28. The device according to claim 23, wherein performing subject image synthesis based on each of the plurality of standard subject images and each object fuzziness in the object fuzziness set, to obtain the training set of synthesized subject images, specifically comprises:
generating, for each object fuzziness in the object fuzziness set, a corresponding point spread function, wherein the strength of the point spread function is determined by the object fuzziness;
filtering each of the plurality of standard subject images with each generated point spread function;
adding random noise to the filtered images to obtain the training set of synthesized subject images.
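The synthesis of claim 28 can be sketched directly: build a point spread function whose strength follows the fuzziness value, filter the sharp standard image, then add noise. The patent does not fix the PSF family; an isotropic Gaussian with sigma equal to the blur level is used here as an assumption.

```python
import numpy as np

def synthesize_blurred(image, blur_level, noise_std=1.0, seed=0):
    """Generate a blurred training sample: Gaussian PSF (width set by the
    fuzziness value) -> filtering -> additive random noise."""
    radius = max(1, int(3 * blur_level))
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * blur_level**2))
    psf /= psf.sum()  # normalize so filtering preserves image energy
    # Full 2D filtering via zero padding.
    pad = np.pad(image.astype(float), radius)
    h, w = image.shape
    k = 2 * radius + 1
    blurred = np.array([[np.sum(pad[i:i + k, j:j + k] * psf)
                         for j in range(w)] for i in range(h)])
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, noise_std, image.shape)
```

Repeating this for every (standard image, fuzziness value) pair in the fuzziness set yields the labelled training set used in claim 23.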
29. The device according to claim 21, wherein the object is a human face.
30. A device for identifying an object in an image, comprising:
a memory for storing computer-readable program instructions;
a processor for executing the computer-readable program instructions stored in the memory, so as to perform:
determining the object fuzziness in a subject image based on key point characteristic values extracted at each key point in the subject image;
performing object identification in the subject image after fuzziness elimination according to the fuzziness determination result.
31. The device according to claim 30, wherein performing object identification in the subject image after fuzziness elimination according to the fuzziness determination result comprises:
performing fuzziness elimination on the subject image according to the fuzziness determination result;
performing object identification in the subject image after fuzziness elimination.
32. The device according to claim 30, wherein the step of determining the object fuzziness in the subject image is performed according to the method of any one of claims 2-9.
33. The device according to claim 31, wherein performing fuzziness elimination on the subject image according to the fuzziness determination result comprises:
generating the subject image after fuzziness elimination according to the received subject image and the determined object fuzziness in the subject image.
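Claim 33 only requires that a deblurred image be generated from the received image and the estimated fuzziness; it does not fix the method. Frequency-domain Wiener filtering against the matching Gaussian PSF is one standard realization, sketched here (the Gaussian PSF and the `snr` parameter are assumptions, not from the patent):

```python
import numpy as np

def wiener_deblur(image, blur_level, snr=100.0):
    """Invert an assumed Gaussian PSF (sigma = blur_level) with a Wiener
    filter in the frequency domain to produce the deblurred image."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    # Fourier transform of an isotropic Gaussian PSF with width blur_level.
    H = np.exp(-2 * (np.pi**2) * (blur_level**2) * (xx**2 + yy**2))
    G = np.fft.fft2(image)
    # Wiener filter: regularized inverse, stable where H is small.
    F = G * H / (H**2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F))
```

The regularization term `1/snr` keeps the inverse bounded at high frequencies, which is why Wiener filtering is preferred over naive division by `H`.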
34. The device according to any one of claims 31-33, wherein performing object identification in the subject image after fuzziness elimination comprises:
locating key points in the subject image after fuzziness elimination and extracting key point characteristic values at the key points, wherein a key point is a point defined at a specific location on the object contour;
locating key points in each standard subject image sample of a standard subject image sample set and extracting key point characteristic values at the key points, wherein a key point is a point defined at a specific location on the object contour;
generating an object identification result based on matching between the key point characteristic values extracted from the subject image after fuzziness elimination and the key point characteristic values extracted from each standard subject image sample.
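The matching step of claim 34 is unspecified; a nearest-neighbour comparison of feature vectors under Euclidean distance is used in this sketch as an assumption.

```python
import numpy as np

def recognize(query_features, gallery):
    """Match the key point feature vector from the deblurred image against
    each standard sample's features; the label of the closest sample is
    returned as the identification result."""
    best_label, best_dist = None, float("inf")
    for label, feats in gallery.items():
        d = float(np.linalg.norm(query_features - feats))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label, best_dist
```

A real system would likely add a rejection threshold on `best_dist` so that poor matches yield "unknown" rather than the least-bad label.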
35. A device for recognizing an attribute of an object in an image, comprising:
a memory for storing computer-readable program instructions;
a processor for executing the computer-readable program instructions stored in the memory, so as to perform:
determining the object fuzziness in a subject image based on key point characteristic values extracted at each key point in the subject image;
determining an object attribute in the subject image after fuzziness elimination according to the fuzziness determination result.
36. The device according to claim 35, wherein determining the object attribute in the subject image after fuzziness elimination according to the fuzziness determination result comprises:
performing fuzziness elimination on the subject image according to the determined object fuzziness;
determining the object attribute in the subject image after fuzziness elimination.
37. The device according to claim 36, wherein performing fuzziness elimination on the subject image according to the determined object fuzziness comprises:
generating the subject image after fuzziness elimination according to the received subject image and the determined object fuzziness in the subject image.
38. The device according to claim 36, wherein determining the object attribute in the subject image after fuzziness elimination comprises:
locating key points in the subject image after fuzziness elimination and extracting key point characteristic values at the key points, wherein a key point is a point defined at a specific location on the object contour;
determining the object attribute based on the key point characteristic values extracted from the subject image after fuzziness elimination.
39. The device according to claim 38, wherein determining the object attribute based on the key point characteristic values extracted from the subject image after fuzziness elimination comprises:
inputting the key point characteristic values extracted from the subject image after fuzziness elimination into an object attribute recognition model, the object attribute recognition model being a machine learning model that takes as input the key point characteristic values extracted from a subject image and outputs the object attribute in the subject image.
40. The device according to claim 39, wherein the object attribute recognition model is trained as follows:
locating key points in each standard subject image sample of a standard subject image sample set and extracting key point characteristic values at the key points, wherein a key point is a point defined at a specific location on the object contour;
training the object attribute recognition model with the key point characteristic values extracted from each standard subject image sample in the set and the known object attribute in that standard subject image sample as the known input and the known output, respectively.
41. A device for determining fuzziness of an object in an image, comprising:
a subject image receiving device for receiving a subject image;
a key point locating device for locating key points in the subject image, wherein a key point is a point defined at a specific location on the object contour;
a key point characteristic value extraction device for extracting key point characteristic values at the key points;
an object fuzziness determining device for determining the object fuzziness in the subject image based on the key point characteristic values extracted at each key point.
42. A device for identifying an object in an image, comprising:
an object fuzziness determining device for determining the object fuzziness in a subject image based on key point characteristic values extracted at each key point in the subject image;
an object identification device for performing object identification in the subject image after fuzziness elimination according to the fuzziness determination result.
43. A device for recognizing an attribute of an object in an image, comprising:
an object fuzziness determining device for determining the object fuzziness in a subject image based on key point characteristic values extracted at each key point in the subject image;
an object attribute determining device for determining the object attribute in the subject image after fuzziness elimination according to the fuzziness determination result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610709852.7A CN107767358B (en) | 2016-08-23 | 2016-08-23 | Method and device for determining ambiguity of object in image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107767358A true CN107767358A (en) | 2018-03-06 |
CN107767358B CN107767358B (en) | 2021-08-13 |
Family
ID=61264659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610709852.7A Active CN107767358B (en) | 2016-08-23 | 2016-08-23 | Method and device for determining ambiguity of object in image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107767358B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447942A (en) * | 2018-09-14 | 2019-03-08 | 平安科技(深圳)有限公司 | Image blur determines method, apparatus, computer equipment and storage medium |
CN109493336A (en) * | 2018-11-14 | 2019-03-19 | 上海艾策通讯科技股份有限公司 | Video mosaic based on artificial intelligence identifies the system and method learnt automatically |
CN110609039A (en) * | 2019-09-23 | 2019-12-24 | 上海御微半导体技术有限公司 | Optical detection device and method thereof |
CN112070889A (en) * | 2020-11-13 | 2020-12-11 | 季华实验室 | Three-dimensional reconstruction method, device and system, electronic equipment and storage medium |
WO2021179905A1 (en) * | 2020-03-13 | 2021-09-16 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Motion blur robust image feature descriptor |
CN113484852A (en) * | 2021-07-07 | 2021-10-08 | 烟台艾睿光电科技有限公司 | Distance measurement method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100080469A1 (en) * | 2008-10-01 | 2010-04-01 | Fuji Xerox Co., Ltd. | Novel descriptor for image corresponding point matching |
CN101789091A (en) * | 2010-02-05 | 2010-07-28 | 上海全土豆网络科技有限公司 | System and method for automatically identifying video definition |
CN102750695A (en) * | 2012-06-04 | 2012-10-24 | 清华大学 | Machine learning-based stereoscopic image quality objective assessment method |
CN103177249A (en) * | 2011-08-22 | 2013-06-26 | 富士通株式会社 | Image processing apparatus and image processing method |
CN105868716A (en) * | 2016-03-29 | 2016-08-17 | 中国科学院上海高等研究院 | Method for human face recognition based on face geometrical features |
Non-Patent Citations (3)
Title |
---|
MING ZENG et al.: "Keypoint-Based Enhanced Image Quality Assessment", Advances in Computer Science and Education Applications * |
HU ANZHOU: "Research on Image Perceptual Quality Assessment Consistent with Subjective and Objective Evaluation", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
HAN YU et al.: "A Survey of Research Developments in Full-Information Image Quality Assessment", Command Control & Simulation * |
Also Published As
Publication number | Publication date |
---|---|
CN107767358B (en) | 2021-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107767358A (en) | A kind of objects in images fuzziness determines method and apparatus | |
CN108416266B (en) | Method for rapidly identifying video behaviors by extracting moving object through optical flow | |
Xie et al. | Synthesizing dynamic patterns by spatial-temporal generative convnet | |
CN109543548A (en) | A kind of face identification method, device and storage medium | |
Liu et al. | Partial convolution for padding, inpainting, and image synthesis | |
CN106951870A (en) | The notable event intelligent detecting prewarning method of monitor video that active vision notes | |
TW200828176A (en) | Apparatus and method for processing video data | |
CN113762138B (en) | Identification method, device, computer equipment and storage medium for fake face pictures | |
CN108681695A (en) | Video actions recognition methods and device, electronic equipment and storage medium | |
JP6207210B2 (en) | Information processing apparatus and method | |
CN107730536B (en) | High-speed correlation filtering object tracking method based on depth features | |
CN111738344A (en) | Rapid target detection method based on multi-scale fusion | |
CN110322002A (en) | The training of image generation network and image processing method and device, electronic equipment | |
Ranjan et al. | Learning human optical flow | |
CN104063871B (en) | The image sequence Scene Segmentation of wearable device | |
Zhang et al. | Video salient region detection model based on wavelet transform and feature comparison | |
CN107766864A (en) | Extract method and apparatus, the method and apparatus of object identification of feature | |
Huang et al. | Fast blind image super resolution using matrix-variable optimization | |
Ople et al. | Multi-scale neural network with dilated convolutions for image deblurring | |
Chaurasiya et al. | Deep dilated CNN based image denoising | |
CN106415606B (en) | A kind of identification based on edge, system and method | |
CN110598646B (en) | Depth feature-based unconstrained repeated action counting method | |
CN116977674A (en) | Image matching method, related device, storage medium and program product | |
Wu et al. | [Retracted] 3D Film Animation Image Acquisition and Feature Processing Based on the Latest Virtual Reconstruction Technology | |
Tu | (Retracted) Computer hand-painting of intelligent multimedia images in interior design major |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 20201210. Address after: Room 603, 6/F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China. Applicant after: Zebra smart travel network (Hong Kong) Limited. Address before: P.O. Box 847, 4th Floor, Capital Building, Grand Cayman, Cayman Islands. Applicant before: Alibaba Group Holding Ltd. |
| GR01 | Patent grant | |