CN108133220A - Model training, key point localization and image processing methods, systems and electronic device - Google Patents

Model training, key point localization and image processing methods, systems and electronic device Download PDF

Info

Publication number
CN108133220A
CN108133220A (application CN201611080382.9A)
Authority
CN
China
Prior art keywords
image
key point
sample
sample set
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611080382.9A
Other languages
Chinese (zh)
Inventor
王晋玮
钱晨
栾青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201611080382.9A priority Critical patent/CN108133220A/en
Publication of CN108133220A publication Critical patent/CN108133220A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 - Advertisements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 - Fingerprints or palmprints
    • G06V 40/13 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships

Abstract

Embodiments of the present invention provide a model training method, a key point localization method, an image processing method, corresponding systems, and electronic devices. The model training method for locating key points includes: obtaining a first sample set, the first sample set including multiple unlabeled sample images; based on a deep neural network, performing key point position annotation on each unlabeled sample image in the first sample set to obtain a second sample set; and adjusting the parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images. With the model training, key point localization and image processing methods, systems and electronic devices of the embodiments of the present invention, the training accuracy of the key point localization model can be improved without requiring all images input into the model to be labeled, which both avoids wasting sample resources and improves the efficiency of model training.

Description

Model training, key point localization and image processing methods, systems and electronic device
Technical field
The present invention relates to the technical field of image processing, and in particular to model training, key point localization and image processing methods, systems and electronic devices.
Background
Key point localization is a critical issue in object recognition research. Taking face localization as an example, face key point localization typically takes an image containing a face, together with the position of the face in that image (usually represented by a rectangular box), as input to a face key point localization model, which outputs the position information of several key points on the face (such as the eyes, mouth and nose).
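As context for the pipeline above, the model's input is typically the face region cut out of the full image by the rectangular box. A minimal sketch of that cropping step, assuming the image is a row-major 2D array and the box is given as (x, y, width, height); the function name and box layout are illustrative, not from the patent:

```python
def crop_face(image, box):
    """Cut out the rectangular face region (x, y, w, h) from a
    row-major 2D image before feeding it to the localization model."""
    x, y, w, h = box
    # Slice the requested rows, then the requested columns of each row.
    return [row[x:x + w] for row in image[y:y + h]]
```

The cropped patch, rather than the full frame, is what a localization model usually consumes.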
The key point localization model must be trained before it can be used for key point localization. Existing training methods take sample images as input and use manually annotated key points as supervision information for model training.
Summary of the Invention
An object of the embodiments of the present invention is to provide model training, key point localization and image processing methods, systems and electronic device technical solutions.
According to a first aspect of the embodiments of the present invention, a model training method for locating key points is provided, including: obtaining a first sample set, the first sample set including multiple unlabeled sample images; based on a deep neural network, performing key point position annotation on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point localization on images; and adjusting the parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images.
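The steps of the first aspect, pseudo-labeling the unlabeled set with the network and then adjusting parameters on the pseudo-labeled and labeled sets together, can be sketched with a toy linear "network" standing in for the deep neural network. All names, the gradient-descent details, and the fixed pseudo-targets are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def predict(weights, images):
    # Toy stand-in for the deep network: a linear map from image
    # features to flattened (x, y) key point coordinates.
    return images @ weights

def train_semi_supervised(weights, unlabeled, labeled_x, labeled_y,
                          lr=0.05, steps=100):
    # Step 1: annotate the first (unlabeled) sample set with the
    # current network, yielding the second sample set.
    pseudo_y = predict(weights, unlabeled)
    x = np.vstack([unlabeled, labeled_x])   # second sample set ...
    y = np.vstack([pseudo_y, labeled_y])    # ... plus third sample set
    # Step 2: adjust the network parameters on both sets together.
    losses = []
    for _ in range(steps):
        err = predict(weights, x) - y
        losses.append(float((err ** 2).mean()))
        weights = weights - lr * 2 * x.T @ err / len(x)
    return weights, losses
```

In a real system the pseudo-labels would come from a partially trained network and be refreshed as training proceeds; here they are computed once to keep the sketch short.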
Optionally, in combination with any method provided by the embodiments of the present invention, the performing, based on a deep neural network, key point position annotation on each unlabeled sample image in the first sample set to obtain a second sample set includes: performing image transformation processing on each unlabeled sample image in the first sample set to obtain a fourth sample set, wherein the image transformation processing includes any one or any combination of rotation, translation, scaling, adding noise, and adding occlusion; and, based on the deep neural network, performing key point position annotation on each sample image in the fourth sample set and the first sample set to obtain the second sample set.
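When a sample image is rotated, translated or scaled, its key point coordinates must undergo the same transform so the annotation stays aligned with the transformed image. A minimal coordinate-only sketch under that assumption (adding noise or occlusion alters pixels but leaves the coordinates unchanged; the parameter names are illustrative):

```python
import math

def transform_keypoints(points, angle_deg=0.0, tx=0.0, ty=0.0,
                        scale=1.0, center=(0.0, 0.0)):
    """Apply scaling, then rotation about `center`, then translation
    to a list of (x, y) key point coordinates."""
    a = math.radians(angle_deg)
    cx, cy = center
    out = []
    for x, y in points:
        # Scale relative to the chosen center.
        x, y = (x - cx) * scale, (y - cy) * scale
        # Rotate, then shift back and translate.
        xr = x * math.cos(a) - y * math.sin(a)
        yr = x * math.sin(a) + y * math.cos(a)
        out.append((xr + cx + tx, yr + cy + ty))
    return out
```

The same transform parameters would be applied to the image pixels, so the fourth sample set carries transformed images with matching transformed annotations.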
Optionally, in combination with any method provided by the embodiments of the present invention, the method includes: for each unlabeled sample image in the first sample set, judging, based on the key point position information obtained after the unlabeled sample image undergoes image transformation processing, whether the unlabeled sample image is a selectable sample, wherein the key point position information of the unlabeled sample image and the key point position information after its image transformation processing are both included in the second sample set; and adjusting the parameters of the deep neural network according to each selectable sample in the second sample set and the third sample set.
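One plausible reading of this selectable-sample test is a consistency check: the network's key points on the original image should agree, after mapping back, with its key points on the transformed copy, and samples where the two disagree too much are dropped. A sketch under that assumption; the threshold, the averaged Euclidean distance, and the `inverse_map` callback are all illustrative choices, not stated in the patent:

```python
import math

def is_selectable(pred_original, pred_transformed, inverse_map,
                  threshold=3.0):
    """Keep an unlabeled sample only if key points predicted on the
    original image and on its transformed copy (mapped back through
    `inverse_map`) agree to within `threshold` pixels on average."""
    mapped = [inverse_map(p) for p in pred_transformed]
    mean_dist = sum(math.dist(a, b)
                    for a, b in zip(pred_original, mapped)) / len(mapped)
    return mean_dist <= threshold
```

Filtering this way keeps only pseudo-labels the network predicts stably, which is what makes them safe to mix with manually labeled samples.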
Optionally, in combination with any method provided by the embodiments of the present invention, the adjusting the parameters of the deep neural network according to at least part of the sample images in the second sample set and the third sample set includes: adjusting the parameters of the deep neural network according to all sample images in the second sample set and the third sample set.
Optionally, in combination with any method provided by the embodiments of the present invention, the key points include any one or any combination of face key points, limb key points, palm print key points, and marker key points.
Optionally, in combination with any method provided by the embodiments of the present invention, when the key points include face key points, the face key points include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, and face contour key points.
Optionally, in combination with any method provided by the embodiments of the present invention, the deep neural network is a convolutional neural network.
According to a second aspect of the embodiments of the present invention, a key point localization method is also provided, including: obtaining a target image; and performing key point localization on the target image using the model for locating key points obtained by training with the method of the first aspect.
According to a third aspect of the embodiments of the present invention, an image processing method is also provided, including: obtaining an image to be processed; determining at least one key point in the image to be processed using the model for locating key points obtained by training with the method of the first aspect; and processing the image to be processed based on the at least one key point.
Optionally, in combination with any method provided by the embodiments of the present invention, the processing the image to be processed based on the at least one key point includes: performing, based on the at least one key point, at least one of the following processes on the image to be processed: key point recognition, beautification, face swapping, target object recognition in the image, target object tracking in the image, and business object display.
Optionally, in combination with any method provided by the embodiments of the present invention, when the processing performed on the image to be processed includes business object display, the processing the image to be processed based on the at least one key point includes: comparing the at least one key point with predetermined trigger key points; and, in response to any one of the at least one key point matching a predetermined trigger key point, drawing the business object at the key point position in the image to be processed.
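The trigger-and-draw logic above can be sketched as a simple lookup: each detected key point is compared with the predetermined trigger set, and the business object is drawn at any matching position. The dict-based interface and the `draw` callback are illustrative assumptions:

```python
def display_business_object(detected, triggers, draw):
    """Compare detected key points with predetermined trigger key
    points and draw the business object (e.g. a sticker effect) at
    each matching position."""
    drawn = []
    for name, (x, y) in detected.items():
        if name in triggers:
            draw(name, x, y)  # placeholder for the actual rendering call
            drawn.append(name)
    return drawn
```

In a live-video use the detection and comparison would run per frame, so the sticker follows the key point as the face moves.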
Optionally, in combination with any method provided by the embodiments of the present invention, the key point position includes at least one of the following: the eye region, nose region, mouth region, eyebrow region, face contour region, limb region, or palm print region of a person in the image, or the region of a pre-set marker in the image.
Optionally, in combination with any method provided by the embodiments of the present invention, the predetermined trigger key points include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, face contour key points, limb key points, palm print key points, and marker key points.
Optionally, in combination with any method provided by the embodiments of the present invention, the business object is a special effect containing semantic information.
Optionally, in combination with any method provided by the embodiments of the present invention, the business object includes a special effect containing advertising information in at least one of the following forms: a two-dimensional sticker effect, a three-dimensional effect, or a particle effect.
According to a fourth aspect of the embodiments of the present invention, a model training system for locating key points is also provided, including: a sample set obtaining module, configured to obtain a first sample set, the first sample set including multiple unlabeled sample images; a key point position annotation module, configured to, based on a deep neural network, perform key point position annotation on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point localization on images; and a network parameter adjustment module, configured to adjust the parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images.
Optionally, in combination with any system provided by the embodiments of the present invention, the key point position annotation module includes: an image transformation processing unit, configured to perform image transformation processing on each unlabeled sample image in the first sample set to obtain a fourth sample set, wherein the image transformation processing includes any one or any combination of rotation, translation, scaling, adding noise, and adding occlusion; and a key point position annotation unit, configured to, based on the deep neural network, perform key point position annotation on each sample image in the fourth sample set and the first sample set to obtain the second sample set.
Optionally, in combination with any system provided by the embodiments of the present invention, the network parameter adjustment module includes: a selectable sample judging unit, configured to, for each unlabeled sample image in the first sample set, judge, based on the key point position information obtained after the unlabeled sample image undergoes image transformation processing, whether the unlabeled sample image is a selectable sample, wherein the key point position information of the unlabeled sample image and the key point position information after its image transformation processing are both included in the second sample set; and a first network parameter adjustment unit, configured to adjust the parameters of the deep neural network according to each selectable sample in the second sample set and the third sample set.
Optionally, in combination with any system provided by the embodiments of the present invention, the network parameter adjustment module includes: a second network parameter adjustment unit, configured to adjust the parameters of the deep neural network according to all sample images in the second sample set and the third sample set.
Optionally, in combination with any system provided by the embodiments of the present invention, the key points include any one or any combination of face key points, limb key points, palm print key points, and marker key points.
Optionally, in combination with any system provided by the embodiments of the present invention, when the key points include face key points, the face key points include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, and face contour key points.
Optionally, in combination with any system provided by the embodiments of the present invention, the deep neural network is a convolutional neural network.
According to a fifth aspect of the embodiments of the present invention, a key point localization system is also provided, including: a target image obtaining module, configured to obtain a target image; and a key point localization module, configured to perform key point localization on the target image using the model for locating key points obtained by training with the system of the fourth aspect.
According to a sixth aspect of the embodiments of the present invention, an image processing system is also provided, including: an image obtaining module, configured to obtain an image to be processed; a key point determining module, configured to determine at least one key point in the image to be processed using the model for locating key points obtained by training with the system of the fourth aspect; and an image processing module, configured to process the image to be processed based on the at least one key point.
Optionally, in combination with any system provided by the embodiments of the present invention, the image processing module includes an image processing unit; the image processing unit is configured to, based on the at least one key point, perform at least one of the following processes on the image to be processed: key point recognition, beautification, face swapping, target object recognition in the image, target object tracking in the image, and business object display.
Optionally, in combination with any system provided by the embodiments of the present invention, the image processing unit is further configured to compare the at least one key point with predetermined trigger key points, and, in response to any one of the at least one key point matching a predetermined trigger key point, draw the business object at the key point position in the image to be processed.
Optionally, in combination with any system provided by the embodiments of the present invention, the key point position includes at least one of the following: the eye region, nose region, mouth region, eyebrow region, face contour region, limb region, or palm print region of a person in the image, or the region of a pre-set marker in the image.
Optionally, in combination with any system provided by the embodiments of the present invention, the predetermined trigger key points include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, face contour key points, limb key points, palm print key points, and marker key points.
Optionally, in combination with any system provided by the embodiments of the present invention, the business object is a special effect containing semantic information.
Optionally, in combination with any system provided by the embodiments of the present invention, the business object includes a special effect containing advertising information in at least one of the following forms: a two-dimensional sticker effect, a three-dimensional effect, or a particle effect.
According to a seventh aspect of the embodiments of the present invention, an electronic device is provided, including: a processor, a memory, a communication interface, and a communication bus, the processor, the memory and the communication interface communicating with each other through the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the model training method for locating key points provided by the first aspect above.
According to an eighth aspect of the embodiments of the present invention, an electronic device is provided, including: a processor, a memory, a communication interface, and a communication bus, the processor, the memory and the communication interface communicating with each other through the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the key point localization method provided by the second aspect above.
According to a ninth aspect of the embodiments of the present invention, an electronic device is provided, including: a processor, a memory, a communication interface, and a communication bus, the processor, the memory and the communication interface communicating with each other through the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the image processing method provided by the third aspect above.
According to a tenth aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, which stores: executable instructions for obtaining a first sample set, the first sample set including multiple unlabeled sample images; executable instructions for, based on a deep neural network, performing key point position annotation on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point localization on images; and executable instructions for adjusting the parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images.
According to an eleventh aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, which stores: executable instructions for obtaining a target image; and executable instructions for performing key point localization on the target image using the model for locating key points obtained by training with the method of the first aspect.
According to a twelfth aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, which stores: executable instructions for obtaining an image to be processed; executable instructions for determining at least one key point in the image to be processed using the model for locating key points obtained by training with the method of the first aspect; and executable instructions for processing the image to be processed based on the at least one key point.
According to the model training, key point localization and image processing methods, systems and electronic devices provided by the embodiments of the present invention, the parameters of the deep neural network are adjusted using two sample sets. One is the second sample set, which is obtained by performing, based on the deep neural network, key point position annotation on a first sample set containing multiple unlabeled sample images. The other is the third sample set containing multiple labeled sample images. Compared with the prior art, in which the key points in all images input into the model must be manually annotated, the embodiments of the present invention improve the training accuracy of the key point localization model without requiring all input images to be labeled; in other words, they both avoid wasting sample resources and improve the efficiency of model training.
Brief Description of the Drawings
Fig. 1 is a flow chart of a model training method for locating key points according to Embodiment 1 of the present invention;
Fig. 2 is a flow chart of a model training method for locating key points according to Embodiment 2 of the present invention;
Fig. 3 is a flow chart of a key point localization method according to Embodiment 3 of the present invention;
Fig. 4 is a flow chart of an image processing method according to Embodiment 4 of the present invention;
Fig. 5 is a logic block diagram of a model training system for locating key points according to Embodiment 5 of the present invention;
Fig. 6 is a logic block diagram of a model training system for locating key points according to Embodiment 6 of the present invention;
Fig. 7 is a logic block diagram of a key point localization system according to Embodiment 7 of the present invention;
Fig. 8 is a logic block diagram of an image processing system according to Embodiment 8 of the present invention;
Fig. 9 is another logic block diagram of the image processing system according to Embodiment 8 of the present invention;
Fig. 10 is a structural block diagram of a first electronic device according to Embodiment 9 of the present invention;
Fig. 11 is a structural block diagram of a second electronic device according to Embodiment 10 of the present invention;
Fig. 12 is a structural block diagram of a third electronic device according to Embodiment 11 of the present invention.
Detailed Description
The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings (in which identical reference numbers denote identical elements) and the embodiments. The following embodiments are used to illustrate the present invention, but are not intended to limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in the embodiments of the present invention are only used to distinguish different steps, devices or modules, and denote neither any particular technical meaning nor any necessary logical order between them.
Embodiment 1
Fig. 1 is a flow chart of a model training method for locating key points according to Embodiment 1 of the present invention. The method may be performed by a device that includes a model training system for locating key points.
Referring to Fig. 1, in step S110, a first sample set is obtained; the first sample set includes multiple unlabeled sample images.
In practical applications, an image to be fed into the model that has been annotated with key point position information is called a labeled sample image. Key point position information refers to the coordinate information of a key point in the image coordinate system. Specifically, key point position annotation can be performed on sample images in advance by manual annotation or similar means.
Taking face key points as an example, the annotated face key points are mainly distributed over the facial features and the face contour, e.g. eye key points, nose key points, mouth key points, and face contour key points. Face key point position information is the coordinate information of a face key point in the face image coordinate system. For example, take the top-left corner of a sample image containing a face as the coordinate origin, the horizontal rightward direction as the positive X axis, and the vertical downward direction as the positive Y axis to establish the face image coordinate system; the coordinates of the i-th face key point in this coordinate system are denoted (xi, yi). A sample image obtained in this way is a labeled sample image. Conversely, if a sample image has not undergone such key point position annotation, it can be understood as an unlabeled sample image. The first sample set in this step is an image set containing multiple such unlabeled sample images.
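Under the image coordinate system just described (origin at the top-left, x to the right, y downward), key points predicted in normalized [0, 1] form map to pixel coordinates as follows. The normalized representation is an illustrative assumption, not part of the patent:

```python
def to_image_coords(normalized_points, width, height):
    """Map (x, y) key points from [0, 1] normalized form to the image
    coordinate system: origin at top-left, x rightward, y downward."""
    return [(x * width, y * height) for x, y in normalized_points]
```

Normalizing by image size is a common convention because it makes annotations independent of the image resolution.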
In step S120, based on a deep neural network, key point position annotation is performed on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point localization on images.
The deep neural network may be, but is not limited to, a convolutional neural network. Since the deep neural network performs key point localization on images, inputting each unlabeled sample image of the first sample set into the deep neural network achieves key point position annotation for each unlabeled sample image. Note that key point position annotation means marking out the key point position information (i.e. coordinate information) that an unlabeled sample image does not yet carry.
Optionally, the key points include any one or any combination of face key points, limb key points, palm print key points, and marker key points. When the key points include face key points, the face key points include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, and face contour key points.
Still taking an unlabeled sample image containing a face as an example: the unlabeled image is input into the deep neural network, and the output is the image itself together with its key point position information, such as the coordinate information of the eye key points and the coordinate information of the nose key points. When multiple unlabeled sample images containing faces are input into the deep neural network, the unlabeled sample images and their key point position information together form the second sample set in this step.
In step S130, the parameters of the deep neural network are adjusted according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images.
After the processing of steps S110-S120, the second sample set is obtained. Part or all of the sample images in the second sample set can then be used, together with the third sample set, to adjust the parameters of the deep neural network. For labeled sample images, refer to the introduction and explanation in step S110 of this embodiment, which is not repeated here.
With the model training method for locating key points provided by this embodiment, the parameters of the deep neural network are adjusted using two sample sets. One is the second sample set, which is obtained by performing key point position annotation, based on the deep neural network, on a first sample set containing multiple unlabeled sample images. The other is the third sample set, which contains multiple labeled sample images. Compared with the prior art, in which every key point in every image input to the model must be annotated manually, the embodiment of the present invention improves the training accuracy of the key point localization model even though not all images input to the model are labeled images; in other words, it both avoids wasting sample resources and improves the efficiency of model training.
Embodiment two
Fig. 2 is a flowchart of the model training method for locating key points according to Embodiment Two of the present invention. The method may be performed by a device that includes a model training system for locating key points.
With reference to Fig. 2, in step S210, a first sample set is obtained; the first sample set includes multiple unlabeled sample images.
In practical applications, an image that is input to a model and is annotated with key point location information is usually called a labeled sample image, where key point location information refers to the coordinates of the key points in the image coordinate system. Specifically, key point position annotation may be applied to a sample image in advance, for example by manual annotation. Conversely, if no key point position annotation has been applied to a sample image, that image can be understood as an unlabeled sample image. The first sample set in this step is simply an image collection containing multiple unlabeled sample images.
In step S220, image transformation processing is applied to each unlabeled sample image in the first sample set to obtain a fourth sample set.
The image transformation processing may include, but is not limited to, any one or any combination of rotation, translation, scaling, adding noise, and adding an occluder.
In practical applications, each of the above image processing operations is a small-amplitude transformation of the image. For example, when an unlabeled sample image is rotated by a set angle, the value range of that angle is usually (-20°, 20°), i.e., a small-amplitude rotation; similarly, the translation processing is only a small-displacement translation.
Suppose the first sample set includes 10,000 unlabeled sample images, and image transformation processing (e.g., scaling, translation) applied to each unlabeled sample image yields 10 transformed unlabeled sample images. The 10,000 unlabeled sample images then become 100,000 unlabeled sample images, and these 100,000 unlabeled sample images constitute the fourth sample set.
It should be noted that the unlabeled sample images in the first sample set may undergo the same image transformation processing or different image transformation processing.
This has two layers of meaning. The first is that, for a single unlabeled sample image, the same or different image transformation processing may be applied. Take an unlabeled sample image A as an example, and suppose image transformation processing yields three transformed unlabeled sample images. These three images may, for instance, be obtained by rotating A (the first transformed image), translating A (the second), and both rotating and translating A (the third). Alternatively, all three transformed images may be obtained by applying rotation to A.
The second layer of meaning is that different unlabeled sample images may undergo the same or different image transformation processing. Continuing the example above, suppose image transformation processing is applied to another unlabeled sample image B, yielding three transformed unlabeled sample images, all obtained by rotating B. In that case, A and B have undergone the same image transformation processing. Alternatively, one of the three may be obtained by scaling B, another by adding noise to B, and the third by adding an occluder to B; in that case, A and B have undergone different image transformation processing.
In this embodiment, any form or combination of image transformation processing falls within the technical scope of the embodiments of the present invention, as long as the effect of applying the same or different transformations to the unlabeled sample images in the first sample set is achieved. Moreover, the specific image transformation processing applied to an unlabeled sample image may be chosen according to the characteristics of that sample image.
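Under the assumption that each small-amplitude transformation can be represented as a 2×3 affine matrix (rotation limited to (-20°, 20°) as suggested above; the noise and occluder transforms are omitted for brevity), the expansion of the first sample set into the fourth sample set might be sketched as follows; all names are illustrative:

```python
import math
import random

def random_small_transform(rng):
    # Small-amplitude affine transform: rotation within (-20, 20) degrees,
    # translation of a few pixels, scaling close to 1.
    angle = math.radians(rng.uniform(-20.0, 20.0))
    scale = rng.uniform(0.9, 1.1)
    tx, ty = rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)
    c, s = scale * math.cos(angle), scale * math.sin(angle)
    return [[c, -s, tx], [s, c, ty]]   # 2x3 affine matrix

def apply_transform(m, points):
    # Applies the affine matrix to a list of (x, y) key point coordinates.
    return [(m[0][0] * x + m[0][1] * y + m[0][2],
             m[1][0] * x + m[1][1] * y + m[1][2]) for x, y in points]

rng = random.Random(0)
first_sample_set = ["unlabeled_image_%d" % i for i in range(3)]

# Fourth sample set: 10 transformed copies per unlabeled sample image, each
# recorded with its transform so it can later be inverted (image correction).
fourth_sample_set = [(img, random_small_transform(rng))
                     for img in first_sample_set for _ in range(10)]

print(len(fourth_sample_set))  # prints 30
```

Recording the transform alongside each copy matters for the later steps: the inverse of the same matrix is what the embodiment calls image correction processing.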
In step S230, based on the deep neural network, key point position annotation is performed on each sample image in the fourth sample set and the first sample set to obtain the second sample set.
Since the fourth sample set and the first sample set both consist of unlabeled sample images, following the same principle explained in Embodiment One, the unlabeled sample images are input into the deep neural network, which outputs each sample image of the fourth and first sample sets together with its key point location information.
In step S240, for each unlabeled sample image in the first sample set, it is judged, based on the key point location information obtained after image transformation processing of that unlabeled sample image, whether the key point location information of that unlabeled sample image is a selectable sample.
Here, the key point location information of an unlabeled sample image, as well as the key point location information obtained after its image transformation processing, are both included in the second sample set.
In a specific implementation, taking a single unlabeled sample image as an example, judging whether its key point location information is a selectable sample proceeds as follows:
First, image correction processing is applied to the key point location information obtained after image transformation processing of the unlabeled sample image. It should be noted that image correction processing is simply the inverse of the image transformation processing above; for example, if an unlabeled sample image was translated 5 millimeters to the right, its transformed key point location information must be translated 5 millimeters to the left to realize the image correction. Next, the covariance matrix Cov1 of the corrected key point location information (i.e., the coordinates of a series of points) is computed; Cov1 is unrolled column by column (or row by row) into a vector, which is normalized into a unit vector Cov1_v. Likewise, the covariance matrix Cov2 of the key point location information of the original unlabeled sample image is computed, unrolled column by column (or row by row) into a vector, and normalized into a unit vector Cov2_v. The inner product of Cov1_v and Cov2_v is then computed and denoted D. Finally, D is compared with a preset inner-product threshold: if D is less than the threshold, the key point location information of the unlabeled sample image is a selectable sample; conversely, if D is greater than or equal to the threshold, it is not a selectable sample. By applying this judgment, based on the key point location information before and after image transformation processing in the second sample set, to each unlabeled sample image in the first sample set, all the selectable samples of step S240 can be chosen.
In addition, another way of choosing selectable samples differs from the above procedure only in the last step: if D is less than the set threshold, image correction processing is applied to the key point location information obtained after the image transformation processing of that unlabeled sample image, yielding key point location information corrected through image correction. A result inferred from the data distribution of the corrected key point location information (e.g., the mean of the coordinates of the series of points) is then used to perform key point position annotation on that unlabeled sample image, and the annotated key point location information is included in the second sample set as a selectable sample.
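The first judgment procedure can be sketched as below. This is an illustrative reading of the description, not the patent's implementation: the covariance computation, column-wise unrolling, and unit normalization follow the text, and the keep-if-below-threshold comparison is reproduced exactly as stated; the function name, example points, and threshold value are assumptions.

```python
import numpy as np

def is_selectable(original_kpts, corrected_kpts, inner_product_threshold):
    # Cov1: covariance matrix of the image-corrected key point coordinates
    # (predictions on the transformed image, mapped back by the inverse
    # transform); Cov2: covariance of the predictions on the original image.
    cov1 = np.cov(np.asarray(corrected_kpts), rowvar=False)
    cov2 = np.cov(np.asarray(original_kpts), rowvar=False)
    # Unroll column by column into vectors and normalize to unit length.
    v1 = cov1.flatten(order="F")
    v2 = cov2.flatten(order="F")
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    d = float(np.dot(v1, v2))
    # Per the description, the sample is selectable when D is below the
    # preset inner-product threshold.
    return d < inner_product_threshold, d

# Demo: identical point sets give perfectly aligned unit vectors (D = 1).
pts = [(10.0, 20.0), (30.0, 22.0), (18.0, 40.0), (26.0, 38.0)]
selectable, d = is_selectable(pts, pts, inner_product_threshold=0.5)
print(selectable, round(d, 6))  # prints: False 1.0
```

Since the vectors are unit-normalized, D always lies in [-1, 1], so the preset threshold would be chosen in that range.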
In step S250, the parameters of the deep neural network are adjusted using each selectable sample in the second sample set and the third sample set.
The selectable samples in this step are the partial sample images of the second sample set; that is, the deep neural network is trained according to the partial sample images in the second sample set and the multiple labeled sample images in the third sample set.
In step S260, the parameters of the deep neural network are adjusted using all the sample images in the second sample set and the third sample set.
It should be noted that the processing of step S260 may be performed directly after the processing of step S230 is completed. That is, instead of choosing partial sample images from the second sample set, all the sample images in the second sample set are used directly, together with the third sample set containing multiple labeled sample images, to adjust the parameters of the deep neural network.
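As a toy illustration of adjusting parameters on the merged labeled and pseudo-annotated sample sets (the deep neural network is replaced by a one-parameter linear model purely for brevity; the data, noise level, and learning rate are all assumptions, not disclosed values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Third sample set: manually annotated pairs (input feature -> coordinate).
labeled_x = rng.uniform(0.0, 1.0, 50)
labeled_y = 2.0 * labeled_x

# Second sample set: pseudo-annotations produced by the network itself,
# simulated here as the true mapping plus small annotation noise.
pseudo_x = rng.uniform(0.0, 1.0, 200)
pseudo_y = 2.0 * pseudo_x + rng.normal(0.0, 0.05, 200)

# Adjust the single parameter w on the merged sample sets by gradient
# descent on the mean squared localization error.
x = np.concatenate([labeled_x, pseudo_x])
y = np.concatenate([labeled_y, pseudo_y])
w = 0.0
for _ in range(300):
    gradient = np.mean(2.0 * (w * x - y) * x)
    w -= 0.5 * gradient

print(round(w, 1))  # prints 2.0
```

The point of the sketch is only the data flow: the many slightly noisy pseudo-annotated samples and the few clean labeled samples enter the same parameter update, which is what steps S250/S260 describe at network scale.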
With the model training method for locating key points provided by this embodiment, the parameters of the deep neural network are adjusted using two sample sets. One is the second sample set, which is obtained by performing key point position annotation, based on the deep neural network, on a first sample set containing multiple unlabeled sample images. The other is the third sample set, which contains multiple labeled sample images. Compared with the prior art, in which every key point in every image input to the model must be annotated manually, the embodiment of the present invention improves the training accuracy of the key point localization model even though not all images input to the model are labeled images; in other words, it both avoids wasting sample resources and improves the efficiency of model training.
Further, on the basis of the above embodiment, the following technical effects are also achieved. By using image transformation processing, on the one hand the sample images become more diverse, which expands the capacity of the sample set; on the other hand, the key point location information obtained after image transformation processing serves as the basis for subsequently choosing selectable samples, so that ineffective sample images are removed and only good sample images in the second sample set participate in the subsequent adjustment of the deep neural network parameters.
Embodiment three
Fig. 3 is a flowchart of the key point localization method according to Embodiment Three of the present invention. The method may be performed by a device that includes a key point localization system.
With reference to Fig. 3, in step S310, a target image is obtained.
The target image may come from an image capture device and consist of individual frames; it may be a single frame image or a single picture; it may also come from other devices. The image may be a still image or an image in a video.
In step S320, key point localization is performed on the target image using a model for locating key points obtained by training with the model training method for locating key points.
In this embodiment, the model training method for locating key points may be that of Embodiment One or Embodiment Two above. Specifically, the target image may be input into the trained model for locating key points to obtain the key point location information of the target image.
With the key point localization method provided by this embodiment, key point localization can be performed on a target image using the trained model for locating key points. Compared with existing key point localization that uses a model containing multiple networks, the logical complexity and the amount of computation of the key point localization method are reduced, so that key point localization on images is carried out quickly and accurately.
Example IV
Fig. 4 is a flowchart of the image processing method according to Embodiment Four of the present invention. The method may be performed by a device that includes an image processing system.
With reference to Fig. 4, in step S410, an image to be processed is obtained.
The image to be processed may come from an image capture device and consist of individual frames; it may be a single frame image or a single picture; it may also come from other devices. The image may be a still image or an image in a video.
In step S420, at least one key point in the image to be processed is determined using a model for locating key points obtained by training with the model training method for locating key points.
In this embodiment, the model training method for locating key points may be that of Embodiment One or Embodiment Two above. Specifically, the image to be processed may be input into the trained model for locating key points to obtain the key point location information of at least one key point of the image to be processed.
In step S430, the image to be processed is processed based on the at least one key point.
With the image processing method provided by this embodiment, key point localization is performed on an image to be processed using a pre-trained model for locating key points, obtaining at least one key point in the image and thereby providing an accurate and reliable data basis for subsequent image processing. The image to be processed is then processed according to the obtained key points, so that the intended effect can be achieved effectively.
Optionally, the above step S430 includes: based on the at least one key point, performing at least one of the following processing on the image to be processed: key point marking, face beautification, face swapping, target object recognition in the image, target object tracking in the image, and business object display.
Here, the above key point marking processing may be realized in two ways: point marking or rectangular-box marking. For example, suppose the key points determined in the image to be processed are eyes; then the eye key points may be marked using several points located on the eye contour, or marked out using a rectangular box framing the eyes. In addition, a text label may be added on top of the point marking or rectangular-box marking; for example, the word "eyes" may be displayed inside the rectangular box.
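A minimal sketch of the rectangular-box marking with a text label described above (the contour points, box representation, and label are invented for illustration):

```python
def rectangle_mark(contour_points, label):
    # Computes the axis-aligned rectangular box framing a group of key
    # points, plus the text label to display inside the box.
    xs = [x for x, _ in contour_points]
    ys = [y for _, y in contour_points]
    return {"box": (min(xs), min(ys), max(xs), max(ys)), "label": label}

# Hypothetical eye-contour key points located by the model.
eye_points = [(110, 80), (130, 74), (150, 80), (130, 88)]
print(rectangle_mark(eye_points, "eyes"))
# prints: {'box': (110, 74, 150, 88), 'label': 'eyes'}
```

The point-marking variant would simply draw the contour points themselves instead of the enclosing box.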
Similarly, realizing functions such as face beautification and face swapping also requires first finding the face key points in the image and then performing processing such as beautification or replacement based on the found key points. Target object tracking in an image applies, for example, to a video of a road intersection, in which people and vehicles are moving; to track them, the position changes of the people, vehicles, and so on can be located and tracked based on the key points. For target object recognition in an image, for example recognizing a particular human body in the intersection video above, the image containing that human body can be processed based on the key points.
The image to be processed in this embodiment may be a video image.
When the processing performed on the image to be processed includes business object display, processing the image to be processed based on the at least one key point includes: comparing the at least one key point with predetermined trigger key points; and, in response to any one of the at least one key point matching a predetermined trigger key point, drawing a business object at that key point position in the image to be processed.
Here, the key point position may include, but is not limited to, at least one of: the eye region, nose region, mouth region, eyebrow region, or face contour region of a face in the image, a limb region, a palm print region, or the region of a marker placed in advance in the image. The predetermined trigger key points may include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, face contour key points, limb key points, palm print key points, and marker key points.
If the comparison between the determined key points and the predetermined trigger key points shows that any one of the at least one key point is one of the predetermined trigger key points, for example an eye key point, it is determined that the key point matches a predetermined trigger key point, and a business object is further drawn at the position of that key point in the image to be processed.
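The trigger-matching step can be sketched as follows; the trigger set shown is a hypothetical configuration of face key point types, and the function, object, and key point names are all illustrative assumptions:

```python
# Hypothetical configured subset of predetermined trigger key point types;
# the set-membership test stands in for the comparison against them.
PREDETERMINED_TRIGGERS = {"eye", "nose", "mouth"}

def place_business_objects(detected_keypoints, business_object="ad_sticker"):
    # `detected_keypoints` maps each detected key point type to its (x, y)
    # position; a business object placement is recorded for every match.
    placements = []
    for kind, position in detected_keypoints.items():
        if kind in PREDETERMINED_TRIGGERS:
            placements.append({"object": business_object,
                               "key_point": kind,
                               "position": position})
    return placements

detected = {"eye": (120, 80), "nose": (150, 130), "forehead": (140, 40)}
print(place_business_objects(detected))
```

In a full system, each returned placement would then be handed to a rendering step (for example an OpenGL-based draw call, as described below) rather than merely recorded.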
The business object in this embodiment may be a special effect containing semantic information, and may specifically include a special effect containing advertising information in at least one of the following forms: a two-dimensional sticker effect, such as an advertising sticker in two-dimensional form (an advertisement displayed in sticker form); a three-dimensional effect (an advertisement displayed in 3D effect form); and a particle effect. The business object is not limited to these, however; business objects of other forms, such as explanatory text or introductions for an app or other applications, or certain forms of objects that interact with video viewers (e.g., electronic pets), are equally applicable to the image processing scheme provided in this embodiment.
Drawing the business object at the key point position in the image to be processed may be realized by appropriate computer graphics drawing or rendering, including but not limited to drawing based on the OpenGL graphics drawing engine. OpenGL defines a hardware-independent, cross-programming-language, cross-platform professional graphics programming interface specification, which can conveniently draw 2D or 3D graphic images. With OpenGL, not only can 2D effects such as 2D stickers be drawn, but 3D effects and particle effects can be drawn as well. The scheme is not limited to OpenGL, however; other means such as Unity or OpenCL are equally applicable.
In addition, this embodiment also has the following technical effects. On the one hand, processing such as key point marking, face beautification, face swapping, and target object tracking in images can be performed on the image to be processed based on the key points, so that key point localization is widely applicable to various image processing means. On the other hand, in the processing of displaying a business object in the image to be processed based on the key points, when the business object is used to display an advertisement, the business object is combined with video playback; compared with the traditional video advertising approach, there is no need to transmit additional advertising video data unrelated to the video over the network, which saves network resources and client system resources. Moreover, the business object is closely combined with the key points in the video image (e.g., the face or eyes) and is displayed in a way that leaves the viewers undisturbed; it does not affect the viewers' normal video-watching experience and is unlikely to annoy them.
Embodiment five
With reference to Fig. 5, a logic block diagram of the model training system for locating key points according to Embodiment Five of the present invention is shown. The model training system for locating key points in this embodiment includes: a sample set acquisition module 510, a key point position annotation module 520, and a network parameter adjustment module 530.
The sample set acquisition module 510 is used to obtain a first sample set, which includes multiple unlabeled sample images.
The key point position annotation module 520 is used to perform, based on a deep neural network, key point position annotation on each unlabeled sample image in the first sample set to obtain a second sample set, where the deep neural network is used to perform key point localization on images.
The network parameter adjustment module 530 is used to adjust the parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, where the third sample set includes multiple labeled sample images.
For descriptions of related content in this embodiment, refer to the relevant descriptions of the model training method for locating key points in the preceding embodiments; the details are not repeated here.
With the model training system for locating key points provided by this embodiment, the parameters of the deep neural network are adjusted using two sample sets. One is the second sample set, which is obtained by performing key point position annotation, based on the deep neural network, on a first sample set containing multiple unlabeled sample images. The other is the third sample set, which contains multiple labeled sample images. Compared with the prior art, in which every key point in every image input to the model must be annotated manually, the embodiment of the present invention improves the training accuracy of the key point localization model even though not all images input to the model are labeled images; in other words, it both avoids wasting sample resources and improves the efficiency of model training.
Embodiment six
With reference to Fig. 6, a logic block diagram of the model training system for locating key points according to Embodiment Six of the present invention is shown.
Optionally, the key point position annotation module 520 includes:
an image transformation processing unit 5201, used to apply image transformation processing to each unlabeled sample image in the first sample set to obtain a fourth sample set, where the image transformation processing includes any one or any combination of rotation, translation, scaling, adding noise, and adding an occluder; and
a key point position annotation unit 5202, used to perform, based on the deep neural network, key point position annotation on each sample image in the fourth sample set and the first sample set to obtain the second sample set.
Optionally, the network parameter adjustment module 530 includes:
a selectable sample judging unit 5301, used to judge, for each unlabeled sample image in the first sample set and based on the key point location information obtained after image transformation processing, whether the key point location information of that unlabeled sample image is a selectable sample, where the key point location information of the unlabeled sample image and the key point location information obtained after its image transformation processing are included in the second sample set; and
a first network parameter adjustment unit 5302, used to adjust the parameters of the deep neural network using each selectable sample in the second sample set and the third sample set.
Optionally, the network parameter adjustment module 530 includes:
a second network parameter adjustment unit 5303, used to adjust the parameters of the deep neural network using all the sample images in the second sample set and the third sample set.
Optionally, the key points include any one or any combination of face key points, limb key points, palm print key points, and marker key points.
Optionally, when the key points include face key points, the face key points include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, and face contour key points.
Optionally, the deep neural network is a convolutional neural network.
For descriptions of related content in this embodiment, refer to the relevant descriptions of the model training method for locating key points in the preceding embodiments; the details are not repeated here.
With the model training system for locating key points provided by this embodiment, the parameters of the deep neural network are adjusted using two sample sets. One is the second sample set, which is obtained by performing key point position annotation, based on the deep neural network, on a first sample set containing multiple unlabeled sample images. The other is the third sample set, which contains multiple labeled sample images. Compared with the prior art, in which every key point in every image input to the model must be annotated manually, the embodiment of the present invention improves the training accuracy of the key point localization model even though not all images input to the model are labeled images; in other words, it both avoids wasting sample resources and improves the efficiency of model training.
Further, on the basis of the above embodiment, the following technical effects are also achieved. By using image transformation processing, on the one hand the sample images become more diverse, which expands the capacity of the sample set; on the other hand, the key point location information obtained after image transformation processing serves as the basis for subsequently choosing selectable samples, so that ineffective sample images are removed and only good sample images in the second sample set participate in the subsequent adjustment of the deep neural network parameters.
Embodiment seven
With reference to Fig. 7, a logic block diagram of the key point localization system according to Embodiment Seven of the present invention is shown. The key point localization system in this embodiment includes: a target image acquisition module 710 and a key point localization module 720.
The target image acquisition module 710 is used to obtain a target image.
The key point localization module 720 is used to perform key point localization on the target image using a model for locating key points obtained by training with the model training system for locating key points described in Embodiment Five or Embodiment Six above.
For descriptions of related content in this embodiment, refer to the relevant descriptions of the model training method for locating key points in the preceding embodiments; the details are not repeated here.
With the key point localization system provided by this embodiment, key point localization can be performed on a target image using the trained model for locating key points. Compared with existing key point localization that uses a model containing multiple networks, the logical complexity and the amount of computation of the key point localization method are reduced, so that key point localization on images is carried out quickly and accurately.
Embodiment eight
With reference to Fig. 8, a logic block diagram of the image processing system according to Embodiment Eight of the present invention is shown. The image processing system in this embodiment includes: an image acquisition module 810, a key point determination module 820, and an image processing module 830.
The image acquisition module 810 is used to obtain an image to be processed.
The key point determination module 820 is used to determine at least one key point in the image to be processed using a model for locating key points obtained by training with the model training system for locating key points described in Embodiment Five or Embodiment Six above.
The image processing module 830 is used to process the image to be processed based on the at least one key point.
On the basis of the above embodiment, with reference to Fig. 9, another logic block diagram of the image processing system according to Embodiment Eight of the present invention is shown.
Optionally, the image processing module 830 includes an image processing unit 8301, used to perform, based on the at least one key point, at least one of the following processing on the image to be processed: key point marking, face beautification, face swapping, target object recognition in the image, target object tracking in the image, and business object display.
Optionally, the image processing unit 8301 is further used to compare the at least one key point with predetermined trigger key points and, in response to any one of the at least one key point matching a predetermined trigger key point, draw a business object at that key point position in the image to be processed.
Optionally, the key point position includes at least one of: the eye region, nose region, mouth region, eyebrow region, or face contour region of a face in the image, a limb region, a palm print region, or the region of a marker placed in advance in the image.
Optionally, the predetermined trigger key points include any one or any combination of eye key points, nose key points, mouth key points, eyebrow key points, face contour key points, limb key points, palm print key points, and marker key points.
Optionally, the business object is a special effect containing semantic information.
Optionally, the business object includes a special effect containing advertising information in at least one of the following forms: a two-dimensional sticker effect, a three-dimensional effect, and a particle effect.
The description of related content in the present embodiment is referred to instruct the model for being used to position key point in previous embodiment Practice the associated description of method, details are not described herein for the present embodiment.
The image processing system provided through this embodiment is treated using training in advance for positioning the model of key point It handles image and carries out crucial point location, obtain at least one of pending image key point, so as to be carried for subsequent image processing For accurate, reliable data basis.And then pending image is handled according to obtained key point, it can effectively realize pre- The effect thought.
In addition, this embodiment has the following effects. On the one hand, processing such as key point recognition, face beautification, face swapping, and target object tracking can be performed on the image to be processed based on the key points, so that key point location can be widely applied in various image processing schemes. On the other hand, when a business object is displayed in the image based on the key points and the business object is used to display an advertisement, the advertisement is combined with video playback; compared with traditional video advertising, no additional advertisement video data unrelated to the video needs to be transmitted over the network, which saves network resources and system resources of the client. Furthermore, the business object is closely combined with key points in the video image (such as the face or eyes) and is displayed in a way that does not disturb the viewer, so the normal video viewing experience is not affected and viewer annoyance is unlikely to be caused.
Embodiment Nine
An embodiment of the present invention further provides a first electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to Figure 10, it shows a schematic structural diagram of an electronic device 900 suitable for implementing a terminal device or server according to an embodiment of the present application. As shown in Figure 10, the computer system 900 includes one or more processors, a communication unit, and the like. The one or more processors are, for example, one or more central processing units (CPU) 901 and/or one or more graphics processors (GPU) 913. The processor may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 902 or executable instructions loaded from a storage section 908 into a random access memory (RAM) 903. The communication unit 912 may include, but is not limited to, a network interface card, which may include, but is not limited to, an IB (Infiniband) network interface card.
The processor may communicate with the read-only memory 902 and/or the random access memory 903 to execute executable instructions, is connected to the communication unit 912 through a bus 904, and communicates with other target devices through the communication unit 912, so as to complete the operations corresponding to any method provided by the embodiments of the present application, for example: obtaining a first sample set, the first sample set including multiple unlabeled sample images; based on a deep neural network, performing key point position labeling on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point location on images; and adjusting parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images.
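The training flow in the operations above (pseudo-label an unlabeled first sample set to form a second sample set, then adjust parameters using part of the second set together with a manually labeled third set) can be illustrated with a deliberately minimal sketch. A linear map stands in for the deep neural network, and all names, dimensions, and the gradient-step learning rate are assumptions for the example, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_keypoints(W, images):
    # Stand-in for the deep neural network's key point regression.
    return images @ W

def train_step(W, images, labels, lr=0.1):
    # One gradient step on the mean squared key point error.
    pred = predict_keypoints(W, images)
    grad = images.T @ (pred - labels) / len(images)
    return W - lr * grad

W = rng.normal(size=(4, 2))               # network parameters
unlabeled = rng.normal(size=(8, 4))       # first sample set (no labels)
labeled_x = rng.normal(size=(8, 4))       # third sample set: images...
labeled_y = labeled_x @ np.ones((4, 2))   # ...with ground-truth labels

# Key point position labeling: the network labels the first sample set,
# producing the second sample set of (image, pseudo-label) pairs.
pseudo = predict_keypoints(W, unlabeled)

# Parameter adjustment: part of the second sample set plus the third set.
for _ in range(100):
    W = train_step(W, unlabeled[:4], pseudo[:4])
    W = train_step(W, labeled_x, labeled_y)
```

The point of the sketch is the data flow, not the model: only the third sample set carries manual annotations, while the second set is produced by the network itself.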
In addition, the RAM 903 may also store various programs and data required for the operation of the device. The CPU 901, the ROM 902, and the RAM 903 are connected to each other through the bus 904. When the RAM 903 is present, the ROM 902 is an optional module. The RAM 903 stores executable instructions, or executable instructions are written into the ROM 902 at runtime, and the executable instructions cause the processor 901 to perform the operations corresponding to the above method. An input/output (I/O) interface 905 is also connected to the bus 904. The communication unit 912 may be provided integrally, or may be provided with multiple sub-modules (for example, multiple IB network interface cards) linked to the bus.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A driver 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 910 as needed, so that a computer program read therefrom can be installed into the storage section 908 as needed.
It should be noted that the architecture shown in Figure 10 is only one optional implementation. In specific practice, the number and types of the components in Figure 10 may be selected, deleted, added, or replaced according to actual needs. Different functional components may also be provided separately or integrally; for example, the GPU and the CPU may be provided separately, or the GPU may be integrated on the CPU, and the communication unit may be provided separately, or may be integrated on the CPU or the GPU. All of these alternative embodiments fall within the protection scope of the present disclosure.
An embodiment of the present invention further provides a computer-readable storage medium. According to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium. The computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: obtaining a first sample set, the first sample set including multiple unlabeled sample images; based on a deep neural network, performing key point position labeling on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point location on images; and adjusting parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909 and/or installed from the removable medium 911. When the computer program is executed by the central processing unit (CPU) 901, the above functions defined in the method of the present application are performed.
With the first electronic device provided by this embodiment, the parameters of the deep neural network are adjusted using two sample sets. One of them is the second sample set, which is obtained by performing key point position labeling, based on the deep neural network, on a first sample set including multiple unlabeled sample images. The other is the third sample set, which includes multiple labeled sample images. Compared with the prior art, which requires manual labeling of all key points in the images input to the model, the embodiment of the present invention can improve the training accuracy of the key point location model without requiring all input images to be labeled; in other words, it can both avoid wasting sample resources and improve the efficiency of model training.
Embodiment Ten
An embodiment of the present invention further provides a second electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to Figure 11, it shows a schematic structural diagram of an electronic device 1000 suitable for implementing a terminal device or server according to an embodiment of the present application. As shown in Figure 11, the computer system 1000 includes one or more processors, a communication unit, and the like. The one or more processors are, for example, one or more central processing units (CPU) 1001 and/or one or more graphics processors (GPU) 1013. The processor may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 1002 or executable instructions loaded from a storage section 1008 into a random access memory (RAM) 1003. The communication unit 1012 may include, but is not limited to, a network interface card, which may include, but is not limited to, an IB (Infiniband) network interface card.
The processor may communicate with the read-only memory 1002 and/or the random access memory 1003 to execute executable instructions, is connected to the communication unit 1012 through a bus 1004, and communicates with other target devices through the communication unit 1012, so as to complete the operations corresponding to any method provided by the embodiments of the present application, for example: obtaining a target image; and performing key point location on the target image using a model for locating key points obtained by training with the model training method for locating key points.
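The key point location operation above (run the trained model once on a target image, read out coordinates) might look like the following sketch. A weight matrix stands in for the trained model and a short feature vector stands in for the target image; `locate_keypoints` and every dimension here are illustrative assumptions, not the patent's actual model.

```python
import numpy as np

def locate_keypoints(trained_weights, target_image):
    """Apply the trained model once and reshape the flat output into
    (x, y) key point coordinates, one row per key point."""
    flat = trained_weights.T @ target_image
    return flat.reshape(-1, 2)

trained_weights = np.ones((4, 6))     # pretend trained parameters: 3 key points
target_image = np.array([2.0, 0.0, 0.0, 0.0])
keypoints = locate_keypoints(trained_weights, target_image)
print(keypoints.shape)                # (3, 2)
```

Because the model is applied in a single forward pass, this matches the single-network inference contrasted with multi-network cascades later in this embodiment.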
In addition, the RAM 1003 may also store various programs and data required for the operation of the device. The CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other through the bus 1004. When the RAM 1003 is present, the ROM 1002 is an optional module. The RAM 1003 stores executable instructions, or executable instructions are written into the ROM 1002 at runtime, and the executable instructions cause the processor 1001 to perform the operations corresponding to the above method. An input/output (I/O) interface 1005 is also connected to the bus 1004. The communication unit 1012 may be provided integrally, or may be provided with multiple sub-modules (for example, multiple IB network interface cards) linked to the bus.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A driver 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 1010 as needed, so that a computer program read therefrom can be installed into the storage section 1008 as needed.
It should be noted that the architecture shown in Figure 11 is only one optional implementation. In specific practice, the number and types of the components in Figure 11 may be selected, deleted, added, or replaced according to actual needs. Different functional components may also be provided separately or integrally; for example, the GPU and the CPU may be provided separately, or the GPU may be integrated on the CPU, and the communication unit may be provided separately, or may be integrated on the CPU or the GPU. All of these alternative embodiments fall within the protection scope of the present disclosure.
An embodiment of the present invention further provides a computer-readable storage medium. According to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium. The computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: obtaining a target image; and performing key point location on the target image using a model for locating key points obtained by training with the model training method for locating key points. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1009 and/or installed from the removable medium 1011. When the computer program is executed by the central processing unit (CPU) 1001, the above functions defined in the method of the present application are performed.
With the second electronic device provided by this embodiment, key point location can be performed on a target image using a trained model for locating key points. Compared with existing key point location using a model composed of multiple networks, this reduces the logical complexity and the amount of computation of the key point location method, so that key point location can be performed on images quickly and accurately.
Embodiment Eleven
An embodiment of the present invention further provides a third electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to Figure 12, it shows a schematic structural diagram of an electronic device 1100 suitable for implementing a terminal device or server according to an embodiment of the present application. As shown in Figure 12, the computer system 1100 includes one or more processors, a communication unit, and the like. The one or more processors are, for example, one or more central processing units (CPU) 1101 and/or one or more graphics processors (GPU) 1113. The processor may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 1102 or executable instructions loaded from a storage section 1108 into a random access memory (RAM) 1103. The communication unit 1112 may include, but is not limited to, a network interface card, which may include, but is not limited to, an IB (Infiniband) network interface card.
The processor may communicate with the read-only memory 1102 and/or the random access memory 1103 to execute executable instructions, is connected to the communication unit 1112 through a bus 1104, and communicates with other target devices through the communication unit 1112, so as to complete the operations corresponding to any method provided by the embodiments of the present application, for example: obtaining an image to be processed; determining at least one key point in the image to be processed using a model for locating key points obtained by training with the model training method for locating key points; and processing the image to be processed based on the at least one key point.
In addition, the RAM 1103 may also store various programs and data required for the operation of the device. The CPU 1101, the ROM 1102, and the RAM 1103 are connected to each other through the bus 1104. When the RAM 1103 is present, the ROM 1102 is an optional module. The RAM 1103 stores executable instructions, or executable instructions are written into the ROM 1102 at runtime, and the executable instructions cause the processor 1101 to perform the operations corresponding to the above method. An input/output (I/O) interface 1105 is also connected to the bus 1104. The communication unit 1112 may be provided integrally, or may be provided with multiple sub-modules (for example, multiple IB network interface cards) linked to the bus.
The following components are connected to the I/O interface 1105: an input section 1106 including a keyboard, a mouse, and the like; an output section 1107 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card or a modem. The communication section 1109 performs communication processing via a network such as the Internet. A driver 1110 is also connected to the I/O interface 1105 as needed. A removable medium 1111, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 1110 as needed, so that a computer program read therefrom can be installed into the storage section 1108 as needed.
It should be noted that the architecture shown in Figure 12 is only one optional implementation. In specific practice, the number and types of the components in Figure 12 may be selected, deleted, added, or replaced according to actual needs. Different functional components may also be provided separately or integrally; for example, the GPU and the CPU may be provided separately, or the GPU may be integrated on the CPU, and the communication unit may be provided separately, or may be integrated on the CPU or the GPU. All of these alternative embodiments fall within the protection scope of the present disclosure.
An embodiment of the present invention further provides a computer-readable storage medium. According to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium. The computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: obtaining an image to be processed; determining at least one key point in the image to be processed using a model for locating key points obtained by training with the model training method for locating key points; and processing the image to be processed based on the at least one key point. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1109 and/or installed from the removable medium 1111. When the computer program is executed by the central processing unit (CPU) 1101, the above functions defined in the method of the present application are performed.
With the third electronic device provided by this embodiment, key point location is performed on the image to be processed using a pre-trained model for locating key points, so as to obtain at least one key point in the image, thereby providing an accurate and reliable data basis for subsequent image processing. The image to be processed is then processed according to the obtained key points, so that the intended effect can be achieved.
In addition, this embodiment has the following effects. On the one hand, processing such as key point recognition, face beautification, face swapping, and target object tracking can be performed on the image to be processed based on the key points, so that key point location can be widely applied in various image processing schemes. On the other hand, when a business object is displayed in the image based on the key points and the business object is used to display an advertisement, the advertisement is combined with video playback; compared with traditional video advertising, no additional advertisement video data unrelated to the video needs to be transmitted over the network, which saves network resources and system resources of the client. Furthermore, the business object is closely combined with key points in the video image (such as the face or eyes) and is displayed in a way that does not disturb the viewer, so the normal video viewing experience is not affected and viewer annoyance is unlikely to be caused.
The methods, apparatuses, and devices of the present invention may be implemented in many ways, for example, by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is for illustration only, and the steps of the methods of the present invention are not limited to the order specifically described above unless otherwise stated. In addition, in some embodiments, the present invention may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present invention. Thus, the present invention also covers a recording medium storing programs for performing the methods according to the present invention.
The description of the present invention is provided for the purposes of example and illustration, and is not intended to be exhaustive or to limit the present invention to the disclosed form. Many modifications and variations will be obvious to those of ordinary skill in the art. The embodiments were chosen and described to better explain the principles and practical applications of the present invention, and to enable those of ordinary skill in the art to understand the present invention and to design various embodiments with various modifications suited to particular uses.

Claims (10)

1. A model training method for locating key points, comprising:
obtaining a first sample set, the first sample set including multiple unlabeled sample images;
based on a deep neural network, performing key point position labeling on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point location on images; and
adjusting parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images.
2. The model training method for locating key points according to claim 1, wherein the performing key point position labeling on each unlabeled sample image in the first sample set based on the deep neural network to obtain the second sample set comprises:
performing image conversion processing on each unlabeled sample image in the first sample set to obtain a fourth sample set, wherein the image conversion processing includes any one or any combination of: rotation, translation, scaling, noise addition, and occlusion addition; and
based on the deep neural network, performing key point position labeling on each sample image in the fourth sample set and the first sample set to obtain the second sample set.
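As a minimal sketch of the image conversion processing recited in claim 2, the transforms below operate on plain arrays; the helper names and parameter values are illustrative, and a real pipeline would use an image library for the geometric transforms.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate(img):
    return np.rot90(img)                          # 90-degree rotation

def translate(img, dx=1):
    return np.roll(img, dx, axis=1)               # horizontal shift

def scale(img, k=2):
    return np.repeat(np.repeat(img, k, 0), k, 1)  # nearest-neighbour upscale

def add_noise(img):
    return img + rng.normal(0.0, 0.01, img.shape)

def occlude(img, size=2):
    out = img.copy()
    out[:size, :size] = 0.0                       # block out a corner patch
    return out

# First sample set: unlabeled images. Applying any one (or a combination)
# of the transforms to each image yields the fourth sample set.
first_sample_set = [rng.random((4, 4)) for _ in range(3)]
ops = [rotate, translate, scale, add_noise, occlude]
fourth_sample_set = [op(img) for img in first_sample_set for op in ops]
print(len(fourth_sample_set))                     # 15
```

The enlarged fourth sample set is then labeled together with the first sample set, as the claim recites, which increases the variety of images the network pseudo-labels.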
3. A key point location method, comprising:
obtaining a target image; and
performing key point location on the target image using a model for locating key points obtained by training with the method according to claim 1 or 2.
4. An image processing method, comprising:
obtaining an image to be processed;
determining at least one key point in the image to be processed using a model for locating key points obtained by training with the method according to claim 1 or 2; and
processing the image to be processed based on the at least one key point.
5. A model training system for locating key points, comprising:
a sample set obtaining module, configured to obtain a first sample set, the first sample set including multiple unlabeled sample images;
a key point position labeling module, configured to perform, based on a deep neural network, key point position labeling on each unlabeled sample image in the first sample set to obtain a second sample set, wherein the deep neural network is used to perform key point location on images; and
a network parameter adjusting module, configured to adjust parameters of the deep neural network according to at least part of the sample images in the second sample set and a third sample set, wherein the third sample set includes multiple labeled sample images.
6. A key point location system, comprising:
a target image obtaining module, configured to obtain a target image; and
a key point location module, configured to perform key point location on the target image using a model for locating key points obtained by training with the system according to claim 5.
7. An image processing system, comprising:
an image obtaining module, configured to obtain an image to be processed;
a key point determining module, configured to determine at least one key point in the image to be processed using a model for locating key points obtained by training with the system according to claim 5; and
an image processing module, configured to process the image to be processed based on the at least one key point.
8. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with each other through the communication bus; and
the memory is configured to store at least one executable instruction, the executable instruction causing the processor to perform operations corresponding to the model training method for locating key points according to claim 1 or 2.
9. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with each other through the communication bus; and
the memory is configured to store at least one executable instruction, the executable instruction causing the processor to perform operations corresponding to the key point location method according to claim 3.
10. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with each other through the communication bus; and
the memory is configured to store at least one executable instruction, the executable instruction causing the processor to perform operations corresponding to the image processing method according to claim 4.
CN201611080382.9A 2016-11-30 2016-11-30 Model training, crucial point location and image processing method, system and electronic equipment Pending CN108133220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611080382.9A CN108133220A (en) 2016-11-30 2016-11-30 Model training, crucial point location and image processing method, system and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611080382.9A CN108133220A (en) 2016-11-30 2016-11-30 Model training, crucial point location and image processing method, system and electronic equipment

Publications (1)

Publication Number Publication Date
CN108133220A true CN108133220A (en) 2018-06-08

Family

ID=62387465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611080382.9A Pending CN108133220A (en) 2016-11-30 2016-11-30 Model training, crucial point location and image processing method, system and electronic equipment

Country Status (1)

Country Link
CN (1) CN108133220A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921798A (en) * 2018-06-14 2018-11-30 北京微播视界科技有限公司 Method, apparatus and electronic equipment for image processing
CN109325907A (en) * 2018-09-18 2019-02-12 北京旷视科技有限公司 Image landscaping treatment method, apparatus and system
CN109410270A (en) * 2018-09-28 2019-03-01 百度在线网络技术(北京)有限公司 A kind of damage identification method, equipment and storage medium
CN109657537A (en) * 2018-11-05 2019-04-19 北京达佳互联信息技术有限公司 Image-recognizing method, system and electronic equipment based on target detection
CN109871883A (en) * 2019-01-24 2019-06-11 北京市商汤科技开发有限公司 Neural network training method and device, electronic equipment and storage medium
CN110347134A (en) * 2019-07-29 2019-10-18 南京图玩智能科技有限公司 A kind of AI intelligence aquaculture specimen discerning method and cultivating system
CN110363127A (en) * 2019-07-04 2019-10-22 陕西丝路机器人智能制造研究院有限公司 Robot identifies the method with positioning to workpiece key point
CN110472737A (en) * 2019-08-15 2019-11-19 腾讯医疗健康(深圳)有限公司 Training method, device and the magic magiscan of neural network model
WO2020010927A1 (en) * 2018-07-11 2020-01-16 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
WO2020010979A1 (en) * 2018-07-10 2020-01-16 腾讯科技(深圳)有限公司 Method and apparatus for training model for recognizing key points of hand, and method and apparatus for recognizing key points of hand
WO2020037678A1 (en) * 2018-08-24 2020-02-27 太平洋未来科技(深圳)有限公司 Method, device, and electronic apparatus for generating three-dimensional human face image from occluded image
CN111028313A (en) * 2019-12-26 2020-04-17 浙江口碑网络技术有限公司 Table distribution image generation method and device
CN111126108A (en) * 2018-10-31 2020-05-08 北京市商汤科技开发有限公司 Training method and device of image detection model and image detection method and device
CN111126481A (en) * 2019-12-20 2020-05-08 湖南千视通信息科技有限公司 Training method and device of neural network model
CN111339846A (en) * 2020-02-12 2020-06-26 深圳市商汤科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN111401158A (en) * 2020-03-03 2020-07-10 平安科技(深圳)有限公司 Difficult sample discovery method and device and computer equipment
CN111523422A (en) * 2020-04-15 2020-08-11 北京华捷艾米科技有限公司 Key point detection model training method, key point detection method and device
CN111954075A (en) * 2020-08-20 2020-11-17 腾讯科技(深圳)有限公司 Video processing model state adjusting method and device, electronic equipment and storage medium
WO2021159774A1 (en) * 2020-02-13 2021-08-19 腾讯科技(深圳)有限公司 Object detection model training method and apparatus, object detection method and apparatus, computer device, and storage medium
CN113449718A (en) * 2021-06-30 2021-09-28 平安科技(深圳)有限公司 Method and device for training key point positioning model and computer equipment
WO2021238410A1 (en) * 2020-05-29 2021-12-02 北京沃东天骏信息技术有限公司 Image processing method and apparatus, electronic device, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101287093A (en) * 2008-05-30 2008-10-15 北京中星微电子有限公司 Method for adding special effect in video communication and video customer terminal
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method
CN105701513A (en) * 2016-01-14 2016-06-22 深圳市未来媒体技术研究院 Method of rapidly extracting area of interest of palm print
WO2016168235A1 (en) * 2015-04-17 2016-10-20 Nec Laboratories America, Inc. Fine-grained image classification by exploring bipartite-graph labels
CN106096510A (en) * 2016-05-31 2016-11-09 北京小米移动软件有限公司 The method and apparatus of fingerprint recognition

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921798B (en) * 2018-06-14 2021-06-22 北京微播视界科技有限公司 Image processing method and device and electronic equipment
CN108921798A (en) * 2018-06-14 2018-11-30 北京微播视界科技有限公司 Image processing method, apparatus and electronic equipment
US11989350B2 (en) 2018-07-10 2024-05-21 Tencent Technology (Shenzhen) Company Limited Hand key point recognition model training method, hand key point recognition method and device
WO2020010979A1 (en) * 2018-07-10 2020-01-16 腾讯科技(深圳)有限公司 Method and apparatus for training model for recognizing key points of hand, and method and apparatus for recognizing key points of hand
WO2020010927A1 (en) * 2018-07-11 2020-01-16 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
WO2020037678A1 (en) * 2018-08-24 2020-02-27 太平洋未来科技(深圳)有限公司 Method, device, and electronic apparatus for generating three-dimensional human face image from occluded image
CN109325907A (en) * 2018-09-18 2019-02-12 北京旷视科技有限公司 Image landscaping treatment method, apparatus and system
CN109410270A (en) * 2018-09-28 2019-03-01 百度在线网络技术(北京)有限公司 Damage identification method, device and storage medium
CN111126108A (en) * 2018-10-31 2020-05-08 北京市商汤科技开发有限公司 Training method and device of image detection model and image detection method and device
CN111126108B (en) * 2018-10-31 2024-05-21 北京市商汤科技开发有限公司 Training and image detection method and device for image detection model
CN109657537A (en) * 2018-11-05 2019-04-19 北京达佳互联信息技术有限公司 Image-recognizing method, system and electronic equipment based on target detection
CN109871883A (en) * 2019-01-24 2019-06-11 北京市商汤科技开发有限公司 Neural network training method and device, electronic equipment and storage medium
CN109871883B (en) * 2019-01-24 2022-04-05 北京市商汤科技开发有限公司 Neural network training method and device, electronic equipment and storage medium
CN110363127A (en) * 2019-07-04 2019-10-22 陕西丝路机器人智能制造研究院有限公司 Method for robot recognition and localization of workpiece key points
CN110347134A (en) * 2019-07-29 2019-10-18 南京图玩智能科技有限公司 AI-based intelligent aquaculture specimen recognition method and breeding system
CN110472737B (en) * 2019-08-15 2023-11-17 腾讯医疗健康(深圳)有限公司 Training method and device for neural network model and medical image processing system
CN110472737A (en) * 2019-08-15 2019-11-19 腾讯医疗健康(深圳)有限公司 Training method and device for neural network model, and medical image processing system
CN111126481A (en) * 2019-12-20 2020-05-08 湖南千视通信息科技有限公司 Training method and device of neural network model
CN111028313A (en) * 2019-12-26 2020-04-17 浙江口碑网络技术有限公司 Table distribution image generation method and device
CN111028313B (en) * 2019-12-26 2020-10-09 浙江口碑网络技术有限公司 Table distribution image generation method and device
CN111339846A (en) * 2020-02-12 2020-06-26 深圳市商汤科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN111339846B (en) * 2020-02-12 2022-08-12 深圳市商汤科技有限公司 Image recognition method and device, electronic equipment and storage medium
WO2021159774A1 (en) * 2020-02-13 2021-08-19 腾讯科技(深圳)有限公司 Object detection model training method and apparatus, object detection method and apparatus, computer device, and storage medium
CN111401158B (en) * 2020-03-03 2023-09-01 平安科技(深圳)有限公司 Difficult sample discovery method and device and computer equipment
CN111401158A (en) * 2020-03-03 2020-07-10 平安科技(深圳)有限公司 Difficult sample discovery method and device and computer equipment
CN111523422B (en) * 2020-04-15 2023-10-10 北京华捷艾米科技有限公司 Key point detection model training method, key point detection method and device
CN111523422A (en) * 2020-04-15 2020-08-11 北京华捷艾米科技有限公司 Key point detection model training method, key point detection method and device
WO2021238410A1 (en) * 2020-05-29 2021-12-02 北京沃东天骏信息技术有限公司 Image processing method and apparatus, electronic device, and medium
CN111954075A (en) * 2020-08-20 2020-11-17 腾讯科技(深圳)有限公司 Video processing model state adjusting method and device, electronic equipment and storage medium
CN113449718A (en) * 2021-06-30 2021-09-28 平安科技(深圳)有限公司 Method and device for training key point positioning model and computer equipment

Similar Documents

Publication Publication Date Title
CN108133220A (en) Model training, key point localization and image processing method, system and electronic equipment
US11514947B1 (en) Method for real-time video processing involving changing features of an object in the video
US10776970B2 (en) Method and apparatus for processing video image and computer readable medium
US10657652B2 (en) Image matting using deep learning
CN108460343B (en) Image processing method, system and server
CN105184249B (en) Method and apparatus for face image processing
CN110390269A (en) PDF document table extracting method, device, equipment and computer readable storage medium
CN108230252A (en) Image processing method, device and electronic equipment
CN108038474A (en) Face detection method, convolutional neural network parameter training method, device and medium
CN109960986A (en) Face pose analysis method, device, equipment, storage medium and program
CN107343211A (en) Video image processing method, device and terminal device
CN109960974A (en) Face key point detection method, apparatus, electronic equipment and storage medium
CN108229282A (en) Key point detection method, apparatus, storage medium and electronic equipment
CN108229301B (en) Eyelid line detection method and device and electronic equipment
CN109635752A (en) Face key point localization method, face image processing method and related apparatus
CN108062510A (en) Method for dynamically displaying multi-target tracking results in real time, and computer equipment
CN112330527A (en) Image processing method, image processing apparatus, electronic device, and medium
CN111860484B (en) Region labeling method, device, equipment and storage medium
CN109615671A (en) Automatic font library sample generation method, computer device and readable storage medium
CN110211032B (en) Chinese character generating method and device and readable storage medium
JP2021182441A (en) Method for processing image, device, apparatus, medium, and program
CN106406693A (en) Method and device for selecting image
Zhang et al. POFMakeup: A style transfer method for Peking Opera makeup
CN116524475A (en) Method and device for generating dressing recommendations, vehicle, electronic equipment and storage medium
CN116629201A (en) Automatic label layout and typesetting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20180608)