CN109544516A - Image detecting method and device - Google Patents


Info

Publication number
CN109544516A
CN109544516A (application CN201811309965.3A)
Authority
CN
China
Prior art keywords
image
blackhead
key point
area
mentioned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811309965.3A
Other languages
Chinese (zh)
Other versions
CN109544516B (en)
Inventor
鞠汶奇
刘子威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hetai Intelligent Home Appliance Controller Co ltd
Original Assignee
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Het Data Resources and Cloud Technology Co Ltd filed Critical Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority to CN201811309965.3A priority Critical patent/CN109544516B/en
Publication of CN109544516A publication Critical patent/CN109544516A/en
Application granted granted Critical
Publication of CN109544516B publication Critical patent/CN109544516B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30088 Skin; Dermal
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses an image detection method and device. The method includes: determining a target image, the target image being an image in which the number of blackheads is to be detected; inputting the target image into a depth detection neural network, which outputs the blackhead positions of a first region image and the position of a second region image, the degree of reflection of the second region image being greater than that of the first region image; inputting the second region image into a deep neural network, which outputs the number of blackheads in the second region image; determining the number of blackheads in the first region image according to the blackhead positions of the first region image, and determining the number of blackheads in the target image according to the number of blackheads in the first region image and the number of blackheads in the second region image. Using this application, the accuracy of blackhead detection can be improved.

Description

Image detecting method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to an image detection method and device.
Background technique
At present, facial skin problems affect people's appearance, and blackheads are among the most common. Differences in gender, age and region also lead to differences in the number of blackheads.
In the beauty field, the number of facial blackheads is generally detected automatically from a self-portrait taken with a terminal such as a mobile phone and combined with other facial skin features to provide skin-care advice to the user. For example, by detecting the number of facial blackheads, suitable skin-care products or foods can be recommended to the user to improve facial skin problems. A typical method of detecting the blackhead count irradiates the skin with two kinds of light, ordinary white light and ultraviolet light, extracts the highlighted targets under the ultraviolet light to obtain a target-highlight image, generates a blackhead region map from the target-highlight image, and then uses the region map as a mask to mark the blackhead regions in the white-light image and identify the blackheads.
The above method requires irradiating the skin with ultraviolet light, the equipment is complex, and the accuracy is low.
Summary of the invention
This application provides an image detection method and device that can improve the accuracy of blackhead detection.
In a first aspect, an embodiment of the present application provides an image detection method, comprising:
determining a target image, the target image being an image in which the number of blackheads is to be detected;
inputting the target image into a depth detection neural network, which outputs the blackhead positions of a first region image and the position of a second region image, the degree of reflection of the second region image being greater than that of the first region image;
inputting the second region image into a deep neural network, which outputs the number of blackheads in the second region image;
determining the number of blackheads in the first region image according to the blackhead positions of the first region image, and determining the number of blackheads in the target image according to the number of blackheads in the first region image and the number of blackheads in the second region image.
In this embodiment, after the target image is determined, the depth detection neural network first determines the blackhead positions of the first region image and the position of the second region image, the degree of reflection of the second region image being greater than that of the first region. The second region image is then input into the deep neural network to obtain its blackhead count. Finally, the blackhead count of the first region image is obtained, and the blackhead count of the target image is computed from the counts of the first and second region images. Because the blackhead count of the more reflective second region image is detected separately, the method avoids cases where factors such as illumination intensity make an accurate count impossible, thereby improving the accuracy of blackhead detection.
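The two-branch combination described above can be sketched as a minimal function (names are hypothetical; the patent does not specify an API):

```python
def count_blackheads(first_region_positions, second_region_count):
    """Combine the two branches of the detection pipeline.

    first_region_positions: list of (x, y) blackhead positions output by the
        depth detection network for the low-reflection (first) region.
    second_region_count: blackhead count output by the counting network for
        the highly reflective (second) region, where individual positions
        cannot be localized reliably.
    """
    first_region_count = len(first_region_positions)  # count = number of detected positions
    return first_region_count + second_region_count   # total for the target image

# e.g. five localized blackheads plus three estimated in the reflective region
total = count_blackheads([(12, 30), (40, 8), (7, 7), (22, 19), (31, 44)], 3)
```

The point of the split is that positions are only trusted where reflection is low; the reflective region contributes a count, not positions.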
In one possible implementation, the target image is the region corresponding to the nose, and determining the target image comprises:
after a facial image is obtained, determining the face key points in the facial image, wherein the face key points include a first key point and a second key point; the first key point is the leftmost point of the nose wing among the face key points, and the second key point is the rightmost point of the nose wing;
determining the target image according to the first key point and the second key point.
In one possible implementation, determining the face key points in the facial image after it is obtained and determining the target image according to the first and second key points comprises:
after the facial image is obtained, determining the face key points in the facial image, wherein the face key points include a first key point, a second key point, a third key point and a fourth key point; the first key point is key point 31 among the face key points, the second key point is key point 35, the third key point is one of key points 28 and 29, and the fourth key point is key point 33;
determining a first side length of the target image according to the abscissa of the first key point and the abscissa of the second key point;
determining a second side length of the target region according to the ordinate of the third key point and the ordinate of the fourth key point;
determining the target image according to the first side length and the second side length.
In this embodiment, the target image is determined by the first, second, third and fourth key points, which is simple and feasible and improves the efficiency of determining the target image.
In one possible implementation, determining the target image comprises:
after the facial image is obtained, determining the face key points in the facial image, wherein the face key points include a fifth key point, a sixth key point and a seventh key point; the fifth key point is key point 30 among the face key points, the sixth key point is key point 0, and the seventh key point is key point 16;
determining the center point of the target image according to the fifth key point;
determining a third side length of the target image according to the abscissa of the sixth key point and the abscissa of the seventh key point;
determining a fourth side length of the target region according to the third side length;
determining the target image according to the center point, the third side length and the fourth side length.
In one possible implementation, inputting the target image into the depth detection neural network and outputting the blackhead positions of the first region image and the position of the second region image comprises:
reducing the resolution of the target image to obtain a reduced-resolution target image;
inputting the reduced-resolution target image into the depth detection neural network, which outputs the blackhead positions of the first region image and the position of the second region image;
or, enhancing the resolution of the target image to obtain an enhanced-resolution target image;
inputting the enhanced-resolution target image into the depth detection neural network, which outputs the blackhead positions of the first region image and the position of the second region image.
In this embodiment, reducing the resolution of the target image speeds up the depth detection neural network and improves the computational efficiency of blackhead detection, while enhancing the resolution of the target image makes it clearer and improves the accuracy of blackhead detection.
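The resolution step can be sketched with a plain nearest-neighbor resize. The patent names no interpolation method; in practice something like cv2.resize (INTER_AREA for reduction, INTER_CUBIC for enhancement) would be a typical choice, and the stdlib-only function below is a hypothetical stand-in:

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbor resize of a 2-D image stored as a list of rows.

    Used here both to reduce resolution (faster network inference) and to
    enhance it (larger input); a real system would use a proper
    interpolation such as bicubic for the enhancement case.
    """
    old_h, old_w = len(image), len(image[0])
    return [
        [image[r * old_h // new_h][c * old_w // new_w] for c in range(new_w)]
        for r in range(new_h)
    ]

# Reduce a 4x4 nose crop to 2x2 before feeding the detection network.
crop = [[0, 1, 2, 3],
        [4, 5, 6, 7],
        [8, 9, 10, 11],
        [12, 13, 14, 15]]
small = resize_nearest(crop, 2, 2)
```

The same function with a larger target size gives the "enhanced resolution" path, at the cost of blocky upsampling.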
In one possible implementation, before the second region image is input into the deep neural network and the number of blackheads in the second region image is output, the method further includes:
obtaining a first sample image and a second sample image, the degree of reflection of the second sample image being greater than that of the first sample image, and the object contained in the first sample image being identical to the object contained in the second sample image;
determining the number of blackheads in the first sample image;
inputting the second sample image and the blackhead count of the first sample image into the deep neural network to train the deep neural network.
Here, the first and second sample images containing the same object can be understood to mean that the two images are captured from the same facial image or the same person; that is, the first and second sample images belong to the same group of sample images, or the first sample images correspond one-to-one with the second sample images. After the first and second sample images are obtained, the blackhead count of the first sample image can be determined. It is understood that at least two groups of first and second sample images are needed.
In this embodiment, training the deep neural network by the above method effectively improves training efficiency and improves the accuracy of the detection results output by the deep neural network.
In one possible implementation, obtaining the first sample image and the second sample image comprises:
obtaining the first sample image under a first light source;
obtaining the second sample image under a second light source, the illumination intensity of the first light source being less than that of the second light source.
In this embodiment, obtaining the first sample image under the first light source can be understood as obtaining a clear, reflection-free first sample image under normal illumination; that is, the blackhead count of the first sample image obtained under the first light source can be seen clearly. Obtaining the second sample image under the second light source can be understood as obtaining, under strong light, a second sample image containing reflective regions; that is, the blackhead count of the second sample image obtained under the second light source contains unknown regions. By obtaining sample images under different illumination, this embodiment avoids sample uniformity and increases the diversity of samples across illumination scenes, improving the accuracy of deep neural network training.
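The paired-sample supervision described above can be sketched as follows. The pairing structure and the supervision signal follow the text (the count labeled on the reflection-free image supervises the reflective image of the same person); all names are hypothetical:

```python
def build_training_pairs(normal_light_images, strong_light_images, labeled_counts):
    """Pair each reflection-free image (first sample) with its strong-light
    counterpart (second sample) of the same person, attaching the blackhead
    count determined on the first sample as the target for the second.
    """
    # One-to-one correspondence between the two sample sets is required.
    assert len(normal_light_images) == len(strong_light_images) == len(labeled_counts)
    # The counting network trains on (reflective image, count) pairs; the
    # count transfers because both images show the same face.
    return [
        (strong_img, count)
        for strong_img, count in zip(strong_light_images, labeled_counts)
    ]

pairs = build_training_pairs(["face_a_soft", "face_b_soft"],
                             ["face_a_bright", "face_b_bright"],
                             [7, 4])
```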
In a second aspect, an embodiment of the present application provides an image detection device, comprising:
a first determination unit, configured to determine a target image, the target image being an image in which the number of blackheads is to be detected;
a first input-output unit, configured to input the target image into a depth detection neural network and output the blackhead positions of a first region image and the position of a second region image, the degree of reflection of the second region image being greater than that of the first region image;
a second input-output unit, configured to input the second region image into a deep neural network and output the number of blackheads in the second region image;
a second determination unit, configured to determine the number of blackheads in the first region image according to the blackhead positions of the first region image, and to determine the number of blackheads in the target image according to the blackhead counts of the first region image and the second region image.
In one possible implementation, the target image is the region corresponding to the nose;
the first determination unit includes:
a first determining subunit, configured to determine the face key points in the facial image after it is obtained, wherein the face key points include a first key point, a second key point, a third key point and a fourth key point; the first key point is the leftmost point of the nose wing among the face key points, the second key point is the rightmost point of the nose wing, the third key point is the topmost point of the nose bridge, and the fourth key point is the bottom point of the nasal septum;
a second determining subunit, configured to determine a first side length of the target image according to the abscissas of the first and second key points;
a third determining subunit, configured to determine a second side length of the target region according to the ordinates of the third and fourth key points;
a fourth determining subunit, configured to determine the target image according to the first side length and the second side length.
In one possible implementation, the first input-output unit includes:
a reduction subunit, configured to reduce the resolution of the target image to obtain a reduced-resolution target image;
a first input-output subunit, configured to input the reduced-resolution target image into the depth detection neural network and output the blackhead positions of the first region image and the position of the second region image;
or, an enhancement subunit, configured to enhance the resolution of the target image to obtain an enhanced-resolution target image;
a second input-output subunit, configured to input the enhanced-resolution target image into the depth detection neural network and output the blackhead positions of the first region image and the position of the second region image.
In one possible implementation, the device further includes:
an acquisition unit, configured to obtain a first sample image and a second sample image, the degree of reflection of the second sample image being greater than that of the first sample image, and the object contained in the first sample image being identical to the object contained in the second sample image;
a third determination unit, configured to determine the number of blackheads in the first sample image;
a training unit, configured to input the second sample image and the blackhead count of the first sample image into the deep neural network to train the deep neural network.
In one possible implementation, the acquisition unit includes:
a first acquisition subunit, configured to obtain the first sample image under a first light source;
a second acquisition subunit, configured to obtain the second sample image under a second light source, the illumination intensity of the first light source being less than that of the second light source.
In a third aspect, an embodiment of the present application further provides an image detection device, comprising a processor, a memory and an input-output interface, the processor, the memory and the input-output interface being interconnected by a line; the memory stores program instructions which, when executed by the processor, cause the processor to perform the corresponding method described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; the computer program includes program instructions which, when executed by the processor of an image detection device, cause the processor to perform the method described in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method described in the first aspect.
Detailed description of the invention
To illustrate the technical solutions in the embodiments of the present application or in the background more clearly, the drawings required in the embodiments or the background are described below.
Fig. 1 is a schematic flowchart of an image detection method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of face key points provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of determining a target image provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of a deep neural network training method provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an image detection device provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a first determining subunit provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a first input-output unit provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another first input-output unit provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of another image detection device provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an acquisition unit provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of yet another image detection device provided by an embodiment of the present application.
Specific embodiment
To make the purposes, technical solutions and advantages of the present application clearer, the application is described in further detail below with reference to the drawings.
The terms "first", "second" and the like in the description, claims and drawings of the present application are used to distinguish different objects, not to describe a particular order. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion: a process, method, system, product or device containing a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or other steps or units inherent to the process, method or device.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an image detection method provided by an embodiment of the present application. The image detection method can be applied to an image detection device, which may be a server or a terminal device such as a mobile phone, desktop computer or laptop computer; the embodiment does not limit the specific form of the image detection device.
As shown in Fig. 1, the image detection method includes:
101. Determine a target image, the target image being an image in which the number of blackheads is to be detected.
In this embodiment, determining the target image can be understood as the image detection device capturing or obtaining the target image, or obtaining it from another device; how the image detection device captures or obtains the target image is not limited. It is understood that the target image is an image whose blackhead count is to be detected, i.e. an image for which the severity of blackheads needs to be determined.
Optionally, the target image is the region corresponding to the nose. The nose region could be cropped directly out of the target image, for example with a fixed length and width; but because everyone's nose is a different size, a fixed crop yields inconsistent nose regions. The embodiment therefore also provides a method of determining the image of the nose region through face key points, as follows:
after the facial image is obtained, determining the face key points in the facial image, wherein the face key points include a first key point and a second key point; the first key point is the leftmost point of the nose wing among the face key points, and the second key point is the rightmost point of the nose wing;
determining the target image according to the first key point and the second key point.
Specifically, determining the face key points in the facial image after it is obtained, and determining the target image according to the first and second key points, comprises:
after the facial image is obtained, determining the face key points in the facial image, wherein the face key points include a first key point, a second key point, a third key point and a fourth key point; the first key point is key point 31 among the face key points, the second key point is key point 35, the third key point is one of key points 28 and 29, and the fourth key point is key point 33;
determining a first side length of the target image according to the abscissa of the first key point and the abscissa of the second key point;
determining a second side length of the target region according to the ordinate of the third key point and the ordinate of the fourth key point;
determining the target image according to the first side length and the second side length.
In this embodiment, the target image is determined by the first, second, third and fourth key points, which is simple and feasible and improves the efficiency of determining the target image.
In this embodiment, the face key points in the facial image can be determined by algorithms such as the Roberts or Sobel edge detectors, or by models such as the active contour (snake) model.
Although face key points can be determined with the above algorithms or models, these methods are relatively complex on the one hand and perform poorly on the other. The embodiment therefore provides a simple method that is both easy to implement and locates face key points effectively, as follows:
determining the face key points in the facial image comprises:
determining the face key points in the facial image through a third-party application.
In this embodiment, the third-party application may be the open-source third-party toolkit dlib, a C++ toolkit of machine learning algorithms with good face key point localization. dlib is currently widely used in robotics, embedded devices, mobile phones and large high-performance computing environments, so it can be used to locate face key points effectively and obtain the face key points. Specifically, the face key points may be the 68 face key points. As shown in Fig. 2, a schematic diagram of face key points provided by an embodiment of the present application, the face key points may include key point 0, key point 1, ..., key point 67, i.e. 68 key points.
In this embodiment, referring to Fig. 2, the face key points include key points 31, 35, 29 and 33, all within the nose region, so these are taken as the reference key points. It is understood that when locating the nose through face key points, each key point has a coordinate, i.e. a pixel coordinate. The abscissas of key points 31 and 35 therefore give the first side length of the target region, and the ordinates of key points 29 and 33 give the second side length. For example, the leftmost nose-wing key point 31 and the rightmost key point 35 are chosen as reference points, with x1 the abscissa of key point 31 and x2 the abscissa of key point 35; the topmost nose-bridge key point 29 and the bottom nasal-septum key point 33 are chosen as reference points, with y1 the ordinate of key point 29 and y2 the ordinate of key point 33. The nose region is then determined by the coordinates (x1, y1, x2, y2), and the image of the nose region is cropped out as the image of the target region, as shown in Fig. 3, a schematic diagram of determining a target image provided by an embodiment of the present application. In Fig. 3, the length of the target region is the difference of the abscissas of key points 31 and 35, and its width is the difference of the ordinates of key points 29 and 33. Implementing this embodiment locates the target region image quickly and improves detection efficiency.
It is understood that in this embodiment the coordinate systems of the abscissas and ordinates of the first and second key points are consistent, those of the third and fourth key points are consistent, and the coordinate system of the first and second key points is consistent with that of the third and fourth key points. For example, the abscissas of the first and second key points and the ordinates of the third and fourth key points may all be given in the pixel coordinate system.
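The keypoint-based crop just described can be sketched as follows. This is a stdlib-only stand-in; in practice the 68 key points would come from a landmark detector such as dlib's shape predictor, and the coordinates below are hypothetical:

```python
def nose_crop_box(keypoints):
    """Compute the nose bounding box (x1, y1, x2, y2) from 68 face key points.

    keypoints: dict mapping key point index to (x, y) pixel coordinates.
    Uses key points 31/35 (left/right nose wing) for the horizontal extent
    and 29/33 (top of nose bridge / bottom of nasal septum) vertically.
    """
    x1, _ = keypoints[31]   # leftmost nose-wing point
    x2, _ = keypoints[35]   # rightmost nose-wing point
    _, y1 = keypoints[29]   # topmost nose-bridge point
    _, y2 = keypoints[33]   # bottom of the nasal septum
    return (x1, y1, x2, y2)

# Hypothetical pixel coordinates for the four reference key points.
pts = {31: (100, 180), 35: (140, 180), 29: (120, 130), 33: (120, 190)}
box = nose_crop_box(pts)   # side lengths: x2 - x1 and y2 - y1
```

The returned box is then used to crop the nose region out of the facial image as the target image.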
Optionally, as follows the embodiment of the present application also provides another method for determining target image:
After getting facial image, the face key point in above-mentioned facial image is determined;Wherein, in above-mentioned facial image Face key point includes the 5th key point, the 6th key point and the 7th key point;Above-mentioned 5th key point is that above-mentioned face is crucial Key point 30 in point, above-mentioned 6th key point are the key point 0 in above-mentioned face key point, and above-mentioned 7th key point is above-mentioned Key point 16 in face key point;
The central point of the above-mentioned target image is determined according to the above-mentioned fifth key point;
The third side length of the above-mentioned target image is determined according to the abscissa of the above-mentioned sixth key point and the abscissa of the above-mentioned seventh key point;
The fourth side length of the above-mentioned target image is determined according to the above-mentioned third side length;
The above-mentioned target image is determined according to the above-mentioned central point, the above-mentioned third side length and the above-mentioned fourth side length.
Specifically, in the embodiment of the present application, referring to Fig. 2, Fig. 2 is a schematic diagram of face key points provided by an embodiment of the present application. As shown in Fig. 2, the face key points include key points 30, 0 and 16, which are taken as the benchmark key points. It is understood that when the nasal area is located through the face key points, each key point has a coordinate, i.e., a pixel coordinate. Therefore, key point 30 serves as the central point of the nasal area; a quarter of the absolute difference between the abscissas of key points 0 and 16 serves as the third side length of the nasal area, i.e., the length of the nasal area; and the third side length also serves as the fourth side length of the nasal area, i.e., the width of the nasal area equals its length. For example, key point 30 is chosen as the central point of the nasal area; key points 0 and 16 are chosen as datum points, coordinate x3 being the abscissa of key point 0 and coordinate x4 being the abscissa of key point 16; the absolute difference between the abscissas (x3, x4) is computed, and a quarter of that difference is taken as both the length and the width of the nasal area, so that the image of the nasal area is cropped out as the image of the above-mentioned target area. Implementing the embodiment of the present application, the above-mentioned target area image can be located quickly, which improves detection efficiency. It is understood that the embodiment of the present application does not limit the specific manner of determining the side lengths of the above-mentioned nasal area.
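The centre-based variant above can be sketched as follows. This is a minimal Python illustration assuming key points are again given as a dict mapping index to (x, y) pixel coordinates; the face size and positions are made-up values:

```python
import numpy as np

def crop_nose_by_center(image, keypoints):
    """Crop a square nose region centred on key point 30.

    Side length = |x(key point 0) - x(key point 16)| / 4, i.e. a quarter
    of the horizontal span between the two outer face-contour points.
    """
    cx, cy = keypoints[30]
    side = abs(keypoints[0][0] - keypoints[16][0]) // 4
    half = side // 2
    # Square window of the computed side length around the centre point.
    return image[cy - half:cy + half, cx - half:cx + half]

face = np.zeros((200, 200, 3), dtype=np.uint8)
kps = {30: (100, 110), 0: (20, 100), 16: (180, 100)}
patch = crop_nose_by_center(face, kps)
print(patch.shape)  # side = (180 - 20) // 4 = 40, so (40, 40, 3)
```

A real implementation would additionally clamp the window to the image borders when the centre lies near an edge.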
It is understood that in the embodiment of the present application, the fifth, sixth and seventh key points use a consistent coordinate-system standard for their abscissas and ordinates. For example, the abscissas of the above-mentioned fifth, sixth and seventh key points may all take the coordinates in the pixel coordinate system as the standard.
102. The above-mentioned target image is input to a depth detection neural network, which outputs the blackhead positions of a first area image and the position of a second area image, the reflective degree of the above-mentioned second area image being greater than the reflective degree of the above-mentioned first area image.
In the embodiment of the present application, regions of varying reflectivity may exist in the acquired target image. Regions with a high reflective degree (i.e., high-reflection or strong-reflection regions) seriously affect the detection of facial blackheads, while regions with a lower reflective degree affect facial blackhead detection far less. It is understood that the above-mentioned first area image is a region whose reflective degree does not prevent the above-mentioned depth detection neural network from detecting the blackhead positions within it. The above-mentioned second area image is a region whose reflective degree is greater than that of the above-mentioned first area image; it can be understood as a high-reflection or strong-reflection region, i.e., the blackhead positions in the second area image cannot be detected by the above-mentioned depth detection neural network.
In the embodiment of the present application, the above-mentioned depth detection neural network can be understood as a target detection algorithm (you only look once, Yolo). Specifically, the above-mentioned target image is input to the Yolo network, which divides the target image into an S × S grid, where S is an integer greater than or equal to 1. Each grid cell is then responsible for detecting the targets whose center points fall within that cell, and each grid cell predicts B bounding boxes (bounding box) together with a confidence score (confidence score) for each bounding box. The size and position of a bounding box can be characterized by 4 values: (x, y, w, h), where (x, y) is the center coordinate of the bounding box, and w and h are its width and height. Each grid cell also predicts C class probability values, which represent the probability that the target in a bounding box predicted by that cell belongs to each class; that is, these probabilities are conditional on the bounding-box confidence. In this way, the predicted value of each bounding box actually contains 5 elements: (x, y, w, h, c), where the first 4 represent the size and position of the bounding box and the last value is the confidence. Finally, S × S × B target windows can be predicted; the low-probability target windows are removed by setting a threshold, and redundant windows are removed by the non-maximum suppression algorithm (non maximum suppression, NMS).
For example, the above-mentioned target image is input and divided into a 7 × 7 grid (setting S = 7); for each grid cell, 2 bounding boxes are predicted (setting B = 2), so 7 × 7 × 2 target windows can be predicted. The low-probability target windows are then removed according to the threshold, and finally the redundant windows are removed by the non-maximum suppression algorithm NMS. Implementing the embodiment of the present application, the blackhead positions and the high-reflection region image can be detected quickly by the above-mentioned depth detection neural network, so that the subsequent detection of the blackhead number in the high-reflection region image is accelerated and the efficiency of blackhead detection is improved.
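The confidence thresholding and non-maximum suppression steps described above can be sketched as follows. This is a minimal Python illustration; the boxes, scores and thresholds are made-up values, not outputs of the embodiment's network:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / float(union)

def nms(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Drop low-confidence windows, then suppress overlapping duplicates."""
    # Keep only windows above the confidence threshold, best first.
    order = sorted((i for i, s in enumerate(scores) if s >= score_thr),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # A window survives if it does not overlap an already-kept one.
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Three predicted windows: two overlapping detections of the same
# blackhead plus one low-confidence window removed by the threshold.
boxes = [(10, 10, 30, 30), (12, 12, 32, 32), (60, 60, 80, 80)]
scores = [0.9, 0.8, 0.3]
print(nms(boxes, scores))  # [0]
```

Window 2 falls below the 0.5 confidence threshold and window 1 overlaps window 0 too strongly, so only window 0 survives.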
Optionally, the above-mentioned inputting the target image to a depth detection neural network and outputting the blackhead positions of a first area image and the position of a second area image comprises:
Reducing the resolution of the above-mentioned target image to obtain a reduced-resolution target image;
Inputting the reduced-resolution target image to the above-mentioned depth detection neural network, and outputting the blackhead positions of the first area image and the position of the second area image;
Alternatively, enhancing the resolution of the above-mentioned target image to obtain an enhanced-resolution target image;
Inputting the enhanced-resolution target image to the above-mentioned depth detection neural network, and outputting the blackhead positions of the first area image and the position of the second area image.
In the embodiment of the present application, the user can set, as needed, the resolution of the image input to the above-mentioned depth detection neural network; alternatively, the above-mentioned image processing apparatus can set the resolution of the input image automatically. It is understood that the image input to the above-mentioned depth detection neural network is the above-mentioned target image. For example, when the resolution of the target image is 664 × 664, it can be reduced to 448 × 448; when the resolution of the target image is 224 × 224, it can be enhanced to 448 × 448. Implementing the embodiment of the present application, reducing the resolution of the above-mentioned target image accelerates the computation of the above-mentioned depth detection neural network and improves operational efficiency, while enhancing the resolution of the above-mentioned target image increases the clarity of the target image and improves detection accuracy. It is understood that the embodiment of the present application does not limit how the resolution is set or the specific resolution values.
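The resolution adjustment above can be sketched as follows. This is a minimal nearest-neighbour illustration in Python; a real implementation would typically use a library resize with a suitable interpolation mode, and 448 × 448 is just the example resolution from the text:

```python
import numpy as np

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize: works for both reducing and enhancing."""
    in_h, in_w = image.shape[:2]
    # Map each output row/column back to its nearest source row/column.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return image[rows][:, cols]

big_input = np.zeros((664, 664, 3), dtype=np.uint8)
small_input = np.zeros((224, 224, 3), dtype=np.uint8)
reduced = resize_nearest(big_input, 448, 448)     # 664x664 -> 448x448
enhanced = resize_nearest(small_input, 448, 448)  # 224x224 -> 448x448
print(reduced.shape, enhanced.shape)  # (448, 448, 3) (448, 448, 3)
```

Nearest-neighbour keeps the sketch dependency-free; genuine resolution enhancement as described in the text would use interpolation or super-resolution rather than pixel repetition.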
103. The above-mentioned second area image is input into a deep neural network, which outputs the blackhead quantity in the above-mentioned second area image.
Wherein, the above-mentioned deep neural network can be understood as a back propagation (back propagation, BP) neural network. Specifically, the above-mentioned second area image is a high-reflection region image; the above-mentioned high-reflection region image is input into the BP neural network, and the BP neural network predicts the blackhead quantity in the high-reflection region image. Implementing the embodiment of the present application, by inputting the above-mentioned second area image separately into the above-mentioned deep neural network, a more accurate blackhead quantity can be obtained, the influence of reflective regions on the accurate calculation of the blackhead quantity can be avoided, and the accuracy of blackhead quantity detection is improved. It is understood that the embodiment of the present application does not limit the specific deep neural network.
104. The blackhead quantity in the above-mentioned first area image is determined according to the blackhead positions of the above-mentioned first area image, and the blackhead quantity in the above-mentioned target image is determined according to the blackhead quantity in the above-mentioned first area image and the blackhead quantity in the above-mentioned second area image.
In the embodiment of the present application, the blackhead positions of the above-mentioned first area image are obtained through the above-mentioned Yolo network; by this method the coordinates of the blackhead positions can be obtained, and counting these coordinates yields the blackhead quantity of the above-mentioned first area image. Alternatively, the blackhead quantity in the above-mentioned first area image can be detected by other target detection algorithms such as background subtraction, the optical flow method, or the frame differencing method. The blackhead quantity of the above-mentioned first area image is then added to the blackhead quantity of the above-mentioned second area image, finally yielding the blackhead quantity of the entire above-mentioned target image. It is understood that the embodiment of the present application does not limit the specific implementation of calculating the blackhead quantity.
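The counting and summation described above can be sketched as follows. This is a trivial Python illustration with made-up detection boxes and a made-up regression output:

```python
def total_blackheads(first_region_boxes, second_region_count):
    """Count detections in the low-reflection region, then add the count
    predicted separately for the high-reflection region."""
    return len(first_region_boxes) + second_region_count

# Assumed detector output: one (x1, y1, x2, y2) box per blackhead found
# in the first area image.
first = [(12, 40, 20, 48), (55, 31, 62, 39), (70, 70, 78, 78)]
# Assumed count predicted by the second network for the second area image.
second = 4
print(total_blackheads(first, second))  # 7
```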
Implementing the embodiment of the present application, the blackhead quantity of the second area image, whose reflective degree is stronger, can be detected separately, which avoids situations where factors such as illumination intensity make it impossible to calculate the blackhead quantity accurately, and thus improves the accuracy of blackhead detection.
For the image detection method shown in Fig. 1, the deep neural network is a trained network model, i.e., the deep neural network is obtained by training a network model. Therefore, the embodiment of the present application also provides a method for training the network model. Referring to Fig. 4, Fig. 4 is a flow diagram of a deep neural network training method provided by an embodiment of the present application. As shown in Fig. 4, the method includes:
401. A first sample image and a second sample image are acquired, the reflective degree of the above-mentioned second sample image being greater than the reflective degree of the above-mentioned first sample image, and the object included in the above-mentioned first sample image being identical to the object included in the above-mentioned second sample image.
In the embodiment of the present application, the object included in the first sample image being identical to the object included in the above-mentioned second sample image can be understood to mean that the first sample image and the second sample image are taken from the same facial image or the same person; that is to say, the first sample image and the second sample image belong to the same group of sample images, or, in other words, the first sample image and the second sample image are in a one-to-one relationship. After the first sample image and the second sample image are acquired, the blackhead quantity in the first sample image can be determined. It is understood that there are at least two groups of first and second sample images.
Wherein, the image detection device may acquire the first sample image and the second sample image itself, or they may be acquired by another device. The following takes the image detection device acquiring the first sample image and the second sample image as an example to illustrate how they are acquired. The above-mentioned acquiring a first sample image and a second sample image includes: acquiring M sample images as the above-mentioned first sample images and the above-mentioned second sample images, wherein each sample image is specifically a nasal area sample image.
Wherein, the value of the above-mentioned M is greater than or equal to 300. In the embodiment of the present application, acquiring the sample images specifically means acquiring nasal area sample images, and there are at least 300 nasal area sample images. In order to train the network model better, the nasal area sample images may include images in which the nose has a high-reflection region, and may also include images in which the nose has no high-reflection region. The embodiment of the present application does not limit which kind of device is used to acquire the nasal area sample images; for example, acquisition with a mobile phone or with a camera may be used.
Wherein, at least 300 nasal area sample images are acquired because, during training, fewer than 300 nasal area sample images do not produce a training effect as good as that of 300 or more. On the other hand, when 300 or more nasal area sample images are acquired, the generalization ability of the trained network model is better.
Optionally, the above-mentioned acquiring a first sample image and a second sample image comprises:
Acquiring the above-mentioned first sample image under a first light source;
Acquiring the above-mentioned second sample image under a second light source, the illumination intensity of the above-mentioned first light source being less than the illumination intensity of the above-mentioned second light source.
In the embodiment of the present application, the above-mentioned image detection device can acquire different sample images under different light sources. Acquiring the above-mentioned first sample image under the first light source can be understood as obtaining, under normal illumination, a clear first sample image without reflection; that is to say, the blackhead quantity of the first sample image acquired under the first light source can be seen clearly. Acquiring the above-mentioned second sample image under the second light source can be understood as obtaining, under strong light, a second sample image that has reflective regions; that is to say, the blackhead quantity of the second sample image acquired under the second light source contains unknown zones. For example, the above-mentioned first sample image and the above-mentioned second sample image can be acquired under different illumination conditions such as candlelight, a kerosene lamp, an iodine-tungsten lamp, a tungsten lamp, a strong photographic lamp, a cloudy or overcast sky, and so on, which enriches the samples and improves the diversity of the training samples. Implementing the embodiment of the present application, acquiring sample images under different illumination avoids sample-image uniformity and increases the diversity of sample images across illumination scenes, thereby improving the accuracy of deep neural network training.
402. The blackhead quantity of the above-mentioned first sample image is determined.
In the embodiment of the present application, the blackhead quantity of the above-mentioned first sample image can be determined by the above-mentioned image processing apparatus. For example, the first sample image can be input to the above-mentioned Yolo network to obtain the coordinates of the blackheads in the first sample image, and the blackhead quantity in the first sample image is then determined by counting these coordinates. Alternatively, the blackhead quantity in the first sample image can be detected by other target detection algorithms such as background subtraction, the optical flow method, or the frame differencing method; or the blackhead quantity of the above-mentioned first sample image can be obtained by manual counting. It is understood that the embodiment of the present application does not limit the specific method of determining the blackhead quantity of the above-mentioned first sample image.
403. The above-mentioned second sample image and the blackhead quantity of the above-mentioned first sample image are input into the above-mentioned deep neural network to train the above-mentioned deep neural network.
In the embodiment of the present application, the above-mentioned deep neural network can be a BP neural network. Specifically, the BP neural network is first initialized: each connection weight is assigned a random number in the interval [-1, 1], the error function e is set, and the computational accuracy and learning rate are set. Then, the n-th training sample and its corresponding desired output are randomly selected, where n is greater than or equal to 1 and less than or equal to the total number of samples. Next, the input and output of each hidden-layer neuron are calculated. Then, using the desired output and the actual output of the network, the partial derivative of the error function with respect to each output-layer neuron is calculated. The connection weights are then updated using the partial derivatives of the output-layer neurons and the outputs of the hidden-layer neurons, and likewise using the partial derivatives of the hidden-layer neurons and the inputs of the input-layer neurons. After the connection weights of the model are corrected, the global error of the new model is recalculated. It is then judged whether the current model has converged, for example by judging whether the difference between two adjacent errors is less than a specified value; if not, the next training sample and its corresponding desired output are selected and the next round of learning is executed. The final training yields the BP neural network model. Implementing the embodiment of the present application, training by the above method can effectively improve detection accuracy. It is understood that the embodiment of the present application does not limit the specific network model of the above-mentioned deep neural network or the specific training method of the network.
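The training loop described above can be sketched as follows. This is a minimal NumPy illustration of backpropagation with weights initialized in [-1, 1]; the layer sizes, toy data and fixed epoch count are illustrative assumptions, not the embodiment's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 8 features extracted from a region image mapped to
# a normalised blackhead count. Real inputs would be image pixels.
X = rng.uniform(-1, 1, size=(64, 8))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

# Initialise every connection weight with a random number in [-1, 1],
# as in the description above; set the learning rate.
W1 = rng.uniform(-1, 1, size=(8, 16))   # input -> hidden
W2 = rng.uniform(-1, 1, size=(16, 1))   # hidden -> output
lr = 0.05

losses = []
for epoch in range(500):
    h = np.tanh(X @ W1)            # hidden-layer outputs
    out = h @ W2                   # actual network output
    err = out - y                  # actual minus desired output
    losses.append(0.5 * float(np.mean(err ** 2)))  # error function e
    # Partial derivatives of e, propagated back layer by layer.
    grad_W2 = h.T @ err / len(X)
    grad_h = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ grad_h / len(X)
    # Correct the connection weights. (A convergence check on adjacent
    # errors could stop the loop early; fixed epochs keep this simple.)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(losses[0] > losses[-1])  # True: the global error decreased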
Implementing the embodiment of the present application, acquiring sample images under different illumination increases the diversity of the sample images; inputting these sample images into the above-mentioned deep neural network for training raises the difficulty of training through a large number of samples containing different features, thereby further improving the accuracy of deep neural network training.
It is understood that the method embodiments shown in Fig. 1 and Fig. 4 each have their own emphasis; for an implementation not described in detail in one embodiment, reference may be made to the other embodiment.
The methods of the embodiments of the present application are illustrated above; the devices of the embodiments of the present application are provided below.
Referring to Fig. 5, Fig. 5 is a structural schematic diagram of an image detection device provided by an embodiment of the present application. As shown in Fig. 5, the image detection device includes:
A first determination unit 501, configured to determine a target image, the above-mentioned target image being an image whose blackhead quantity is to be detected;
A first input-output unit 502, configured to input the above-mentioned target image to a depth detection neural network and output the blackhead positions of a first area image and the position of a second area image, the reflective degree of the above-mentioned second area image being greater than the reflective degree of the above-mentioned first area image;
A second input-output unit 503, configured to input the above-mentioned second area image into a deep neural network and output the blackhead quantity in the above-mentioned second area image;
A second determination unit 504, configured to determine the blackhead quantity in the above-mentioned first area image according to the blackhead positions of the above-mentioned first area image, and to determine the blackhead quantity in the above-mentioned target image according to the blackhead quantity in the above-mentioned first area image and the blackhead quantity in the above-mentioned second area image.
Optionally, the above-mentioned target image is the region corresponding to the nose;
Referring to Fig. 6, Fig. 6 is a structural schematic diagram of a first determination unit provided by an embodiment of the present application. As shown in Fig. 6, the first determination unit 501 includes:
A first determination subunit 5011, configured to determine, after a facial image is acquired, the face key points in the above-mentioned facial image; wherein the face key points in the above-mentioned facial image include a first key point, a second key point, a third key point and a fourth key point; the above-mentioned first key point is the leftmost point of the nose wing among the above-mentioned face key points, the above-mentioned second key point is the rightmost point of the nose wing among the above-mentioned face key points, the above-mentioned third key point is the topmost point of the nose bridge among the above-mentioned face key points, and the above-mentioned fourth key point is the point at the bottom of the nasal septum among the above-mentioned face key points;
A second determination subunit 5012, configured to determine the first side length of the above-mentioned target image according to the abscissa of the above-mentioned first key point and the abscissa of the above-mentioned second key point;
A third determination subunit 5013, configured to determine the second side length of the above-mentioned target image according to the ordinate of the above-mentioned third key point and the ordinate of the fourth key point;
A fourth determination subunit 5014, configured to determine the above-mentioned target image according to the above-mentioned first side length and the above-mentioned second side length.
Optionally, referring to Fig. 7, Fig. 7 is a structural schematic diagram of a first input-output unit provided by an embodiment of the present application. As shown in Fig. 7, the above-mentioned first input-output unit 502 includes:
A reduction subunit 5021, configured to reduce the resolution of the above-mentioned target image to obtain a reduced-resolution target image;
A first input-output subunit 5022, configured to input the above-mentioned reduced-resolution target image to the above-mentioned depth detection neural network and output the blackhead positions of the first area image and the position of the second area image;
Alternatively, referring to Fig. 8, Fig. 8 is a structural schematic diagram of another first input-output unit provided by an embodiment of the present application. As shown in Fig. 8, the above-mentioned first input-output unit 502 includes:
An enhancement subunit 5023, configured to enhance the resolution of the above-mentioned target image to obtain an enhanced-resolution target image;
A second input-output subunit 5024, configured to input the above-mentioned enhanced-resolution target image to the above-mentioned depth detection neural network and output the blackhead positions of the first area image and the position of the second area image.
Optionally, referring to Fig. 9, Fig. 9 is a structural schematic diagram of another image detection device provided by an embodiment of the present application. As shown in Fig. 9, the above-mentioned device further includes:
An acquiring unit 505, configured to acquire a first sample image and a second sample image, the reflective degree of the above-mentioned second sample image being greater than the reflective degree of the above-mentioned first sample image, and the object included in the above-mentioned first sample image being identical to the object included in the above-mentioned second sample image;
A third determination unit 506, configured to determine the blackhead quantity of the above-mentioned first sample image;
A training unit 507, configured to input the above-mentioned second sample image and the blackhead quantity of the first sample image into the above-mentioned deep neural network to train the above-mentioned deep neural network.
Optionally, referring to Fig. 10, Fig. 10 is a structural schematic diagram of an acquiring unit provided by an embodiment of the present application. As shown in Fig. 10, the above-mentioned acquiring unit 505 includes:
A first acquiring subunit 5051, configured to acquire the above-mentioned first sample image under a first light source;
A second acquiring subunit 5052, configured to acquire the above-mentioned second sample image under a second light source, the illumination intensity of the above-mentioned first light source being less than the illumination intensity of the above-mentioned second light source.
Referring to Fig. 11, Fig. 11 is a structural schematic diagram of an image detection device provided by an embodiment of the present application. The image detection device includes a processor 1101, a memory 1102 and an input/output interface 1103; the processor 1101, the memory 1102 and the input/output interface 1103 are connected to each other by a bus.
The memory 1102 includes, but is not limited to, random access memory (random access memory, RAM), read-only memory (read-only memory, ROM), erasable programmable read-only memory (erasable programmable read only memory, EPROM) or portable read-only memory (compact disc read-only memory, CD-ROM); the memory is used for the related instructions and data.
The input/output interface 1103 can, for example, be used to communicate with other devices.
The processor 1101 can be one or more central processing units (central processing unit, CPU); in the case where the processor 1101 is one CPU, the CPU can be a single-core CPU or a multi-core CPU.
Specifically, the implementation of each operation may refer to the corresponding description of the method embodiments shown in Fig. 1 and Fig. 4, and may also refer to the corresponding description of the apparatus embodiments shown in Fig. 5, Fig. 6, Fig. 7, Fig. 8, Fig. 9 and Fig. 10.
For example, in one embodiment, the processor 1101 can be used to execute the methods shown in step 101 and step 104; for another example, the processor 1101 can also be used to execute the methods performed by the first determination unit 501, the second determination unit 504, and so on.
For another example, in one embodiment, the processor 1101 can be used to determine the first sample image and the second sample image, or to determine the target image; alternatively, the first sample image and the second sample image, or the target image, can also be acquired through the input/output interface 1103. The embodiment of the present application does not limit how the first sample image and the second sample image, or the target image, are obtained.
For another example, in one embodiment, the input/output interface 1103 may also be used to execute the methods performed by the first input-output unit 502 and the second input-output unit 503.
It is understood that Fig. 11 only shows a simplified design of the image detection device. In practical applications, the image detection device may also contain other necessary elements, including but not limited to any number of input/output interfaces, processors, memories, etc., and all image detection devices that can implement the embodiments of the present application fall within the scope of protection of the present application.
It is apparent to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing related hardware. The program can be stored in a computer-readable storage medium, and when the program is executed, it may include the processes of each of the above method embodiments. The aforementioned storage medium includes media that can store program code, such as ROM, random access memory RAM, magnetic disks or optical disks.

Claims (10)

1. An image detection method, characterized by comprising:
determining a target image, the target image being an image whose blackhead quantity is to be detected;
inputting the target image to a depth detection neural network, and outputting the blackhead position of a first area image and the position of a second area image, the reflective degree of the second area image being greater than the reflective degree of the first area image;
inputting the second area image into a deep neural network, and outputting the blackhead quantity in the second area image;
determining the blackhead quantity in the first area image according to the blackhead position of the first area image, and determining the blackhead quantity in the target image according to the blackhead quantity in the first area image and the blackhead quantity in the second area image.
2. The method according to claim 1, characterized in that the target image is the region corresponding to the nose; and the determining a target image comprises:
after a facial image is acquired, determining the face key points in the facial image; wherein the face key points in the facial image include a first key point and a second key point; the first key point is the leftmost point of the nose wing among the face key points, and the second key point is the rightmost point of the nose wing among the face key points;
determining the target image according to the first key point and the second key point.
3. The method according to claim 1 or 2, characterized in that the inputting the target image to a depth detection neural network and outputting the blackhead position of a first area image and the position of a second area image comprises:
reducing the resolution of the target image to obtain a reduced-resolution target image;
inputting the reduced-resolution target image to the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image;
alternatively, enhancing the resolution of the target image to obtain an enhanced-resolution target image;
inputting the enhanced-resolution target image to the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image.
4. The method according to claim 1, characterized in that before the inputting the second area image into a deep neural network and outputting the blackhead quantity in the second area image, the method further comprises:
acquiring a first sample image and a second sample image, the reflective degree of the second sample image being greater than the reflective degree of the first sample image, and the object included in the first sample image being identical to the object included in the second sample image;
determining the blackhead quantity of the first sample image;
inputting the second sample image and the blackhead quantity of the first sample image into the deep neural network to train the deep neural network.
5. The method according to claim 4, wherein the acquiring a first sample image and a second sample image comprises:
acquiring the first sample image under a first light source; and
acquiring the second sample image under a second light source, wherein the illumination intensity of the first light source is less than the illumination intensity of the second light source.
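The training-data construction of claims 4 and 5 amounts to pairing: each low-reflectivity image (where blackheads can be counted reliably) supplies the label for the matching high-reflectivity image of the same subject. The sketch below assumes samples are aligned by index and that some counting function exists; the names `TrainingPair`, `build_pairs`, and `count_fn` are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TrainingPair:
    """One supervised example: the high-reflectivity (second) sample is the
    network input; its label is the count taken from the paired
    low-reflectivity (first) sample of the same subject."""
    second_sample: str    # e.g. a file path of the high-reflectivity image
    blackhead_count: int  # label determined from the first sample

def build_pairs(first_samples, second_samples, count_fn):
    # Assumes same subject at the same index in both lists.
    return [TrainingPair(s2, count_fn(s1))
            for s1, s2 in zip(first_samples, second_samples)]

pairs = build_pairs(["dim.png"], ["bright.png"], count_fn=lambda path: 3)
print(pairs[0].blackhead_count)  # 3
```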
6. An image detection device, comprising:
a first determination unit, configured to determine a target image, the target image being an image in which the blackhead quantity is to be detected;
a first input-output unit, configured to input the target image into a depth detection neural network and output the blackhead position of a first area image and the position of a second area image, the reflective degree of the second area image being greater than the reflective degree of the first area image;
a second input-output unit, configured to input the second area image into a deep neural network and output the blackhead quantity in the second area image; and
a second determination unit, configured to determine the blackhead quantity in the first area image according to the blackhead position of the first area image, and to determine the blackhead quantity in the target image according to the blackhead quantity in the first area image and the blackhead quantity in the second area image.
7. The device according to claim 6, wherein the first input-output unit comprises:
a reduction subunit, configured to reduce the resolution of the target image to obtain a reduced-resolution target image, and
a first input-output subunit, configured to input the reduced-resolution target image into the depth detection neural network and output the blackhead position of the first area image and the position of the second area image;
or, an enhancement subunit, configured to enhance the resolution of the target image to obtain an enhanced-resolution target image, and
a second input-output subunit, configured to input the enhanced-resolution target image into the depth detection neural network and output the blackhead position of the first area image and the position of the second area image.
8. The device according to claim 6 or 7, further comprising:
an acquiring unit, configured to acquire a first sample image and a second sample image, the reflective degree of the second sample image being greater than the reflective degree of the first sample image, and the object contained in the first sample image being the same as the object contained in the second sample image;
a third determination unit, configured to determine the blackhead quantity of the first sample image; and
a training unit, configured to input the second sample image and the blackhead quantity of the first sample image into the deep neural network to train the deep neural network.
9. An image detection device, comprising a processor, a memory and an input/output interface, wherein the processor, the memory and the input/output interface are interconnected by a line; the memory stores program instructions, and when the program instructions are executed by the processor, the processor performs the method according to any one of claims 1 to 5.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, the computer program comprises program instructions, and when the program instructions are executed by a processor of an image detection device, the program instructions cause the processor to perform the method according to any one of claims 1 to 5.
CN201811309965.3A 2018-11-05 2018-11-05 Image detection method and device Active CN109544516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811309965.3A CN109544516B (en) 2018-11-05 2018-11-05 Image detection method and device


Publications (2)

Publication Number Publication Date
CN109544516A true CN109544516A (en) 2019-03-29
CN109544516B CN109544516B (en) 2020-11-13

Family

ID=65846521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811309965.3A Active CN109544516B (en) 2018-11-05 2018-11-05 Image detection method and device

Country Status (1)

Country Link
CN (1) CN109544516B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106469302A * 2016-09-07 2017-03-01 成都知识视觉科技有限公司 Face skin quality detection method based on an artificial neural network
CN107403166A * 2017-08-02 2017-11-28 广东工业大学 Method and apparatus for extracting pore features from facial images
CN107679507A * 2017-10-17 2018-02-09 北京大学第三医院 Facial pore detection system and method
CN108090450A * 2017-12-20 2018-05-29 深圳和而泰数据资源与云技术有限公司 Face recognition method and device


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310252A * 2019-04-30 2019-10-08 深圳市四季宏胜科技有限公司 Blackhead suction method and apparatus, and computer-readable storage medium
CN110334229A * 2019-04-30 2019-10-15 王松年 Visual display method, device, system and computer-readable storage medium
CN110796115A * 2019-11-08 2020-02-14 厦门美图之家科技有限公司 Image detection method and device, electronic equipment and readable storage medium
CN110796115B * 2019-11-08 2022-12-23 厦门美图宜肤科技有限公司 Image detection method and device, electronic equipment and readable storage medium
CN112380962A * 2020-11-11 2021-02-19 成都摘果子科技有限公司 Animal image recognition method and system based on deep learning

Also Published As

Publication number Publication date
CN109544516B (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN109544516A (en) Image detection method and device
CN111178183B (en) Face detection method and related device
CN111062429A (en) Chef-hat and mask wearing detection method based on deep learning
CN109635694B (en) Pedestrian detection method, device and equipment, and computer-readable storage medium
CN110321873A (en) Sensitive picture recognition method and system based on deep-learning convolutional neural networks
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN105260749B (en) Real-time target detection method based on directional gradient binary pattern and soft-cascade SVM
CN104346802B (en) Personnel off-post monitoring method and device
CN108629326A (en) Action behavior recognition method and device for a target body
CN110490073A (en) Object detection method, device, equipment and storage medium
CN106846362A (en) Target detection and tracking method and device
CN105243356B (en) Method and device for establishing a pedestrian detection model, and pedestrian detection method
CN107730515A (en) Panoramic image saliency detection method based on region growing and an eye-movement model
CN104517125B (en) Real-time image tracking method and system for high-speed objects
CN112232199A (en) Mask wearing detection method based on deep learning
CN109472193A (en) Face detection method and device
CN103810696B (en) Method and device for detecting an image of a target object
Song et al. MSFYOLO: Feature fusion-based detection for small objects
CN109753898A (en) Safety helmet recognition method and device
CN105405130A (en) Cluster-based license image highlight detection method and device
CN109840905A (en) Power equipment rust detection method and system
CN109671055A (en) Pulmonary nodule detection method and device
CN103049748A (en) Behavior monitoring method and system
CN111339934A (en) Human head detection method integrating image preprocessing and deep-learning target detection
Cao et al. YOLO-SF: YOLO for fire segmentation detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 Guangdong science and technology innovation and Research Institute, Shenzhen, Shenzhen, Nanshan District No. 6, science and technology innovation and Research Institute, Shenzhen, D 10, 1004, 10

Patentee after: Shenzhen Hetai intelligent home appliance controller Co.,Ltd.

Address before: 518000 Guangdong science and technology innovation and Research Institute, Shenzhen, Shenzhen, Nanshan District No. 6, science and technology innovation and Research Institute, Shenzhen, D 10, 1004, 10

Patentee before: SHENZHEN H&T DATA RESOURCES AND CLOUD TECHNOLOGY Ltd.