CN111339889A - Face optimization method, face optimization device and storage medium - Google Patents

Face optimization method, face optimization device and storage medium

Info

Publication number
CN111339889A
CN111339889A (application CN202010105348.2A)
Authority
CN
China
Prior art keywords
face
score
face image
results
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010105348.2A
Other languages
Chinese (zh)
Inventor
谢凡凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010105348.2A
Publication of CN111339889A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face image optimization method, a face image optimization device and a storage medium. The method comprises the following steps: performing quality evaluation on a plurality of face images to obtain multiple face scores for each face image; sorting the plurality of face images according to each face score to obtain multiple sorting results for the plurality of face images; and screening the plurality of face images in sequence according to the multiple sorting results to select a target face image. In this way, a high-quality face image can be selected from the plurality of face images.

Description

Face optimization method, face optimization device and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a method, an apparatus, and a storage medium for face optimization.
Background
At present, image processing technology has made tremendous progress. One application scenario of image processing technology is the processing of face images in surveillance video.
However, in surveillance scenarios, the pictures captured by a monitoring device are easily blurred, for example because a face moves too fast or a person in the monitored scene moves too violently. As a result, face images obtained from the video captured by the monitoring device are of low quality and are difficult to process with image processing techniques.
Disclosure of Invention
The application provides a face optimization method, a face optimization device and a storage medium, which can solve the prior-art problem that face images obtained from surveillance video are of low quality.
In order to solve the technical problem, the application adopts a technical scheme that: a face image optimization method is provided, the method comprising: performing quality evaluation on a plurality of face images to obtain multiple face scores for each face image; sorting the plurality of face images according to each face score to obtain multiple sorting results for the plurality of face images; and screening the plurality of face images in sequence according to the multiple sorting results to select a target face image.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a face image optimization apparatus comprising a processor, a memory coupled to the processor, wherein the memory stores program instructions for implementing the method as described above when executed by the processor.
In order to solve the above technical problem, the present application adopts another technical solution that: there is provided a storage medium storing program instructions that when executed enable the aforementioned method to be implemented.
The beneficial effect of this application is: in this way, the quality of each face image can be evaluated from multiple angles to obtain multiple face scores for each face image, the plurality of face images are sorted separately according to each of these face scores to obtain multiple sorting results, and finally a high-quality face image is screened out according to the multiple sorting results.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of a face image optimization method according to the present application;
FIG. 2 is a detailed flowchart of S110;
FIG. 3 is another detailed flowchart of S110;
FIG. 4 is a further detailed flowchart of S110;
FIG. 5 is a detailed flowchart of S130;
FIG. 6 is another detailed flowchart of S130;
FIG. 7 is a schematic structural diagram of an embodiment of the deep learning model of the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a face image optimization apparatus according to the present application;
FIG. 9 is a schematic structural diagram of an embodiment of a storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first", "second" or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise. All directional indications (such as up, down, left, right, front, and rear) in the embodiments of the present application are only used to explain the relative positional relationship between components, their movement, and the like in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Fig. 1 is a schematic flowchart of an embodiment of a face image optimization method according to the present application. It should be noted that, as long as substantially the same result is obtained, the method is not limited to the flow sequence shown in fig. 1. As shown in fig. 1, the present embodiment may include:
S110: Perform quality evaluation on a plurality of face images to obtain multiple face scores for each face image.
The plurality of face images are face images of the same person, for example multiple frames of the same person in a surveillance video, and can be obtained by tracking that person's face in the video. After the plurality of face images are obtained, their quality can be evaluated to obtain multiple face scores for each face image.
The multiple face scores may include four face scores: a first score, a second score, a third score and a fourth score. Specifically, the first score may be a face size score, the second score a face clarity score, the third score a face pose score, and the fourth score a face normality score.
The face size score can be calculated from the pixels of the face region in the face image. For example, if the width of the face region is Width pixels and its height is Height pixels, the face size score is SizeScore = Width × Height.
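As an illustration, a minimal Python sketch of this calculation (the function and argument names are ours, not the patent's; the box format is an assumption):

```python
def size_score(face_box):
    """Face size score: pixel area of the face region.

    face_box is assumed to be (left, top, right, bottom) pixel
    coordinates of the face region from any face detector.
    """
    left, top, right, bottom = face_box
    return (right - left) * (bottom - top)
```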
The face clarity score can be obtained by inputting the plurality of face images into an edge detection model. After the plurality of face images are acquired, they can be input into the edge detection model to obtain the face clarity score of each face image.
As shown in fig. 2, inputting the plurality of face images into the edge detection model to obtain the face clarity score of each face image may specifically include the following sub-steps:
S211: Calculate the edge gradient of each pixel in each face image by using the Laplacian operator.
Before this step, edge detection may be performed on the face image to obtain the edge of the face region in the face image. In this step, the edge gradient of the face region in each face image is then calculated with the Laplacian operator.
Wherein the Laplacian operator is the standard 3×3 kernel
$G = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$
S212: Take the sum of the edge gradients of all pixels in each face image as the face clarity score of that face image.
The sum of the edge gradients of the face region in each face image is calculated as
$ClarityScore = \sum_{x=1}^{Height} \sum_{y=1}^{Width} G(x, y),$
where G(x, y) is the Laplacian edge gradient at row x and column y of the face region, Width is the width of the face region, and Height is its height.
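A possible OpenCV implementation of S211 and S212, assuming the face region has already been cropped out of the frame; summing absolute values is our convention here, so that positive and negative gradient responses do not cancel:

```python
import cv2
import numpy as np

def clarity_score(face_region_bgr):
    """Face clarity score: sum of Laplacian edge gradients over the face region."""
    gray = cv2.cvtColor(face_region_bgr, cv2.COLOR_BGR2GRAY)
    # ksize=1 makes cv2.Laplacian apply the standard 3x3 kernel shown above;
    # CV_64F preserves negative responses before the absolute value is taken.
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=1)
    return float(np.abs(lap).sum())
```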
The face pose score and the face normality score can be obtained by inputting the plurality of face images into a deep learning model. After the plurality of face images are acquired, they can be input into the deep learning model to obtain the face pose score and the face normality score of each face image.
As shown in fig. 3, inputting the plurality of face images into the deep learning model to obtain the face pose score of each face image may specifically include the following sub-steps:
S311: Input the plurality of face images into the deep learning model to obtain the pitch angle value, yaw angle value and roll angle value of each face image.
S312: Take the sum of the pitch angle value, yaw angle value and roll angle value of each face image as the face pose score of that face image.
The face pose score of each face image is
PoseScore = Pitch + Yaw + Roll,
where Pitch is the pitch angle value, Yaw is the yaw angle value, and Roll is the roll angle value. The face pose score measures whether the face is a side face or the head is lowered.
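In code the pose score is a direct sum; a minimal sketch (angle values assumed to be magnitudes in degrees, matching the 0-90 degree training labels described later):

```python
def pose_score(pitch, yaw, roll):
    """Face pose score: sum of the three head-pose angle values (degrees)."""
    return pitch + yaw + roll
```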
As shown in fig. 4, inputting the plurality of face images into the deep learning model to obtain the face normality score may specifically include the following sub-steps:
S411: Input the plurality of face images into the deep learning model to obtain the face occlusion score, face completeness score and face illumination balance score of each face image output by the deep learning model.
S412: Take the sum of the face occlusion score, face completeness score and face illumination balance score of each face image as the face normality score of that face image.
Specifically, the face normality score of each face image is
NormalScore = OcclusionScore + CompletenessScore + IlluminationScore,
where OcclusionScore is the face occlusion score, CompletenessScore is the face completeness score, and IlluminationScore is the face illumination balance score.
The face normality score measures whether the face image shows a normal face, i.e. whether phenomena such as occlusion, incompleteness or unbalanced illumination are present.
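Correspondingly, a sketch of S411 and S412 (the component scores are assumed to lie in [0, 1], with 1 meaning unoccluded, complete and evenly lit, matching the labeling scheme described later):

```python
def normality_score(occlusion, completeness, illumination):
    """Face normality score: sum of the occlusion, completeness and
    illumination balance scores predicted by the deep learning model."""
    return occlusion + completeness + illumination
```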
S120: Sort the plurality of face images according to each face score to obtain multiple sorting results for the plurality of face images.
The multiple sorting results may include a first sorting result, a second sorting result, a third sorting result and a fourth sorting result, and each sorting result may be ordered from the largest to the smallest corresponding face score.
Specifically, the first sorting result may be a face size sorting result, the second a face clarity sorting result, the third a face pose sorting result, and the fourth a face normality sorting result.
S130: Screen the plurality of face images according to the multiple sorting results to select a target face image.
The target face image can be screened out by combining the position of each face image across the various sorting results.
In a specific embodiment of the present application, multiple rounds of screening may be performed on the plurality of face images, each round using a different sorting result. Each round selects, from the screening result of the previous round, a specified number or a specified proportion of the face images with the highest face scores in the sorting result used in that round as the screening result of the round. For the first round, the "screening result of the previous round" is the plurality of face images themselves, and the screening result of the last round is the target face image.
Because each sorting result is ordered from the largest to the smallest face score, the specified number or proportion of face images with the highest scores are simply the foremost face images in that sorting result.
As shown in fig. 5, S130 may specifically include the following sub-steps:
S531: Select, from the plurality of face images, the first number or first proportion of face images that are foremost in the first sorting result as the screening result of the first round.
For example, if the first proportion is 50%, the top 50% of face images in the first sorting result are selected from the plurality of face images as the first-round screening result.
S532: Select, from the first-round screening result, the second number or second proportion of face images that are foremost in the second sorting result as the screening result of the second round.
For example, if the second proportion is 50%, the top 50% of face images in the second sorting result are selected from the first-round screening result as the second-round screening result.
S533: Select, from the second-round screening result, the third number or third proportion of face images that are foremost in the third sorting result as the screening result of the third round.
For example, if the third proportion is 20%, the top 20% of face images in the third sorting result are selected from the second-round screening result as the third-round screening result.
S534: Select, from the third-round screening result, the fourth number or fourth proportion of face images that are foremost in the fourth sorting result as the target face image.
For example, if the fourth number is 1, the topmost face image in the fourth sorting result is selected from the third-round screening result as the target face image.
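The four rounds S531 to S534 reduce to one generic filtering loop. A sketch, assuming each face image is represented by a dict of its four scores (the key names are ours) and using the illustrative proportions and count from the examples above:

```python
import math

def select_target(faces):
    """Multi-round screening: each round keeps the foremost portion of the
    previous round's survivors, ranked by one score in descending order.

    faces: list of dicts with keys 'size', 'clarity', 'pose', 'normality'.
    """
    rounds = [("size", 0.5),     # round 1: keep the top 50% by face size
              ("clarity", 0.5),  # round 2: keep the top 50% by clarity
              ("pose", 0.2)]     # round 3: keep the top 20% by pose
    survivors = list(faces)
    for key, ratio in rounds:
        survivors.sort(key=lambda f: f[key], reverse=True)
        survivors = survivors[:max(1, math.ceil(len(survivors) * ratio))]
    # round 4: the single foremost image by normality is the target
    return max(survivors, key=lambda f: f["normality"])
```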
As shown in fig. 6, in another specific embodiment of the present application, S130 may include the following sub-steps:
S631: Sum the ranking values of each face image across the various sorting results to obtain the comprehensive ranking value of each face image.
For example, suppose there are 5 face images A, B, C, D and E, and the face in face image B is the largest of the five, so its ranking value in the face size sorting result is 1. Similarly, if face image B has ranking value 2 in the face clarity sorting result and ranking value 3 in the face pose sorting result, its comprehensive ranking value is 6.
S632: Sort the face images according to their comprehensive ranking values to obtain a comprehensive sorting result.
Optionally, the comprehensive sorting result is ordered from the smallest to the largest comprehensive ranking value.
S633: Select the target face image from the plurality of face images according to the comprehensive sorting result.
The target face image is the specified number or specified proportion of face images that are foremost in the comprehensive sorting result, i.e. the face images with the lowest comprehensive ranking values among the plurality of face images.
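A sketch of this rank-sum variant, S631 to S633, with the same illustrative key names as above:

```python
def select_by_rank_sum(faces, keys=("size", "clarity", "pose", "normality")):
    """Sum each image's rank position (1 = highest score) across all
    sorting results and return the image with the smallest sum."""
    composite = [0] * len(faces)
    for key in keys:
        # indices of the faces ordered best-first for this score
        order = sorted(range(len(faces)),
                       key=lambda i: faces[i][key], reverse=True)
        for rank, i in enumerate(order, start=1):
            composite[i] += rank
    return faces[min(range(len(faces)), key=composite.__getitem__)]
```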
Through this embodiment, each face image can be quality-evaluated from multiple angles to obtain multiple face scores, the plurality of face images are sorted separately according to each face score to obtain multiple sorting results, and finally a high-quality face image is screened out according to the multiple sorting results.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of the deep learning model used in the above embodiments. As shown in fig. 7, the deep learning model may include three convolution layers, three pooling layers, and a fully connected layer (not shown). Its input is a face image of size 100×60, and its output is a six-dimensional feature vector whose elements are the face occlusion score (OcclusionScore), face completeness score (CompletenessScore), face illumination balance score (IlluminationScore), pitch angle value (Pitch), yaw angle value (Yaw) and roll angle value (Roll) of the face image.
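A plausible PyTorch realization of the structure in fig. 7; the patent fixes only three convolution layers, three pooling layers, one fully connected layer, a 100×60 input and a six-dimensional output, so the channel counts, kernel sizes and activations below are assumptions:

```python
import torch
import torch.nn as nn

class FaceQualityNet(nn.Module):
    """3 conv + 3 pool + 1 fully connected layer. Output order assumed:
    (occlusion, completeness, illumination, pitch, yaw, roll)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 100x60 input -> 50x30 -> 25x15 -> 12x7 after the three 2x2 pools
        self.fc = nn.Linear(64 * 12 * 7, 6)

    def forward(self, x):  # x: (N, 3, 100, 60)
        return self.fc(self.features(x).flatten(1))

# scores = FaceQualityNet()(torch.randn(1, 3, 100, 60))  # shape (1, 6)
```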
The deep learning model used in the above embodiments is a trained model, so the deep learning model can be trained before use.
Annotated face images can be used to train the deep learning model. The annotation of a training face image may include: a face occlusion label, where a face without an occluding object is labeled 1 and a face with one is labeled 0; a face completeness label, where an intact face is labeled 1 and an incomplete face is labeled 0; a face illumination balance label, where an evenly lit face is labeled 1 and an unevenly lit face is labeled 0; and the pitch angle value, yaw angle value and roll angle value of the face image. The face occlusion, face completeness and face illumination balance labels may vary between 0 and 1, and the pitch, yaw and roll angle values may vary between 0 and 90 degrees.
Specifically, the annotated face images can be input into the deep learning model, and the parameters of the deep learning model are adjusted according to its outputs (the face occlusion score, face completeness score, face illumination balance score, pitch angle value, yaw angle value and roll angle value of each face image), so as to optimize the model and make its final calculation results more accurate.
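Training can then be ordinary supervised regression against the six labels. The patent does not specify a loss; the sketch below assumes mean squared error, with the angle labels divided by 90 beforehand so that all six targets share the [0, 1] scale:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, images, labels):
    """One optimization step. labels is an (N, 6) tensor holding the
    occlusion/completeness/illumination labels in [0, 1] and the
    pitch/yaw/roll labels pre-divided by 90 degrees."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```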
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a face image optimization apparatus according to the present application. As shown in fig. 8, the face image optimization apparatus 800 includes a processor 810 and a memory 820 coupled to the processor.
Wherein the memory 820 stores program instructions for implementing the method of any of the embodiments described above; the processor 810 is configured to execute program instructions stored by the memory 820 to implement the steps of the above-described method embodiments. The processor 810 may also be referred to as a Central Processing Unit (CPU). Processor 810 may be an integrated circuit chip having signal processing capabilities. The processor 810 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a storage medium. The storage medium 900 of the embodiment of the present application stores program instructions that, when executed, implement the face image optimization method of the present application. The instructions may form a program file stored in the storage medium in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, or terminal devices such as a computer, server, mobile phone or tablet.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

Claims (12)

1. A face image optimization method is characterized by comprising the following steps:
carrying out quality evaluation on a plurality of face images to obtain a plurality of face scores of each face image;
sorting the plurality of face images according to each face score to obtain a plurality of sorting results of the plurality of face images;
and screening the plurality of face images according to the plurality of sorting results to select a target face image.
2. The method of claim 1,
the screening the plurality of face images according to the plurality of sorting results to select the target face image comprises:
performing multiple rounds of screening on the plurality of face images sequentially according to different sorting results, wherein each round of screening selects, from the screening result of the previous round, a specified number or a specified proportion of the face images with the highest face scores in the sorting result used in that round as the screening result of the round, the screening result of the previous round used by the first round of screening being the plurality of face images, and the screening result of the last round being the target face image.
3. The method of claim 2,
the multiple face scores comprise a first score, a second score, a third score and a fourth score, the multiple sorting results comprise a first sorting result, a second sorting result, a third sorting result and a fourth sorting result, and each sorting result is sorted from large to small according to the corresponding face score;
the performing multiple rounds of screening on the plurality of face images sequentially according to the different sorting results comprises:
selecting, from the plurality of face images, a first number or first proportion of face images that are foremost in the first sorting result as the screening result of the first round;
selecting, from the screening result of the first round, a second number or second proportion of face images that are foremost in the second sorting result as the screening result of the second round;
selecting, from the screening result of the second round, a third number or third proportion of face images that are foremost in the third sorting result as the screening result of the third round;
and selecting, from the screening result of the third round, a fourth number or fourth proportion of face images that are foremost in the fourth sorting result as the target face image.
4. The method of claim 1,
the screening the plurality of face images according to the plurality of sorting results to select the target face image comprises:
summing the ranking values of each face image in the various sorting results to obtain a comprehensive ranking value of each face image;
sorting the face images according to their comprehensive ranking values to obtain a comprehensive sorting result, the comprehensive sorting result being sorted from large to small according to the corresponding comprehensive ranking value;
and selecting the target face image from the plurality of face images according to the comprehensive sorting result.
5. The method of claim 4, wherein the multiple face scores comprise a first score, a second score, a third score and a fourth score, the multiple sorting results comprise a first sorting result, a second sorting result, a third sorting result and a fourth sorting result, and each sorting result is sorted from small to large according to the corresponding face score.
6. The method according to claim 3 or 5,
the first score is a face size score, the second score is a face clarity score, the third score is a face pose score, and the fourth score is a face normality score; the first sorting result is a face size sorting result, the second sorting result a face clarity sorting result, the third sorting result a face pose sorting result, and the fourth sorting result a face normality sorting result.
7. The method of claim 6,
the performing quality evaluation on the plurality of face images to obtain the multiple face scores of each face image comprises:
inputting the plurality of face images into an edge detection model to obtain the face clarity score of each face image, and inputting the plurality of face images into a deep learning model to obtain the face pose score and the face normality score of each face image.
8. The method of claim 7, wherein the inputting the plurality of face images into the deep learning model to obtain the face normality score comprises:
inputting the plurality of face images into the deep learning model to obtain a face occlusion score, a face completeness score and a face illumination balance score of each face image output by the deep learning model;
and taking the sum of the face occlusion score, the face completeness score and the face illumination balance score of each face image as the face normality score of that face image.
9. The method of claim 7, wherein the inputting the plurality of face images into the deep learning model to obtain the face pose score of each face image comprises:
inputting the plurality of face images into the deep learning model to obtain a pitch angle value, a yaw angle value and a roll angle value of each face image;
and taking the sum of the pitch angle value, the yaw angle value and the roll angle value of each face image as the face pose score of that face image.
10. The method of claim 7, wherein the inputting the plurality of face images into the edge detection model to obtain the face clarity score of each face image comprises:
calculating the edge gradient of each pixel in each face image by using a Laplacian operator;
and taking the sum of the edge gradients of all pixels in each face image as the face clarity score of that face image.
11. A face image optimization device is characterized by comprising a processor and a memory coupled with the processor, wherein,
the memory stores program instructions for implementing the method of any one of claims 1-10 when executed by the processor.
12. A storage medium, characterized in that the storage medium stores program instructions which, when executed, implement the method of any one of claims 1-10.
CN202010105348.2A 2020-02-20 2020-02-20 Face optimization method, face optimization device and storage medium Pending CN111339889A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010105348.2A CN111339889A (en) 2020-02-20 2020-02-20 Face optimization method, face optimization device and storage medium


Publications (1)

Publication Number Publication Date
CN111339889A (en) 2020-06-26

Family

ID=71185541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010105348.2A Pending CN111339889A (en) 2020-02-20 2020-02-20 Face optimization method, face optimization device and storage medium

Country Status (1)

Country Link
CN (1) CN111339889A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986163A (en) * 2020-07-29 2020-11-24 深思考人工智能科技(上海)有限公司 Face image selection method and device
CN112084882A (en) * 2020-08-18 2020-12-15 深圳英飞拓科技股份有限公司 Behavior detection method and device and computer readable storage medium
CN113449713A (en) * 2021-09-01 2021-09-28 北京美摄网络科技有限公司 Method and device for cleaning training data of face detection model
CN113536900A (en) * 2021-05-31 2021-10-22 浙江大华技术股份有限公司 Method and device for evaluating quality of face image and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN104991914A (en) * 2015-06-23 2015-10-21 腾讯科技(深圳)有限公司 Application recommendation method and server
CN106649647A (en) * 2016-12-09 2017-05-10 北京百度网讯科技有限公司 Ordering method and device for search results based on artificial intelligence
US9971933B1 (en) * 2017-01-09 2018-05-15 Ulsee Inc. Facial image screening method and face recognition system thereof
CN108509622A (en) * 2018-04-03 2018-09-07 广州阿里巴巴文学信息技术有限公司 Article sequencing method, device, computing device and storage medium
CN109035246A (en) * 2018-08-22 2018-12-18 浙江大华技术股份有限公司 A kind of image-selecting method and device of face
WO2019033574A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Electronic device, dynamic video face recognition method and system, and storage medium
CN110197146A (en) * 2019-05-23 2019-09-03 招商局金融科技有限公司 Facial image analysis method, electronic device and storage medium based on deep learning


Similar Documents

Publication Publication Date Title
CN111339889A (en) Face optimization method, face optimization device and storage medium
CN107358149B (en) Human body posture detection method and device
CN114494892B (en) Goods shelf commodity display information identification method, device, equipment and storage medium
CN111047626A (en) Target tracking method and device, electronic equipment and storage medium
US11776213B2 (en) Pose generation apparatus, generation method, and storage medium
CN112906794A (en) Target detection method, device, storage medium and terminal
CN110765903A (en) Pedestrian re-identification method and device and storage medium
CN110765865A (en) Underwater target detection method based on improved YOLO algorithm
CN110176024A (en) Method, apparatus, equipment and the storage medium that target is detected in video
JP2007188294A (en) Method for detecting moving object candidate by image processing, moving object detection method for detecting moving object from moving object candidate, moving object detection apparatus, and moving object detection program
CN113362441A (en) Three-dimensional reconstruction method and device, computer equipment and storage medium
CN113129229A (en) Image processing method, image processing device, computer equipment and storage medium
CN111563492B (en) Fall detection method, fall detection device and storage device
CN110547803A (en) pedestrian height estimation method suitable for overlooking shooting of fisheye camera
CN113837202A (en) Feature point extraction method, image reconstruction method and device
CN110930436B (en) Target tracking method and device
CN110717910B (en) CT image target detection method based on convolutional neural network and CT scanner
WO2020217368A1 (en) Information processing device, information processing method, and information processing program
CN115909415A (en) Image screening method, device, equipment and storage medium
CN116091784A (en) Target tracking method, device and storage medium
CN114724175A (en) Pedestrian image detection network, detection method, training method, electronic device, and medium
CN114639058A (en) Fire smoke image detection method, device, equipment and storage medium
CN114550062A (en) Method and device for determining moving object in image, electronic equipment and storage medium
CN115619698A (en) Method and device for detecting defects of circuit board and model training method
CN113840135A (en) Color cast detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200626)