CN111209874B - Method for analyzing and identifying wearing attribute of human head - Google Patents

Method for analyzing and identifying wearing attribute of human head

Info

Publication number
CN111209874B
CN111209874B (application CN202010022968.XA)
Authority
CN
China
Prior art keywords
target
wearing
preset
historical
attribute
Prior art date
Legal status
Active
Application number
CN202010022968.XA
Other languages
Chinese (zh)
Other versions
CN111209874A (en)
Inventor
赖捷锋
王鑫宇
卿天
Current Assignee
Beijing Baimu Technology Co., Ltd.
Original Assignee
Beijing Baimu Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Baimu Technology Co., Ltd.
Priority to CN202010022968.XA
Publication of CN111209874A
Application granted
Publication of CN111209874B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/267 Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for analyzing and identifying the wearing attributes of human heads, comprising the following steps: detecting and capturing a target head image in a video stream to be identified; determining the current wearing attribute of the target head in the target head image based on a wearing attribute model; acquiring, based on a pre-stored personnel database, the target person related to the current wearing attribute, and marking the association between the current wearing attribute and the target person; and storing the association marking result in an association database. By building a wearing attribute model, the method achieves effective identification of the wearing attributes of a target head.

Description

Method for analyzing and identifying wearing attribute of human head
Technical Field
The invention relates to the technical field of computer vision, in particular to an analysis and identification method for human head wearing attributes.
Background
Object identification is a research hotspot in the field of computer vision; its main aim is to pick out targets of interest from complex image scenes, so the analysis and identification of targets of interest in images is becoming increasingly important. In the field of production safety in particular, identifying images of the accessories worn by personnel makes it possible to discover behavior that violates safety rules in time (such as not wearing safety clothing or a safety helmet), helping enterprises to regulate daily production and to eliminate potential safety hazards as much as possible. The invention therefore provides a method for analyzing and identifying head wearing attributes.
Disclosure of Invention
The invention provides a method for analyzing and identifying the wearing attributes of human heads, which uses a wearing attribute model to achieve effective identification of the wearing attributes of a target head.
The invention provides a method for analyzing and identifying wearing attributes of human heads, which comprises the following steps:
detecting and capturing a target head image in a video stream to be identified;
determining a current wear attribute of a target head in the target head image based on a wear attribute model;
acquiring target personnel related to the current wearing attribute based on a pre-stored personnel database, and performing association marking on the current wearing attribute and the target personnel;
and storing the association marking result into an association database.
In one possible implementation manner, the process of detecting and capturing the target head image in the video stream to be recognized includes:
cutting the video stream to be identified according to a preset cutting scheme to obtain a plurality of sub-video streams;
detecting each sub-video stream, and judging whether a target head exists in the sub-video stream;
if so, capturing a target head image containing the target head from the sub-video stream and storing it.
In one possible implementation, the process of determining the current wearing property of the target head in the target head image based on the wearing property model includes:
acquiring historical head images, and marking the historical wearing attributes of each historical head image, wherein the historical wearing attributes include: wearing a hat or no hat, wearing glasses or sunglasses, and wearing a mask or no mask on the head;
acquiring a target face image in the personnel database, extracting a target face in the target face image, and simultaneously performing region segmentation processing on the extracted target face to obtain a plurality of target region blocks;
extracting the region block characteristics of each target region block, and performing deep training on the region block characteristics of each target region block based on a deep learning model to obtain a face learning model;
matching the marked historical wearing attributes to the corresponding target area blocks, and carrying out fusion processing on the target area blocks to obtain fusion images;
training the fused image based on the face learning model to obtain a wearing attribute model;
and identifying the target head in the target head image based on the wearing attribute model, and determining the corresponding current wearing attribute.
In one possible implementation manner, the obtaining, based on a pre-stored person database, a target person related to the current wearing attribute, and associating and marking the current wearing attribute with the target person includes:
determining a historical wearing probability set of each target person in the person database;
determining the association degrees among the historical wearing attributes in the historical wearing probability set, ranking the association degrees by priority, and acquiring the historical wearing attributes corresponding to the top preset number of association degrees after the ranking;
based on a face learning model, identifying a face image corresponding to the current wearing attribute, and determining the face similarity of the face image based on the personnel database;
retaining those target persons in the person database whose face similarity is greater than a preset similarity, and judging, for each retained target person, the wearing similarity between the current wearing attributes and the historical wearing attributes corresponding to that person's top preset number of association degrees;
acquiring the target person with the highest wearing similarity in the reserved target persons as the target person associated with the current wearing attribute;
and carrying out association labeling on the target person associated with the current wearing attribute and the corresponding current wearing attribute.
In one possible implementation manner, the detecting each of the sub-video streams and determining whether a target head exists in the sub-video stream includes:
determining the pixel saturation of each pixel point in the sub-video stream;
acquiring the total pixel saturation of the sub-video stream, and judging that a target head exists in the sub-video stream when the total pixel saturation is greater than a preset saturation;
otherwise, judging that the target head does not exist in the sub-video stream.
In a possible implementation manner, in the process of performing region segmentation processing on the extracted target face to obtain a plurality of target region blocks, the method further includes: judging the edge smoothness between the target area blocks and determining whether the edge smoothness is reasonable, the process comprising:
acquiring a first target area block in all the target area blocks, determining an edge curve of the first target area block, and labeling a plurality of preset points on the edge curve;
connecting the labeled preset points into a labeled curve, and judging whether the labeled curve is smooth based on a preset fitting curve; if so, judging that the edge smoothness of the first target region block is reasonable, and proceeding to judge the reasonability of the edge smoothness of the next target region block;
otherwise, the edge smoothness of the first target area block is judged to be unreasonable.
In one possible implementation manner, when it is determined that the smoothness of the edge of the first target area block is not reasonable, the method further includes: adjusting the marked preset points, wherein the adjusting process comprises the following steps:
performing point-to-point line connection on all the preset points, and judging the first curvature of a first line from the current preset point to the next preset point, the second curvature of a second line from the current preset point to the previous preset point and the third curvature of a third line from the previous preset point to the next preset point;
determining the current point coordinate of the current preset point in the first target region block based on a preset standard coordinate system, and simultaneously determining whether the angle to be estimated of the triangle to be estimated, which is formed by the first curvature, the second curvature and the third curvature, is within a corresponding preset angle range, if so, judging the next preset point;
otherwise, if the angle to be estimated is smaller than the minimum value of the preset angle range, keeping the previous preset point and the next preset point unchanged, and performing first position adjustment on the current preset point until the angle to be estimated of the triangle to be estimated is within the corresponding preset angle range;
if the angle to be estimated is larger than the maximum value of a preset angle range, keeping the current preset point unchanged, and performing second position adjustment on the previous preset point and the next preset point until the angle to be estimated of the triangle to be estimated is within the corresponding preset angle range;
and when the angles to be estimated of all the triangles to be estimated are within a preset angle range, judging that the edge smoothness of the first target area block is reasonable.
In one possible implementation, the process of determining the association between the historical wearing attributes in the historical wearing probability set includes:
determining a wearing attribute type of the historical wearing attribute;
determining the historical wearing attributes of the target person in different historical video streams;
determining, from the historical wearing attributes, the historical wearing frequency of each type of historical wearing attribute related to the target person, as well as the historical wearing frequency of combinations of attribute types;
storing the determined historical wearing frequency to form the historical wearing probability set;
and determining the association degree between the historical wearing attributes according to the determined historical wearing frequency.
In a possible implementation manner, before the detecting and capturing the target head image in the video stream to be recognized, the method further includes:
screening the video stream to be identified by using a screening method to obtain a target frame possibly comprising the target head image, and capturing the target head image from the target frame;
wherein the screening method comprises the following steps:
step 1, the video stream to be identified is shot by a camera in a fixed position, and a base picture corresponding to the camera is determined, wherein the base picture is a picture without people; the pixel information matrix of the base picture is:

$$A=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{pmatrix}$$

wherein $A$ represents the pixel information matrix of the base picture; $a_{ji}$ is the pixel value of the $i$th pixel of the $j$th row; $n$ is the number of pixel columns of the base picture; $m$ is the number of pixel rows of the base picture;

step 2, obtaining the pixel value difference of each corresponding pixel between each video frame in the video stream to be identified and the base picture:

$$w_{ji}=\left|c_{ji}-a_{ji}\right|$$

wherein $w_{ji}$ is the pixel difference of the $i$th pixel in the $j$th row of the difference matrix, $c_{ji}$ is the pixel value of the $i$th pixel of the $j$th row of the $k$th frame picture in the video stream to be identified, and $a_{ji}$ is the pixel value of the $i$th pixel in the $j$th row of the base picture; the difference matrix $\Delta$ can be expressed as:

$$\Delta=\begin{pmatrix}w_{11}&w_{12}&\cdots&w_{1n}\\w_{21}&w_{22}&\cdots&w_{2n}\\\vdots&\vdots&\ddots&\vdots\\w_{m1}&w_{m2}&\cdots&w_{mn}\end{pmatrix}$$

and the pixel information matrix of the $k$th frame picture in the video stream to be identified can be represented as:

$$C_k=\begin{pmatrix}c_{11}&c_{12}&\cdots&c_{1n}\\c_{21}&c_{22}&\cdots&c_{2n}\\\vdots&\vdots&\ddots&\vdots\\c_{m1}&c_{m2}&\cdots&c_{mn}\end{pmatrix}$$

step 3, calculating the detection gap value $\sigma$:

[detection gap value formula: original image not legible]

wherein $\sigma$ is the detection gap value, $w_{ji}$ is the pixel difference of the $i$th pixel in the $j$th row of the difference matrix, $\min\{c_{j1},c_{j2},\ldots,c_{jn}\}$ is the minimum of $c_{j1},c_{j2},\ldots,c_{jn}$, $w_{jk}$ is the pixel difference of the $k$th pixel of the $j$th row in the difference matrix, $n$ is the number of columns of the difference matrix, and $m$ is the number of rows of the difference matrix;

step 4, when the detection gap value $\sigma$ is greater than a preset threshold value, judging that the current video frame is a target frame possibly comprising a target head image.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a method for analyzing and identifying wearing attributes of a human head according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating fused images according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a triangle to be estimated according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The invention provides a method for analyzing and identifying the wearing attributes of human heads, comprising the following steps:
step 1: detecting and capturing a target head image in a video stream to be identified;
step 2: determining the current wearing attribute of the target head in the target head image based on a wearing attribute model;
step 3: acquiring the target person related to the current wearing attribute based on a pre-stored personnel database, and marking the association between the current wearing attribute and the target person;
step 4: storing the association marking result in an association database.
The video stream to be recognized refers to any video segment, and is a video stream including the target head of the target person;
the wearing attribute model is trained in advance and mainly aims to identify the characteristics of whether the head of a target wears glasses, sunglasses, a mask, a hat and the like;
the current wearing attribute may be any one or more combination of wearing glasses, wearing sunglasses, wearing a mask, wearing a hat, for example: wearing a hat and a mask;
the personnel in the personnel database refer to target personnel needing to perform wearing attribute analysis and identification on the target personnel;
the above-mentioned association flag is, for example: if the current wearing attribute of the target person A is that the target person A wears the hat and the glasses are worn, associating the target person A with the hat wearing property and mapping the target person A to the target person A;
and storing the association marking result into an association database for recording and storing the target personnel and the current wearing attribute, so that the data of the target personnel and the current wearing attribute can be called conveniently in the follow-up process, and the targeted research on the wearing attribute of the target personnel can be conveniently carried out.
The beneficial effects of the above technical scheme are: by setting the wearing attribute model, effective identification of the wearing attribute of the target head is realized.
The invention provides a method for analyzing and identifying the wearing attributes of human heads, wherein the process of detecting and capturing a target head image in a video stream to be identified comprises the following steps:
cutting the video stream to be identified according to a preset cutting scheme to obtain a plurality of sub-video streams;
detecting each sub-video stream, and judging whether a target head exists in the sub-video stream;
if so, capturing a target head image containing the target head from the sub-video stream and storing it.
The cutting according to the preset cutting scheme may, for example, cut the video stream to be identified at 3 s per frame. Obtaining sub-video streams in this way makes it convenient to process several sub-video streams in parallel, which improves the efficiency of subsequently judging whether the target head exists in each sub-video stream, and also improves the accuracy of identifying the target head image: if the whole video stream to be identified were treated as a single stream and the presence of the target head were identified and judged on it directly, the accuracy would be greatly reduced.
While a sub-video stream is playing, the target head image can be captured from the pixel values or the signal energy values of its frames, and the target head image is thereby obtained.
The beneficial effects of the above technical scheme are: parallel processing of multiple sub-video streams is facilitated, the efficiency of subsequently judging the target head in each sub-video stream is improved, the accuracy of identifying the target head image is improved, and a basis is provided for determining the wearing attribute of the target head.
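To make this step concrete, the following is a minimal Python/OpenCV sketch of cutting a video stream into fixed-length sub-streams and capturing one head frame from each. The 3 s segment length follows the description above; the Haar face detector merely stands in for the patent's head test (the saturation-based test is sketched further below), so the detector choice, its parameters, and the function names are assumptions.

```python
import cv2

# Haar face detector used as a stand-in for the patent's saturation-based
# head test (that test is sketched separately further below).
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_present(frame):
    """Return True if at least one face-like region is detected."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(_face_cascade.detectMultiScale(gray, 1.1, 5)) > 0

def cut_and_capture(video_path, segment_seconds=3.0):
    """Cut the video into fixed-length sub-streams and capture one frame
    containing a head from each sub-stream, per the description above."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frames_per_segment = max(1, int(fps * segment_seconds))

    captured, segment = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        segment.append(frame)
        if len(segment) == frames_per_segment:
            # take the first frame of the sub-stream in which a head is found
            hit = next((f for f in segment if head_present(f)), None)
            if hit is not None:
                captured.append(hit)
            segment = []
    cap.release()
    return captured
```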
The invention provides a method for analyzing and identifying wearing attributes of human heads, wherein the process of determining the current wearing attributes of target heads in a target head image based on a wearing attribute model comprises the following steps:
acquiring historical head images, and marking the historical wearing attributes of each historical head image, wherein the historical wearing attributes include: wearing a hat or no hat, wearing glasses or sunglasses, and wearing a mask or no mask on the head;
acquiring a target face image in the personnel database, extracting a target face in the target face image, and simultaneously performing region segmentation processing on the extracted target face to obtain a plurality of target region blocks;
extracting the region block characteristics of each target region block, and performing deep training on the region block characteristics of each target region block based on a deep learning model to obtain a face learning model;
matching the marked historical wearing attributes to the corresponding target area blocks, and carrying out fusion processing on the target area blocks to obtain fusion images;
training the fused image based on the face learning model to obtain a wearing attribute model;
and identifying the target head in the target head image based on the wearing attribute model, and determining the corresponding current wearing attribute.
The acquired historical head images and historical wearing attributes are used to train the deep learning model, which makes it convenient to obtain an accurate wearing attribute model;
the extracting of the target face is to extract a region contour of the target face, and perform region segmentation processing on the region contour, such as segmentation of regions of a nose, eyes, eyebrows, a mouth, and the like;
the above-mentioned region block characteristics refer to unique characteristics for the target region block, such as: region block characteristics such as an eyebrow shape, an eyebrow color, and the like for the target region block of the eyebrows;
the matching of the historical wearing attributes to the corresponding target area blocks and the fusion processing of the target area blocks are performed to obtain a fusion image, for example: the target area block a is fused with historical wearing attributes, such as: the cap a, in which the fused image is as shown in fig. 2, and the fusion process may be performed by overlaying the cap a on the target area block a.
The beneficial effects of the above technical scheme are: the wearing attribute model is trained through two types of images, namely the target face image and the fusion image, so that the accuracy of obtaining the wearing attribute is improved.
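As an illustration of the segmentation-and-fusion step, here is a short NumPy sketch under stated assumptions: the face is pre-aligned, the region blocks are taken as fixed horizontal bands (the patent instead derives its cut lines from labeled edge curves), and the accessory image carries an alpha channel. The band boundaries and function names are illustrative, not the patent's method.

```python
import numpy as np

def segment_face(face, bands=((0.00, 0.25), (0.25, 0.45), (0.45, 0.70), (0.70, 1.00))):
    """Split an aligned face image into horizontal target region blocks
    (illustratively: eyebrows, eyes, nose, mouth). The band boundaries
    are assumptions standing in for the patent's labeled cut lines."""
    h = face.shape[0]
    return [face[int(h * top):int(h * bottom)] for top, bottom in bands]

def fuse_attribute(block, accessory_rgba):
    """Overlay an accessory image (e.g. 'hat a') onto a target region
    block via alpha blending, producing a fused training image."""
    h = min(block.shape[0], accessory_rgba.shape[0])
    w = min(block.shape[1], accessory_rgba.shape[1])
    alpha = accessory_rgba[:h, :w, 3:4] / 255.0          # opacity in [0, 1]
    fused = block.copy()
    fused[:h, :w] = (alpha * accessory_rgba[:h, :w, :3]
                     + (1.0 - alpha) * fused[:h, :w]).astype(block.dtype)
    return fused
```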
The invention provides a method for analyzing and identifying wearing attributes of human heads, wherein the process of acquiring target personnel related to the current wearing attributes based on a pre-stored personnel database and carrying out association marking on the current wearing attributes and the target personnel comprises the following steps:
determining a historical wearing probability set of each target person in the person database;
determining the association degrees among the historical wearing attributes in the historical wearing probability set, ranking the association degrees by priority, and acquiring the historical wearing attributes corresponding to the top preset number of association degrees after the ranking;
based on a face learning model, identifying a face image corresponding to the current wearing attribute, and determining the face similarity of the face image based on the personnel database;
retaining those target persons in the person database whose face similarity is greater than a preset similarity, and judging, for each retained target person, the wearing similarity between the current wearing attributes and the historical wearing attributes corresponding to that person's top preset number of association degrees;
acquiring the target person with the highest wearing similarity in the reserved target persons as the target person associated with the current wearing attribute;
and carrying out association labeling on the target person associated with the current wearing attribute and the corresponding current wearing attribute.
The historical wearing probability set may be obtained by identifying a historical video stream, determining head wearing attributes of different target persons in the video stream at different times, and storing and classifying the head wearing attributes of each target person at different times, for example, the target persons may be classified, or the wearing attributes may be classified;
the association degree between the historical wearing attributes in the historical wearing probability set is determined based on the wearing frequency;
the priority ranking is to determine the wearing preference of the target person and provide a judgment basis for the wearing similarity, and the priority ranking is generally arranged from high to low;
the historical wearing attributes corresponding to the preset number of association degrees are determined in order to judge the wearing similarity of the wearing attributes more pertinently and further improve the identification precision of the wearing attributes;
firstly, the primary judgment is carried out through the face similarity, and secondly, the secondary judgment is carried out through the attribute similarity, so that the accuracy of determining the target personnel is effectively improved;
the above-mentioned association labeling is, for example, to add a superscript link to the target person, where the superscript link is an exclusive link generated for the current wearing attribute.
The beneficial effects of the above technical scheme are: through the comparison and judgment of two similarities, the face similarity and the wearing similarity, the target person with the highest wearing similarity is selected, which improves the accuracy of associating the target person; marking the association between the target person and the corresponding wearing attribute makes timely viewing convenient and can effectively shorten the time needed to retrieve the target person later.
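A minimal sketch of the two-stage association described above. It assumes a person database of records with an 'embedding' field and an 'association_degrees' mapping from historical wearing attribute to association degree; the cosine measure for face similarity and the Jaccard overlap for wearing similarity are assumed choices, since the patent does not fix these measures.

```python
import numpy as np

def cosine(u, v):
    """Face similarity measure (an assumed choice)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def wearing_similarity(history_attrs, current_attrs):
    """Wearing similarity as Jaccard overlap of attribute sets (assumed)."""
    a, b = set(history_attrs), set(current_attrs)
    return len(a & b) / len(a | b) if a | b else 0.0

def associate_person(current_attrs, face_embedding, database,
                     preset_similarity=0.8, preset_number=3):
    """Stage 1: retain persons whose face similarity exceeds the preset
    similarity. Stage 2: among those retained, pick the person whose top
    preset_number historical attributes (ranked by association degree)
    best match the current wearing attributes."""
    retained = [p for p in database
                if cosine(face_embedding, p["embedding"]) > preset_similarity]
    best, best_sim = None, -1.0
    for person in retained:
        top_attrs = [attr for attr, _ in
                     sorted(person["association_degrees"].items(),
                            key=lambda kv: kv[1], reverse=True)[:preset_number]]
        sim = wearing_similarity(top_attrs, current_attrs)
        if sim > best_sim:
            best, best_sim = person, sim
    return best
```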
The invention provides a method for analyzing and identifying the wearing attributes of human heads, wherein the process of detecting each sub-video stream and judging whether a target head exists in the sub-video stream comprises the following steps:
determining the pixel saturation of each pixel point in the sub-video stream;
acquiring the total pixel saturation of the sub-video stream, and judging that a target head exists in the sub-video stream when the total pixel saturation is greater than a preset saturation;
otherwise, judging that the target head does not exist in the sub-video stream.
The value range of the pixel saturation is between 0 and 255, and the preset saturation is manually preset.
The beneficial effects of the above technical scheme are: by determining the pixel saturation in the sub-video stream, it becomes convenient to efficiently determine whether a target head is present in the sub-video stream.
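The saturation test might be sketched as follows: the frames of a sub-video stream are converted to HSV, the saturation channel (with values between 0 and 255, as noted above) is accumulated, and the result is compared against the preset saturation. Whether the patent compares a raw sum or a per-pixel average is not stated, so the averaging and the preset value here are assumptions.

```python
import cv2

def target_head_in_substream(frames, preset_saturation=110.0):
    """Judge head presence from the pixel saturation of a sub-video stream.
    A per-pixel mean stands in for the 'total pixel saturation' of the
    description, and the preset value is illustrative (both assumptions)."""
    total, count = 0.0, 0
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        total += float(hsv[:, :, 1].sum())   # saturation channel, 0-255
        count += hsv.shape[0] * hsv.shape[1]
    return count > 0 and (total / count) > preset_saturation
```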
The invention provides a method for analyzing and identifying the wearing attributes of human heads, wherein the process of performing region segmentation on the extracted target face to obtain a plurality of target region blocks further comprises: judging the edge smoothness between the target area blocks and determining whether the edge smoothness is reasonable, the process comprising the following steps:
acquiring a first target area block in all the target area blocks, determining an edge curve of the first target area block, and labeling a plurality of preset points on the edge curve;
connecting the labeled preset points into a labeled curve, and judging whether the labeled curve is smooth based on a preset fitting curve; if so, judging that the edge smoothness of the first target region block is reasonable, and proceeding to judge the reasonability of the edge smoothness of the next target region block;
otherwise, the edge smoothness of the first target area block is judged to be unreasonable.
The above-mentioned region segmentation processing is performed on the target face, for example, the target face is cut into four horizontal target region blocks of eyes, eyebrows, nose and mouth;
the edge curve is firstly determined for roughly positioning the cut line after cutting, then the preset point is marked, and the cut line is precisely positioned, so that the time for marking the preset point can be effectively saved, and the efficiency for marking the preset point can be improved;
the preset fitting curve is, for example, a standard horizontal cutting line related to the horizontal target region block, and the distance value from the labeled preset point to the standard horizontal cutting line is determined based on the standard horizontal cutting line, and whether the labeled curve is smooth or not is determined according to a variance value obtained from all the obtained distance values.
The beneficial effects of the above technical scheme are: by marking the preset points on the edge curve of the cut target region block, the reasonability of the edge smoothness of the edge curve is conveniently and effectively determined, and timely adjustment is facilitated.
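Following the variance-based reading given above, a sketch of the smoothness judgment: the distance from each labeled preset point to the standard horizontal cutting line is computed, and the labeled curve is deemed smooth when the variance of those distances stays below a limit. The variance limit is an assumed preset.

```python
import numpy as np

def edge_smoothness_reasonable(preset_points, cut_line_y, variance_limit=4.0):
    """preset_points: list of (x, y) points labeled on the edge curve.
    cut_line_y: y-coordinate of the standard horizontal cutting line.
    The variance limit is an assumed preset, not taken from the patent."""
    ys = np.array([y for _, y in preset_points], dtype=float)
    distances = np.abs(ys - cut_line_y)   # distance of each point to the line
    return float(np.var(distances)) <= variance_limit
```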
The invention provides a method for analyzing and identifying the wearing attributes of human heads, wherein, when the edge smoothness of the first target area block is judged to be unreasonable, the method further comprises adjusting the labeled preset points, the adjustment process comprising:
performing point-to-point line connection on all the preset points, and judging the first curvature of a first line from the current preset point to the next preset point, the second curvature of a second line from the current preset point to the previous preset point and the third curvature of a third line from the previous preset point to the next preset point;
determining the current point coordinate of the current preset point in the first target region block based on a preset standard coordinate system, and simultaneously determining whether the angle to be estimated of the triangle to be estimated, which is formed by the first curvature, the second curvature and the third curvature, is within a corresponding preset angle range, if so, judging the next preset point;
otherwise, if the angle to be estimated is smaller than the minimum value of the preset angle range, keeping the previous preset point and the next preset point unchanged, and performing first position adjustment on the current preset point until the angle to be estimated of the triangle to be estimated is within the corresponding preset angle range;
if the angle to be estimated is larger than the maximum value of a preset angle range, keeping the current preset point unchanged, and performing second position adjustment on the previous preset point and the next preset point until the angle to be estimated of the triangle to be estimated is within the corresponding preset angle range;
and when the angles to be estimated of all the triangles to be estimated are within a preset angle range, judging that the edge smoothness of the first target area block is reasonable.
The above-mentioned region segmentation processing is performed on the target face, for example, the target face is cut into four horizontal target region blocks of eyes, eyebrows, nose and mouth;
As shown in fig. 3, f1 denotes the current preset point, f0 the previous preset point, and f2 the next preset point; the connecting line between f1 and f2 represents the first curvature of the first line, the connecting line between f1 and f0 represents the second curvature of the second line, and the connecting line between f0 and f2 represents the third curvature of the third line; the triangle to be estimated thus formed is f0f1f2;
the standard coordinate system may be a two-dimensional coordinate system, and the current coordinate of the current preset point may be regarded as the origin of coordinates (0, 0);
the corresponding angle to be estimated is determined based on a preset fitting curve F in the horizontal target area block, for example the angle F, and the preset angle range is set to [160°, 175°];
as shown in fig. 3, the preset fitting curve F is set above the triangle to be estimated f0f1f2; if the angle F to be estimated is smaller than 160°, the previous preset point f0 and the next preset point f2 are kept unchanged, and the first position adjustment is performed on the current preset point f1, that is, f1 is adjusted in the direction of the preset fitting curve F;
if the angle F to be estimated is larger than 175°, the current preset point f1 is kept unchanged, and the second position adjustment is performed on the previous preset point f0 and the next preset point f2, that is, they are adjusted in the direction approaching the preset fitting curve F.
The beneficial effects of the above technical scheme are: the smoothness of the marking curve is adjusted by adjusting the preset points, and the reasonability of the edge smoothness is ensured.
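A sketch of the adjustment rules, under assumptions: the preset fitting curve F is taken to be a horizontal line y = curve_y, the angle to be estimated is read as the interior angle of triangle f0f1f2 at the current point f1, and points are nudged toward the curve in fixed steps until the angle enters [160°, 175°]. The step size and iteration cap are illustrative.

```python
import math

def angle_at_current(f0, f1, f2):
    """Interior angle of the triangle f0 f1 f2 at the current point f1, degrees."""
    v1 = (f0[0] - f1[0], f0[1] - f1[1])
    v2 = (f2[0] - f1[0], f2[1] - f1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = (math.hypot(*v1) * math.hypot(*v2)) or 1e-9
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def adjust_points(f0, f1, f2, curve_y, lo=160.0, hi=175.0, step=0.5, max_iter=200):
    """Nudge preset points until the angle at f1 lies in [lo, hi].
    First adjustment: angle too small -> move only f1 toward the curve.
    Second adjustment: angle too large -> move f0 and f2 toward the curve."""
    def toward_curve(p):
        # move the point a small step in the direction of the fitting curve
        return (p[0], p[1] + step * (1.0 if curve_y > p[1] else -1.0))
    for _ in range(max_iter):
        angle = angle_at_current(f0, f1, f2)
        if lo <= angle <= hi:
            break
        if angle < lo:
            f1 = toward_curve(f1)          # first position adjustment
        else:
            f0, f2 = toward_curve(f0), toward_curve(f2)   # second adjustment
    return f0, f1, f2
```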
The invention provides a method for analyzing and identifying human head wearing attributes, wherein the process of determining the association degree between the historical wearing attributes in the historical wearing probability set comprises the following steps:
determining a wearing attribute type of the historical wearing attribute;
determining the historical wearing attributes of the target person in different historical video streams;
determining, from the historical wearing attributes, the historical wearing frequency of each type of historical wearing attribute related to the target person, as well as the historical wearing frequency of combinations of attribute types;
storing the determined historical wearing frequency to form the historical wearing probability set;
and determining the association degree between the historical wearing attributes according to the determined historical wearing frequency.
The wearing attribute categories include: caps, masks, and the like;
the above-mentioned historical wearing frequency of each type of historical wearing attribute related to the target person is determined according to the historical wearing attribute, and the historical wearing frequency of the combined type of historical wearing attribute is determined, such as:
for example: for the target person a, for example, in the 20-eye header image, the number of times of wearing the hat is 10, the number of times of wearing the mask is 20, and the number of times of wearing the hat and the mask together is 10, and at this time, the hat wearing frequency is 50%, the mask wearing frequency is 100%, the hat and mask wearing frequency is 50%, and the degree of association between the corresponding hat and mask is 50%.
The beneficial effects of the above technical scheme are: by determining the wearing attribute type and the wearing attribute frequency, the association degree between the historical wearing attributes is more effectively determined.
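The frequency computation in this example can be reproduced in a few lines of Python. Following the worked example, the association degree of a pair of attributes is taken here to be their joint wearing frequency; the patent does not spell out a general formula, so this is an assumption.

```python
from itertools import combinations

def association_degrees(history, total_images):
    """Compute per-attribute wearing frequencies and pairwise association
    degrees from historical head images. The association degree of a pair
    is taken as the joint wearing frequency (an assumption, matching the
    worked example). history: one set of attributes per head image."""
    single, joint = {}, {}
    for attrs in history:
        for a in attrs:
            single[a] = single.get(a, 0) + 1
        for pair in combinations(sorted(attrs), 2):
            joint[pair] = joint.get(pair, 0) + 1
    freq = {a: c / total_images for a, c in single.items()}
    assoc = {pair: c / total_images for pair, c in joint.items()}
    return freq, assoc

# The example above: 20 images, hat in 10, mask in all 20, both in 10
images = [{"hat", "mask"}] * 10 + [{"mask"}] * 10
freq, assoc = association_degrees(images, 20)
# freq == {"hat": 0.5, "mask": 1.0}; assoc == {("hat", "mask"): 0.5}
```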
The invention provides a method for analyzing and identifying the wearing attributes of human heads, which comprises the following steps before detecting and capturing the target head image in the video stream to be identified:
screening the video stream to be identified by using a screening method to obtain a target frame possibly comprising the target head image, and capturing the target head image from the target frame;
wherein the screening method comprises the following steps:
step 1, the video stream to be identified is shot by a camera in a fixed position, and a base picture corresponding to the camera is determined, wherein the base picture is a picture without people; the pixel information matrix of the base picture is:

$$A=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{pmatrix}$$

wherein $A$ represents the pixel information matrix of the base picture; $a_{ji}$ is the pixel value of the $i$th pixel of the $j$th row; $n$ is the number of pixel columns of the base picture; $m$ is the number of pixel rows of the base picture;

step 2, obtaining the pixel value difference of each corresponding pixel between each video frame in the video stream to be identified and the base picture:

$$w_{ji}=\left|c_{ji}-a_{ji}\right|$$

wherein $w_{ji}$ is the pixel difference of the $i$th pixel in the $j$th row of the difference matrix, $c_{ji}$ is the pixel value of the $i$th pixel of the $j$th row of the $k$th frame picture in the video stream to be identified, and $a_{ji}$ is the pixel value of the $i$th pixel in the $j$th row of the base picture; the difference matrix $\Delta$ can be expressed as:

$$\Delta=\begin{pmatrix}w_{11}&w_{12}&\cdots&w_{1n}\\w_{21}&w_{22}&\cdots&w_{2n}\\\vdots&\vdots&\ddots&\vdots\\w_{m1}&w_{m2}&\cdots&w_{mn}\end{pmatrix}$$

and the pixel information matrix of the $k$th frame picture in the video stream to be identified can be represented as:

$$C_k=\begin{pmatrix}c_{11}&c_{12}&\cdots&c_{1n}\\c_{21}&c_{22}&\cdots&c_{2n}\\\vdots&\vdots&\ddots&\vdots\\c_{m1}&c_{m2}&\cdots&c_{mn}\end{pmatrix}$$

step 3, calculating the detection gap value $\sigma$:

[detection gap value formula: original image not legible]

wherein $\sigma$ is the detection gap value, $w_{ji}$ is the pixel difference of the $i$th pixel in the $j$th row of the difference matrix, $\min\{c_{j1},c_{j2},\ldots,c_{jn}\}$ is the minimum of $c_{j1},c_{j2},\ldots,c_{jn}$, $w_{jk}$ is the pixel difference of the $k$th pixel of the $j$th row in the difference matrix, $n$ is the number of columns of the difference matrix, and $m$ is the number of rows of the difference matrix;

step 4, when the detection gap value $\sigma$ is greater than a preset threshold value, judging that the current video frame is a target frame possibly comprising a target head image.
The beneficial effects of the above technical scheme are: using this technique, the difference between the pixel information of each frame of the video to be identified and the pixel information of the base picture can be obtained, and the detection gap value is then computed, so that target frames possibly comprising the target head image are screened out according to the detection gap value. This provides a screening pass before the detection and capture of target head images in the video stream to be identified, which reduces the workload of detection and capture; the difference is amplified during the calculation, so that frames with even small changes can still be screened out for further detection and capture.
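A NumPy sketch of the screening method under stated assumptions: the difference matrix Δ = |C_k - A| follows steps 1 and 2 above, while the exact expression for the detection gap value σ is not legible in the source, so a simple mean over Δ stands in for it here.

```python
import numpy as np

def screen_target_frames(frames, base_picture, preset_threshold=8.0):
    """Screen the video stream against the base (unmanned) picture.
    frames and base_picture are grayscale arrays of identical shape;
    the threshold value is illustrative."""
    base = base_picture.astype(np.float64)   # matrix A, m x n
    targets = []
    for k, frame in enumerate(frames):
        c = frame.astype(np.float64)         # matrix C_k
        delta = np.abs(c - base)             # difference matrix, per step 2
        sigma = delta.mean()                 # stand-in for the detection gap value
        if sigma > preset_threshold:         # step 4: compare against preset threshold
            targets.append(k)                # frame k may contain a target head
    return targets
```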
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A method for analyzing and identifying wearing attributes of human heads is characterized by comprising the following steps:
detecting and capturing a target head image in a video stream to be identified;
determining a current wear attribute of a target head in the target head image based on a wear attribute model;
acquiring target personnel related to the current wearing attribute based on a pre-stored personnel database, and performing association marking on the current wearing attribute and the target personnel;
storing the association marking result into an association database;
the process of acquiring the target person related to the current wearing attribute based on the pre-stored person database and performing association marking on the current wearing attribute and the target person comprises the following steps:
determining a historical wearing probability set of each target person in the person database;
determining the association degrees among the historical wearing attributes in the historical wearing probability set, ranking the association degrees by priority, and acquiring the historical wearing attributes corresponding to the top preset number of association degrees after the ranking;
based on a face learning model, identifying a face image corresponding to the current wearing attribute, and determining the face similarity of the face image based on the personnel database;
retaining those target persons in the person database whose face similarity is greater than a preset similarity, and judging, for each retained target person, the wearing similarity between the current wearing attributes and the historical wearing attributes corresponding to that person's top preset number of association degrees;
acquiring the target person with the highest wearing similarity in the reserved target persons as the target person associated with the current wearing attribute;
and carrying out association labeling on the target person associated with the current wearing attribute and the corresponding current wearing attribute.
2. The analytical recognition method according to claim 1, wherein the process of detecting and capturing the target head image in the video stream to be recognized comprises:
cutting the video stream to be identified according to a preset cutting scheme to obtain a plurality of sub-video streams;
detecting each sub-video stream, and judging whether a target head exists in the sub-video stream;
if so, capturing a target head image containing the target head from the sub-video stream and storing it.
3. The analytical recognition method of claim 1, wherein determining the current wear attribute of the target head in the target head image based on a wear attribute model comprises:
acquiring historical head images, and marking the historical wearing attributes of each historical head image, wherein the historical wearing attributes include: wearing a hat or no hat, wearing glasses or sunglasses, and wearing a mask or no mask on the head;
acquiring a target face image in the personnel database, extracting a target face in the target face image, and simultaneously performing region segmentation processing on the extracted target face to obtain a plurality of target region blocks;
extracting the region block characteristics of each target region block, and performing deep training on the region block characteristics of each target region block based on a deep learning model to obtain a face learning model;
matching the marked historical wearing attributes to the corresponding target area blocks, and carrying out fusion processing on the target area blocks to obtain fusion images;
training the fused image based on the face learning model to obtain a wearing attribute model;
and identifying the target head in the target head image based on the wearing attribute model, and determining the corresponding current wearing attribute.
4. The analysis and recognition method according to claim 2, wherein said detecting each of said sub-video streams and determining whether a target head exists in said sub-video stream comprises:
determining the pixel saturation of each pixel point in the sub-video stream;
acquiring the total pixel saturation of the sub-video stream, and judging that a target head exists in the sub-video stream when the total pixel saturation is greater than a preset saturation;
otherwise, judging that the target head does not exist in the sub-video stream.
5. The analysis and recognition method according to claim 3, wherein, in the process of performing region segmentation processing on the extracted target face to obtain a plurality of target region blocks, the method further comprises: judging the edge smoothness between the target area blocks and determining whether the edge smoothness is reasonable, the process comprising:
acquiring a first target area block in all the target area blocks, determining an edge curve of the first target area block, and labeling a plurality of preset points on the edge curve;
connecting the labeled preset points into a labeled curve, and judging whether the labeled curve is smooth based on a preset fitting curve; if so, judging that the edge smoothness of the first target region block is reasonable, and proceeding to judge the reasonability of the edge smoothness of the next target region block;
otherwise, the edge smoothness of the first target area block is judged to be unreasonable.
6. The analysis and identification method according to claim 5, wherein when it is determined that the smoothness of the edge of the first target region block is not reasonable, further comprising: adjusting the marked preset points, wherein the adjusting process comprises the following steps:
performing point-to-point line connection on all the preset points, and judging the first curvature of a first line from the current preset point to the next preset point, the second curvature of a second line from the current preset point to the previous preset point and the third curvature of a third line from the previous preset point to the next preset point;
determining the current point coordinate of the current preset point in the first target region block based on a preset standard coordinate system, and simultaneously determining whether the angle to be estimated of the triangle to be estimated, which is formed by the first curvature, the second curvature and the third curvature, is within a corresponding preset angle range, if so, judging the next preset point;
otherwise, if the angle to be estimated is smaller than the minimum value of the preset angle range, keeping the previous preset point and the next preset point unchanged, and performing first position adjustment on the current preset point until the angle to be estimated of the triangle to be estimated is within the corresponding preset angle range;
if the angle to be estimated is larger than the maximum value of a preset angle range, keeping the current preset point unchanged, and performing second position adjustment on the previous preset point and the next preset point until the angle to be estimated of the triangle to be estimated is within the corresponding preset angle range;
and when the angles to be estimated of all the triangles to be estimated are within a preset angle range, judging that the edge smoothness of the first target area block is reasonable.
7. The analytical identification method of claim 1, wherein determining a degree of association between the historical wear attributes in the historical wear probability set comprises:
determining a wearing attribute type of the historical wearing attribute;
determining the historical wearing attributes of the target person in different historical video streams;
determining, from the historical wearing attributes, the historical wearing frequency of each type of historical wearing attribute related to the target person, as well as the historical wearing frequency of combinations of attribute types;
storing the determined historical wearing frequency to form the historical wearing probability set;
and determining the association degree between the historical wearing attributes according to the determined historical wearing frequency.
8. The analytical recognition method of claim 1,
before the detecting and capturing the target head image in the video stream to be recognized, the method further comprises the following steps:
screening the video stream to be identified by using a screening method to obtain a target frame possibly comprising the target head image, and capturing the target head image from the target frame;
wherein the screening method comprises the following steps:
step 1, the video stream to be identified is shot by a camera in a fixed position, and a base picture corresponding to the camera is determined, wherein the base picture is a picture without people; the pixel information matrix of the base picture is:

$$A=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{pmatrix}$$

wherein $A$ represents the pixel information matrix of the base picture; $a_{ji}$ is the pixel value of the $i$th pixel of the $j$th row; $n$ is the number of pixel columns of the base picture; $m$ is the number of pixel rows of the base picture;

step 2, obtaining the pixel value difference of each corresponding pixel between each video frame in the video stream to be identified and the base picture:

$$w_{ji}=\left|c_{ji}-a_{ji}\right|$$

wherein $w_{ji}$ is the pixel difference of the $i$th pixel in the $j$th row of the difference matrix, $c_{ji}$ is the pixel value of the $i$th pixel of the $j$th row of the $k$th frame picture in the video stream to be identified, and $a_{ji}$ is the pixel value of the $i$th pixel in the $j$th row of the base picture; the difference matrix $\Delta$ can be expressed as:

$$\Delta=\begin{pmatrix}w_{11}&w_{12}&\cdots&w_{1n}\\w_{21}&w_{22}&\cdots&w_{2n}\\\vdots&\vdots&\ddots&\vdots\\w_{m1}&w_{m2}&\cdots&w_{mn}\end{pmatrix}$$

and the pixel information matrix of the $k$th frame picture in the video stream to be identified can be represented as:

$$C_k=\begin{pmatrix}c_{11}&c_{12}&\cdots&c_{1n}\\c_{21}&c_{22}&\cdots&c_{2n}\\\vdots&\vdots&\ddots&\vdots\\c_{m1}&c_{m2}&\cdots&c_{mn}\end{pmatrix}$$

step 3, calculating the detection gap value $\sigma$:

[detection gap value formula: original image not legible]

wherein $\sigma$ is the detection gap value, $w_{ji}$ is the pixel difference of the $i$th pixel in the $j$th row of the difference matrix, $\min\{c_{j1},c_{j2},\ldots,c_{jn}\}$ is the minimum of $c_{j1},c_{j2},\ldots,c_{jn}$, $w_{jk}$ is the pixel difference of the $k$th pixel of the $j$th row in the difference matrix, $n$ is the number of columns of the difference matrix, and $m$ is the number of rows of the difference matrix;

step 4, when the detection gap value $\sigma$ is greater than a preset threshold value, judging that the current video frame is a target frame possibly comprising a target head image.
CN202010022968.XA 2020-01-09 2020-01-09 Method for analyzing and identifying wearing attribute of human head Active CN111209874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010022968.XA CN111209874B (en) 2020-01-09 2020-01-09 Method for analyzing and identifying wearing attribute of human head

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010022968.XA CN111209874B (en) 2020-01-09 2020-01-09 Method for analyzing and identifying wearing attribute of human head

Publications (2)

Publication Number Publication Date
CN111209874A CN111209874A (en) 2020-05-29
CN111209874B (en) 2020-11-06

Family

ID=70786064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010022968.XA Active CN111209874B (en) 2020-01-09 2020-01-09 Method for analyzing and identifying wearing attribute of human head

Country Status (1)

Country Link
CN (1) CN111209874B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084953B (en) * 2020-09-10 2024-05-10 济南博观智能科技有限公司 Face attribute identification method, system, equipment and readable storage medium
CN112906651B (en) * 2021-03-25 2023-07-11 中国联合网络通信集团有限公司 Target detection method and device
CN113762108A (en) * 2021-08-23 2021-12-07 浙江大华技术股份有限公司 Target identification method and device
CN115761866A (en) * 2022-09-23 2023-03-07 深圳市欧瑞博科技股份有限公司 User intelligent identification method, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109416692A (en) * 2016-06-02 2019-03-01 柯达阿拉里斯股份有限公司 Method for making and being distributed the product centered on media of one or more customizations
CN109492528A (en) * 2018-09-29 2019-03-19 天津卡达克数据有限公司 A kind of recognition methods again of the pedestrian based on gaussian sum depth characteristic

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100495436C (en) * 2006-12-15 2009-06-03 浙江大学 Decomposing method for three-dimensional object shapes based on user easy interaction
CN103714077B (en) * 2012-09-29 2017-10-20 日电(中国)有限公司 Method, the method and device of retrieval verification of object retrieval
CN108596011A (en) * 2017-12-29 2018-09-28 中国电子科技集团公司信息科学研究院 A kind of face character recognition methods and device based on combined depth network
CN108921034A (en) * 2018-06-05 2018-11-30 北京市商汤科技开发有限公司 Face matching process and device, storage medium
CN109241852B (en) * 2018-08-10 2021-01-12 广州杰赛科技股份有限公司 Face recognition method and device with additional features and computer equipment
CN110110611A (en) * 2019-04-16 2019-08-09 深圳壹账通智能科技有限公司 Portrait attribute model construction method, device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN111209874A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
CN111209874B (en) Method for analyzing and identifying wearing attribute of human head
CN111460962B (en) Face recognition method and face recognition system for mask
CN105809144B (en) A kind of gesture recognition system and method using movement cutting
US8027521B1 (en) Method and system for robust human gender recognition using facial feature localization
TWI383325B (en) Face expressions identification
CN104063722B (en) A kind of detection of fusion HOG human body targets and the safety cap recognition methods of SVM classifier
CN111563452B (en) Multi-human-body gesture detection and state discrimination method based on instance segmentation
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
CN110991315A (en) Method for detecting wearing state of safety helmet in real time based on deep learning
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
CN102902986A (en) Automatic gender identification system and method
CN106570447B (en) Based on the matched human face photo sunglasses automatic removal method of grey level histogram
CN111860291A (en) Multi-mode pedestrian identity recognition method and system based on pedestrian appearance and gait information
CN112149761A (en) Electric power intelligent construction site violation detection method based on YOLOv4 improved algorithm
CN111666845B (en) Small sample deep learning multi-mode sign language recognition method based on key frame sampling
CN109034247B (en) Tracking algorithm-based higher-purity face recognition sample extraction method
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
WO2013075295A1 (en) Clothing identification method and system for low-resolution video
CN114445879A (en) High-precision face recognition method and face recognition equipment
CN110826408A (en) Face recognition method by regional feature extraction
JPH04101280A (en) Face picture collating device
Strueva et al. Student attendance control system with face recognition based on neural network
CN111222473B (en) Analysis and recognition method for clustering faces in video
CN113537019A (en) Detection method for identifying wearing of safety helmet of transformer substation personnel based on key points
WO2020232697A1 (en) Online face clustering method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant