CN113536917B - Dressing recognition method, system, electronic device and storage medium

Dressing recognition method, system, electronic device and storage medium

Info

Publication number: CN113536917B
Application number: CN202110647145.0A
Authority: CN (China)
Prior art keywords: human body, dressing, feature set, detection, detection target
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113536917A
Inventors: 魏乃科, 潘华东, 殷俊
Assignee: Zhejiang Dahua Technology Co Ltd (original and current)
Priority and filing date: 2021-06-10
Publication of CN113536917A: 2021-10-22
Publication of CN113536917B (grant): 2024-06-07

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks


Abstract

The application relates to a dressing identification method, a system, an electronic device and a storage medium. Human body part key points of a detection target are acquired, and the detection image of the detection target is divided into component areas according to the key points to obtain component area diagrams. Dressing feature extraction is performed on each component area diagram to obtain a detection feature set, and when the similarity between the detection feature set and a standard feature set is greater than a preset threshold, the detection target is judged to conform to a preset dressing. The problem of low dressing identification efficiency is thereby solved, and accurate and efficient dressing identification is realized.

Description

Dressing recognition method, system, electronic device and storage medium
Technical Field
The present application relates to the field of image recognition, and in particular, to a dressing recognition method, system, electronic device, and storage medium.
Background
In many production environments, workers are subject to strict dressing requirements; once an irregularly dressed worker appears in the working environment, the working scene carries potential safety hazards, causing unnecessary economic loss. In the prior art, worker dressing is typically detected by installing a video camera outside the working scene, shooting the workers entering the scene, and finally having monitoring personnel judge whether the dressing of each worker entering the working scene is standard. Such manual participation in the identification, however, consumes the monitoring personnel's energy. Some schemes propose replacing manual work with image recognition technology to realize dressing recognition, but they suffer from low recognition rates and poor adaptability.
At present, no effective solution has been proposed for the problem of low dressing recognition efficiency in the related art.
Disclosure of Invention
The embodiment of the application provides a dressing identification method, a dressing identification system, an electronic device and a storage medium, which are used for at least solving the problem of low dressing identification efficiency in the related art.
In a first aspect, an embodiment of the present application provides a dressing identification method, including:
Acquiring human body part key points of a detection target, and dividing a detection image of the detection target into component areas according to the human body part key points to obtain component area diagrams;
Performing dressing feature extraction on each component area diagram to obtain a detection feature set;
And under the condition that the similarity between the detection feature set and the standard feature set is larger than a preset threshold, judging that the detection target accords with a preset dressing, wherein the standard feature set corresponds to the preset dressing.
In some embodiments, the detection feature set includes a global feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes:
dividing each component area diagram into subareas, and extracting the average color of each subarea;
performing gray processing on the average color of each subarea, and performing low-pass filtering, to obtain a global feature map;
performing dressing feature extraction on the global feature map through a neural network to obtain the global feature set.
In some embodiments, the detection feature set includes a contour feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes:
performing band-pass filtering on each component area diagram to obtain a contour feature map;
performing dressing feature extraction on the contour feature map through a neural network to obtain the contour feature set.
In some embodiments, the detection feature set includes a detail feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes:
performing sliding window traversal matching on each component area diagram according to a preset specific mark;
acquiring the detail feature set through the images in the sliding windows that match the specific mark.
In some embodiments, before acquiring the human body part key points of a detection target and dividing the detection image of the detection target into component areas according to the human body part key points, the method includes:
acquiring human body characteristics of the detection target, and judging the human body state of the detection target according to the human body characteristics;
and under the condition that the human body state accords with a preset state, acquiring a detection image of the detection target.
In some of these embodiments, the human body features include human body joint point features and human body segmentation features, and the human body states include human body pose, human body angle, and human body occlusion.
In some of these embodiments, the component areas include a left leg area, a right leg area, a left hand area, a right hand area, and an abdomen-chest area.
In a second aspect, an embodiment of the present application provides a dressing identification system, including an image acquisition device and a server device, wherein:
the image acquisition device is used for acquiring a detection image of a detection target;
The server device is configured to execute the dressing recognition method described in the first aspect.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the dressing identification method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the dressing identification method described in the first aspect.
Compared with the prior art, the dressing identification method, system, electronic device and storage medium provided by the embodiments of the application acquire the human body part key points of the detection target, divide the detection image of the detection target into component areas according to the human body part key points to obtain component area diagrams, perform dressing feature extraction on each component area diagram to obtain a detection feature set, and judge that the detection target conforms to the preset dressing when the similarity between the detection feature set and the standard feature set is greater than a preset threshold. The problem of low dressing identification efficiency is thereby solved, and accurate and efficient dressing identification is realized.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below, so that the other features, objects, and advantages of the application will be more readily understood.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a block diagram of the hardware structure of a terminal for the dressing recognition method according to an embodiment of the present application;
FIG. 2 is a flowchart of a dressing recognition method according to an embodiment of the present application;
FIG. 3 is a flowchart of another dressing recognition method according to an embodiment of the present application;
FIG. 4 is a flowchart of a dressing recognition method according to a preferred embodiment of the present application;
FIG. 5 is a schematic diagram of human body part key points in the dressing recognition method according to a preferred embodiment of the present application;
FIG. 6 is a schematic diagram of component area division in the dressing recognition method according to a preferred embodiment of the present application;
FIG. 7 is a schematic diagram of global feature extraction in the dressing recognition method according to a preferred embodiment of the present application;
FIG. 8 is a schematic diagram of contour feature extraction in the dressing recognition method according to a preferred embodiment of the present application;
FIG. 9 is a schematic diagram of detail feature extraction in the dressing recognition method according to a preferred embodiment of the present application;
fig. 10 is a block diagram of a dressing recognition system according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and embodiments in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort fall within the scope of the present application. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The method embodiments provided in this application may be executed in a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware structure of a terminal for the dressing recognition method according to an embodiment of the present application. As shown in fig. 1, the terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a microprocessor MCU, a programmable logic device FPGA, or another processing device) and a memory 104 for storing data, and optionally a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the terminal. For example, the terminal may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and modules, such as the computer program corresponding to the dressing recognition method in the embodiments of the present application; the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
The present embodiment provides a dressing recognition method, and fig. 2 is a flowchart of the dressing recognition method according to an embodiment of the present application, as shown in fig. 2, the flowchart including the steps of:
Step S201, obtaining human body part key points of the detection target, and dividing the detection image of the detection target into component areas according to the human body part key points to obtain component area diagrams. The human body part key points may be parts such as the crotch, ankles, arms, and shoulders that are easy to capture and detect during human body movement; they are used to divide the human body image into component areas, for example a head image, an upper body image, and a lower body image. In dressing detection, the upper body image corresponds to the upper garment of the detection target and the lower body image corresponds to the trousers, so the dressing images of different parts can be better processed and compared. Dividing regions according to the key parts of the human body is more reasonable and accurate than directly dividing the detection picture into upper, middle, and lower parts to correspond to different dressings. In some embodiments, the human target is divided into the following five component areas by the human body part key points: the area where the left waist, left knee, and left foot head connecting line is located is the left leg area; the area where the right waist, right knee, and right foot head connecting line is located is the right leg area; the area where the left shoulder, left elbow, and left hand head connecting line is located is the left hand area; the area where the right shoulder, right elbow, and right hand head connecting line is located is the right hand area; and the area defined by the left shoulder, right shoulder, left waist, and right waist is the abdomen-chest area. Feature extraction and comparison can thus be performed on the trunk portion and the sleeve portions separately, which further improves the accuracy of dressing recognition.
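For illustration, a minimal sketch of this five-region division follows, assuming a pose estimator (such as OpenPose or HRNet) supplies the key points as named (x, y) pixel coordinates; the key point names, the padded bounding boxes, and the padding value are assumptions made for the example rather than requirements of the method.

```python
import numpy as np

# Key point names are assumptions for this sketch; a pose estimator
# would supply the (x, y) pixel coordinates under whatever names it uses.
REGION_KEYPOINTS = {
    "left_leg":      ["left_waist", "left_knee", "left_foot_head"],
    "right_leg":     ["right_waist", "right_knee", "right_foot_head"],
    "left_hand":     ["left_shoulder", "left_elbow", "left_hand_head"],
    "right_hand":    ["right_shoulder", "right_elbow", "right_hand_head"],
    "abdomen_chest": ["left_shoulder", "right_shoulder", "left_waist", "right_waist"],
}

def divide_component_regions(image, keypoints, pad=10):
    """Crop one component area diagram per region from the detection image.

    Each region is taken as the padded bounding box of the key points that
    define it, approximating the key-point-connection areas described above.
    """
    h, w = image.shape[:2]
    regions = {}
    for name, kp_names in REGION_KEYPOINTS.items():
        pts = np.array([keypoints[k] for k in kp_names], dtype=int)
        x0, y0 = np.maximum(pts.min(axis=0) - pad, 0)
        x1, y1 = np.minimum(pts.max(axis=0) + pad, [w - 1, h - 1])
        regions[name] = image[y0:y1, x0:x1]
    return regions
```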
Step S202, performing dressing feature extraction on each component area diagram to obtain a detection feature set. Dressing feature extraction is performed on the image of each component area to obtain the dressing features of the current detection target in that area. The detection feature set may be the set of the dressing features corresponding to the component areas, i.e., each component area may have different dressing features. A dressing feature may be the contour, texture, or color of that portion of the dressing, or another specific mark such as a button or badge.
Step S203, determining that the detection target conforms to a preset dressing when the similarity between the detection feature set and a standard feature set is greater than a preset threshold. The detection feature set obtained in step S202 is compared with the standard feature set, which is a feature set extracted from a preset dressing image. The detection feature set of each component area is compared with the standard feature set and a similarity is output; if the similarity is greater than the preset threshold, the dressing of the detection target is consistent with the preset dressing, i.e., the detection target conforms to the preset dressing. Alternatively, in some application scenarios, a dressing whose similarity is smaller than the preset threshold may be the one that conforms: for example, if the standard feature set reflects a prohibited dressing, only a dressing with similarity smaller than the preset threshold is permitted.
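As a minimal sketch of this comparison step, assuming each feature set maps component area names to fixed-length feature vectors: cosine similarity and the 0.8 threshold below are illustrative assumptions, since the method does not fix a particular similarity measure or threshold value.

```python
import numpy as np

def feature_similarity(detected, standard):
    """Mean cosine similarity between per-region feature vectors."""
    sims = []
    for region, f_det in detected.items():
        f_std = standard[region]
        denom = np.linalg.norm(f_det) * np.linalg.norm(f_std) + 1e-8
        sims.append(float(np.dot(f_det, f_std) / denom))
    return float(np.mean(sims))

def conforms_to_preset_dressing(detected, standard, threshold=0.8):
    # The preset threshold is application-specific; 0.8 is illustrative.
    return feature_similarity(detected, standard) > threshold
```

For the prohibited-dressing variant described above, the decision simply inverts: only targets whose similarity stays below the threshold are permitted.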
Through the steps, the detection image of the detection target is divided into component areas according to the human body part key points to obtain component area diagrams, and dressing feature extraction and comparison are performed on the component area diagrams to carry out dressing identification. On the one hand, dividing component areas through human body part key points fits the human body better than directly splitting the detection image, so the component area diagrams correspond more closely to the clothing images of the respective body parts, and the detection feature set stays consistent with the areas of the standard feature set during feature extraction and comparison. On the other hand, the scheme can flexibly cope with different dressing recognition requirements merely by switching standard feature sets, can even recognize multiple dressings in the same scene, and improves dressing recognition efficiency and accuracy.
In some embodiments, the detection feature set includes a global feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes: dividing each component area diagram into subareas, and extracting the average color of each subarea; performing gray processing according to the average colors, and performing low-pass filtering, to obtain a global feature map; and performing dressing feature extraction on the global feature map through a neural network to obtain the global feature set. Preferably, the process of acquiring this part of the detection feature set is as follows: the component area picture is divided into N×N subregions, and the average color within each subregion is extracted; gray processing according to the average color of each subarea turns the colored picture into a gray picture; low-pass filtering then removes the detail feature information and leaves the global features; finally, the processed image is passed through a neural network for feature extraction, and the output feature layer is the global feature set.
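The sketch below illustrates this global-feature pipeline, assuming OpenCV, PyTorch, and torchvision are available; the grid size N, the Gaussian blur kernel, and the ResNet-18 backbone (which in practice would load trained weights) are assumptions for the example, since the method only requires some neural network as the feature extractor.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models

def global_feature_map(region, n=16, ksize=5):
    """N x N average-color mosaic -> gray picture -> low-pass filtering."""
    h, w = region.shape[:2]
    # Downscaling with INTER_AREA averages the color inside each cell;
    # upscaling back restores the mosaic at the original size.
    mosaic = cv2.resize(region, (n, n), interpolation=cv2.INTER_AREA)
    mosaic = cv2.resize(mosaic, (w, h), interpolation=cv2.INTER_NEAREST)
    gray = cv2.cvtColor(mosaic, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (ksize, ksize), 0)  # removes remaining detail

# Backbone choice is an assumption; trained weights would be loaded in practice.
_backbone = models.resnet18(weights=None)
_backbone.fc = torch.nn.Identity()   # expose the 512-d feature layer
_backbone.eval()

def global_feature_set(region):
    fmap = global_feature_map(region).astype(np.float32) / 255.0
    fmap = cv2.resize(fmap, (224, 224))
    x = torch.from_numpy(fmap).unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)
    with torch.no_grad():
        return _backbone(x).squeeze(0).numpy()   # the global feature set
```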
In some embodiments, the detection feature set includes a contour feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes: performing band-pass filtering on each component area diagram to obtain a contour feature map; and performing dressing feature extraction on the contour feature map through a neural network to obtain the contour feature set. Preferably, the process of acquiring this part of the detection feature set is as follows: band-pass filtering, such as a Butterworth band-pass filter or a Gaussian band-pass filter, is applied to the component area image to remove background and detail information and leave the contour information; features of the processed image are then extracted through a neural network to obtain the contour feature set.
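As one possible realisation of the Gaussian band-pass named above, the sketch below uses a difference of Gaussians, assuming OpenCV is available; the two sigma values are illustrative, and a frequency-domain Butterworth filter would serve the same purpose.

```python
import cv2
import numpy as np

def contour_feature_map(region, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians band-pass: the small-sigma blur removes fine
    texture, subtracting the large-sigma blur removes flat background, and
    the mid-frequency contour information is what remains."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY).astype(np.float32)
    fine = cv2.GaussianBlur(gray, (0, 0), sigma_small)
    coarse = cv2.GaussianBlur(gray, (0, 0), sigma_large)
    band = fine - coarse
    band = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX)
    return band.astype(np.uint8)   # fed to the neural network afterwards
```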
In some embodiments, the detection feature set includes a detail feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes: performing sliding window traversal matching on each component area diagram according to a preset specific mark. The specific mark may be a portion of the dressing that clearly distinguishes it from others, such as a pocket, a clasp, or a drawing or identification on the clothing. Preferably, the size of the sliding window used for the traversal is adapted to the size of the specific mark. While traversing the component area diagram, whenever the image in the sliding window matches the preset specific mark, the image in the sliding window is stored; the stored images finally form the detail feature set.
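A minimal sketch of the sliding-window matching, assuming OpenCV is available: cv2.matchTemplate scores a window of the mark's size at every position at once, which is equivalent to the traversal described, and the match threshold is an assumption for the example.

```python
import cv2
import numpy as np

def match_specific_marks(region, mark, threshold=0.8):
    """Slide a window the size of the specific mark (e.g. a badge template)
    across the component area diagram and keep every matching crop."""
    if region.shape[0] < mark.shape[0] or region.shape[1] < mark.shape[1]:
        return []                      # region smaller than the mark
    scores = cv2.matchTemplate(region, mark, cv2.TM_CCOEFF_NORMED)
    mh, mw = mark.shape[:2]
    detail_features = []
    # Overlapping hits could additionally be pruned with non-maximum suppression.
    for y, x in zip(*np.where(scores >= threshold)):
        detail_features.append(region[y:y + mh, x:x + mw])
    return detail_features
```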
It should be noted that the global feature set, the contour feature set, and the detail feature set provide three different dimensions of dressing feature extraction and can adapt to different recognition requirements. In practice, dressing recognition may rely on extracting and comparing any one of the feature sets, or on combining multiple feature sets so that dressing features are extracted in multiple dimensions, further improving recognition efficiency and accuracy. The comparison across dimensions may be progressive, for example comparing the global feature set first and the contour feature set next, and judging that the dressing of the detection target conforms to the preset dressing only when both comparison results exceed the threshold. The comparison may also be weighted: the comparison results of different dimensions carry different weights, and the dressing of the detection target is judged to conform to the preset dressing only when the weighted result exceeds the threshold, as sketched below. The specific determination mode may be selected according to the practical application and is not limited here.
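For illustration, the weighted form might look as follows; the weights and the threshold are assumptions, since the method leaves both to the application.

```python
def multi_dimension_decision(sim_global, sim_contour, sim_detail, threshold=0.8):
    """Weighted fusion of the three comparison dimensions (illustrative weights)."""
    score = 0.4 * sim_global + 0.4 * sim_contour + 0.2 * sim_detail
    return score > threshold   # conforms to the preset dressing if True
```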
In some embodiments, fig. 3 is a flowchart of another dressing recognition method according to an embodiment of the present application, as shown in fig. 3, the method for dressing recognition further includes the following steps before acquiring a human body part key point of a detection target and performing component area division on a detection image of the detection target according to the human body part key point:
Step S301, obtaining human body features of the detection target, and judging the human body state of the detection target according to the human body features. In this step, the human body features of the detection target are located. The human body features may be human body joint points such as the wrists, elbows, shoulders, crotch, and knees; human body positioning points that are easy to locate, such as the head, face, hands, waist, and feet; or dividing lines formed by connecting joint points or positioning points, such as the line connecting the left waist and right waist, which divides the human body into a head, an upper body, and a lower body. From these human body features, the human body state of the detection target can be analyzed, for example whether the detection target is walking upright and whether the front or the back is visible. In some embodiments, the human body state includes a human body posture, a human body angle, and a human body occlusion: the posture may include squatting, sitting, standing upright, and the like; the angle includes front, side, and back; and the occlusion includes upper body occlusion, lower body occlusion, and the like.
Step S302, acquiring a detection image of the detection target when the human body state conforms to a preset state. After the human body state is determined in step S301, a detection image of the detection target in the preset state can be selected for the subsequent dressing analysis. The preset state is one that represents the dressing well and minimizes folded body states, for example an upright posture, a frontal angle, and no occlusion.
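A minimal sketch of this screening step follows; the state labels ("upright", "front", "none", and so on) are assumptions, since the method does not prescribe how the human body states are encoded.

```python
from dataclasses import dataclass

@dataclass
class HumanState:
    pose: str       # e.g. "upright", "squatting", "sitting"
    angle: str      # e.g. "front", "side", "back"
    occlusion: str  # e.g. "none", "upper_body", "lower_body"

def select_detection_image(frames_with_states):
    """Return the first frame whose state conforms to the preset state
    (upright, front or back, unoccluded, as in the preferred embodiment)."""
    for frame, state in frames_with_states:
        if (state.pose == "upright"
                and state.angle in ("front", "back")
                and state.occlusion == "none"):
            return frame
    return None   # no frame in the preset state yet; keep tracking
```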
Through the steps, while acquiring detection images of the detection target, the current human body state can be obtained from the human body feature information, so the detection images are screened: the front or back of the detection target is selected as the image to be analyzed, and images with folds or excessive overlap are screened out. This effectively improves the quality of the detection images and the recognition accuracy.
The embodiments of the present application will be described and illustrated below by means of preferred embodiments.
Fig. 4 is a flowchart of a garment recognition method according to a preferred embodiment of the present application, and as shown in fig. 4, the garment recognition method includes the steps of:
Step S401, video input and target detection. A human body is detected in the video frames in real time to obtain a detection target;
Step S402, detection target tracking. The detection target is tracked: an ID is generated, and the detection target is tracked according to this ID. The aim is to acquire as many detection images of the detection target as possible, so that images convenient for dressing feature extraction can be screened out in the subsequent steps.
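The method does not prescribe a particular tracking algorithm; as an illustration, a minimal greedy IoU tracker that carries IDs across frames might look as follows (the IoU threshold is an assumption).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

class IouTracker:
    """Greedy IoU association: a detection inherits the ID of the best
    overlapping track from the previous frame, otherwise gets a new ID."""

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}    # track ID -> box in the previous frame
        self.next_id = 0

    def update(self, boxes):
        assigned = {}       # box (as tuple) -> track ID
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in self.tracks.items():
                if tid not in assigned.values():
                    score = iou(box, prev)
                    if score > best_iou:
                        best_id, best_iou = tid, score
            if best_id is None:
                best_id, self.next_id = self.next_id, self.next_id + 1
            assigned[tuple(box)] = best_id
        self.tracks = {tid: list(box) for box, tid in assigned.items()}
        return assigned
```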
Step S403, human body feature analysis. The human body features of the detection target in the video are analyzed; the analysis may include human body joint point detection and human body segmentation. The human body state is determined through the human body feature analysis and comprises the human body posture, such as squatting, sitting, lying, standing, and bending; the human body angle, such as front, side, and back; and the human body occlusion state, such as upper body occlusion, lower body occlusion, head occlusion, and no occlusion.
Step S404, detecting image selection. Screening the detection targets in the tracking process according to the human body state obtained by human body feature analysis, and selecting an image of the detection targets in a back vertical non-shielding state or in a front vertical non-shielding state as a detection image to be analyzed.
The screened detection image is acquired, and the detection image further needs to be divided into component areas. First, the human body part key points of the detection target in the image are acquired. Fig. 5 is a schematic diagram of human body part key points in the dressing recognition method according to a preferred embodiment of the present application; as shown in fig. 5, the human body part key points include the left hand head, right hand head, left elbow, right elbow, left shoulder, right shoulder, left waist, right waist, left knee, right knee, left foot head, right foot head, and the like. Fig. 6 is a schematic diagram of component area division in the dressing recognition method according to a preferred embodiment of the present application; as shown in fig. 6, the detection image is divided into five component areas, namely a left leg area 5, comprising the divided area of the left waist, left knee, and left foot head connecting line; a right leg area 4, comprising the divided area of the right waist, right knee, and right foot head connecting line; a left hand area 3, comprising the divided area of the left shoulder, left elbow, and left hand head connecting line; a right hand area 1, comprising the divided area of the right shoulder, right elbow, and right hand head connecting line; and an abdomen-chest area 2, comprising the abdomen-chest divided area defined by the left shoulder, right shoulder, left waist, and right waist.
Step S405, acquiring the detection feature set. Dressing feature extraction is performed on each of the five component areas. The dressing features comprise three dimensions: a global feature dimension, a contour feature dimension, and a detail feature dimension.
Fig. 7 is a schematic diagram of global feature extraction in the dressing recognition method according to a preferred embodiment of the present application, taking the abdomen-chest region as an example. As shown in fig. 7, in the global feature dimension, the component area picture is first divided into N×N subregions and the average color within each subregion is extracted; gray processing is performed according to the average colors of the subregions, the resulting gray picture is low-pass filtered to remove detail feature information and leave the global features, the processed image is passed through a neural network for feature extraction, and the output feature layer is the global feature set.
Fig. 8 is a schematic diagram of contour feature extraction in the dressing recognition method according to a preferred embodiment of the present application, taking the abdomen-chest region as an example. As shown in fig. 8, in the contour feature dimension, band-pass filtering, such as a Butterworth band-pass filter or a Gaussian band-pass filter, is applied to the component area image to remove background and detail information and leave the contour information; features of the processed image are then extracted through a neural network to obtain the contour feature set.
Fig. 9 is a schematic diagram of detail feature extraction in the dressing recognition method according to a preferred embodiment of the present application, taking the abdomen-chest region as an example. As shown in fig. 9, in the detail feature dimension, the emphasis is on recognizing specific marks in the region: a specific mark is set in the standard component region, and sliding window traversal matching is then performed in the picture of the human body component region to be recognized, searching for similar marks and acquiring the detail feature set.
The feature sets of the three dimensions finally form the detection feature set corresponding to the detection target;
Step S406, obtaining a standard dressing picture. The standard dressing picture is used for acquiring a standard feature set;
Step S407, feature extraction. Standard feature extraction is performed through the feature extraction methods of the three dimensions in step S405;
Step S408, obtaining a standard feature set;
Step S409, feature set comparison. The detection feature set acquired in step S405 is compared with the features in the standard feature set, and a comprehensive comparison analysis result is output. The feature comparison process judges in turn whether the global features, the contour features, and the detail features are similar, and finally determines whether the dressing is compliant. The comparison may judge whether the similarity between the detection feature set and the standard feature set exceeds a set threshold.
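A sketch of this progressive comparison, reusing the feature_similarity helper from the earlier sketch; the per-dimension thresholds are illustrative assumptions.

```python
def progressive_comparison(detected, standard, thresholds=(0.8, 0.8, 0.8)):
    """Judge global, contour, and detail features in turn, as in step S409.

    `detected` and `standard` map a dimension name to per-region feature
    vectors; all three dimensions must pass for the dressing to be compliant.
    """
    for dim, th in zip(("global", "contour", "detail"), thresholds):
        if feature_similarity(detected[dim], standard[dim]) <= th:
            return False   # early exit: this dimension is not similar
    return True
```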
Step S410, outputting the comparison result. When the similarity between the detection feature set and the standard feature set exceeds the set threshold, the dressing of the detection target meets the preset dressing requirement.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
According to the dressing identification method, human body feature analysis is performed on the tracked human body target, so a detection image convenient for feature comparison can be obtained and the quality of the dressing identification analysis is ensured. The dressing comparison area is divided into several component areas and each component area is compared independently, which defines the analysis boundary and lets the model extract more targeted features. Feature extraction simulates the human-eye judgment flow in three dimensions, namely global features, contour features, and detail features, helping the neural network extract features more specifically. In addition, switching the dressing recognition type does not require switching a preset model; only the standard dressing features need to be replaced. In the feature comparison process, human visual characteristics are simulated and multi-dimensional feature matching judgment is performed, which alleviates the drop in recognition rate caused by dressing deformation and the like. Combined with the human body joint point information, the region division problem is solved and inconsistent feature comparison regions are well avoided.
The present embodiment also provides a dressing recognition system, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 10 is a block diagram of a dressing recognition system according to an embodiment of the present application. As shown in fig. 10, the dressing recognition system 100 includes an image acquisition device 1001 and a server device 1002, wherein the image acquisition device 1001 is configured to acquire a detection image of a detection target, and the server device 1002 is configured to perform the dressing recognition method described above.
The system may include a functional module or a program module, and the method may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
Acquiring human body part key points of a detection target, and dividing a detection image of the detection target into component areas according to the human body part key points to obtain component area diagrams;
Performing dressing feature extraction on each component area diagram to obtain a detection feature set;
And under the condition that the similarity between the detection feature set and the standard feature set is larger than a preset threshold, judging that the detection target accords with a preset dressing, wherein the standard feature set corresponds to the preset dressing.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In addition, in combination with the dressing recognition method in the above embodiments, an embodiment of the present application may provide a storage medium for implementation. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the dressing recognition methods in the above embodiments.
It should be understood by those skilled in the art that the technical features of the above-described embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above-described embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (7)

1. A dressing recognition method, comprising:
Acquiring human body part key points of a detection target, and dividing a detection image of the detection target into component areas according to the human body part key points to obtain component area diagrams;
Performing dressing feature extraction on each component area diagram to obtain a detection feature set;
judging that the detection target accords with a preset dressing under the condition that the similarity between the detection feature set and the standard feature set is larger than a preset threshold, wherein the standard feature set corresponds to the preset dressing;
The detection feature set includes a global feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes:
dividing each component area diagram into subareas, and extracting the average color of each subarea;
performing gray processing on the average color of each subarea, and performing low-pass filtering, to obtain a global feature map;
performing dressing feature extraction on the global feature map through a neural network to obtain the global feature set;
The detection feature set includes a contour feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes:
performing band-pass filtering on each component area diagram to obtain a contour feature map;
performing dressing feature extraction on the contour feature map through a neural network to obtain the contour feature set;
The detection feature set includes a detail feature set, and performing dressing feature extraction on each component area diagram to obtain the detection feature set includes:
performing sliding window traversal matching on each component area diagram according to a preset specific mark, wherein the size of the sliding window is adapted to the size of the specific mark during the traversal matching;
acquiring the detail feature set through the images in the sliding windows that match the specific mark.
2. The dressing recognition method according to claim 1, wherein before acquiring the human body part key points of a detection target and dividing the detection image of the detection target into component areas according to the human body part key points, the method comprises:
acquiring human body characteristics of the detection target, and judging the human body state of the detection target according to the human body characteristics;
and under the condition that the human body state accords with a preset state, acquiring a detection image of the detection target.
3. The garment identification method of claim 2, wherein the human body features include human body joint point features and human body segmentation features, and the human body state includes human body posture, human body angle, and human body occlusion.
4. The method of claim 1, wherein the component area comprises a left leg area, a right leg area, a left hand area, a right hand area, and an abdominal chest area.
5. A dressing recognition system, comprising: an image acquisition device and a server device, wherein:
the image acquisition device is configured to acquire a detection image of a detection target;
the server device is configured to perform the dressing recognition method according to any one of claims 1 to 4.
6. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the dressing recognition method of any one of claims 1 to 4.
7. A storage medium having a computer program stored therein, wherein the computer program is configured to perform the dressing recognition method of any one of claims 1 to 4 when run.