CN113723308A - Detection method, system, equipment and storage medium of epidemic prevention suite based on image - Google Patents

Detection method, system, equipment and storage medium of epidemic prevention suite based on image

Info

Publication number
CN113723308A
Authority
CN
China
Prior art keywords
image
epidemic prevention
pedestrian
head
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111015815.3A
Other languages
Chinese (zh)
Other versions
CN113723308B (en)
Inventor
谭黎敏
赵钊
龚霁程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Westwell Information Technology Co Ltd
Original Assignee
Shanghai Westwell Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Westwell Information Technology Co Ltd filed Critical Shanghai Westwell Information Technology Co Ltd
Priority to CN202111015815.3A priority Critical patent/CN113723308B/en
Publication of CN113723308A publication Critical patent/CN113723308A/en
Application granted granted Critical
Publication of CN113723308B publication Critical patent/CN113723308B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image-based detection method, system, equipment and storage medium for an epidemic prevention suite. The method comprises the following steps: inputting at least one video frame as an image sample into a first neural network that identifies pedestrian heads, obtaining a pedestrian head image corresponding to each pedestrian head in the image sample, and inputting each pedestrian head image into a second neural network that identifies at least one head epidemic prevention unit; generating a body search region extending downward in the image based on the position of each pedestrian head image; inputting the body search region of the video frame to be identified into a third neural network that identifies at least one body epidemic prevention unit; and judging, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image wears the complete epidemic prevention suite. Based on existing monitoring equipment, the invention enables accurate unmanned detection of epidemic prevention suites in scenes where multiple pedestrians overlap, greatly reducing the personnel cost of inspection and improving recognition accuracy.

Description

Detection method, system, equipment and storage medium of epidemic prevention suite based on image
Technical Field
The invention belongs to the field of visual monitoring, and particularly relates to a method, a system, equipment and a storage medium for detecting an epidemic prevention suite based on an image.
Background
Under existing epidemic prevention requirements in many public scenes, doctors or supervisors wearing protective equipment must be stationed at sensitive areas or access-control points to manually check whether each pedestrian is wearing the required items (mask, gloves, glasses and the like) as regulated. The labor cost of such checks is extremely high, which makes them difficult to popularize. Furthermore, conventional detection equipment (e.g., a video monitor) can only perform image detection for a single item (e.g., only detecting masks) and cannot simultaneously perform image detection for multiple worn items.
In an actual detection scene many pedestrians are walking at once, the proportion of each pedestrian in the image varies greatly with distance from the lens, and pedestrians may occlude parts of each other's bodies. Existing image detection methods are therefore restricted from obtaining accurate results: the amount of computation in the identification process is large while the identification accuracy is low.
Therefore, the invention provides a detection method, a system, equipment and a storage medium of an image-based epidemic prevention suite.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present invention and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide an image-based detection method, system, equipment and storage medium for an epidemic prevention suite that overcome the difficulties of the prior art: based on existing monitoring equipment, they enable accurate unmanned detection of epidemic prevention suites in scenes where multiple pedestrians overlap, greatly reducing the personnel cost of detection and improving the identification accuracy.
The embodiment of the invention provides a detection method of an epidemic prevention suite based on an image, which comprises the following steps:
respectively inputting at least one video frame as an image sample into a first neural network for identifying pedestrian heads, and obtaining a pedestrian head image corresponding to each pedestrian head in the image sample;
inputting the pedestrian head image into a second neural network for identifying at least one head epidemic prevention unit, and obtaining an output result of whether at least one preset head epidemic prevention unit is present;
generating a body search region extending downward in the image based on the position of each pedestrian head image;
inputting the body search region in the video frame to be identified into a third neural network for identifying at least one body epidemic prevention unit, and obtaining an output result of whether at least one preset body epidemic prevention unit is present in the body search region; and
judging, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image wears the complete epidemic prevention suite.
Preferably, the head epidemic prevention unit comprises at least one of a mask, glasses and a head cover.
Preferably, the generating of a body search region extending downward in the image based on the position of each pedestrian head image includes:
taking the position of each pedestrian head image as a reference, extending toward the ground direction in the image and simultaneously expanding toward both sides in the horizontal direction to form a closed body search region, and establishing a mapping relation between the body search region and the pedestrian head image.
Preferably, after the body search region is generated extending downward in the image based on the position of each pedestrian head image, and before the body search region in the video frame to be identified is input into the third neural network for identifying at least one body epidemic prevention unit to obtain whether the body search region contains at least one preset body epidemic prevention unit, the method further comprises the following steps:
performing pattern tracking on the position of the pedestrian head image of the current video frame in subsequent video frames, and generating the body search region in real time based on the position of the similar pattern of the pedestrian head image in the subsequent video frames.
Preferably, the performing pattern tracking on the position of the pedestrian head image of the current video frame in subsequent video frames, and generating the body search region in real time based on the position of the similar pattern of the pedestrian head image in the subsequent video frames, comprises:
for each pedestrian head, taking the historical video frame in which the pedestrian head image is first identified as the head frame of a sub-video paragraph and the current video frame as the end frame of the video paragraph, to form a video paragraph corresponding to each pedestrian head;
deleting, from the video paragraph of each pedestrian head, those video frames in which the distance between the head of another pedestrian and the body search region is smaller than or equal to a preset distance;
and taking the remaining video frames as the video frames to be identified in the subsequent steps.
Preferably, the body epidemic prevention unit comprises at least one of gloves and protective clothing.
Preferably, the image area occupied by the body search region is positively correlated with the image area occupied by the pedestrian head image.
Preferably, the determining, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image completely wears the epidemic prevention suite includes:
and judging whether the pedestrian wears the preset types of head epidemic prevention single products and body epidemic prevention single products at the same time, if so, wearing the complete epidemic prevention suite by the pedestrian, and if not, not wearing the complete epidemic prevention suite by the pedestrian, and executing corresponding operation.
Preferably, before the inputting of at least one video frame as an image sample into the first neural network for identifying pedestrian heads and the obtaining of a pedestrian head image corresponding to each pedestrian head in the image sample, the method further comprises:
training the first neural network by adopting image samples containing various types of pedestrian heads;
training the second neural network by adopting image samples in which no head epidemic prevention unit is worn, in which at least one head epidemic prevention unit is worn, and in which the complete set of head epidemic prevention units is worn;
and training the third neural network by adopting image samples in which no body epidemic prevention unit is worn, in which at least one body epidemic prevention unit is worn, and in which the complete set of body epidemic prevention units is worn.
The embodiment of the present invention further provides a detection system of an image-based epidemic prevention suite, which is used for implementing the detection method of the image-based epidemic prevention suite, and the detection system of the image-based epidemic prevention suite includes:
the first identification module is used for inputting at least one video frame as an image sample into a first neural network for identifying pedestrian heads, and obtaining a pedestrian head image corresponding to each pedestrian head in the image sample;
the second identification module is used for inputting the pedestrian head image into a second neural network for identifying at least one head epidemic prevention unit, and obtaining an output result of whether at least one preset head epidemic prevention unit is present;
the search area module generates a body search region extending downward in the image based on the position of each pedestrian head image;
the third identification module is used for inputting the body search region in the video frame to be identified into a third neural network for identifying at least one body epidemic prevention unit, and obtaining an output result of whether at least one preset body epidemic prevention unit is present in the body search region; and
the kit judging module is used for judging, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image wears the complete epidemic prevention suite.
The embodiment of the invention also provides detection equipment of an epidemic prevention suite based on images, which comprises:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the method of detecting an image-based epidemic prevention suite described above via execution of executable instructions.
Embodiments of the present invention also provide a computer-readable storage medium for storing a program, which when executed implements the steps of the detection method of the image-based epidemic prevention suite.
The detection method, system, equipment and storage medium of the image-based epidemic prevention suite of the invention overcome the difficulties of the prior art: based on existing monitoring equipment, they enable accurate unmanned detection of epidemic prevention suites in scenes where multiple pedestrians overlap, greatly reducing the personnel cost of detection and improving the identification accuracy.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method of detecting an image-based epidemic prevention suite of the present invention.
FIGS. 2 to 6 are schematic diagrams of the process steps of the detection method of the image-based epidemic prevention suite of the invention.
FIG. 7 is a schematic diagram of the structure of the detection system of the image-based epidemic prevention suite of the invention.
FIG. 8 is a schematic diagram of the structure of the detection equipment of the image-based epidemic prevention suite of the present invention; and
fig. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Reference numerals
1 video monitor
11 pedestrian A
111 pedestrian head image of pedestrian A
112 body search region of pedestrian A
12 pedestrian B
121 pedestrian head image of pedestrian B
122 body search region of pedestrian B
21 image sample
22 first neural network
23 pedestrian head image
24 body search region
25 second neural network
26 third neural network
27 judgment result information
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings so that those skilled in the art to which the present application pertains can easily carry out the present application. The present application may be embodied in many different forms and is not limited to the embodiments described herein.
Reference throughout this specification to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," or the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics shown may be combined in any suitable manner in any one or more embodiments or examples. Moreover, various embodiments or examples and features of different embodiments or examples presented in this application can be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the expressions of the present application, "plurality" means two or more unless specifically defined otherwise.
In order to clearly explain the present application, components that are not related to the description are omitted, and the same reference numerals are given to the same or similar components throughout the specification.
Throughout the specification, when a device is referred to as being "connected" to another device, this includes not only the case of being "directly connected" but also the case of being "indirectly connected" with another element interposed therebetween. In addition, when a device "includes" a certain component, unless otherwise stated, the device does not exclude other components, but may include other components.
When a device is said to be "on" another device, this may be directly on the other device, but may also be accompanied by other devices in between. When a device is said to be "directly on" another device, there are no other devices in between.
Although the terms first, second, etc. may be used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, the first interface and the second interface are so represented. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence, or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive or meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any one of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition will occur only when a combination of elements, functions, steps or operations are inherently mutually exclusive in some way.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" include plural forms as long as the words do not expressly indicate a contrary meaning. The term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of other features, regions, integers, steps, operations, elements, and/or components.
Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Terms defined in commonly used dictionaries are to be interpreted as having meanings consistent with their meanings in the related art documents and in the present disclosure, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined.
FIG. 1 is a flow chart of a method of detecting an image-based epidemic prevention suite of the present invention. As shown in FIG. 1, the detection method of the epidemic prevention suite based on the image comprises the following steps:
s110, inputting at least one video frame as an image sample into a first neural network for identifying the head of the pedestrian, and obtaining a pedestrian head image corresponding to each pedestrian head in the image sample.
S120, inputting the pedestrian head image into a second neural network for identifying at least one head epidemic prevention unit, and obtaining an output result of whether at least one preset head epidemic prevention unit is present.
S130, generating a body search region extending downward in the image based on the position of each pedestrian head image.
S140, performing pattern tracking on the position of the pedestrian head image of the current video frame in subsequent video frames, and generating the body search region in real time based on the position of the similar pattern of the pedestrian head image in the subsequent video frames. In this embodiment an existing tracking algorithm that tracks the position change of a local image across different video frames may be adopted, but the method is not limited thereto. Image tracking technology locates an object captured by a camera through some means (such as image recognition, infrared or ultrasonic sensing) and instructs the camera to track the object so that it always remains within the camera's field of view. In the narrow sense, "image tracking" performs tracking and photographing by means of image recognition: the image captured by the camera is directly subjected to image differentiation and clustering calculation (for example, the NCAST flexible image recognition tracking system), the position of the target object is recognized, and the camera is instructed to track the object. Because infrared and ultrasonic approaches are affected by the environment and require the target to wear special identification aids, they have gradually been replaced by image recognition in practical applications.
S150, inputting the body search region in the video frame to be identified into a third neural network for identifying at least one body epidemic prevention unit, and obtaining an output result of whether at least one preset body epidemic prevention unit is present in the body search region.
S160, judging, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image wears the complete epidemic prevention suite, but not limited thereto.
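For illustration only, the following Python sketch shows how steps S110 to S160 could be chained for a single frame. The object names and method signatures (head_net.detect_heads, the classify methods, crop) are assumptions introduced for the example and are not part of the disclosure; the helper body_search_region is sketched after the description of step S130 below.

```python
def crop(frame, box):
    """Crop an (x, y, w, h) box out of an image array."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]


def detect_epidemic_prevention_suite(frame, head_net, head_item_net, body_item_net,
                                     required_head_items, required_body_items):
    """Illustrative orchestration of steps S110-S160 for one video frame.

    head_net, head_item_net and body_item_net stand in for the first, second
    and third neural networks; their interfaces are assumptions.
    """
    results = {}
    # S110: the first neural network returns one head box per pedestrian.
    for pid, head_box in enumerate(head_net.detect_heads(frame)):
        # S120: the second neural network reports which head units are worn.
        head_items = set(head_item_net.classify(crop(frame, head_box)))
        # S130: derive the body search region below the head.
        body_box = body_search_region(head_box, frame.shape)
        # S150: the third neural network reports which body units are worn.
        body_items = set(body_item_net.classify(crop(frame, body_box)))
        # S160: the suite is complete only if every preset unit is present.
        # (S140, frame selection by tracking and occlusion, is applied upstream
        # when choosing which frames to pass into this function.)
        complete = (required_head_items <= head_items and
                    required_body_items <= body_items)
        results[pid] = {"head_items": head_items,
                        "body_items": body_items,
                        "suite_complete": complete}
    return results
```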
In a preferred embodiment, in step S120 the head epidemic prevention unit includes at least one of a mask, glasses and a head cover, but is not limited thereto.
In a preferred embodiment, step S130 includes: taking the position of each pedestrian head image as a reference, extending toward the ground direction in the image and simultaneously expanding toward both sides in the horizontal direction to form a closed body search region, and establishing a mapping relation between the body search region and the pedestrian head image, but the step is not limited thereto.
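A minimal sketch of one way such a body search region could be derived from a head box follows, assuming (x, y, w, h) boxes with the origin at the top-left of the image; the horizontal and vertical scale factors are illustrative assumptions, chosen so that the region's area grows with the head area as noted in the preferred embodiment further below.

```python
def body_search_region(head_box, frame_shape, width_scale=3.0, height_scale=4.0):
    """Derive a closed body search region from a pedestrian head box (x, y, w, h).

    Illustrative only: the region starts just below the head, extends toward
    the ground in the image and expands to both sides horizontally, so larger
    (nearer) heads yield larger regions. The scale factors are assumptions,
    not values taken from the patent.
    """
    frame_h, frame_w = frame_shape[:2]
    x, y, w, h = head_box
    cx = x + w / 2.0
    body_w = w * width_scale
    body_h = h * height_scale
    left = max(0, int(cx - body_w / 2))
    right = min(frame_w, int(cx + body_w / 2))
    top = min(frame_h, y + h)              # start just below the head
    bottom = min(frame_h, int(top + body_h))
    return left, top, right - left, bottom - top
```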
In a preferred embodiment, step S140 includes:
and S141, corresponding to each pedestrian head, firstly identifying a historical video frame of the image of the pedestrian head as a head frame of a sub-video paragraph, and using a current video frame as a tail frame of a video paragraph to form a video paragraph corresponding to each pedestrian head.
And S142, deleting video frames in the video paragraph of each pedestrian head, wherein the distance between the heads of other pedestrians and the body search area in the video paragraph is smaller than or equal to the preset distance.
S143, the remaining video frames are used as the video frames to be identified in step S150, but not limited thereto. So that the advantage of video continuity can be utilized to find out the video frame that the body of the pedestrian is not shielded by other pedestrians, thereby shooting a complete picture and improving the judgment accuracy.
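For illustration, the sketch below filters a video paragraph in the manner of S141 to S143; the per-frame data layout and the distance measure between boxes are assumptions made for the example, not requirements of the method.

```python
def box_distance(box_a, box_b):
    """Smallest gap between two axis-aligned (x, y, w, h) boxes; 0 if they overlap."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    dx = max(ax - (bx + bw), bx - (ax + aw), 0)
    dy = max(ay - (by + bh), by - (ay + ah), 0)
    return (dx ** 2 + dy ** 2) ** 0.5


def frames_to_identify(video_paragraph, target_id, preset_distance):
    """Drop frames in which another pedestrian's head comes within
    preset_distance of the target pedestrian's body search region.

    video_paragraph is assumed to be a list of per-frame dicts mapping a
    pedestrian id to {"head_box": ..., "body_box": ...}; this layout is an
    illustrative assumption, not part of the patent.
    """
    kept = []
    for frame_info in video_paragraph:
        target_body = frame_info[target_id]["body_box"]
        occluded = any(
            box_distance(other["head_box"], target_body) <= preset_distance
            for pid, other in frame_info.items()
            if pid != target_id
        )
        if not occluded:
            kept.append(frame_info)
    return kept
```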
In a preferred embodiment, in step S150 the body epidemic prevention unit includes at least one of gloves and protective clothing, but is not limited thereto.
In a preferred embodiment, the image area occupied by the body search region is positively correlated with the image area occupied by the pedestrian head image, but is not limited thereto. This keeps the convolution area for pattern recognition reasonable and reduces the amount of computation.
In a preferred embodiment, step S160 includes: judging whether the pedestrian simultaneously wears all of the preset types of head epidemic prevention units and body epidemic prevention units; if so, the pedestrian wears the complete epidemic prevention suite; if not, the pedestrian does not wear the complete epidemic prevention suite and a corresponding operation is executed, but the step is not limited thereto.
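A minimal sketch of this decision step is given below; naming the missing units makes it easy to drive the corresponding operation, such as the audio prompt described in the embodiment further below. The function name and return format are illustrative assumptions.

```python
def suite_verdict(head_items, body_items, required_head_items, required_body_items):
    """Report whether the epidemic prevention suite is complete and which units are missing."""
    missing = ((set(required_head_items) - set(head_items)) |
               (set(required_body_items) - set(body_items)))
    return {"suite_complete": not missing, "missing_items": sorted(missing)}


# Example: a pedestrian wearing only glasses, with no gloves detected:
# suite_verdict({"glasses"}, set(), {"glasses", "mask"}, {"gloves"})
# -> {"suite_complete": False, "missing_items": ["gloves", "mask"]}
```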
In a preferred embodiment, the method further includes, before step S110:
training the first neural network using image samples containing various types of pedestrian heads;
training the second neural network using image samples in which no head epidemic prevention unit is worn, in which at least one head epidemic prevention unit is worn, and in which the complete set of head epidemic prevention units is worn;
and training the third neural network using image samples in which no body epidemic prevention unit is worn, in which at least one body epidemic prevention unit is worn, and in which the complete set of body epidemic prevention units is worn, but not limited thereto. In the present invention, the neural networks for the different recognition objects may be trained based on various existing or future neural network training methods, which are not described in detail here.
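Purely as an illustration of what such training could look like, the sketch below builds and trains a multi-label classifier in PyTorch, with one output per preset epidemic prevention unit; the backbone, loss and hyperparameters are assumptions, since the patent does not prescribe a network architecture or training method.

```python
import torch
import torch.nn as nn
from torchvision import models


def build_item_classifier(num_items: int) -> nn.Module:
    """One possible backbone for the second/third networks: a ResNet-18 with a
    multi-label head, one logit per preset epidemic prevention unit. The choice
    of backbone is an assumption."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_items)
    return net


def train_item_classifier(net, loader, epochs=10, lr=1e-4):
    """Train on image samples showing no unit, some units, and the complete set,
    labelled per unit as 0/1 flags (multi-label classification)."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    net.train()
    for _ in range(epochs):
        for images, item_labels in loader:  # item_labels: float tensor of 0/1 flags
            optimizer.zero_grad()
            loss = criterion(net(images), item_labels)
            loss.backward()
            optimizer.step()
    return net
```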
FIGS. 2 to 6 are schematic diagrams of the process steps of the detection method of the image-based epidemic prevention suite of the invention. As shown in figs. 2 to 6, the first neural network 22 is trained in advance using image samples containing various types of pedestrian heads. The second neural network 25 is trained using image samples in which no head epidemic prevention unit is worn, in which at least one head epidemic prevention unit is worn, and in which the complete set of head epidemic prevention units is worn. The third neural network 26 is trained using image samples in which no body epidemic prevention unit is worn, in which at least one body epidemic prevention unit is worn, and in which the complete set of body epidemic prevention units is worn, but not limited thereto.
Referring to fig. 2, a corridor is monitored by the video monitor 1 to check whether the pedestrians passing through it (pedestrian A and pedestrian B in the present embodiment) completely wear all the epidemic prevention units required for epidemic prevention.
Referring to figs. 3 and 4, in the present embodiment the preset epidemic prevention suite requires everyone to wear glasses, a mask and gloves (a three-piece set).
By inputting the video frame V4 captured in real time by the video monitor 1 as the image sample 21 into the first neural network 22 for identifying pedestrian heads, the pedestrian head image 111 corresponding to the head of pedestrian A and the pedestrian head image 121 corresponding to the head of pedestrian B in the image sample are obtained.
The pedestrian head image 111 and the pedestrian head image 121 are respectively input into the second neural network 25 for identifying at least one head epidemic prevention unit, and whether at least one preset head epidemic prevention unit is present is obtained: glasses and a mask are identified in the pedestrian head image 111, while only glasses are identified in the pedestrian head image 121.
Taking the position of the pedestrian head image 111 in the image sample 21 of the video frame V4 as a reference, the region extends toward the ground direction in the image and expands toward both sides in the horizontal direction to form a closed body search region 112, and a mapping relation between the body search region 112 and the pedestrian head image 111 is established. Similarly, taking the position of the pedestrian head image 121 in the image sample 21 of the video frame V4 as a reference, a closed body search region 122 is formed and a mapping relation between the body search region 122 and the pedestrian head image 121 is established.
The body search region 112 in the video frame V4 is input into the third neural network 26 for identifying at least one body epidemic prevention unit (including gloves), and the output result of whether at least one preset body epidemic prevention unit is present is obtained: only one glove is identified in the body search region 112. Similarly, the body search region 122 in the video frame V4 is input into the third neural network 26, and two gloves are identified in the body search region 122. It can be seen that at this moment the body of pedestrian B partially blocks pedestrian A, so the video monitor 1 cannot capture the whole body of pedestrian A. To overcome this drawback, the invention further includes a step of determining whether the target pedestrian is blocked by other pedestrians:
referring to fig. 5 and 6, pattern tracking is performed based on the position of the pedestrian head image of the current video frame in the subsequent video frame, and the body search region 112 is generated in real time based on the position of the similar pattern of the pedestrian head image 111 in the subsequent video frame, and an existing tracking algorithm for tracking based on the position change of the local image in different video frames of the video can be adopted.
For each pedestrian head, the historical video frame in which the pedestrian head image was first identified is taken as the head frame of a sub-video paragraph, and the current video frame is taken as the tail frame, forming a video paragraph corresponding to each pedestrian head. In this embodiment the video paragraph consists of the video frames V1, V2, V3, V4, V5, V6, V7 and V8. Video frames in which the distance between the head of another pedestrian (the pedestrian head image of pedestrian B) and the body search region (the body search region of pedestrian A) is less than or equal to a preset distance are deleted from the video paragraph of each pedestrian head. Frames V3, V4 and V5 all lead to incomplete recognition of pedestrian A because pedestrian B blocks pedestrian A, so these three frames are removed, and the remaining video frames V1, V2, V6, V7 and V8 are used as the video frames to be identified in the subsequent steps.
The body search region 112 of the video frame to be identified (taking the video frame V6 as an example) is input into the third neural network 26 for identifying at least one body epidemic prevention unit, and whether at least one preset body epidemic prevention unit is present is obtained: gloves are identified in the body search region 112 of pedestrian A, and no gloves are identified in the body search region 122 of pedestrian B.
Whether pedestrians A and B completely wear the epidemic prevention suite is judged according to the output results of the second neural network 25 and the third neural network 26, and the judgment result information 27 is obtained: pedestrian A wears the complete epidemic prevention suite, while pedestrian B does not. A corresponding operation is then executed: the video monitor 1 plays, through its loudspeaker, the preset audio corresponding to pedestrian B not completely wearing the epidemic prevention suite, namely "Please wear a mask and gloves".
In a variation, the method further comprises adding a face recognition device so as to obtain the name of a pedestrian who is not wearing an epidemic prevention unit such as a mask, and sending a prompt short message to the pedestrian's mobile phone based on a preset mapping table, or playing a prompt voice containing the name, such as "XXX (pedestrian B), please wear a mask and gloves".
The image-based detection method for the epidemic prevention suite overcomes the difficulties of the prior art: based on existing monitoring equipment, it enables accurate unmanned detection of epidemic prevention suites in scenes where multiple pedestrians overlap, greatly reducing the personnel cost of detection and improving the identification accuracy.
Fig. 7 is a schematic diagram of the structure of the detection system of the image-based epidemic prevention suite of the invention. As shown in fig. 7, an embodiment of the present invention further provides an image-based epidemic prevention suite detection system 5, which is configured to implement the above-mentioned image-based epidemic prevention suite detection method, and includes:
the first identifying module 51 is configured to input at least one video frame as an image sample into a first neural network for identifying a pedestrian head, and obtain a pedestrian head image corresponding to each pedestrian head in the image sample.
The second identification module 52 inputs the pedestrian head image into a second neural network for identifying at least one head epidemic prevention unit, and obtains whether at least one preset head epidemic prevention unit is present.
The search region module 53 generates a body search region extending downward in the image based on the position of each pedestrian head image.
The pattern tracking module 54 performs pattern tracking on the position of the pedestrian head image of the current video frame in subsequent video frames, and generates the body search region in real time based on the position of the similar pattern of the pedestrian head image in the subsequent video frames.
The third identification module 55 inputs the body search region in the video frame to be identified into a third neural network for identifying at least one body epidemic prevention unit, and obtains whether at least one preset body epidemic prevention unit is present in the body search region.
The kit judging module 56 judges, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image wears the complete epidemic prevention suite.
The detection system of the image-based epidemic prevention suite overcomes the difficulties of the prior art: based on existing monitoring equipment, it enables accurate unmanned detection of epidemic prevention suites in scenes where multiple pedestrians overlap, greatly reducing the personnel cost of detection and improving the identification accuracy.
In a preferred embodiment, the search region module 53 takes the position of each pedestrian head image as a reference, extends toward the ground direction in the image and simultaneously expands toward both sides in the horizontal direction to form a closed body search region, and establishes a mapping relation between the body search region and the pedestrian head image, but is not limited thereto.
In a preferred embodiment, the pattern tracking module 54: for each pedestrian head, takes the historical video frame in which the pedestrian head image was first identified as the head frame of a sub-video paragraph and the current video frame as the tail frame of the video paragraph, forming a video paragraph corresponding to each pedestrian head; deletes, from the video paragraph of each pedestrian head, those video frames in which the distance between the head of another pedestrian and the body search region is less than or equal to the preset distance; and takes the remaining video frames as the video frames to be identified by the third identification module 55, but is not limited thereto.
In a preferred embodiment, the head epidemic prevention unit comprises at least one of a mask, glasses and a head cover, but is not limited thereto.
In a preferred embodiment, the body epidemic prevention unit comprises at least one of gloves and protective clothing, but is not limited thereto.
In a preferred embodiment, the image area occupied by the body search region is positively correlated with the image area occupied by the pedestrian head image, but is not limited thereto.
In a preferred embodiment, the kit judging module 56 judges whether the pedestrian simultaneously wears all of the preset types of head epidemic prevention units and body epidemic prevention units; if so, the pedestrian wears the complete epidemic prevention suite; if not, the pedestrian does not wear the complete epidemic prevention suite and a corresponding operation is executed, but the module is not limited thereto.
In a preferred embodiment, the system further comprises a neural network module 50 which trains the first neural network using image samples containing various types of pedestrian heads; trains the second neural network using image samples in which no head epidemic prevention unit is worn, in which at least one head epidemic prevention unit is worn, and in which the complete set of head epidemic prevention units is worn; and trains the third neural network using image samples in which no body epidemic prevention unit is worn, in which at least one body epidemic prevention unit is worn, and in which the complete set of body epidemic prevention units is worn, but is not limited thereto.
The embodiment of the invention also provides detection equipment of the epidemic prevention suite based on the image, which comprises a processor. A memory having stored therein executable instructions of the processor. Wherein the processor is configured to perform the steps of the method of detecting an image-based epidemic prevention suite via execution of the executable instructions.
As described above, the detection device of the epidemic prevention suite based on the image overcomes the difficulties in the prior art, can accurately identify the unmanned detection of the epidemic prevention suite under the scene of interweaving a plurality of pedestrians based on the existing monitoring device, greatly reduces the personnel cost of the detection, and improves the identification accuracy.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module" or "platform."
Fig. 8 is a schematic structural view of the inspection apparatus of the image-based epidemic prevention kit of the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 600 shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
The storage unit stores program code executable by the processing unit 610, so that the processing unit 610 performs the steps according to various exemplary embodiments of the present invention described in the above detection method section of this specification. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiment of the invention also provides a computer-readable storage medium for storing a program, and the steps of the detection method of the image-based epidemic prevention suite are implemented when the program is executed. In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present invention described in the above detection method section of this specification, when the program product is run on the terminal device.
As shown above, when the program of the computer-readable storage medium of this embodiment is executed, the difficulty in the prior art is overcome, and the unmanned detection of the epidemic prevention suite in the scene where multiple pedestrians are interlaced can be accurately identified based on the existing monitoring device, so that the personnel cost for detection is greatly reduced, and the identification accuracy is improved.
Fig. 9 is a schematic structural diagram of a computer-readable storage medium of the present invention. Referring to fig. 9, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In summary, the detection method, system, equipment and storage medium of the image-based epidemic prevention suite overcome the difficulties of the prior art: based on existing monitoring equipment, they enable accurate unmanned detection of epidemic prevention suites in scenes where multiple pedestrians overlap, greatly reducing the personnel cost of detection and improving the identification accuracy.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (12)

1. The detection method of the epidemic prevention suite based on the image is characterized by comprising the following steps:
respectively inputting at least one video frame as an image sample into a first neural network for identifying pedestrian heads, and obtaining a pedestrian head image corresponding to each pedestrian head in the image sample;
inputting the pedestrian head image into a second neural network for identifying at least one head epidemic prevention unit, and obtaining an output result of whether at least one preset head epidemic prevention unit is present;
generating a body search region extending downward in the image based on the position of each pedestrian head image;
inputting the body search region in the video frame to be identified into a third neural network for identifying at least one body epidemic prevention unit, and obtaining an output result of whether at least one preset body epidemic prevention unit is present in the body search region; and
judging, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image wears the complete epidemic prevention suite.
2. The method of claim 1, wherein the head epidemic prevention unit comprises at least one of a mask, glasses and a head cover.
3. The method for detecting an image-based epidemic prevention suite according to claim 1, wherein the generating of a body search region extending downward in the image based on the position of each pedestrian head image comprises:
taking the position of each pedestrian head image as a reference, extending toward the ground direction in the image and simultaneously expanding toward both sides in the horizontal direction to form a closed body search region, and establishing a mapping relation between the body search region and the pedestrian head image.
4. The method for detecting an image-based epidemic prevention suite according to claim 1, wherein, after the body search region is generated extending downward in the image based on the position of each pedestrian head image, and before the body search region in the video frame to be identified is input into the third neural network for identifying at least one body epidemic prevention unit to obtain whether the body search region contains at least one preset body epidemic prevention unit, the method further comprises the following steps:
performing pattern tracking on the position of the pedestrian head image of the current video frame in subsequent video frames, and generating the body search region in real time based on the position of the similar pattern of the pedestrian head image in the subsequent video frames.
5. The method of claim 4, wherein the performing pattern tracking on the position of the pedestrian head image of the current video frame in subsequent video frames, and generating the body search region in real time based on the position of the similar pattern of the pedestrian head image in the subsequent video frames, comprises:
for each pedestrian head, taking the historical video frame in which the pedestrian head image is first identified as the head frame of a sub-video paragraph and the current video frame as the end frame of the video paragraph, to form a video paragraph corresponding to each pedestrian head;
deleting, from the video paragraph of each pedestrian head, those video frames in which the distance between the head of another pedestrian and the body search region is smaller than or equal to a preset distance;
and taking the remaining video frames as the video frames to be identified in the subsequent steps.
6. The method of claim 1, wherein the body epidemic prevention unit comprises at least one of a glove and a protective garment.
7. The method of claim 1, wherein the image area occupied by the body search area is positively correlated to the image area occupied by the pedestrian head image.
8. The method for detecting an image-based epidemic prevention suite according to claim 1, wherein the judging, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image wears the complete epidemic prevention suite comprises:
judging whether the pedestrian simultaneously wears all of the preset types of head epidemic prevention units and body epidemic prevention units; if so, the pedestrian wears the complete epidemic prevention suite; if not, the pedestrian does not wear the complete epidemic prevention suite and a corresponding operation is executed.
9. The method for detecting an image-based epidemic prevention suite according to claim 1, wherein, before the inputting of at least one video frame as an image sample into the first neural network for identifying pedestrian heads and the obtaining of a pedestrian head image corresponding to each pedestrian head in the image sample, the method further comprises:
training the first neural network by adopting image samples containing various types of pedestrian heads;
training the second neural network by adopting image samples in which no head epidemic prevention unit is worn, in which at least one head epidemic prevention unit is worn, and in which the complete set of head epidemic prevention units is worn;
and training the third neural network by adopting image samples in which no body epidemic prevention unit is worn, in which at least one body epidemic prevention unit is worn, and in which the complete set of body epidemic prevention units is worn.
10. A detection system for an image-based epidemic prevention suite, used for implementing the detection method for an image-based epidemic prevention suite according to claim 1, the system comprising:
a first identification module, configured to input at least one video frame as an image sample into a first neural network for identifying pedestrian heads, to obtain a pedestrian head image corresponding to each pedestrian head in the image sample;
a second identification module, configured to input each pedestrian head image into a second neural network for identifying at least one head epidemic prevention single product, to obtain an output result indicating whether at least one preset head epidemic prevention single product is present;
a search area module, configured to generate a body search area extending towards the lower part of the image based on the position of each pedestrian head image;
a third identification module, configured to input the body search area in the video frame to be identified into a third neural network for identifying at least one body epidemic prevention single product, to obtain an output result indicating whether at least one preset body epidemic prevention single product is present in the body search area; and
a kit judging module, configured to judge, according to the output results of the second neural network and the third neural network, whether the pedestrian corresponding to the pedestrian head image is wearing the complete epidemic prevention suite.
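Read as software, the five modules of claim 10 form a per-head pipeline for every frame. A minimal sketch of how they could be wired together, with all module interfaces assumed (each callable stands in for one claimed module, not for the patented implementation):

```python
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]


class SuiteDetectionSystem:
    """Wires the five claimed modules into a per-frame, per-head pipeline."""

    def __init__(self,
                 detect_heads: Callable,          # first identification module
                 classify_head_items: Callable,   # second identification module
                 make_search_area: Callable,      # search area module
                 classify_body_items: Callable,   # third identification module
                 judge_suite: Callable):          # kit judging module
        self.detect_heads = detect_heads
        self.classify_head_items = classify_head_items
        self.make_search_area = make_search_area
        self.classify_body_items = classify_body_items
        self.judge_suite = judge_suite

    def process_frame(self, frame) -> List[bool]:
        """Return one complete/incomplete verdict per pedestrian head found in the frame."""
        results = []
        for head_box in self.detect_heads(frame):
            head_items = self.classify_head_items(frame, head_box)
            area = self.make_search_area(head_box)
            body_items = self.classify_body_items(frame, area)
            results.append(self.judge_suite(head_items, body_items))
        return results
```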
11. An image-based epidemic prevention suite detection apparatus, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform, by executing the executable instructions, the steps of the method for detecting an image-based epidemic prevention suite according to any one of claims 1 to 9.
12. A computer-readable storage medium storing a program which, when executed by a processor, implements the steps of the method for detecting an image-based epidemic prevention suite according to any one of claims 1 to 9.
CN202111015815.3A 2021-08-31 2021-08-31 Image-based epidemic prevention kit detection method, system, equipment and storage medium Active CN113723308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111015815.3A CN113723308B (en) 2021-08-31 2021-08-31 Image-based epidemic prevention kit detection method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111015815.3A CN113723308B (en) 2021-08-31 2021-08-31 Image-based epidemic prevention kit detection method, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113723308A true CN113723308A (en) 2021-11-30
CN113723308B CN113723308B (en) 2023-08-22

Family

ID=78680009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111015815.3A Active CN113723308B (en) 2021-08-31 2021-08-31 Image-based epidemic prevention kit detection method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113723308B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110007950A1 (en) * 2009-07-11 2011-01-13 Richard Deutsch System and method for monitoring protective garments
US20130282609A1 (en) * 2012-04-20 2013-10-24 Honeywell International Inc. Image recognition for personal protective equipment compliance enforcement in work areas
US20200184278A1 (en) * 2014-03-18 2020-06-11 Z Advanced Computing, Inc. System and Method for Extremely Efficient Image and Pattern Recognition and Artificial Intelligence Platform
CN106407911A (en) * 2016-08-31 2017-02-15 乐视控股(北京)有限公司 Image-based eyeglass recognition method and device
KR101817104B1 (en) * 2017-09-14 2018-01-11 (주)우정비에스씨 Automatic Undressing System For Protective Clothing
CN111539338A (en) * 2020-04-26 2020-08-14 深圳前海微众银行股份有限公司 Pedestrian mask wearing control method, device, equipment and computer storage medium
CN111553266A (en) * 2020-04-27 2020-08-18 杭州宇泛智能科技有限公司 Identification verification method and device and electronic equipment
CN111582183A (en) * 2020-05-11 2020-08-25 广州中科智巡科技有限公司 Mask identification method and system in public place
CN111931623A (en) * 2020-07-31 2020-11-13 南京工程学院 Face mask wearing detection method based on deep learning
CN112183471A (en) * 2020-10-28 2021-01-05 西安交通大学 Automatic detection method and system for standard wearing of epidemic prevention mask of field personnel
CN112287827A (en) * 2020-10-29 2021-01-29 南通中铁华宇电气有限公司 Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole
KR102268065B1 (en) * 2020-11-09 2021-06-24 한국씨텍(주) Visitor management device for prevention of infection
CN112597867A (en) * 2020-12-17 2021-04-02 佛山科学技术学院 Face recognition method and system for mask, computer equipment and storage medium
CN113283296A (en) * 2021-04-20 2021-08-20 晋城鸿智纳米光机电研究院有限公司 Helmet wearing detection method, electronic device and storage medium
AU2021103468A4 (en) * 2021-06-18 2021-08-12 Kiran Kumar Chandriah Smart Bus with AI Based Face Mask Detection System in Pandemic Situations Using Raspberry PI

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG, XIN: "Tiandy Technology assists epidemic prevention and control", China Public Security, no. 03 *
NIU, ZUODONG; QIN, TAO; LI, HANDONG; CHEN, JINJUN: "Mask-wearing detection algorithm for natural scenes based on improved RetinaFace", Computer Engineering and Applications, no. 12 *

Also Published As

Publication number Publication date
CN113723308B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
US10417503B2 (en) Image processing apparatus and image processing method
CN110705405B (en) Target labeling method and device
JP7040463B2 (en) Analysis server, monitoring system, monitoring method and program
KR101972918B1 (en) Apparatus and method for masking a video
CN109657533A (en) Pedestrian recognition methods and Related product again
US20190332856A1 (en) Person's behavior monitoring device and person's behavior monitoring system
US11048917B2 (en) Method, electronic device, and computer readable medium for image identification
WO2020006964A1 (en) Image detection method and device
CN110610127B (en) Face recognition method and device, storage medium and electronic equipment
CN112149615B (en) Face living body detection method, device, medium and electronic equipment
CN109544870B (en) Alarm judgment method for intelligent monitoring system and intelligent monitoring system
US11551407B1 (en) System and method to convert two-dimensional video into three-dimensional extended reality content
CN110889314A (en) Image processing method, device, electronic equipment, server and system
US20220300774A1 (en) Methods, apparatuses, devices and storage media for detecting correlated objects involved in image
JP6405606B2 (en) Image processing apparatus, image processing method, and image processing program
CN111027434B (en) Training method and device of pedestrian recognition model and electronic equipment
CN113723308B (en) Image-based epidemic prevention kit detection method, system, equipment and storage medium
CN110728249A (en) Cross-camera identification method, device and system for target pedestrian
CN114663796A (en) Target person continuous tracking method, device and system
CN113792569B (en) Object recognition method, device, electronic equipment and readable medium
CN113657219A (en) Video object detection tracking method and device and computing equipment
KR20210001438A (en) Method and device for indexing faces included in video
CN113780213B (en) Method, system, equipment and storage medium for pedestrian recognition based on monitoring
CN115546737B (en) Machine room monitoring method
CN116884078B (en) Image pickup apparatus control method, monitoring device, and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050
Applicant after: Shanghai Xijing Technology Co.,Ltd.
Address before: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050
Applicant before: SHANGHAI WESTWELL INFORMATION AND TECHNOLOGY Co.,Ltd.
GR01 Patent grant