CN113743248A - Identity information extraction method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN113743248A
Authority
CN
China
Prior art keywords
target object
monitoring
identity information
distance
monitoring scene
Prior art date
Legal status
Pending
Application number
CN202110937391.XA
Other languages
Chinese (zh)
Inventor
唐晨
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110937391.XA
Publication of CN113743248A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an identity information extraction method and device, an electronic device, and a storage medium. The monitoring distance between a target object and other objects in a monitoring scene is acquired in real time from video data of the scene. When a monitoring distance is smaller than a preset safety distance, images of the objects within that distance of the target object are captured as contact object images. When the target object is detected to fall, the identity information of each contact object is matched from a corresponding information base according to the contact object images. By computing the distances between the other objects and the target object and screening the other objects on that basis, the method narrows the scope of identity information extraction, thereby improving the efficiency of extracting the identity information of people who may have contacted a person who falls in the monitoring scene.

Description

Identity information extraction method, device, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an identity information extraction method, an identity information extraction device, an electronic device, and a storage medium.
Background
In public places, when a member of a vulnerable group such as the elderly, children, or the disabled falls, directly extracting the identity information of the people who contacted the fallen person from the video data collected by on-site monitoring equipment is inefficient: the collected data is huge, contains much interference, and has not been filtered or screened in a targeted way. As a result, evidence for determining responsibility cannot be extracted in time to prove that a rescuer is not at fault, which discourages bystanders from helping vulnerable people who fall.
In the related art, no effective solution has yet been proposed for the low efficiency of extracting the identity information of people who may have contacted a member of a specific group who falls in a public place.
Disclosure of Invention
This embodiment provides an identity information extraction method, an identity information extraction device, an electronic device, and a storage medium, to solve the problem in the related art that, when a fall event occurs in a specific scene, the identity information of people who may have contacted the fallen person is extracted with low efficiency.
In a first aspect, in this embodiment, an identity information extraction method is provided, including:
acquiring monitoring distances between a target object and other objects in a monitoring scene in real time based on video data in the monitoring scene;
under the condition that the monitoring distance is smaller than a preset safety distance, capturing an image of an object, of which the monitoring distance with the target object is smaller than the safety distance, in the other objects to obtain a contact object image of the target object;
and matching the identity information of the contact object from a corresponding information base according to all the contact object images acquired before the target object falls under the condition that the target object is detected to fall.
In some embodiments, each contact object image is marked with a time point indicating when the contact object image was captured, and the matching, in the case that the target object is detected to fall, of the identity information of the contact object from a corresponding information base according to all the contact object images acquired before the target object falls includes:
under the condition that the target object is detected to fall, acquiring the contact object images whose marked time points are within a preset time interval before the target object falls;
and matching the identity information of the contact object from the corresponding information base according to the contact object image in the preset time interval.
In some embodiments, the obtaining, in real time, a monitoring distance between a target object and another object in a monitoring scene based on video data in the monitoring scene includes:
counting the crowd density of the target object in the monitoring scene within a preset range under the condition that the target object is detected to exist in the monitoring scene based on the video data in the monitoring scene;
and under the condition that the crowd density is greater than a preset density threshold value, acquiring the monitoring distance between the target object in the monitoring scene and other objects in the preset range in real time.
In some embodiments, the obtaining, in real time, a monitoring distance between a target object and another object in a monitoring scene based on video data in the monitoring scene further includes:
and tracking the target object in real time based on the video data in the monitoring scene, and determining the monitoring distance between the target object and the other objects based on a tracking result.
In some of these embodiments, the method further comprises:
and determining the monitoring distance between the target object and the other objects according to the radar data in the monitoring scene under the condition that the tracking of the target object is interrupted based on the video data in the monitoring scene.
In some embodiments, before acquiring, in real time, a monitoring distance between a target object and another object in a monitoring scene based on video data in the monitoring scene, the method further includes:
and identifying the target object in the monitoring scene based on the video data by using a preset target detection model.
In some embodiments, in the case that it is detected that the target object falls, matching the identity information of the contact object from a corresponding information base according to all the contact object images acquired before the target object falls further includes:
if the contact object is a pedestrian, matching the identity information of the contact object from a face information base;
and if the contact object is a vehicle, matching the identity information of the contact object from the license plate information base.
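The routing between information bases described above can be sketched as a simple dispatch; the contact record layout and the lookup callables are illustrative assumptions standing in for the face and license-plate information-base queries.

```python
def match_contact_identity(contact, face_db_lookup, plate_db_lookup):
    """Route a contact object to the matching information base:
    pedestrians to the face base, vehicles to the license-plate base.
    `contact` is a dict with a 'type' field and an 'image' (assumed
    representation); the lookups stand in for real database queries."""
    if contact["type"] == "pedestrian":
        return face_db_lookup(contact["image"])
    if contact["type"] == "vehicle":
        return plate_db_lookup(contact["image"])
    return None
```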
In a second aspect, in this embodiment, an identity information extraction apparatus is provided for determining responsibility for a fall of a target object in a monitoring scene. The apparatus comprises a first acquisition module, a second acquisition module, and a matching module, wherein:
the first acquisition module is used for acquiring the monitoring distance between a target object and other objects in a monitoring scene in real time based on video data in the monitoring scene;
the second obtaining module is configured to, when the monitoring distance is smaller than a preset safety distance, capture an image of an object, of which the monitoring distance to the target object is smaller than the safety distance, among the other objects, to obtain a contact object image of the target object;
the matching module is used for matching the identity information of the contact object from a corresponding information base according to all the contact object images acquired before the target object falls under the condition that the target object is detected to fall.
In a third aspect, in this embodiment, there is provided an electronic apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the identity information extraction method of the first aspect is implemented.
In a fourth aspect, in this embodiment, there is provided a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the identity information extraction method according to the first aspect when executing the computer program.
In a fifth aspect, in the present embodiment, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the identity information extraction method described in the first aspect above.
With the identity information extraction method and device, electronic device, and storage medium above, the monitoring distance between a target object and other objects in a monitoring scene is acquired in real time from video data of the scene; when a monitoring distance is smaller than a preset safety distance, images of the objects within that distance of the target object are captured as contact object images; and when the target object is detected to fall, the identity information of each contact object is matched from a corresponding information base according to all the contact object images acquired before the fall. By computing the distances between the other objects and the target object and screening the other objects on that basis, the scope of identity information extraction is narrowed, thereby improving the efficiency of extracting the identity information of people who may have contacted a person who falls in the monitoring scene.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a terminal for the identity information extraction method of the present embodiment;
fig. 2 is a flowchart of an identity information extraction method according to the present embodiment;
fig. 3 is a flowchart of another identity information extraction method according to the present embodiment;
fig. 4 is a block diagram of the configuration of the identity information extraction apparatus of the present embodiment.
Detailed Description
For a clearer understanding of the objects, aspects and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a" and "an" and "the" and similar referents in the context of this application do not denote a limitation of quantity, either in the singular or the plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or a similar computing device. For example, the method is executed on a terminal, and fig. 1 is a block diagram of a hardware structure of the terminal according to the identity information extraction method of the embodiment. As shown in fig. 1, the terminal may include one or more processors 102 (only one shown in fig. 1) and a memory 104 for storing data, wherein the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the terminal described above. For example, the terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and modules of application software, such as the computer program corresponding to the identity information extraction method in the present embodiment; the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, thereby implementing the method described above. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network described above includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, an identity information extraction method is provided, and fig. 2 is a flowchart of the identity information extraction method of this embodiment, as shown in fig. 2, the flowchart includes the following steps:
step S210, acquiring a monitoring distance between the target object and another object in the monitoring scene in real time based on the video data in the monitoring scene.
The video data in the monitoring scene may be real-time video data collected by the monitoring device in a visible area of the monitoring device. The monitoring scene may be a public scene, such as an outdoor scene of a square, a street, and a park, or an indoor scene of a business hall, a hotel lobby, a shopping mall, and the like. The monitoring scenario may be other scenarios, and is not specifically limited herein. The target objects in the monitoring scene can be the old, the children and the disabled, and can also be specific target people set according to the requirements of the actual application scene. Other objects in the monitoring scene may be moving objects such as pedestrians and vehicles, which appear in the visible area of the monitoring device together with the target object. The monitoring distance may be an actual distance in the monitoring scene between the target object and another object calculated according to the video data.
In an actual application scenario, when a target object falls due to an external force, a distance between a contact object causing the target object to fall and the target object is smaller than distances between other objects not in contact with the target object in the scene. Therefore, the monitoring distance between other objects in the monitoring scene and the target object can be acquired in real time, so that the object which is in contact with the target object can be identified subsequently. The target object in the monitoring scene can be tracked, and the monitoring distance between the target object and other objects is calculated in different video frames, so that the monitoring distance can be acquired in real time. Specifically, the monitoring distance between the target object and other objects in the monitored scene is calculated based on the video data, and may be calculated by establishing a pixel coordinate system in a plurality of video frames.
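As an illustration, the per-frame distance computation described above can be sketched as follows. The bounding-box representation and the pixels-per-meter scale are assumptions for this sketch only; a deployed system would derive the scale from camera calibration rather than a constant.

```python
import math

# Hypothetical scale: pixels per meter at the monitored ground plane
# (an assumed calibration constant, not part of the embodiment).
PIXELS_PER_METER = 50.0

def box_center(box):
    """Center (x, y) of a bounding box given as (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def monitoring_distance(target_box, other_box, pixels_per_meter=PIXELS_PER_METER):
    """Approximate real-world distance in meters between two detected
    objects, from their pixel-coordinate box centers."""
    tx, ty = box_center(target_box)
    ox, oy = box_center(other_box)
    pixel_dist = math.hypot(tx - ox, ty - oy)
    return pixel_dist / pixels_per_meter

# Example: centers 150 px apart -> 3.0 m at 50 px/m.
d = monitoring_distance((0, 0, 100, 200), (150, 0, 250, 200))
```

Running this distance function per tracked object per frame yields the real-time monitoring distances used in step S210.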
In addition, the monitoring device applied to the monitoring scene can be a camera with conventional functions of video detection, video playback, object identification and the like, and can also be a radar camera with video shooting and radar detection functions. When the radar camera is used for monitoring, radar data in the monitoring scene can be collected to supplement video data in the monitoring scene. For example, in the case where there is occlusion or light interference in the monitored scene, and the target object cannot be continuously tracked by using the video data, or the monitoring distance between the target object and another object cannot be calculated by using the video data, the monitoring distance may also be determined by using radar data.
And step S220, capturing images of objects with the monitoring distance smaller than the safety distance between the other objects and the target object under the condition that the monitoring distance is smaller than the preset safety distance to obtain contact object images of the target object.
The safety distance can be set according to the requirements of the actual application scene. A single target object in the monitoring scene may correspond to a plurality of different monitoring distances, and when one or more of them are smaller than the preset safety distance, the one or more other objects whose monitoring distances from the target object are smaller than the safety distance may be considered to possibly contact the target object. Images of all objects whose monitoring distances to the target object are smaller than the safety distance are captured from the video data; specifically, the part of the video frame containing only the contact object is extracted as the contact object image.
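The capture step can be sketched as cropping the contact object's bounding box out of the frame. Here a frame is stood in for by a 2-D list of pixels, and the distance function is passed in as a parameter; both are illustrative assumptions.

```python
def crop_contact_image(frame, box):
    """Extract the sub-image containing only the contact object.
    `frame` is a 2-D list of pixel rows (a stand-in for a decoded video
    frame); `box` is (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in frame[y1:y2]]

def capture_contacts(frame, target_box, other_boxes, safe_distance, distance_fn):
    """Crop every other object whose monitoring distance to the target is
    below the safety distance; returns a list of (box, image) pairs."""
    contacts = []
    for box in other_boxes:
        if distance_fn(target_box, box) < safe_distance:
            contacts.append((box, crop_contact_image(frame, box)))
    return contacts
```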
Step S230, when it is detected that the target object falls down, matching the identity information of the contact object from the corresponding information base according to all the contact object images acquired before the target object falls down.
After the monitoring equipment detects that the target object appears in the monitoring scene, it can start recording and store the video data of the scene. When the target object is detected to fall, the video data within a predetermined time interval before the fall can be extracted from the stored video data as video evidence, and the contact object images whose marked time points fall within that interval are selected from the captured contact object images as the data source for the subsequent extraction of person or vehicle identity information. Extracting identity information from the contact object images captured in the interval before the fall further screens the other objects in the video data, compared with extracting identity information for all other objects in the video; this narrows the scope of identification and improves extraction efficiency.
Whether the target object falls can be judged by a fall detection algorithm. Specifically, skeleton information of the target object can be extracted by a background difference method and morphological operations, a rough fall judgment can be made from the proportions of the skeleton, and, when a fall is indicated, the motion trend can be analyzed to check for a downward movement, filtering out false falls. After a fall is detected, the video data can be intercepted directly and the corresponding contact object images obtained.
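The coarse-judgment-plus-trend-check idea can be sketched with bounding boxes as a proxy for the skeleton proportions; the ratio threshold and minimum drop are assumed tuning values, and a real implementation would work on extracted skeleton keypoints rather than boxes.

```python
def looks_fallen(box, ratio_threshold=1.2):
    """Coarse fall check from body proportions: a standing person's box is
    taller than wide, a fallen one wider than tall. The threshold is an
    assumed tuning value."""
    x1, y1, x2, y2 = box
    width, height = x2 - x1, y2 - y1
    return width / float(height) > ratio_threshold

def confirm_fall(box_history, ratio_threshold=1.2, min_drop=20):
    """Filter false falls: require both a wide final box and a clear
    downward motion trend of the box's top edge over recent frames
    (image y grows downward)."""
    if not box_history or not looks_fallen(box_history[-1], ratio_threshold):
        return False
    top_start = box_history[0][1]
    top_end = box_history[-1][1]
    return (top_end - top_start) >= min_drop
```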
Additionally, the identity information of the contact object is matched from the corresponding information base; specifically, it may be matched in a personnel information system of a public security department or a vehicle information system of a traffic department. For example, when the contact object is a person, the identity information may be acquired by comparing the contact object image against the face pictures in the personnel information system for similarity. After the identity information of the contact object is obtained, it can be used together with the video data as a basis for the later determination of responsibility and for finding the object that caused the target object to fall.
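The similarity comparison can be sketched with face embeddings and cosine similarity. The embedding representation, the information-base interface, and the threshold are illustrative assumptions; the embodiment does not prescribe a particular matching algorithm.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_identity(contact_embedding, info_base, threshold=0.8):
    """Return the identity whose stored face embedding is most similar to
    the contact image's embedding, or None if nothing clears the threshold.
    `info_base` maps an identity record to its face embedding (an assumed
    interface standing in for the personnel information system)."""
    best_id, best_score = None, threshold
    for identity, embedding in info_base.items():
        score = cosine_similarity(contact_embedding, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```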
In the above steps S210 to S230, the monitoring distance between the target object and the other objects in the monitoring scene is acquired in real time from the video data of the scene; when a monitoring distance is smaller than the preset safety distance, images of the objects within that distance of the target object are captured as contact object images; and when the target object is detected to fall, the identity information of each contact object is matched from the corresponding information base according to all the contact object images acquired before the fall. Computing the distances between the other objects and the target object, and screening the other objects on that basis, narrows the scope of identity information extraction and thus improves the efficiency of extracting the identity information of people who may have contacted a person who falls in the monitoring scene.
Further, in an embodiment, based on the step S230, time points are marked in the contact object image, where the time points are used to indicate the time when the contact object image is captured, and when it is detected that the target object falls, the identity information of the contact object is matched from the corresponding information base according to all contact object images acquired before the target object falls, which specifically includes the following steps:
in step S231, in the case where it is detected that the target object falls, a contact object image is acquired in which the marked time point is within a preset time interval before the target object falls.
The current time point is marked for the contact object image, and the method can be used for checking the time range in the subsequent identity information extraction.
Step S232, matching the identity information of the contact object from the corresponding information base according to the contact object image in the preset time interval.
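The time-window selection of steps S231 to S232 can be sketched as follows; the record layout (timestamp, image) and the use of plain seconds for time are assumptions of this sketch.

```python
def contacts_before_fall(contact_records, fall_time, interval):
    """Select the contact images whose marked capture time lies within the
    preset interval immediately before the fall. `contact_records` is a
    list of (timestamp, image) pairs with timestamps in seconds (an
    illustrative representation of the marked time points)."""
    start = fall_time - interval
    return [img for ts, img in contact_records if start <= ts <= fall_time]
```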
Additionally, in an embodiment, based on the step S210, the method for acquiring the monitoring distance between the target object and the other object in the monitoring scene in real time based on the video data in the monitoring scene specifically includes the following steps:
step S211, counting the crowd density in the preset range of the target object in the monitoring scene under the condition that the target object is detected to exist in the monitoring scene based on the video data in the monitoring scene.
Step S212, under the condition that the crowd density is larger than the preset density threshold value, the monitoring distance between the target object and other objects in the preset range in the monitoring scene is obtained in real time.
Specifically, because the probability of collisions and falls is higher in a crowded scene than in a sparse one, computing the monitoring distances between the target object and the other objects within the preset range only when the crowd density exceeds the preset density threshold reduces the computation load of the computer system and improves computation efficiency.
In the above steps S211 to S212, when the target object is detected in the monitoring scene, the crowd density within the preset range of the target object is counted, and when the density exceeds the preset threshold, the monitoring distances between the target object and the other objects are acquired in real time. The contact object images of the target object can thus be obtained in time in a crowd-dense scene, improving the efficiency of extracting the identity information of people who may have contacted the fallen person when the target object falls in such a scene.
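The density gate of steps S211 to S212 can be sketched as counting people within a circular preset range around the target; the circular region, the radius, and the people-per-unit-area measure are assumptions, as the embodiment does not fix a particular density definition.

```python
import math

def crowd_density(target_pos, person_positions, radius):
    """Coarse density estimate: people per unit area within `radius` of
    the target position (ground-plane coordinates)."""
    count = sum(
        1 for p in person_positions
        if math.hypot(p[0] - target_pos[0], p[1] - target_pos[1]) <= radius
    )
    return count / (math.pi * radius * radius)

def should_monitor_distances(target_pos, person_positions, radius, density_threshold):
    """Gate the per-frame distance computation: run it only when the local
    crowd density around the target exceeds the preset threshold."""
    return crowd_density(target_pos, person_positions, radius) > density_threshold
```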
Further, in an embodiment, based on the step S210, the monitoring distance between the target object and another object in the monitoring scene is obtained in real time based on the video data in the monitoring scene, and the method specifically includes the following steps:
step S213, tracking the target object in real time based on the video data in the monitoring scene, and determining the monitoring distance between the target object and other objects based on the tracking result.
Specifically, after the target object is identified, it can be tracked with a video tracking algorithm such as CenterNet. Once the target object is tracked, its coordinates in the pixel coordinate system can be located in each frame of the video data, and the monitoring distance between the target object and the other objects in each frame can be calculated from those coordinates.
In step S213, by tracking the target object in real time and determining the monitoring distance between the target object and another object based on the tracking result, the contact object information whose monitoring distance is smaller than the safety distance can be detected in time, so as to improve the accuracy of capturing the contact object image based on the monitoring distance.
Further, in an embodiment, based on the step S213, the method further includes the following steps:
step S214, determining the monitoring distance between the target object and other objects according to the radar data in the monitoring scene under the condition that the tracking of the target object is interrupted based on the video data in the monitoring scene.
Specifically, when the tracking of the target object is interrupted due to the activity of people and vehicles or light interference in the monitoring scene, the target object can be continuously tracked by collecting radar data in the monitoring scene. When a target object appears in a monitoring scene, video tracking and radar tracking can be performed on the target object respectively, and then radar data and video data are associated by using the corresponding relation between radar and video, so that the monitoring distance between the target object and other objects is obtained. The corresponding relationship between the radar and the video may be a conversion relationship between a radar coordinate system and a pixel coordinate system.
Additionally, when the radar is used to measure the monitoring distance between the target object and another object in the monitoring scene, the radar may detect the coordinate values of the target object and of the other object separately, and the monitoring distance is then determined from those two coordinate values. The detection may be performed by the radar integrated in a radar-video camera, or by a radar device independent of the camera; this embodiment does not specifically limit the choice.
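Both radar-based steps can be sketched as follows, assuming the radar reports metric ground-plane coordinates; the `RADAR_TO_PIXEL` homography is a made-up calibration matrix standing in for the radar-to-video conversion relationship:

```python
# Hypothetical 3x3 homography mapping radar ground-plane (x, y) points to
# pixel coordinates; in practice it comes from joint radar-camera calibration.
RADAR_TO_PIXEL = [
    [50.0, 0.0, 320.0],
    [0.0, -50.0, 480.0],
    [0.0, 0.0, 1.0],
]

def radar_to_pixel(x, y):
    """Project a radar ground point into the pixel coordinate system."""
    u, v, w = (row[0] * x + row[1] * y + row[2] for row in RADAR_TO_PIXEL)
    return (u / w, v / w)

def radar_distance(p, q):
    """Monitoring distance between two radar detections, assuming the
    radar reports metric ground-plane coordinates."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def associated(radar_point, video_center, max_pixel_gap=30.0):
    """Associate a radar detection with a video track when their pixel
    positions are close enough after projection."""
    u, v = radar_to_pixel(*radar_point)
    cu, cv = video_center
    return ((u - cu) ** 2 + (v - cv) ** 2) ** 0.5 <= max_pixel_gap
```

The association step lets the system keep reporting distances from `radar_distance` for a track that video tracking has lost.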
In step S214, when video-based tracking of the target object is interrupted, the monitoring distance between the target object and the other objects is determined from the radar data in the monitoring scene. This reduces the interference of environmental factors in the monitoring scene, so that the target object is tracked and the monitoring distance is calculated accurately.
Additionally, in an embodiment, based on the step S210, before acquiring the monitoring distance between the target object and the other object in the monitoring scene in real time based on the video data in the monitoring scene, the method may further include the following steps:
step S215, identifying a target object in the monitoring scene based on the video data by using a preset target detection model.
The target detection model may be a Convolutional Neural Network (CNN) model. Using the target detection model, the ages of the human objects in the monitoring scene are estimated, the human objects meeting a preset age requirement are taken as target objects, and these target objects are tracked. For example, an age range for target objects may be set in advance, and any human object whose age estimated by the target detection model falls within that range is taken as a target object.
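A sketch of this age-based filtering; the dict layout of `detections` and the default elderly age range are illustrative assumptions, not part of the original method:

```python
def select_target_objects(detections, age_range=(65, 120)):
    """Filter detector output down to target objects.

    `detections` is assumed to be a list of dicts with `class`, `age`,
    and `box` keys, as a CNN detector with an age-estimation head might
    emit; the default range (elderly persons) is only an example preset.
    """
    lo, hi = age_range
    return [d for d in detections
            if d["class"] == "person" and lo <= d["age"] <= hi]
```

Only the objects this returns would then be handed to the tracker.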
Additionally, in one embodiment, in the case that it is detected that the target object falls, matching the identity information of the contact object from the corresponding information base according to all the contact object images acquired before the target object falls, further includes the following steps:
in step S233, if the contact object is a pedestrian, the identity information of the contact object is matched from the face information base.
Step S234, if the contact object is a vehicle, matching the identity information of the contact object from the license plate information base.
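These two branches can be sketched as a single lookup that selects the information base by object type; `extract_feature`, the cosine-similarity scoring, and the 0.8 acceptance threshold are illustrative assumptions rather than the patented matching procedure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def extract_feature(image):
    """Placeholder: a real system would run a face- or license-plate
    recognition network here; we assume the image is already a vector."""
    return image

def match_identity(contact_image, face_db, plate_db, object_type):
    """Match a captured contact-object image against the information base
    corresponding to its type ('pedestrian' -> faces, else plates)."""
    db = face_db if object_type == "pedestrian" else plate_db
    query = extract_feature(contact_image)
    best_id, best_score = None, -1.0
    for identity, feature in db.items():
        score = cosine_similarity(query, feature)
        if score > best_score:
            best_id, best_score = identity, score
    # Accept only sufficiently confident matches (threshold is assumed).
    return best_id if best_score >= 0.8 else None
```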
In the above steps S210 to S234, the target object is tracked in real time based on the video data in the monitoring scene, the monitoring distance between the target object and other objects is determined from the tracking result, and contact objects whose monitoring distance to the target object is smaller than the safety distance can be detected in time; when tracking of the target object is interrupted, the monitoring distance is determined from the radar data in the monitoring scene, which improves the accuracy and robustness of the distance calculation; the target object in the monitoring scene is identified with a preset target detection model, so that target objects are detected automatically and the false and missed detections of subjective human judgment are avoided; and when the target object is detected to fall, the contact object images whose marked time points lie within a preset time interval before the fall are acquired and matched against the corresponding information base, which further screens the other objects in the monitoring scene, narrows the range of identity information extraction, and improves the efficiency of extracting the identity information of people who may have come into contact with the fallen person when a fall event occurs.
The embodiment also provides a method for extracting the identity information. Fig. 3 is a flow chart of the method, as shown in fig. 3, the flow includes the following steps:
step S310, detecting a member of a vulnerable group in the visible area of the camera, and marking that person as the target object;
step S320, the camera starts video recording;
step S330, determining the safe distance between the target object and other objects;
step S340, marking the target object as an object that can be sensed by the radar;
step S350, measuring the distance between the target object and other objects from the video;
step S360, judging whether the target object is occluded; if so, executing step S370, otherwise executing step S380;
step S370, measuring the distance between the target object and other objects by radar;
step S380, if the distance between the target object and another object is smaller than the safe distance, taking that object as a contact object, capturing an image of the contact object and marking the time point;
step S390, when the target object is detected to fall, searching for contact object images within the time span;
step S400, searching the face library/license plate library for pictures matching the contact object images, and acquiring the matched object information;
step S410, archiving the matched object information and the video data as the basis for responsibility determination.
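The distance-measurement and fall-handling parts of this flow can be sketched as follows; the `frame_state` layout is a hypothetical container for one frame's measurements, not a structure defined by the patent:

```python
def handle_frame(frame_state, safe_distance, contact_log):
    """One iteration of the Fig. 3 flow for a single tracked target.

    `frame_state` carries this frame's measurements: whether the target is
    occluded, the video/radar distances to each other object, a snapshot
    per object, and the frame timestamp.
    """
    for obj_id, info in frame_state["others"].items():
        # Steps S350-S370: prefer the video distance; fall back to radar
        # when the target is occluded.
        distance = (info["radar_distance"] if frame_state["occluded"]
                    else info["video_distance"])
        # Step S380: snapshot objects inside the safety distance, with a
        # marked time point.
        if distance < safe_distance:
            contact_log.append({
                "object": obj_id,
                "time": frame_state["time"],
                "image": info["snapshot"],
            })
    return contact_log

def on_fall(contact_log, fall_time, window):
    """Steps S390-S400: on a fall, keep only contacts captured within the
    preset time window before the fall for identity matching."""
    return [c for c in contact_log
            if fall_time - window <= c["time"] <= fall_time]
```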
In this embodiment, an identity information extraction apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations; details already described are not repeated here. The terms "module," "unit," "subunit," and the like as used below may refer to a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of the configuration of the identity information extraction device 40 of the present embodiment, and as shown in fig. 4, the identity information extraction device 40 includes: a first acquisition module 42, a second acquisition module 44, and a matching module 46, wherein:
the first obtaining module 42 is configured to obtain, in real time, a monitoring distance between a target object and another object in a monitoring scene based on video data in the monitoring scene;
the second obtaining module 44 is configured to, when the monitoring distance is smaller than the preset safety distance, capture an image of an object whose monitoring distance to the target object is smaller than the safety distance among other objects, and obtain a contact object image of the target object;
and the matching module 46 is configured to, when it is detected that the target object falls, match the identity information of the contact object from the corresponding information base according to all the contact object images acquired before the target object falls.
The identity information extraction device 40 acquires, in real time, the monitoring distance between the target object and other objects in the monitoring scene based on the video data in the scene; when the monitoring distance is smaller than a preset safety distance, it captures images of the objects whose monitoring distance to the target object is below the safety distance, obtaining contact object images of the target object; and when the target object is detected to fall, it matches the identity information of the contact objects from the corresponding information base according to all the contact object images acquired before the fall. The device thus computes the distance between the target object and other objects in the monitoring scene, further screens the other objects based on that distance, and narrows the range of identity information extraction, improving the efficiency of extracting the identity information of people who may have come into contact with the fallen person when a fall event occurs in the monitoring scene.
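A structural sketch of the three-module device of Fig. 4, with each module's behaviour injected as a callable; this wiring is an illustration assumption, the real modules wrap the video-analysis logic described above:

```python
class IdentityInfoExtractor:
    """Sketch of device 40: first acquisition module (distance), second
    acquisition module (snapshots), and matching module (identity)."""

    def __init__(self, get_distance, snapshot, match):
        self.get_distance = get_distance   # first acquisition module 42
        self.snapshot = snapshot           # second acquisition module 44
        self.match = match                 # matching module 46

    def process(self, others, safe_distance):
        """Snapshot every object closer to the target than safe_distance."""
        return [self.snapshot(o) for o in others
                if self.get_distance(o) < safe_distance]

    def on_fall(self, contacts):
        """Match identity information for every recorded contact image."""
        return [self.match(c) for c in contacts]
```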
In one embodiment, the matching module 46 is further configured to, in a case that it is detected that the target object falls, acquire a contact object image of which the marked time point is within a preset time interval before the target object falls; and matching the identity information of the contact object from the corresponding information base according to the contact object image in the preset time interval.
In one embodiment, the first obtaining module 42 is further configured to, when detecting that a target object exists in the monitored scene based on the video data in the monitored scene, count the crowd density in the monitored scene within a preset range of the target object; and under the condition that the crowd density is greater than a preset density threshold value, acquiring the monitoring distance between the target object and other objects in a preset range in the monitoring scene in real time.
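The density gating can be sketched as follows, assuming person positions are metric ground-plane coordinates and density is measured in persons per square metre (both assumptions for illustration):

```python
import math

def crowd_density(person_centers, target_center, radius):
    """People per unit area within `radius` of the target object."""
    tx, ty = target_center
    count = sum(1 for (x, y) in person_centers
                if math.hypot(x - tx, y - ty) <= radius)
    return count / (math.pi * radius ** 2)

def should_monitor(person_centers, target_center, radius, density_threshold):
    """Distance monitoring is switched on only when the local crowd
    density around the target exceeds the preset threshold."""
    return crowd_density(person_centers, target_center, radius) > density_threshold
```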
In one embodiment, the first obtaining module 42 is further configured to track the target object in real time based on video data in the monitored scene, and determine the monitoring distance between the target object and other objects based on the tracking result.
In one embodiment, the identity information extraction device 40 further includes a third obtaining module, where the third obtaining module is configured to determine, according to the radar data in the monitoring scene, the monitoring distance between the target object and another object when the target object is interrupted in tracking based on the video data in the monitoring scene.
In one embodiment, the identity information extracting apparatus 40 further includes a recognition module, and the recognition module is configured to recognize a target object in the monitored scene based on the video data by using a preset target detection model.
In one embodiment, the matching module 46 is further configured to match the identity information of the contact object from the face information base if the contact object is a pedestrian; and if the contact object is a vehicle, matching the identity information of the contact object from the license plate information base.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
There is also provided in this embodiment an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
acquiring monitoring distances between a target object and other objects in a monitoring scene in real time based on video data in the monitoring scene;
under the condition that the monitoring distance is smaller than the preset safety distance, capturing images of objects, of which the monitoring distances to the target objects are smaller than the safety distance, in other objects to obtain contact object images of the target objects;
and matching the identity information of the contact object from the corresponding information base according to all contact object images acquired before the target object falls under the condition that the target object is detected to fall.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
under the condition that the target object is detected to fall, acquiring a contact object image of the marked time point in a preset time interval before the target object falls;
and matching the identity information of the contact object from the corresponding information base according to the contact object image in the preset time interval.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
counting the crowd density in a preset range of a target object in a monitoring scene under the condition that the target object is detected to exist in the monitoring scene based on the video data in the monitoring scene;
and under the condition that the crowd density is greater than a preset density threshold value, acquiring the monitoring distance between the target object and other objects in a preset range in the monitoring scene in real time.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
and tracking the target object in real time based on the video data in the monitoring scene, and determining the monitoring distance between the target object and other objects based on the tracking result.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
and determining the monitoring distance between the target object and other objects according to the radar data in the monitoring scene under the condition that the tracking of the target object is interrupted based on the video data in the monitoring scene.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
and identifying the target object in the monitoring scene based on the video data by using a preset target detection model.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
if the contact object is a pedestrian, matching the identity information of the contact object from the face information base;
and if the contact object is a vehicle, matching the identity information of the contact object from the license plate information base.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, and details are not described again in this embodiment.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of identity information extraction. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In addition, in combination with the identity information extraction method provided in the foregoing embodiment, a storage medium may also be provided in this embodiment. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the identity information extraction methods in the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the patent protection. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (11)

1. An identity information extraction method, comprising:
acquiring monitoring distances between a target object and other objects in a monitoring scene in real time based on video data in the monitoring scene;
under the condition that the monitoring distance is smaller than a preset safety distance, capturing an image of an object, of which the monitoring distance with the target object is smaller than the safety distance, in the other objects to obtain a contact object image of the target object;
and matching the identity information of the contact object from a corresponding information base according to all the contact object images acquired before the target object falls under the condition that the target object is detected to fall.
2. The identity information extraction method according to claim 1, wherein time points are marked in the contact object image, the time points are used for indicating the time when the contact object image is captured, and in the case that it is detected that the target object falls, the identity information of the contact object is matched from a corresponding information base according to all the contact object images acquired before the target object falls, and the method includes:
under the condition that the target object is detected to fall, acquiring a contact object image of the marked time point in a preset time interval before the target object falls;
and matching the identity information of the contact object from the corresponding information base according to the contact object image in the preset time interval.
3. The identity information extraction method according to claim 1, wherein the obtaining of the monitoring distance between the target object and the other object in the monitoring scene in real time based on the video data in the monitoring scene includes:
counting the crowd density of the target object in the monitoring scene within a preset range under the condition that the target object is detected to exist in the monitoring scene based on the video data in the monitoring scene;
and under the condition that the crowd density is greater than a preset density threshold value, acquiring the monitoring distance between the target object in the monitoring scene and other objects in the preset range in real time.
4. The identity information extraction method according to claim 1, wherein the obtaining of the monitoring distance between the target object and the other object in the monitoring scene in real time based on the video data in the monitoring scene further comprises:
and tracking the target object in real time based on the video data in the monitoring scene, and determining the monitoring distance between the target object and the other objects based on a tracking result.
5. The identity information extraction method of claim 4, further comprising:
and determining the monitoring distance between the target object and the other objects according to the radar data in the monitoring scene under the condition that the tracking of the target object is interrupted based on the video data in the monitoring scene.
6. The identity information extraction method according to claim 1, wherein before acquiring the monitoring distance between the target object and the other object in the monitoring scene in real time based on the video data in the monitoring scene, the method further comprises:
and identifying the target object in the monitoring scene based on the video data by using a preset target detection model.
7. The identity information extraction method according to any one of claims 1 to 6, wherein, when it is detected that the target object has fallen, matching the identity information of the contact object from a corresponding information library according to all the contact object images acquired before the target object has fallen, further comprises:
if the contact object is a pedestrian, matching the identity information of the contact object from a face information base;
and if the contact object is a vehicle, matching the identity information of the contact object from the license plate information base.
8. An identity information extraction device for responsibility confirmation of falling of a target object under a monitoring scene, the identity information extraction device comprising: first acquisition module, second acquisition module and matching module, wherein:
the first acquisition module is used for acquiring the monitoring distance between a target object and other objects in a monitoring scene in real time based on video data in the monitoring scene;
the second obtaining module is configured to, when the monitoring distance is smaller than a preset safety distance, capture an image of an object, of which the monitoring distance to the target object is smaller than the safety distance, among the other objects, to obtain a contact object image of the target object;
the matching module is used for matching the identity information of the contact object from a corresponding information base according to all the contact object images acquired before the target object falls under the condition that the target object is detected to fall.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the identity information extraction method of any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the identity information extraction method of any one of claims 1 to 7 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the identity information extraction method of any one of claims 1 to 7.
CN202110937391.XA 2021-08-16 2021-08-16 Identity information extraction method, device, electronic device and storage medium Pending CN113743248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110937391.XA CN113743248A (en) 2021-08-16 2021-08-16 Identity information extraction method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110937391.XA CN113743248A (en) 2021-08-16 2021-08-16 Identity information extraction method, device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113743248A true CN113743248A (en) 2021-12-03

Family

ID=78731244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110937391.XA Pending CN113743248A (en) 2021-08-16 2021-08-16 Identity information extraction method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113743248A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058518A (en) * 2023-08-03 2023-11-14 南方电网数字电网研究院有限公司 Deep learning target detection method and device based on YOLO improvement and computer equipment
CN117058518B (en) * 2023-08-03 2024-05-03 南方电网数字电网研究院有限公司 Deep learning target detection method and device based on YOLO improvement and computer equipment

Similar Documents

Publication Publication Date Title
CN109858371B (en) Face recognition method and device
JP6905850B2 (en) Image processing system, imaging device, learning model creation method, information processing device
WO2019153193A1 (en) Taxi operation monitoring method, device, storage medium, and system
CN109815818B (en) Target person tracking method, system and related device
US20220092881A1 (en) Method and apparatus for behavior analysis, electronic apparatus, storage medium, and computer program
CN108009466B (en) Pedestrian detection method and device
CN105051754A (en) Method and apparatus for detecting people by a surveillance system
CN111563480A (en) Conflict behavior detection method and device, computer equipment and storage medium
CN110738178A (en) Garden construction safety detection method and device, computer equipment and storage medium
CN110263680B (en) Image processing method, device and system and storage medium
CN112001230A (en) Sleeping behavior monitoring method and device, computer equipment and readable storage medium
CN112052815A (en) Behavior detection method and device and electronic equipment
CN113191293B (en) Advertisement detection method, device, electronic equipment, system and readable storage medium
CN110781735A (en) Alarm method and system for identifying on-duty state of personnel
CN115171260A (en) Intelligent access control system based on face recognition
CN113743248A (en) Identity information extraction method, device, electronic device and storage medium
CN108198433B (en) Parking identification method and device and electronic equipment
CN113627321A (en) Image identification method and device based on artificial intelligence and computer equipment
CN113673399A (en) Method and device for monitoring area, electronic equipment and readable storage medium
WO2023124451A1 (en) Alarm event generating method and apparatus, device, and storage medium
CN115761564A (en) Follower list processing method, abnormal travel event detection method and system
CN113645439B (en) Event detection method and system, storage medium and electronic device
CN112435479B (en) Target object violation detection method and device, computer equipment and system
CA3230780A1 (en) Vision-based sports timing and identification system
CN109960995B (en) Motion data determination system, method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination