CN113449627B - Personnel tracking method based on AI video analysis and related device - Google Patents


Info

Publication number
CN113449627B
CN113449627B (application CN202110705650.6A)
Authority
CN
China
Prior art keywords
person
data
camera
target person
feature data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110705650.6A
Other languages
Chinese (zh)
Other versions
CN113449627A (en)
Inventor
陈海波
程琳莉
王孟阳
张信伟
何云龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenlan Technology Wuhan Co ltd
Original Assignee
Shenlan Technology Wuhan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenlan Technology Wuhan Co ltd filed Critical Shenlan Technology Wuhan Co ltd
Priority to CN202110705650.6A
Publication of CN113449627A
Application granted
Publication of CN113449627B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application provides a person tracking method based on AI video analysis and a related device. The method is applied to a person tracking system comprising a plurality of cameras and includes the following steps: acquiring first video data with a first camera at a first time, obtaining first feature data of interest from it, and acquiring physical feature data of the target person when the first feature data of interest is detected to meet a preset condition; judging whether a second camera that acquires second video data exists within a preset time interval from the first time; when the second camera exists, obtaining second feature data of interest and, when the second feature data of interest is detected to meet the preset condition, storing the position data of the first camera and of the second camera in a first queue in sequence; and drawing the action track of the target person from the position data in the first queue. The method tracks the target person automatically and improves tracking efficiency.

Description

Personnel tracking method based on AI video analysis and related device
Technical Field
The present application relates to the field of computer vision, and in particular, to a person tracking method, apparatus, electronic device, person tracking system, and computer-readable storage medium based on AI video analysis.
Background
With the development of society and the increase in emergencies, person tracking systems are needed in more and more application scenarios. At present, an existing person tracking system is still a conventional surveillance camera paired with a monitoring display, and tracking is carried out manually. Because such monitoring equipment relies on manual observation and cannot perform intelligent recognition, once an incident occurs and the target moves, the operators must perform a large number of camera-switching operations before they can locate the target again, and the movement of the target cannot be tracked in real time. Afterwards, because the time of the incident cannot be determined accurately, a large amount of recorded video has to be searched manually, which is time-consuming, labor-intensive, and very inefficient.
Disclosure of Invention
The application aims to provide a person tracking method, apparatus, electronic device, person tracking system, and computer-readable storage medium based on AI video analysis that can track a target person automatically, without manually tracing the target person's action track, thereby improving tracking efficiency.
The purpose of the application is realized by adopting the following technical scheme:
in a first aspect, the present application provides a person tracking method based on AI video analysis, applied to a person tracking system that includes a plurality of cameras disposed at different locations, the method including: at a first time, acquiring first video data of a target person with a first camera, obtaining first feature data of interest of the target person from the first video data, and acquiring physical feature data of the target person when the first feature data of interest is detected to meet a preset condition; judging, based on the physical feature data of the target person, whether a second camera that acquires second video data of the target person exists within a preset time interval from the first time; when the second camera does not exist, storing the position data of the first camera in a first queue; when the second camera exists, obtaining second feature data of interest of the target person from the second video data and, when the second feature data of interest is detected to meet the preset condition, storing the position data of the first camera and the position data of the second camera in the first queue in sequence; acquiring kth video data of a pending person with a kth camera, obtaining feature data of interest of the pending person from the kth video data, and judging whether the pending person is the target person when the feature data of interest of the pending person meets the preset condition, where the kth camera is one of the plurality of cameras, the range of k is one or more integers greater than 2, and k takes one value from this range each time; when the pending person is the target person, storing the position data of the kth camera in the first queue; and drawing the action track of the target person from the position data in the first queue.
This technical scheme is beneficial in two respects. On the one hand, once first feature data of interest meeting the preset condition is obtained, the target person becomes an object of attention, and whether new video data of the target person appears can then be detected based on the target person's physical feature data. Specifically, whether a second camera acquires second video data of the target person can be judged within the preset time interval. When no second camera exists, no new feature data of interest has been obtained for the target person, the person's state is unclear, and the person needs to be tracked; the position data of the first camera can then be stored in the first queue for use in subsequent tracking.
When the second camera exists and the second feature data of interest is detected to meet the preset condition, both the first and second feature data of interest meet the preset condition, the probability that the target person's state is abnormal is high, and the person needs to be tracked; the position data of the first camera and of the second camera can then be stored in the first queue in sequence, so that both can be combined in subsequent tracking.
On the other hand, after the person to be tracked is determined, feature data of interest can be obtained from the video data of every person, and the persons meeting the preset condition can be screened out. When the feature data of interest of the pending person captured by the kth camera is detected to meet the preset condition, whether the pending person is the target person can be judged; if so, the position data of the kth camera is stored in the first queue. The position data in the first queue is therefore the position data of the target person, and the target person's action track can be drawn from it. As the position data in the first queue accumulates, the target person can be tracked continuously.
In summary, a camera can be used to obtain video data of the target person and, from it, the target person's feature data of interest. When the feature data of interest is detected to meet the preset condition, whether the target person needs to be tracked can be determined; if so, the camera's position data is stored in the first queue, and the target person's action track is drawn from the position data in the first queue.
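The flow summarized above (the first-aspect steps) can be sketched in Python. This is a minimal illustration, not the patented implementation: the sighting records and the `meets_condition` and `same_person` helpers are hypothetical stand-ins for the camera feeds, the preset-condition check, and the physical-feature matching described in the text.

```python
from collections import deque

def build_track(sightings, meets_condition, same_person, window):
    """Sketch of the claimed flow: append camera positions for one
    target person to a FIFO 'first queue' and return the track."""
    first_queue = deque()
    first = sightings[0]
    if not meets_condition(first["feature"]):   # first condition not met:
        return []                               # nothing to track
    target_appearance = first["appearance"]     # physical feature data
    # Is there a second sighting within the preset time interval?
    rest = sightings[1:]
    second = rest[0] if rest and rest[0]["time"] - first["time"] <= window else None
    if second is None:
        first_queue.append(first["position"])
    elif meets_condition(second["feature"]):
        first_queue.append(first["position"])
        first_queue.append(second["position"])
    # Later cameras (k > 2) contribute a position only when the person
    # they see matches the target and meets the preset condition
    for cam in sightings[2:]:
        if meets_condition(cam["feature"]) and same_person(
                cam["appearance"], target_appearance):
            first_queue.append(cam["position"])
    return list(first_queue)   # the action track is drawn from these
```

With a body-temperature condition (threshold 37.3) and exact appearance matching, a person flagged by the first camera accumulates positions only from cameras that both re-identify them and re-detect the anomaly.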
In some optional embodiments, the feature data of interest is obtained from the video data by inputting the video data into a convolutional neural network. This has the advantage that the convolutional neural network extracts the features of interest from the video data directly; compared with collecting the feature data of interest manually, the target person does not need to be observed at close range, and the approach is efficient and highly automated.
The prior art with publication number CN112418055A discloses a scheduling method and a person trajectory tracking method based on video analysis in which multiple single classifiers are combined by weighted accumulation into a cascaded boosted classifier to recognize persons and detect their body temperature; its training is complex and its recognition rate is low. In the embodiments of the present application, a convolutional neural network extracts the features of interest from the video data; training is simple and the recognition rate is high.
In some optional embodiments, the physical feature data includes at least part of the body feature data and/or at least part of the face feature data. This allows the physical feature data to be combined flexibly: it can be body feature data alone, face feature data alone, or a combination of the two, and partial body feature data together with partial face feature data can also serve as the physical feature data, which greatly reduces the difficulty of acquiring the physical feature data and reduces the amount of computation.
In some optional embodiments, judging whether the pending person is the target person includes: acquiring physical feature data of the pending person; obtaining the similarity between the pending person and the target person from the physical feature data of both; and determining that the pending person is the target person when the similarity is not less than a preset similarity threshold. A similarity not less than the threshold means the pending person closely resembles the target person, so the pending person can be determined to be the target person.
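The patent does not fix a particular similarity measure. As one common choice, cosine similarity between two physical-feature vectors could be thresholded as follows; the 0.8 threshold is an illustrative assumption, not a value from the patent.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two physical-feature vectors; 1.0 means
    identical direction, 0.0 means orthogonal (no resemblance)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_target_person(pending, target, threshold=0.8):
    """The pending person is taken to be the target person when the
    similarity is not less than the preset similarity threshold."""
    return cosine_similarity(pending, target) >= threshold
```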
In some optional embodiments, drawing the action track of the target person from the position data in the first queue includes: loading a map and a three-dimensional model of a preset area with a browser; drawing a plurality of tracking points at the corresponding positions of the three-dimensional model based on the position data in the first queue; and connecting the tracking points in the order in which their position data was stored in the first queue. Connecting the tracking points in storage order yields the target person's action track in chronological order, and combining it with the map and the three-dimensional model of the preset area lets the target person's path and range of movement be browsed more intuitively.
In some optional embodiments, the method further includes drawing an animation of a three-dimensional point moving along the connecting line to simulate the target person's movement, the point passing through the tracking points in the order in which their position data was stored in the first queue. The animation shows the movement dynamically, presenting the target person's sequence of actions and path more intuitively.
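One way to drive such an animation is to parameterize the connecting polyline by arc length and interpolate linearly between consecutive tracking points. The patent does not prescribe an interpolation scheme, so the following is only a sketch:

```python
import math

def point_along_track(track, t):
    """Position of the animated 3D point at parameter t in [0, 1],
    moving through the tracking points in queue order with linear
    interpolation between consecutive points."""
    if t <= 0:
        return track[0]
    if t >= 1:
        return track[-1]
    segments = list(zip(track, track[1:]))
    lengths = [math.dist(p, q) for p, q in segments]
    remaining = t * sum(lengths)   # distance to travel along the path
    for (p, q), length in zip(segments, lengths):
        if remaining <= length:
            f = remaining / length
            return tuple(pi + f * (qi - pi) for pi, qi in zip(p, q))
        remaining -= length
    return track[-1]
```

Calling this with an increasing t on each animation frame moves the point smoothly from the first tracking point to the last.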
In some optional embodiments, the browser loads the map and the three-dimensional model of the preset area through a JavaScript-based GIS library. The technical scheme has the advantages that the JavaScript-based GIS library has good expandability and can be seamlessly integrated with other information services in the browser, so that flexible and changeable GIS application is established.
In some optional embodiments, the method further includes: generating an alarm record for the target person when the first feature data of interest is detected to meet the preset condition; and marking the target person as a tracked person when the second camera does not exist or when the second feature data of interest meets the preset condition. On the one hand, when the first feature data of interest meets the preset condition, the target person is likely to be abnormal, and an alarm record for the target person can be generated to facilitate subsequent investigation;
on the other hand, when the second camera does not exist, no new feature data of interest has been obtained for the target person, the person's state is unclear, and the person needs to be tracked; and when the second feature data of interest is detected to meet the preset condition, both the first and second feature data of interest meet the preset condition, the probability of an abnormality is high, and the target person likewise needs to be tracked.
In some optional embodiments, the feature data of interest is body temperature data and the preset condition is being not less than a preset temperature threshold, so that persons with abnormal body temperature can be tracked by combining this feature data of interest with this preset condition.
In some optional embodiments, the feature data of interest is behavior data and the preset condition is the occurrence of abnormal behavior, where the abnormal behavior includes any one of: smoking, theft, or falling, so that persons exhibiting abnormal behavior can be tracked by combining this feature data of interest with this preset condition.
In a second aspect, the present application provides a person tracking apparatus based on AI video analysis, applied to a person tracking system that includes a plurality of cameras disposed at different positions, the apparatus including: a physical appearance acquisition module for acquiring first video data of a target person with a first camera at a first time, obtaining first feature data of interest of the target person from the first video data, and acquiring physical feature data of the target person when the first feature data of interest is detected to meet a preset condition; a tracking judgment module for judging, based on the physical feature data of the target person, whether a second camera that acquires second video data of the target person exists within a preset time interval from the first time; a primary storage module for storing the position data of the first camera in a first queue when the second camera does not exist, and, when the second camera exists, obtaining second feature data of interest of the target person from the second video data and storing the position data of the first camera and of the second camera in the first queue in sequence when the second feature data of interest is detected to meet the preset condition; a person determination module for acquiring kth video data of a pending person with a kth camera, obtaining feature data of interest of the pending person from the kth video data, and judging whether the pending person is the target person when that feature data of interest meets the preset condition, where the kth camera is one of the plurality of cameras, the range of k is one or more integers greater than 2, and k takes one value from this range each time; a continuous storage module for storing the position data of the kth camera in the first queue when the pending person is the target person; and a track drawing module for drawing the action track of the target person from the position data in the first queue.
In some optional embodiments, the physical appearance acquisition module is configured to input the first video data into a convolutional neural network to obtain the first feature data of interest; the primary storage module is configured to input the second video data into the convolutional neural network to obtain the second feature data of interest; and the person determination module is configured to input the kth video data into the convolutional neural network to obtain the kth feature data of interest.
In some optional embodiments, the physical characteristic data comprises at least part of body characteristic data and/or at least part of face characteristic data.
In some optional embodiments, the person determination module includes: a physical feature unit for acquiring physical feature data of the pending person; a similarity obtaining unit for obtaining the similarity between the pending person and the target person from the physical feature data of both; and a person determining unit for determining that the pending person is the target person when the similarity is not less than a preset similarity threshold.
In some optional embodiments, the trajectory drawing module comprises: the model loading unit is used for loading the map and the three-dimensional model of the preset area by using a browser; a tracking point drawing unit for drawing a plurality of tracking points at corresponding positions of a three-dimensional model based on the position data in the first queue; and the tracking point connecting unit is used for connecting the plurality of tracking points according to the sequence of storing the position data corresponding to each tracking point into the first queue.
In some optional embodiments, the apparatus further comprises: and the animation drawing module is used for drawing an animation of the three-dimensional point moving along the connecting line so as to simulate the dynamic effect of the moving process of the target person, and the three-dimensional point passes through the plurality of tracking points according to the sequence of storing the position data corresponding to each tracking point to the first queue.
In some optional embodiments, the browser loads the map and the three-dimensional model of the preset area through a JavaScript-based GIS library.
In some optional embodiments, the apparatus further comprises: a record generation module for generating an alarm record for the target person when the first feature data of interest is detected to meet the preset condition; and a person marking module for marking the target person as a tracked person when the second camera does not exist or when the second feature data of interest is detected to meet the preset condition.
In some optional embodiments, the characteristic data of interest is body temperature data, and the preset condition is not less than a preset temperature threshold.
In some optional embodiments, the feature data of interest is behavior data, and the preset condition is the occurrence of abnormal behavior, where the abnormal behavior includes any one of: smoking, theft, or falling.
In a third aspect, the present application provides an electronic device, which is applied to a person tracking system, where the person tracking system includes a plurality of cameras disposed at different positions; the electronic device comprises a memory storing a computer program and a processor implementing the steps of any of the above methods when the processor executes the computer program.
In a fourth aspect, the present application provides a person tracking system, where the person tracking system includes a plurality of cameras disposed at different locations, and the person tracking system further includes any one of the above electronic devices.
In some alternative embodiments, the camera and the electronic device are integrated.
In a fifth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.
Drawings
The present application is further described below with reference to the drawings and examples.
Fig. 1 is a schematic flowchart of a person tracking method based on AI video analysis according to an embodiment of the present application;
fig. 2 is a schematic flowchart of determining whether a pending person is a target person according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating a process of drawing an action track according to an embodiment of the present disclosure;
fig. 4 is a schematic partial flowchart of a person tracking method based on AI video analysis according to an embodiment of the present application;
fig. 5 is a partial schematic flow chart of another person tracking method based on AI video analysis according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a person tracking apparatus based on AI video analysis according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a person determination module according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a trajectory drawing module according to an embodiment of the present disclosure;
fig. 9 is a schematic partial structural diagram of a person tracking apparatus based on AI video analysis according to an embodiment of the present application;
fig. 10 is a schematic partial structural diagram of another person tracking apparatus based on AI video analysis according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a person tracking system provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of a program product for implementing a person tracking method based on AI video analysis according to an embodiment of the present application.
Detailed Description
The present application is further described with reference to the accompanying drawings and the detailed description, and it should be noted that, in the present application, the embodiments or technical features described below may be arbitrarily combined to form a new embodiment without conflict.
Referring to fig. 1, an embodiment of the present application provides a person tracking method based on AI video analysis, applied to a person tracking system that includes a plurality of cameras disposed at different positions; the method includes steps S101 to S106. The plurality of cameras may include a first camera, a second camera, and a kth camera, where the range of k is one or more integers greater than 2 and k takes one value from this range each time.
Step S101: first video data of a target person is acquired with a first camera at a first time, first feature data of interest of the target person is obtained from the first video data, and physical feature data of the target person is acquired when the first feature data of interest is detected to meet a preset condition.
In a particular application, the physical characteristic data may include at least part of body characteristic data and/or at least part of face characteristic data. The at least part of the body feature data may include part of the body feature data or all of the body feature data, the body feature data may include one or more of height data, hair style data, hand contour data and leg contour data, and the at least part of the face feature data may include part of the face feature data or all of the face feature data.
In a specific application, the physical feature data of the target person in step S101 may be acquired either from the first video data or from pre-stored physical feature data of all persons.
The physical feature data can therefore be combined in various ways: it can be body feature data alone, face feature data alone, or a combination of the two, and partial body feature data together with partial face feature data can also serve as the physical feature data, which greatly reduces the difficulty of acquiring the physical feature data and reduces the amount of computation.
In a specific application, the feature data of interest may be body temperature data, and the preset condition may be being not less than a preset temperature threshold, such as 37.3 °C, 38 °C, or 39 °C.
A person with abnormal body temperature can thus be tracked by combining this feature data of interest with this preset condition.
In a specific application, the feature data of interest may be behavior data, the preset condition may be the occurrence of abnormal behavior, and the abnormal behavior may include any one of the following: smoking; theft; falling.
Therefore, the feature data of interest can be behavior data and the preset condition can be the occurrence of abnormal behavior; by combining the two, a person who exhibits abnormal behavior can be tracked.
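The two feature-of-interest variants just described (body temperature against a threshold, behavior against an abnormal-behavior list) can be sketched as a single predicate; the function name and default values below are assumptions for illustration, not part of the embodiment:

```python
def meets_preset_condition(feature, kind,
                           temp_threshold=37.3,
                           abnormal_behaviors=("smoking", "theft", "falling")):
    """Illustrative predicate for the two feature-of-interest types above.

    For body temperature the preset condition is 'not less than the
    threshold'; for behavior data it is 'an abnormal behavior occurred'."""
    if kind == "temperature":
        return feature >= temp_threshold
    if kind == "behavior":
        return feature in abnormal_behaviors
    raise ValueError("unknown feature kind: %r" % kind)
```

A temperature of 38 ℃ satisfies the default threshold of 37.3 ℃, while observed "walking" behavior does not match any listed abnormal behavior.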
Step S102: and judging whether a second camera for acquiring second video data of the target person exists or not based on the physical feature data of the target person within a preset time interval from the first time. The preset time interval may be set in advance, such as 1 hour, 2 hours, or 3 hours.
Step S103: when the second camera does not exist, storing the position data of the first camera to a first queue; when the second camera exists, second interesting feature data of the target person are obtained based on the second video data, and when the second interesting feature data are detected to meet the preset conditions, the position data of the first camera and the position data of the second camera are sequentially stored in the first queue.
In a specific application, when the second camera exists and it is detected that the second feature of interest data does not meet the preset condition, the target person is determined to be a person who does not need to be tracked.
Thus, when it is detected that the second feature of interest data does not satisfy the preset condition, the first feature of interest data is inconsistent with the second feature of interest data, for reasons that may be: the abnormal state of the target person is relieved within the preset time interval, or the data collected by the camera is wrong, at the moment, the possibility of the abnormal state of the target person is low, the target person does not need to be tracked, and the target person can be determined to be a person who does not need to be tracked.
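The branching of steps S102–S103 (and the "no longer needs tracking" case just discussed) can be sketched as follows; the return labels are hypothetical names added for this sketch:

```python
def update_queue_after_first_sighting(first_queue, first_cam_pos,
                                      second_cam_pos, condition_met):
    """Sketch of steps S102-S103: second_cam_pos is the second camera's
    position data, or None if no second camera acquired the target person
    within the preset time interval."""
    if second_cam_pos is None:
        # No re-detection: target state unclear, keep tracking.
        first_queue.append(first_cam_pos)
        return "tracking"
    if condition_met:
        # Condition satisfied twice: store both positions in order.
        first_queue.append(first_cam_pos)
        first_queue.append(second_cam_pos)
        return "tracking"
    # Second detection no longer meets the condition: no tracking needed.
    return "not_tracked"
```

Note that in the "not tracked" branch nothing is stored, matching the determination that the person need not be tracked.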
Step S104: acquiring kth video data of a person to be determined by using a kth camera, acquiring feature data of interest of the person to be determined based on the kth video data, and, when the feature data of interest of the person to be determined is detected to satisfy the preset condition, judging whether the person to be determined is the target person, wherein the kth camera is one of the plurality of cameras, the value range of k is one or more integers greater than 2, and each time k takes one value from that range.
Referring to fig. 2, in some embodiments, the method for determining whether the person to be determined is the target person in step S104 may include steps S201 to S203.
Step S201: and acquiring the physical feature data of the undetermined person.
Step S202: and acquiring the similarity between the undetermined person and the target person based on the physical feature data of the undetermined person and the physical feature data of the target person.
Step S203: and when the similarity is not less than a preset similarity threshold, determining that the undetermined person is the target person. The preset similarity threshold may be set in advance, for example 80%, 85%, or 90%.
Therefore, the similarity between the undetermined person and the target person can be obtained by combining the body appearance characteristic data of the undetermined person and the body appearance characteristic data of the target person, when the similarity is not smaller than a preset similarity threshold value, the similarity between the undetermined person and the target person is higher, and the undetermined person can be determined to be the target person.
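Steps S201–S203 can be sketched as below; the embodiment does not specify the similarity measure, so cosine similarity over feature vectors is assumed here purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def is_target_person(pending_features, target_features, threshold=0.8):
    """Steps S202-S203: the pending person is the target person when the
    similarity is not less than the preset similarity threshold."""
    return cosine_similarity(pending_features, target_features) >= threshold
```

Identical feature vectors give similarity 1.0 (above any threshold), while orthogonal ones give 0.0.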
In a specific application, when the similarity is smaller than a preset similarity threshold, it is determined that the undetermined person is not the target person, the undetermined person is marked as a new target person, and steps S101 to S106 may be performed to perform trajectory tracking on the new target person.
Step S105: and when the pending person is the target person, storing the position data of the kth camera to the first queue.
The value range of k is not limited in the embodiment of the present application; it may be a single integer or multiple integers. For example, k may take the single value 5, or its value range may be a plurality of consecutive integers greater than 2, such as 3, 4, 5, 6, 7, 8, 9, …, or a plurality of non-consecutive integers greater than 2, such as 7, 10, 11, 14, …
In a preferred embodiment, the value of k may traverse each integer greater than 2.
In a specific application, k may be 5, and at this time, the first queue may store position data of the first camera, the second camera, and the fifth camera, or store position data of the first camera and the fifth camera.
In another specific application, k may be a continuous integer such as 3, 4, 5, 6, 7, 8, 9, etc., and at this time, position data of multiple cameras such as the first camera, the second camera, the third camera, the fourth camera, … …, the eighth camera, and the ninth camera may be stored in the first queue, or position data of multiple cameras such as the first camera, the third camera, the fourth camera, … …, the eighth camera, and the ninth camera may be stored in the first queue, and the position data in the first queue is relatively rich.
In another specific application, k may be a discrete integer such as 7, 10, 11, 14, etc., where the first queue may store position data of multiple cameras such as the first camera, the second camera, the seventh camera, the tenth camera, the eleventh camera, and the fourteenth camera, or store position data of multiple cameras such as the first camera, the seventh camera, the tenth camera, the eleventh camera, and the fourteenth camera, and the position data in the first queue may be less, but tracking of the target person may also be achieved by using the position data of some cameras in the first queue. Particularly, in an area with dense cameras, 3 cameras can be arranged in a corridor, the positions of the cameras are very close, the position data of all the cameras are not needed to be stored, the position data of part of the cameras can be selected to be stored, the stored data volume is small, and the occupied space is small.
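The dense-corridor case can be illustrated with an assumed distance-based thinning heuristic; the embodiment only says that position data of part of the cameras may be stored, so the `min_gap` rule below is an invention of this sketch:

```python
def select_sparse_positions(positions, min_gap=5.0):
    """Keep a camera position only if it is at least `min_gap` (assumed
    units, e.g. metres) away from the last kept position - an assumed
    heuristic for skipping closely spaced corridor cameras."""
    kept = []
    for x, y in positions:
        if not kept:
            kept.append((x, y))
            continue
        lx, ly = kept[-1]
        if ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5 >= min_gap:
            kept.append((x, y))
    return kept
```

Three cameras one metre apart collapse to a single stored position, so the stored data volume stays small while the trajectory remains drawable.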
Step S106: and drawing the action track of the target person according to the position data in the first queue.
Therefore, on one hand, after the first interesting feature data meeting the preset conditions are acquired, the target person becomes a current concerned object, and whether the video data of the target person appears or not can be detected based on the physical feature data of the target person. Specifically, within a preset time interval, it may be determined whether a second camera that acquires second video data of the target person exists.
When the second camera does not exist, the situation shows that new interesting characteristic data of the target person is not obtained, the state of the target person is unclear, and the target person needs to be tracked, and at the moment, the position data of the first camera can be stored in the first queue, so that the position data of the first camera can be conveniently and subsequently tracked;
when the second camera exists and the second interested feature data are detected to meet the preset conditions, the first interested feature data and the second interested feature data both meet the preset conditions, the probability of abnormal state of the target person is high, the target person needs to be tracked, and at the moment, the position data of the first camera and the position data of the second camera can be sequentially stored in the first queue, so that the tracking can be conveniently carried out by subsequently combining the position data of the first camera and the position data of the second camera.
On the other hand, after the personnel needing to be tracked are determined, corresponding interested feature data can be obtained by utilizing video data for all the personnel, the personnel meeting the preset conditions are obtained through screening, when the interested feature data of the undetermined personnel corresponding to the kth camera is detected to meet the preset conditions, whether the undetermined personnel are the target personnel can be judged, and when the undetermined personnel are the target personnel, the position data of the kth camera is stored into the first queue, so that the position data in the first queue are the position data of the target personnel, and the action track of the target personnel can be drawn according to the position data in the first queue. With the expansion of the location data in the first queue, continuous tracking of the target person can be achieved.
In summary, the camera can be used for acquiring video data of the target person, so as to obtain feature data of interest of the target person, when it is detected that the feature data of interest of the target person meets the preset condition, whether the target person needs to be tracked can be judged, if yes, the position data of the camera can be stored in the first queue, and the action track of the target person is drawn by using the position data in the first queue.
By way of example: take person A as the target person, the feature data of interest as body temperature data, the preset condition as body temperature not lower than 37.3 ℃, the preset time interval as 2 hours, and the preset similarity threshold as 80%.
At 8:10 a.m., the first camera acquires the first video data of A; the first video data is input into a convolutional neural network to obtain the first feature data of interest of A, which is 38 ℃ and satisfies the preset condition, so the physical feature data of A is acquired based on the first video data.
Before 10:10 a.m., based on the physical feature data of A, it is detected that a second camera has acquired second video data of A; the second video data is input into the convolutional neural network to obtain the second feature data of interest of A, which is 38.1 ℃ and satisfies the preset condition, so the position data of the first camera and the position data of the second camera are sequentially stored in a first queue.
A third camera acquires third video data of a person to be determined; the third video data is input into the convolutional neural network to obtain the feature data of interest of the person to be determined, which is 38.2 ℃ and satisfies the preset condition. Based on the physical feature data of the person to be determined and the physical feature data of A, their similarity is obtained as 85%, which is greater than the preset similarity threshold, so the person to be determined is determined to be A; the position data of the third camera is stored in the first queue, and the action trajectory of A is drawn according to the position data in the first queue.
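The worked example above can be replayed in miniature; only the queue-building part of the flow is simulated, and the camera positions are made-up coordinates:

```python
def run_example():
    """Replays the worked example: three sightings of person A, each with a
    body temperature meeting the preset condition (>= 37.3 C), so each
    camera's (made-up) position is stored in the first queue in order."""
    temp_threshold = 37.3
    first_queue = []
    sightings = [            # (camera, assumed position, measured temperature)
        ("camera_1", (0, 0), 38.0),
        ("camera_2", (10, 0), 38.1),
        ("camera_3", (10, 10), 38.2),
    ]
    for _cam, pos, temp in sightings:
        if temp >= temp_threshold:
            first_queue.append(pos)
    return first_queue
```

The resulting queue holds three positions in sighting order, which is exactly the input step S106 needs to draw the trajectory.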
In some embodiments, a method of obtaining feature data of interest based on video data may include: and inputting the video data into a convolutional neural network to obtain corresponding interested feature data.
The method for acquiring the first feature of interest data based on the first video data in step S101 may include: and inputting the first video data into the convolutional neural network to obtain the first interested feature data.
The method for acquiring the second feature of interest data based on the second video data in step S103 may include: and inputting the second video data into the convolutional neural network to obtain the second interested feature data.
The method for acquiring the kth feature of interest data based on the kth video data in step S104 may include: and inputting the kth video data into the convolutional neural network to obtain the kth interested feature data.
Therefore, the corresponding interesting features can be extracted from the video data by utilizing the convolutional neural network, and compared with the method for manually acquiring the interesting feature data, the method does not need to observe target personnel in a short distance, and is high in efficiency and high in automation degree.
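The feature-extraction idea can be shown at its smallest scale: the embodiment does not disclose the network architecture, so the sketch below illustrates only the basic convolution operation such a network stacks (here in one dimension, pure Python):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (strictly, cross-correlation) - the
    elementary operation a convolutional neural network stacks, with
    nonlinearities and pooling, to extract features from input data."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

With the difference kernel `[1, 0, -1]` the output highlights changes in the input, which is the kind of local pattern a learned kernel detects in video frames.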
The prior art with publication number CN112418055A discloses a scheduling method and a person trajectory tracking method based on video analysis, in which multiple single classifiers are combined by weighted accumulation into a cascaded enhanced classifier to identify and detect a person's body temperature; its training is complex and its recognition rate is low. In the embodiment of the present application, by contrast, a convolutional neural network extracts the corresponding features of interest from the video data; the training is simple and the recognition rate is high.
Referring to fig. 3, in some embodiments, the step S106 may include steps S301 to S303.
Step S301: and loading the map and the three-dimensional model of the preset area by using a browser.
In a specific application, the browser may load the map and the three-dimensional model of the preset area through a JavaScript-based Geographic Information System (GIS) library.
Therefore, the GIS library based on the JavaScript has good expandability and can be seamlessly integrated with other information services in the browser, so that flexible and changeable GIS application is established.
Step S302: based on the position data in the first queue, a plurality of tracking points are plotted at corresponding positions of the three-dimensional model.
Step S303: and connecting the plurality of tracking points according to the sequence of storing the position data corresponding to each tracking point to the first queue.
Therefore, the plurality of tracking points can be connected according to the sequence of storing the position data corresponding to each tracking point to the first queue, so that the action track connected by the target person according to the time sequence is obtained, and the action path and the action range of the target person can be browsed more intuitively by combining a map and a three-dimensional model of a preset area.
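Steps S302–S303 reduce to ordering: each segment of the action trajectory connects consecutive queue entries. A minimal sketch (plain coordinate tuples stand in for positions in the three-dimensional model):

```python
def build_polyline(first_queue):
    """Steps S302-S303 in miniature: segments follow the order in which
    position data was stored in the first queue."""
    return [(first_queue[i], first_queue[i + 1])
            for i in range(len(first_queue) - 1)]
```

A queue of n positions yields n-1 segments; a single-position queue yields no segments, i.e. a lone tracking point.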
In a specific application, the step S302 may include: based on the position data in the first queue, drawing a plurality of tracking points at corresponding positions of the three-dimensional model using WebGL (Web Graphics Library, a 3D drawing protocol).
The step S303 may include: and connecting the plurality of tracking points by using a WebGL technology according to the sequence of storing the position data corresponding to each tracking point into the first queue.
Therefore, the WebGL technology can perform graphic rendering by utilizing a bottom graphic hardware acceleration function, has a cross-platform characteristic, and is suitable for different operating systems.
Referring to fig. 4, in some embodiments, the method may further include step S107.
Step S107: and drawing an animation of the three-dimensional point moving along the connecting line so as to simulate the dynamic effect of the moving process of the target person, wherein the three-dimensional point passes through the plurality of tracking points according to the sequence of storing the position data corresponding to each tracking point into the first queue.
Therefore, by drawing the animation of the three-dimensional point moving along the connecting line, the dynamic effect of the moving process of the target person can be shown, and the action sequence and the action path of the target person can be shown more intuitively.
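The moving-point animation of step S107 amounts to interpolating along the connected tracking points; the sketch below shows that interpolation in plain Python (the real rendering would be done with WebGL in the browser, and 2-D tuples stand in for the three-dimensional point):

```python
def point_along_track(track, t):
    """Position of the moving point at animation parameter t in [0, 1],
    linearly interpolated along the connected tracking points; the point
    passes the tracking points in their queue-storage order."""
    if t <= 0:
        return track[0]
    if t >= 1:
        return track[-1]
    scaled = t * (len(track) - 1)
    i = int(scaled)
    frac = scaled - i
    (x0, y0), (x1, y1) = track[i], track[i + 1]
    return (x0 + frac * (x1 - x0), y0 + frac * (y1 - y0))
```

Sampling t from 0 to 1 in small steps and redrawing the point each frame produces the dynamic effect of the movement process.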
In a specific application, the step S107 may include: and drawing an animation of the three-dimensional point moving along the connecting line by using a WebGL technology so as to simulate the dynamic effect of the moving process of the target person, wherein the three-dimensional point passes through the plurality of tracking points according to the sequence of storing the position data corresponding to each tracking point into the first queue.
Therefore, the WebGL technology can realize the production of Web interactive three-dimensional animation without the support of any browser plug-in, and has wide application range.
Referring to fig. 5, in some embodiments, the method may further include steps S108 to S109.
Step S108: and when detecting that the first interested feature data meet the preset condition, generating an alarm record aiming at the target person.
Step S109: when the second camera does not exist or the second interested feature data meets the preset condition, marking the target person as a tracked person.
Therefore, on one hand, when the first interesting characteristic data is detected to meet the preset conditions, the target personnel have a large possibility of being abnormal personnel, and an alarm record aiming at the target personnel can be generated, so that the follow-up investigation is facilitated;
on the other hand, when the second camera does not exist, it is indicated that new interesting feature data of the target person is not obtained, the state of the target person is not clear, and the target person needs to be tracked at the moment; or when the second interested feature data is detected to meet the preset condition, the first interested feature data and the second interested feature data both meet the preset condition, the probability that the state of the target person is abnormal is high, and the target person also needs to be tracked at the moment.
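Steps S108–S109 can be sketched as one routine; the alarm-record fields are assumptions made for this sketch:

```python
import time

def process_first_detection(target_id, condition_met, second_camera_found,
                            second_condition_met, alarm_log, tracked):
    """Sketch of steps S108-S109: generate an alarm record when the first
    feature of interest meets the condition, then mark the target as tracked
    when either no second camera re-detected them or the second detection
    also met the condition."""
    if condition_met:
        alarm_log.append({"person": target_id, "time": time.time()})
        if (not second_camera_found) or second_condition_met:
            tracked.add(target_id)
```

A person whose second detection fails the condition thus keeps their alarm record (for later investigation) but is not marked as tracked.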
Referring to fig. 6, an embodiment of the present application further provides a person tracking device based on AI video analysis, and a specific implementation manner of the person tracking device is consistent with technical effects achieved by the implementation manners described in the embodiments of the person tracking method based on AI video analysis, and some contents are not repeated.
The device is applied to personnel tracking system, personnel tracking system is including setting up in a plurality of cameras of different positions, the device includes: the physical form acquiring module 101 is configured to acquire first video data of a target person by using a first camera at a first time, acquire first interested feature data of the target person based on the first video data, and acquire physical form feature data of the target person when it is detected that the first interested feature data meets the preset condition; the tracking judgment module 102 is configured to judge whether a second camera for acquiring second video data of the target person exists or not based on the physical feature data of the target person within a preset time interval from the first time; a primary storage module 103, configured to store, when the second camera does not exist, the position data of the first camera to a first queue; when the second camera exists, second interesting feature data of the target person are obtained based on the second video data, and when the second interesting feature data are detected to meet the preset conditions, the position data of the first camera and the position data of the second camera are sequentially stored in the first queue; the person judgment module 104 is configured to acquire kth video data of a person to be determined by using a kth camera, acquire feature data of interest of the person to be determined based on the kth video data, and judge whether the person to be determined is the target person when it is detected that the feature data of interest of the person to be determined meets the preset condition, where the kth camera is one of the multiple cameras, a value range of k is one or more integers greater than 2, and a value of k is one of the value ranges each time; a continuing storage module 105, configured to store the position data of the kth camera to the first queue when the person to be determined 
is the target person; and a trajectory drawing module 106, configured to draw an action trajectory of the target person according to the position data in the first queue.
In some embodiments, the physical feature module 101 may be configured to input the first video data into a convolutional neural network, resulting in the first feature data of interest; the primary storage module 103 may be configured to input the second video data into the convolutional neural network, so as to obtain the second feature data of interest; the personnel judgment module 104 may be configured to input the kth video data into the convolutional neural network to obtain the kth feature of interest data.
In a particular application, the physical characteristic data may include at least part of body characteristic data and/or at least part of face characteristic data.
Referring to fig. 7, in some embodiments, the people determination module 104 may include: a physical feature unit 1041, configured to obtain physical feature data of the person to be determined; a similarity obtaining unit 1042, configured to obtain a similarity between the undetermined person and the target person based on the physical feature data of the undetermined person and the physical feature data of the target person; the person determining unit 1043 may be configured to determine that the person to be determined is the target person when the similarity is not less than a preset similarity threshold.
Referring to fig. 8, in some embodiments, the trajectory mapping module 106 may include: a model loading unit 1061, configured to load a map and a three-dimensional model of a preset area by using a browser; a tracking point drawing unit 1062, configured to draw a plurality of tracking points at corresponding positions of the three-dimensional model based on the position data in the first queue; the trace point connection unit 1063 may be configured to connect the plurality of trace points according to a sequence in which the position data corresponding to each trace point is stored in the first queue.
Referring to fig. 9, in some embodiments, the apparatus may further include: the animation drawing module 107 may be configured to draw an animation in which a three-dimensional point moves along the connection line, so as to simulate a dynamic effect of the movement process of the target person, where the three-dimensional point passes through the plurality of tracking points according to an order in which the position data corresponding to each tracking point is stored in the first queue.
In a specific application, the browser can load the map and the three-dimensional model of the preset area through a JavaScript-based GIS library.
Referring to fig. 10, in some embodiments, the apparatus may further include: a record generating module 108, configured to generate an alarm record for the target person when it is detected that the first feature of interest data satisfies the preset condition; the person marking module 109 may be configured to mark the target person as a tracked person when the second camera is not present or when it is detected that the second feature of interest data satisfies the preset condition.
In a specific application, the characteristic data of interest may be body temperature data, and the preset condition may be not less than a preset temperature threshold.
In a specific application, the feature data of interest may be behavior data, the preset condition may be the occurrence of abnormal behavior, and the abnormal behavior may include any one of the following: smoking; theft; falling.
Referring to fig. 11, an embodiment of the present application further provides an electronic device 200, where the electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
The memory 210 may include readable media in the form of volatile memory, such as random access memory (RAM) 211 and/or cache memory 212, and may further include read-only memory (ROM) 213.
The memory 210 further stores a computer program, and the computer program can be executed by the processor 220, so that the processor 220 executes the steps of the person tracking method based on AI video analysis in the embodiment of the present application, and a specific implementation manner of the method is consistent with the implementation manner and the achieved technical effect described in the embodiment of the person tracking method based on AI video analysis, and some contents are not described again.
Memory 210 may also include a utility 214 having at least one program module 215, such program modules 215 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Accordingly, the processor 220 may execute the computer programs described above, and may execute the utility 214.
Bus 230 may be a local bus representing one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or any other type of bus structure.
The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may be through input-output interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
In some embodiments, the electronic device 200 is applied to a person tracking system that includes multiple cameras disposed at different locations.
Referring to fig. 12, an embodiment of the present application further provides a person tracking system 10, and a specific implementation manner of the person tracking system is consistent with the implementation manner and the achieved technical effect described in the embodiment of the person tracking method based on AI video analysis, and a part of the contents are not repeated.
In some embodiments, the people tracking system 10 includes a plurality of cameras disposed at different locations, and the people tracking system 10 further includes any of the electronic devices 200 described above.
In a specific application, the camera may be integrated with the electronic device 200.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is used for storing a computer program, and when the computer program is executed, the steps of the person tracking method based on AI video analysis in the embodiment of the present application are implemented, and a specific implementation manner of the method is consistent with the implementation manner and the achieved technical effect described in the embodiment of the person tracking method based on AI video analysis, and some contents are not repeated.
Fig. 13 shows a program product 300 provided by the present embodiment for implementing the above-described person tracking method based on AI video analysis, which may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be executed on a terminal device, such as a personal computer. However, the program product 300 of the present invention is not so limited, and in this application, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Program product 300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that can communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the C language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
While the present application is described in terms of various aspects, including exemplary embodiments, the principles of the invention should not be limited to the disclosed embodiments, but are also intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. A personnel tracking method based on AI video analysis, applied to a personnel tracking system, the personnel tracking system comprising a plurality of cameras disposed at different positions, the method comprising the following steps:
at a first time, acquiring first video data of a target person with a first camera, obtaining first feature data of interest of the target person based on the first video data, and acquiring physical feature data of the target person upon detecting that the first feature data of interest meets a preset condition;
within a preset time interval from the first time, judging, based on the physical feature data of the target person, whether there exists a second camera that acquires second video data of the target person;
when the second camera does not exist, storing position data of the first camera into a first queue; when the second camera exists, obtaining second feature data of interest of the target person based on the second video data, and, upon detecting that the second feature data of interest meets the preset condition, storing the position data of the first camera and position data of the second camera into the first queue in sequence;
acquiring kth video data of a person to be determined with a kth camera, obtaining feature data of interest of the person to be determined based on the kth video data, and, when the feature data of interest of the person to be determined meets the preset condition, judging whether the person to be determined is the target person, wherein the kth camera is one of the plurality of cameras, k takes one or more integer values greater than 2, and one value of k is used at a time;
when the person to be determined is the target person, storing position data of the kth camera into the first queue; and
drawing an action trajectory of the target person according to the position data in the first queue.
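The queue bookkeeping described above can be illustrated with a short sketch. The camera IDs, their positions, and the detection events below are hypothetical stand-ins for the claimed cameras and matching steps; only the first-queue mechanics follow the claim.

```python
from collections import deque

# Hypothetical camera layout; positions are illustrative assumptions.
CAMERA_POSITIONS = {1: (0.0, 0.0), 2: (10.0, 0.0), 3: (10.0, 10.0), 4: (0.0, 10.0)}

def track_target(detections):
    """detections: ordered list of (camera_id, is_target) events.

    Each camera that confirms the target person contributes its position
    to a FIFO queue; reading the queue back yields the action trajectory."""
    first_queue = deque()
    for camera_id, is_target in detections:
        if is_target:
            first_queue.append(CAMERA_POSITIONS[camera_id])
    return list(first_queue)  # trajectory points in visit order

# Target confirmed by cameras 1, 2, and 4; camera 3 saw someone else.
trajectory = track_target([(1, True), (2, True), (3, False), (4, True)])
```

Because the queue is first-in, first-out, the trajectory points come out in the same order the cameras confirmed the target, which is what makes the later trajectory-drawing step possible.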
2. The AI video analysis-based personnel tracking method according to claim 1, wherein obtaining feature data of interest based on video data comprises:
inputting the video data into a convolutional neural network to obtain the corresponding feature data of interest.
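As an illustrative sketch of passing frame data through a convolutional layer to obtain a feature vector, the minimal NumPy example below applies random, untrained kernels with a ReLU activation and global average pooling. A deployed system would use a trained network; the frame size, kernel count, and pooling choice here are assumptions for the sketch.

```python
import numpy as np

def conv2d(frame, kernel):
    """Valid-mode 2D convolution of one grayscale frame with one kernel."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU activation

def extract_features(frame, kernels):
    # Global-average-pool each feature map down to one scalar per kernel.
    return np.array([conv2d(frame, k).mean() for k in kernels])

rng = np.random.default_rng(0)
frame = rng.random((32, 32))                        # stand-in grayscale frame
kernels = [rng.standard_normal((3, 3)) for _ in range(8)]
features = extract_features(frame, kernels)         # 8-dimensional feature vector
```

The resulting vector plays the role of the "feature data of interest": a fixed-length numeric description of the frame that downstream steps can compare and threshold.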
3. The AI video analysis-based personnel tracking method according to claim 1, wherein the physical feature data comprises at least partial body feature data and/or at least partial facial feature data.
4. The AI video analysis-based personnel tracking method according to claim 1, wherein the judging whether the person to be determined is the target person comprises:
acquiring physical feature data of the person to be determined;
obtaining a similarity between the person to be determined and the target person based on the physical feature data of the person to be determined and the physical feature data of the target person; and
when the similarity is not less than a preset similarity threshold, determining that the person to be determined is the target person.
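One common way to realize the similarity comparison is cosine similarity between the two physical-feature vectors, thresholded against the preset value. The claim does not fix a particular similarity measure; the choice of cosine similarity and the 0.8 threshold below are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_target(person_features, target_features, threshold=0.8):
    """Claim 4's rule: match when similarity is not less than the threshold."""
    return cosine_similarity(person_features, target_features) >= threshold

same = is_target([1.0, 0.9, 0.1], [1.0, 1.0, 0.0])   # near-identical features
diff = is_target([0.0, 0.1, 1.0], [1.0, 1.0, 0.0])   # dissimilar features
```

Thresholding rather than exact matching is what lets the system re-identify the same person across cameras despite pose and lighting changes.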
5. The AI video analysis-based personnel tracking method according to claim 1, wherein the drawing the action trajectory of the target person according to the position data in the first queue comprises:
loading a map and a three-dimensional model of a preset area with a browser;
drawing a plurality of tracking points at corresponding positions of the three-dimensional model based on the position data in the first queue; and
connecting the plurality of tracking points in the order in which the position data corresponding to each tracking point was stored into the first queue.
6. The AI video analysis-based personnel tracking method according to claim 5, further comprising:
drawing an animation of a three-dimensional point moving along the connecting line to simulate the dynamic effect of the target person's movement, wherein the three-dimensional point passes through the plurality of tracking points in the order in which the position data corresponding to each tracking point was stored into the first queue.
7. The AI video analysis-based personnel tracking method according to claim 5, wherein the browser loads the map and the three-dimensional model of the preset area through a JavaScript-based GIS library.
8. The AI video analysis-based personnel tracking method according to claim 1, further comprising:
generating an alarm record for the target person upon detecting that the first feature data of interest meets the preset condition; and
marking the target person as a tracked person when the second camera does not exist or when the second feature data of interest meets the preset condition.
9. The AI video analysis-based personnel tracking method according to claim 1, wherein the feature data of interest is body temperature data, and the preset condition is that the body temperature data is not less than a preset temperature threshold.
10. The AI video analysis-based personnel tracking method according to claim 1, wherein the feature data of interest is behavior data, the preset condition is the occurrence of an abnormal behavior, and the abnormal behavior comprises any one of: smoking, theft, and falling.
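Claims 9 and 10 describe two concrete instantiations of the preset condition, which can be sketched as simple predicates. The 37.3-degree threshold and the behavior labels below are illustrative assumptions; the claims only fix the shape of the test, not the values.

```python
# Claim 9 variant: a body-temperature threshold (value is an assumption).
TEMP_THRESHOLD = 37.3
# Claim 10 variant: the set of recognized abnormal behaviors.
ABNORMAL_BEHAVIORS = {"smoking", "theft", "falling"}

def temperature_condition(body_temp):
    """Trigger when body temperature is not less than the preset threshold."""
    return body_temp >= TEMP_THRESHOLD

def behavior_condition(behavior):
    """Trigger when a recognized abnormal behavior occurs."""
    return behavior in ABNORMAL_BEHAVIORS

alerts = [temperature_condition(38.1),   # fever: condition met
          behavior_condition("smoking"), # abnormal behavior: condition met
          behavior_condition("walking")] # normal behavior: condition not met
```

Either predicate can serve as the "preset condition" that gates both the initial alarm in claim 8 and the decision to start storing camera positions in claim 1.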
11. A personnel tracking apparatus based on AI video analysis, applied to a personnel tracking system, the personnel tracking system comprising a plurality of cameras disposed at different positions, the apparatus comprising:
a physical feature acquisition module, configured to acquire, at a first time, first video data of a target person with a first camera, obtain first feature data of interest of the target person based on the first video data, and acquire physical feature data of the target person upon detecting that the first feature data of interest meets a preset condition;
a tracking judgment module, configured to judge, within a preset time interval from the first time and based on the physical feature data of the target person, whether there exists a second camera that acquires second video data of the target person;
an initial storage module, configured to store position data of the first camera into a first queue when the second camera does not exist; and, when the second camera exists, obtain second feature data of interest of the target person based on the second video data and, upon detecting that the second feature data of interest meets the preset condition, store the position data of the first camera and position data of the second camera into the first queue in sequence;
a person judgment module, configured to acquire kth video data of a person to be determined with a kth camera, obtain feature data of interest of the person to be determined based on the kth video data, and, when the feature data of interest of the person to be determined meets the preset condition, judge whether the person to be determined is the target person, wherein the kth camera is one of the plurality of cameras, k takes one or more integer values greater than 2, and one value of k is used at a time;
a continued storage module, configured to store position data of the kth camera into the first queue when the person to be determined is the target person; and
a trajectory drawing module, configured to draw an action trajectory of the target person according to the position data in the first queue.
12. An electronic device, applied to a personnel tracking system, the personnel tracking system comprising a plurality of cameras disposed at different positions;
wherein the electronic device comprises a memory storing a computer program and a processor that implements the steps of the method according to any one of claims 1-10 when executing the computer program.
13. A personnel tracking system, comprising a plurality of cameras disposed at different positions, the personnel tracking system further comprising the electronic device of claim 12.
14. The personnel tracking system of claim 13, wherein the camera and the electronic device are integrated.
15. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN202110705650.6A 2021-06-24 2021-06-24 Personnel tracking method based on AI video analysis and related device Active CN113449627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110705650.6A CN113449627B (en) 2021-06-24 2021-06-24 Personnel tracking method based on AI video analysis and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110705650.6A CN113449627B (en) 2021-06-24 2021-06-24 Personnel tracking method based on AI video analysis and related device

Publications (2)

Publication Number Publication Date
CN113449627A CN113449627A (en) 2021-09-28
CN113449627B true CN113449627B (en) 2022-08-09

Family

ID=77812431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110705650.6A Active CN113449627B (en) 2021-06-24 2021-06-24 Personnel tracking method based on AI video analysis and related device

Country Status (1)

Country Link
CN (1) CN113449627B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226640B (en) * 2007-12-21 2010-08-18 西北工业大学 Method for capturing movement based on multiple binocular stereovision
CN102096528A (en) * 2010-12-27 2011-06-15 富泰华工业(深圳)有限公司 Touch input device and touch input method
CN103842036B (en) * 2011-09-23 2016-05-11 可利爱驰有限公司 Obtain the method and system of the actual motion track of subject
US11743431B2 (en) * 2013-03-15 2023-08-29 James Carey Video identification and analytical recognition system
CN103400371B (en) * 2013-07-09 2016-11-02 河海大学 A kind of multi-cam cooperative monitoring Apparatus and method for
US20150116501A1 (en) * 2013-10-30 2015-04-30 Sony Network Entertainment International Llc System and method for tracking objects
CN104915655A (en) * 2015-06-15 2015-09-16 西安电子科技大学 Multi-path monitor video management method and device
EP3590008A4 (en) * 2017-03-03 2020-12-09 Aqueti Incorporated Multi-camera system for tracking one or more objects through a scene
CN107240124B (en) * 2017-05-19 2020-07-17 清华大学 Cross-lens multi-target tracking method and device based on space-time constraint
CN112509264B (en) * 2020-11-19 2022-11-18 深圳市欧瑞博科技股份有限公司 Abnormal intrusion intelligent shooting method and device, electronic equipment and storage medium
CN112819857A (en) * 2021-01-22 2021-05-18 上海依图网络科技有限公司 Target tracking method, target tracking device, medium, and electronic apparatus

Also Published As

Publication number Publication date
CN113449627A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
Tran et al. Human activities recognition in android smartphone using support vector machine
US20160133297A1 (en) Dynamic Video Summarization
CN112820066B (en) Object-based alarm processing method, device, equipment and storage medium
US11379741B2 (en) Method, apparatus and storage medium for stay point recognition and prediction model training
CN115205330A (en) Track information generation method and device, electronic equipment and computer readable medium
CN113449627B (en) Personnel tracking method based on AI video analysis and related device
CN113922502A (en) Intelligent video operation and maintenance management system and management method
CN113537122A (en) Motion recognition method and device, storage medium and electronic equipment
CN112906552A (en) Inspection method and device based on computer vision and electronic equipment
CN112528825A (en) Station passenger recruitment service method based on image recognition
CN115170510B (en) Focus detection method and device, electronic equipment and readable storage medium
CN114238038B (en) Board card temperature monitoring method, device, equipment and readable storage medium
CN113987102B (en) Interactive power data visualization method and system
CN116563499A (en) Intelligent interaction system of transformer substation based on meta-universe technology
CN114092608B (en) Expression processing method and device, computer readable storage medium and electronic equipment
CN113762017B (en) Action recognition method, device, equipment and storage medium
CN113571046A (en) Artificial intelligent speech recognition analysis method, system, device and storage medium
CN111736539B (en) Monitoring data display method, device, system, server and storage medium
CN115311591A (en) Early warning method and device for abnormal behaviors and intelligent camera
CN112492272A (en) Service control method, device, equipment and medium
CN114721915B (en) Point burying method and device
CN111124387A (en) Modeling system, method, computer device and storage medium for machine learning platform
CN113360712B (en) Video representation generation method and device and electronic equipment
US20220157021A1 (en) Park monitoring methods, park monitoring systems and computer-readable storage media
US20210056272A1 (en) Object detection-based control of projected content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant