CN114049608A - Track monitoring method and device, computer equipment and storage medium

Info

Publication number
CN114049608A
Authority
CN
China
Prior art keywords: image, face image, target, monitoring, preset
Legal status: Pending
Application number
CN202111384870.XA
Other languages
Chinese (zh)
Inventor
石健
陈奕海
曾海涛
谌军
熊双成
包威
杨洋
陈稚华
袁振峰
肖佳洁
郑权
祝克伟
廖名洋
熊杭
谭明
周春阳
廖毅
卢嵩
孙泽楠
王田
普新林
饶梓耀
陈文超
贺新禹
廖川
Current Assignee
Guangzhou Bureau of Extra High Voltage Power Transmission Co
Original Assignee
Guangzhou Bureau of Extra High Voltage Power Transmission Co
Application filed by Guangzhou Bureau of Extra High Voltage Power Transmission Co
Priority application: CN202111384870.XA


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments


Abstract

The present application relates to a trajectory monitoring method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring monitoring images acquired by image acquisition equipment arranged in a target place; determining a face image of a target person in the monitoring image as a target face image; filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person; and sequencing each action track node in the action track record table according to corresponding acquisition time to obtain the action track of the target person. The method can improve the accuracy rate of track monitoring.

Description

Track monitoring method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a trajectory monitoring method, apparatus, computer device, storage medium, and computer program product.
Background
In many important places, in order to prevent illegal persons from damaging important facilities or entering a non-externally open area, it is necessary to perform trajectory monitoring on persons entering the important places.
However, in the conventional technology, people entering an important place are merely photographed by cameras, and a target person is then searched for on the monitoring interface and his or her action track determined manually.
Therefore, the conventional technology has the problem of low accuracy of track monitoring.
Disclosure of Invention
In view of the above, it is necessary to provide a trajectory monitoring method, apparatus, computer device, computer readable storage medium and computer program product capable of improving the accuracy of trajectory monitoring.
In a first aspect, the present application provides a trajectory monitoring method. The method comprises the following steps:
acquiring monitoring images acquired by image acquisition equipment arranged in a target place;
determining a face image of a target person in the monitoring image as a target face image;
filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person;
and sequencing each action track node in the action track record table according to corresponding acquisition time to obtain the action track of the target person.
In one embodiment, the determining of the face image of the target person in the monitoring image as the target face image includes:
carrying out face detection on the monitoring image, and determining a face image in the monitoring image;
performing feature extraction on the face image in the monitoring image to obtain face image features corresponding to the face image;
comparing the facial image features with preset facial image features in a preset facial database, and determining personnel corresponding to the facial image features; the preset face database stores the corresponding relation between the preset face image characteristics and preset personnel;
taking the person corresponding to the facial image feature as the target person;
and taking the face image with the target person as the target face image.
In one embodiment, before the step of performing face detection on the monitoring image and acquiring a face image in the monitoring image, the method further includes:
performing multilayer wavelet decomposition on the monitoring image to obtain multilayer wavelet coefficients corresponding to the monitoring image;
determining a noise threshold corresponding to each layer of wavelet coefficient according to the total number of the multilayer wavelet coefficients and the layer ordinal number corresponding to each layer of wavelet coefficient;
denoising the multilayer wavelet coefficients based on a wavelet threshold denoising function of a noise threshold corresponding to each layer of wavelet coefficients to obtain denoised multilayer wavelet coefficients;
and reconstructing the monitoring image based on the denoised multilayer wavelet coefficient.
In one embodiment, the performing face detection on the monitoring image and determining a face image in the monitoring image includes:
performing feature detection on the monitoring image to obtain a first area comprising a plurality of detection targets;
performing foreground filtering on the first area to obtain a second area;
performing skin color filtering on the second area to obtain a third area;
and performing directional gradient histogram filtration on the third area to obtain a face image in the monitoring image.
In one embodiment, the performing feature extraction on the face image in the monitoring image to obtain a face image feature corresponding to the face image includes:
partitioning the face image to obtain face image partitions;
carrying out multi-scale image feature extraction on the face image blocks to obtain multi-scale face image features corresponding to the face image blocks;
combining the multi-scale face image features to obtain combined multi-scale face image features;
and carrying out normalization processing on the combined multi-scale face image characteristics to obtain face image characteristics corresponding to the face image.
In one embodiment, the comparing the facial image features with preset facial image features in a preset facial database to determine people corresponding to the facial image features includes:
calculating the feature similarity of the facial image features and each preset facial image feature in the preset facial database;
and if the preset human face image features with the feature similarity meeting the preset conditions exist, taking the personnel corresponding to the preset human face image features as the personnel corresponding to the human face image features.
In a second aspect, the present application further provides a trajectory monitoring device. The device comprises:
the monitoring image acquisition module is used for acquiring monitoring images acquired by each image acquisition device arranged in a target place;
the face image acquisition module is used for determining a face image of a target person in the monitoring image as a target face image;
the track node filling module is used for filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person;
and the action track determining module is used for sequencing all the action track nodes in the action track record table according to corresponding acquisition time to obtain the action track of the target person.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method described above.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the track monitoring method, the track monitoring device, the computer equipment, the storage medium and the computer program product, the monitoring images acquired by the image acquisition equipment arranged in the target place are acquired; then, determining a face image of a target person in the monitored image as a target face image; then, filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person; finally, sequencing all the action track nodes in the action track record table according to corresponding acquisition time to obtain the action track of the target personnel; therefore, the face image of the target person can be determined in the monitoring image, the action track of the target person is determined based on the time of acquiring the face image of the target person and the area identification corresponding to the image equipment of the face image of the target person, and accurate tracking and positioning of the target person are achieved; therefore, the problem of low accuracy caused by a method of manually searching for the target person and determining the action track of the target person is solved; and then the accuracy of carrying out the track monitoring to the target personnel is improved.
Drawings
FIG. 1 is a diagram of an exemplary track monitoring system;
FIG. 2 is a schematic flow chart of a trajectory monitoring method according to an embodiment;
FIG. 3 is a schematic flow chart of the steps for determining a target face image in one embodiment;
FIG. 4 is a schematic flow chart of a trajectory monitoring method according to another embodiment;
FIG. 5 is a block diagram of a trajectory monitoring device in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The trajectory monitoring method provided by the embodiment of the application can be applied to the application environment shown in fig. 1, in which the image capture device 102 communicates with the server 104 over a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104, or may be located on the cloud or on another network server. The server 104 acquires monitoring images acquired by the image acquisition devices 102 arranged in a target place; determines the face image of a target person in the monitoring images as a target face image; fills the acquisition time of the target face image and the area identifier of the target place where the image acquisition device 102 that acquired the target face image is located into an action track node in the action track record table of the target person; and sorts the action track nodes in the action track record table according to the corresponding acquisition times to obtain the action track of the target person. The image capture device 102 may be, but is not limited to, a high-definition camera, a wide-angle camera, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a trajectory monitoring method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step S210, acquiring a monitoring image acquired by each image acquisition device disposed in the target location.
The target place may be, but is not limited to, a sensitive site, a hotel, a large public place, and the like.
In a specific implementation, a server may obtain a monitoring video acquired by each image acquisition device arranged in a target site at a preset position, analyze the monitoring video, and obtain a monitoring image; the monitoring image comprises face images corresponding to a plurality of people.
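As a rough illustration of this step (not taken from the patent), the sketch below pulls individual monitoring images from a camera's video stream with OpenCV; the stream URL and the one-frame-every-25-frames sampling interval are assumptions made only for the example.

```python
# Minimal sketch, assuming OpenCV is available; the RTSP URL and the
# sampling interval are illustrative choices, not values from the patent.
import cv2

def sample_monitoring_images(stream_url: str, every_n_frames: int = 25):
    """Yield one monitoring image every `every_n_frames` frames of the video."""
    cap = cv2.VideoCapture(stream_url)
    frame_idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % every_n_frames == 0:
                yield frame  # BGR numpy array handed to the face-detection stage
            frame_idx += 1
    finally:
        cap.release()

# Example usage with a hypothetical camera address:
# for image in sample_monitoring_images("rtsp://cam-01.example/stream"):
#     process(image)
```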
Step S220, determining the face image of the target person in the monitoring image as the target face image.
Wherein the target person is a person moving at the target site.
In a specific implementation, the server may perform face detection on the acquired monitoring images to determine the images in which face images are present; then, based on a preset face database, perform face recognition on the images containing face images, determine the identity information of the person corresponding to each face image, and take the corresponding person as a target person, so that a face image containing the target person can be determined among a large number of monitoring images and taken as the target face image. The number of target persons is at least one, and the personal identity information may be, but is not limited to, name, gender, identity card number, and the like.
Step S230, filling the acquisition time of the target face image and the area identifier of the target location where the image acquisition device acquiring the target face image is located into the action track node in the action track record table of the target person.
The area identifier identifies the area of the target place covered by the shooting position of the image acquisition device, and may be, but is not limited to, an area name, an area number, and the like.
The action track record table is a table for recording the action track of the target person.
In a specific implementation, after the server acquires a target face image, it determines the acquisition time of the target face image and the area identifier, within the target place, corresponding to the image acquisition device that acquired the target face image, and then fills the acquisition time and the area identifier into an action track node in the action track record table of the target person. Preferably, each image acquisition device has a unique image acquisition device identifier, and this identifier is associated with the area identifier of the target place where the device is located. Thus, when an image acquisition device acquires a target face image, the area identifier of the area where the target person is located in the target place (that is, the area in which the target face image was acquired) can be determined from the identifier of that device, and the position area of the target person in the target place at the acquisition time can be determined accordingly. The person identity information corresponding to the target person is filled into the action track record table of the target person, and the acquisition time of the target face image, the identifier of the image acquisition device that acquired the target face image, and the corresponding area identifier are filled into an action track node in the table. Each action track node corresponds to the acquisition time and area identifier of one target face image.
Step S240, sorting the action track nodes in the action track record table according to the corresponding acquisition time to obtain the action track of the target person.
Each action track node corresponds to the acquisition information of one target face image. The acquisition information at least comprises the acquisition time of the target face image and the area identifier of the target place where the image acquisition device that acquired the target face image is located.
In a specific implementation, each action track node in the action track record table of the target person records the acquisition information of one target face image, including the acquisition time of the target face image and the area identifier of the target place where the image acquisition device that acquired the target face image is located; it may further include the target face image itself and the identifier of that image acquisition device. The server then sorts the action track nodes according to the acquisition time recorded in each node, so that the areas the target person passed through are ordered by the time of passage and the action track of the target person is generated.
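The following sketch illustrates, under stated assumptions, one possible in-memory shape for the action track record table and the sorting step described above; the field names, the device-to-area mapping, and all identifiers and timestamps are hypothetical stand-ins for the identifiers mentioned in the patent, not its actual data model.

```python
# Minimal sketch of an action track record table; structure and names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

# Assumed association between image acquisition device identifiers and area identifiers.
DEVICE_TO_AREA: Dict[str, str] = {"cam-01": "area-lobby", "cam-02": "area-corridor-A"}

@dataclass
class TrackNode:
    captured_at: datetime   # acquisition time of the target face image
    device_id: str          # identifier of the image acquisition device
    area_id: str            # area identifier within the target place

@dataclass
class TrackRecordTable:
    person_id: str
    nodes: List[TrackNode] = field(default_factory=list)

    def fill_node(self, captured_at: datetime, device_id: str) -> None:
        # Resolve the area identifier through the device identifier, then record the node.
        self.nodes.append(TrackNode(captured_at, device_id, DEVICE_TO_AREA[device_id]))

    def action_track(self) -> List[str]:
        # Sort the nodes by acquisition time to obtain the ordered list of areas passed through.
        return [node.area_id for node in sorted(self.nodes, key=lambda n: n.captured_at)]

# Example usage with hypothetical data:
# table = TrackRecordTable(person_id="P-001")
# table.fill_node(datetime(2021, 11, 22, 9, 0), "cam-01")
# table.fill_node(datetime(2021, 11, 22, 9, 5), "cam-02")
# table.action_track()  # -> ["area-lobby", "area-corridor-A"]
```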
In the track monitoring method, monitoring images acquired by each image acquisition device arranged in a target place are acquired; then, determining a face image of a target person in the monitored image as a target face image; then, filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person; finally, sequencing all the action track nodes in the action track record table according to corresponding acquisition time to obtain the action track of the target personnel; therefore, the face image of the target person can be determined in the monitoring image, the action track of the target person is determined based on the time of acquiring the face image of the target person and the area identification corresponding to the image equipment of the face image of the target person, and accurate tracking and positioning of the target person are achieved; therefore, the problem of low accuracy caused by a method of manually searching for the target person and determining the action track of the target person is solved; and then the accuracy of carrying out the track monitoring to the target personnel is improved.
In one embodiment, as shown in fig. 3, the step S220 of determining that a face image of the target person exists in the monitored image as the target face image includes:
step S310, carrying out face detection on the monitored image, and determining the face image in the monitored image.
In a specific implementation, the method comprises the following steps: performing feature detection on the monitored image to obtain a first area comprising a plurality of detection targets; performing foreground filtering on the first area to obtain a second area; performing skin color filtering on the second area to obtain a third area; and performing directional gradient histogram filtering on the third area to obtain a face image in the monitoring image.
Specifically, the server performs Haar feature detection on the acquired monitoring images one by one to obtain a first area containing a plurality of face detection targets. Foreground filtering is then performed on the first area, and the parts that do not belong to the foreground are deleted to obtain a second area. Next, a skin color distribution function is calculated for the points in the monitoring image, and the monitoring image is binarized; the binarized image is smoothed and its regions merged to obtain a plurality of connected skin color areas, and the parts of the second area that do not belong to a skin color area are deleted to obtain a third area. Then, HOG (histogram of oriented gradients) features are extracted from the monitoring images and used to train a classifier. Finally, the face detection targets contained in the third area are verified by the classifier, and the targets that pass verification are taken as the face images in the monitoring image.
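A minimal sketch of this coarse-to-fine idea is given below, assuming OpenCV and its bundled frontal-face Haar cascade. The foreground filtering and the trained HOG verification classifier are not reproduced; the YCrCb skin-colour bounds and the 0.3 skin ratio are illustrative assumptions rather than values from the patent.

```python
# Sketch only: Haar candidate detection followed by a simple skin-colour filter.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_candidates(image_bgr: np.ndarray):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # First area: rectangles returned by Haar-feature detection.
    boxes = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Binarised skin map in YCrCb; the Cr/Cb bounds are a common heuristic, not from the patent.
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb,
                       np.array([0, 133, 77], np.uint8),
                       np.array([255, 173, 127], np.uint8))
    faces = []
    for (x, y, w, h) in boxes:
        # Skin-colour filtering: keep candidates whose skin-pixel ratio is high enough.
        ratio = cv2.countNonZero(skin[y:y + h, x:x + w]) / float(w * h)
        if ratio > 0.3:
            faces.append(image_bgr[y:y + h, x:x + w])
    return faces  # candidate face images to be verified by the HOG classifier
```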
Step S320, performing feature extraction on the face image in the monitoring image to obtain a face image feature corresponding to the face image.
In a specific implementation, the method comprises the following steps: partitioning the face image to obtain face image partitions; carrying out multi-scale image feature extraction on the face image blocks to obtain multi-scale face image features corresponding to the face image blocks; combining the multi-scale face image features to obtain combined multi-scale face image features; and carrying out normalization processing on the combined multi-scale face image characteristics to obtain face image characteristics corresponding to the face image.
Specifically, the server divides the face image in the monitoring image into blocks to obtain a plurality of face image blocks. Multi-scale LBP (local binary pattern) feature extraction is then performed on each face image block to obtain its multi-scale LBP features; for example, LBP features may be extracted with N radii of different pixel sizes, each radius corresponding to one scale, so that LBP features of each face image block are obtained at N scales, where N is a preset constant. Finally, the multi-scale LBP features of the face image blocks are combined, and the combined multi-scale face image features are normalized to obtain the face image features corresponding to the face image.
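The sketch below shows one way to realize the multi-scale block LBP features described above, assuming scikit-image is available; the 4x4 block grid, the radii (1, 2, 3), and the "uniform" LBP variant are illustrative choices standing in for the N preset scales, not values given in the patent.

```python
# Minimal sketch of multi-scale LBP block features with L2 normalisation.
import numpy as np
from skimage.feature import local_binary_pattern

def multiscale_lbp_features(face_gray: np.ndarray,
                            grid=(4, 4), radii=(1, 2, 3)) -> np.ndarray:
    h, w = face_gray.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for by in range(grid[0]):            # block the face image
        for bx in range(grid[1]):
            block = face_gray[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            for r in radii:              # one LBP map per scale (radius)
                p = 8 * r
                lbp = local_binary_pattern(block, P=p, R=r, method="uniform")
                n_bins = p + 2           # 'uniform' LBP with P points has P+2 patterns
                hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
                feats.append(hist)
    vec = np.concatenate(feats).astype(np.float64)   # combine the block features
    return vec / (np.linalg.norm(vec) + 1e-12)       # normalisation step
```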
And step S330, comparing the facial image characteristics with preset facial image characteristics in a preset facial database, and determining personnel corresponding to the facial image characteristics.
The preset face database stores the corresponding relation between the preset face image characteristics and preset personnel.
The preset personnel are all personnel entering the target place.
In a specific implementation, the method comprises the following steps: calculating the feature similarity of the facial image features and each preset facial image feature in a preset facial database; and if the preset human face image features with the feature similarity meeting the preset conditions exist, taking the personnel corresponding to the preset human face image features as the personnel corresponding to the human face image features.
Specifically, the preset face database stores the face image features of preset persons as the preset face image features, stores the identity information of the preset persons (which may include, but is not limited to, name, gender, identity card number, and other information), and stores the correspondence between the preset face image features and the identity information of the preset persons. After acquiring the face image features in the monitoring image, the server traverses the preset face database and calculates, one by one, the feature similarity between the face image features and each preset face image feature; the Hamming distance between the face image features and the preset face image features may be calculated for this purpose. If there is a preset face image feature whose feature similarity meets a preset condition, the identity information of the preset person corresponding to that preset face image feature is determined from the correspondence stored in the preset face database and is taken as the identity information of the person corresponding to the face image features in the monitoring image. When the Hamming distance is used and there are a plurality of candidate preset face image features whose Hamming distance to the face image features is smaller than a preset distance threshold, the candidate with the minimum Hamming distance is selected as the preset face image feature that successfully matches the face image features, and the identity information of the corresponding preset person is taken as the identity information of the person corresponding to the face image features in the monitoring image.
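Under the assumption that the face image features are first binarised (so that a Hamming distance is well defined), the threshold-plus-nearest matching rule described above can be sketched as follows; the gallery structure, the median-based binarisation, and the distance threshold of 64 are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of matching by Hamming distance with a preset distance threshold.
import numpy as np
from typing import Dict, Optional

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def binarise(feature: np.ndarray) -> np.ndarray:
    # Illustrative binarisation: above-median entries become 1, the rest 0.
    return (feature > np.median(feature)).astype(np.uint8)

def match_person(query_feature: np.ndarray,
                 preset_features: Dict[str, np.ndarray],
                 max_distance: int = 64) -> Optional[str]:
    """Return the preset person whose feature is closest to the query, if close enough."""
    q = binarise(query_feature)
    best_id, best_dist = None, max_distance + 1
    for person_id, preset in preset_features.items():
        d = hamming(q, binarise(preset))
        if d < best_dist:                 # keep the candidate with the minimum distance
            best_id, best_dist = person_id, d
    return best_id if best_dist <= max_distance else None
```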
And step S340, taking the person corresponding to the facial image feature as a target person.
In the specific implementation, after the server acquires the identity information of the personnel corresponding to the face image features in the monitored image, the corresponding personnel are taken as target personnel; wherein the number of target persons is at least one.
And step S350, taking the face image with the target person as the target face image.
In a specific implementation, the server takes the face image in which the target person appears as the target face image.
According to the technical scheme of the embodiment, the face image in the monitoring image is determined by carrying out face detection on the monitoring image; then, extracting the features of the face image in the monitoring image to obtain the face image features corresponding to the face image; then, comparing the facial image features with preset facial image features in a preset facial database, and determining personnel corresponding to the facial image features; then, taking the person corresponding to the face image characteristic as a target person; finally, the face image with the target person is used as a target face image; therefore, the preset human face image characteristics matched with the human face image characteristics can be determined by calculating the characteristic similarity between the human face image characteristics in the collected monitoring image and the preset human face image characteristics; then, based on the corresponding relation between the preset face image features and the preset persons stored in the preset face database, persons corresponding to the face image features, namely target persons, can be accurately determined, and target face images are determined; therefore, the personnel identity corresponding to the face image does not need to be identified manually and the target face image does not need to be confirmed; and further, the accuracy rate of identifying the target person and the target face image is improved.
In another embodiment, before the step of performing face detection on the monitored image and acquiring the face image in the monitored image, the method further includes: performing multilayer wavelet decomposition on the monitoring image to obtain multilayer wavelet coefficients corresponding to the monitoring image; determining a noise threshold corresponding to each layer of wavelet coefficient according to the total number of the multilayer wavelet coefficients and the sequence number corresponding to each layer of wavelet coefficient; denoising the multilayer wavelet coefficients based on a wavelet threshold denoising function of a noise threshold corresponding to each layer of wavelet coefficients to obtain denoised multilayer wavelet coefficients; and reconstructing the monitoring image based on the denoised multilayer wavelet coefficient.
In a specific implementation, the server performs multilayer wavelet decomposition on the acquired monitoring image, for example three-layer wavelet decomposition, to obtain the corresponding multilayer wavelet coefficients. A noise threshold corresponding to each layer of wavelet coefficients is then determined based on the total number of layers and the ordinal number of each layer. The multilayer wavelet coefficients are then denoised by a wavelet threshold denoising function using the noise threshold of each layer: coefficients of a layer that are larger than the corresponding noise threshold are processed by the denoising function, while coefficients smaller than the threshold are set to 0, thereby obtaining the denoised multilayer wavelet coefficients. Finally, the denoised multilayer wavelet coefficients are upsampled, filtered, and convolved to reconstruct the monitoring image.
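A compact sketch of this denoising stage is given below, assuming PyWavelets. The patent does not state the exact threshold formula, so the level-dependent rule used here (a universal threshold divided by the logarithm of the level ordinal) is only an illustrative stand-in, as are the db4 wavelet and the soft-thresholding mode.

```python
# Minimal sketch of per-level wavelet threshold denoising and reconstruction.
import numpy as np
import pywt

def wavelet_denoise(image: np.ndarray, wavelet: str = "db4", levels: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    # Noise scale estimated from the finest diagonal detail sub-band (common practice).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    denoised = [coeffs[0]]                      # keep the approximation coefficients
    for j, details in enumerate(coeffs[1:], start=1):
        # Threshold depends on the coefficient count and the level ordinal j (assumed rule).
        lam = sigma * np.sqrt(2.0 * np.log(image.size)) / np.log(j + 1.0)
        # Soft thresholding: coefficients below lam are set to 0, the rest are shrunk.
        denoised.append(tuple(pywt.threshold(d, lam, mode="soft") for d in details))
    return pywt.waverec2(denoised, wavelet)
```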
In the technical scheme of the embodiment, multilayer wavelet coefficients are obtained by performing multilayer wavelet decomposition on a monitoring image; then, determining a noise threshold corresponding to each layer of wavelet coefficient, and denoising the multilayer wavelet coefficient based on a wavelet threshold denoising function of the noise threshold corresponding to each layer of wavelet coefficient; finally, reconstructing a monitoring image by using the denoised multilayer wavelet coefficient; therefore, before the face detection is carried out on the monitored image, the problem of excessive image noise caused by the influence of the illumination environment is avoided through the denoising processing of the monitored image, and the image quality of the monitored image can be improved.
In another embodiment, as shown in fig. 4, a trajectory monitoring method is provided, which is exemplified by the method applied to the server 104 in fig. 1, and includes the following steps:
and step S410, acquiring the monitoring images acquired by each image acquisition device arranged in the target site.
Step S420, performing face detection on the monitoring image, and determining a face image in the monitoring image.
Step S430, extracting the features of the face image in the monitoring image to obtain the face image features corresponding to the face image.
Step S440, calculating the feature similarity between the facial image features and each preset facial image feature in a preset facial database.
Step S450, if there is a preset face image feature whose feature similarity with the face image features meets a preset condition, taking the person corresponding to that preset face image feature as the person corresponding to the face image features.
And step S460, taking the person corresponding to the facial image feature as a target person.
And step S470, taking the face image with the target person as a target face image.
Step S480, filling the acquisition time of the target face image and the area identifier of the target location where the image acquisition device acquiring the target face image is located into the action track node in the action track record table of the target person.
Step S490, sorting each action track node in the action track record table according to the corresponding acquisition time, to obtain the action track of the target person.
It should be noted that, for the specific definition of the above steps, reference may be made to the above specific definition of a trajectory monitoring method.
It should be understood that, although the steps in the flowcharts of the embodiments described above are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not restricted to the order shown and may be performed in other orders. Moreover, at least a part of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a track monitoring device for realizing the track monitoring method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so that specific limitations in one or more embodiments of the trajectory monitoring device provided below may refer to the limitations on one of the trajectory monitoring methods in the foregoing description, and details are not repeated herein.
In one embodiment, as shown in fig. 5, there is provided a trajectory monitoring device including: a monitoring image obtaining module 510, a face image obtaining module 520, a track node filling module 530 and an action track determining module 540, wherein:
a monitoring image obtaining module 510, configured to obtain monitoring images collected by image collecting devices disposed in a target location.
A face image obtaining module 520, configured to determine that a face image of the target person exists in the monitored image, and use the face image as the target face image.
A track node filling module 530, configured to fill the acquisition time of the target face image and the area identifier of the target location where the image acquisition device that acquires the target face image is located into the action track node in the action track record table of the target person.
And an action track determining module 540, configured to sort the action track nodes in the action track record table according to corresponding acquisition times, so as to obtain an action track of the target person.
In one embodiment, the facial image obtaining module 520 is specifically configured to perform facial detection on the monitored image, and determine a facial image in the monitored image; performing feature extraction on the face image in the monitoring image to obtain face image features corresponding to the face image; comparing the facial image features with preset facial image features in a preset facial database, and determining personnel corresponding to the facial image features; the preset face database stores the corresponding relation between the preset face image characteristics and preset personnel; taking the person corresponding to the facial image feature as the target person; and taking the face image with the target person as the target face image.
In one embodiment, the trajectory monitoring device further includes: the wavelet decomposition module is used for carrying out multilayer wavelet decomposition on the monitoring image to obtain a multilayer wavelet coefficient corresponding to the monitoring image; the noise threshold value determining module is used for determining the noise threshold value corresponding to each layer of wavelet coefficient according to the total number of the multilayer wavelet coefficients and the layer ordinal number corresponding to each layer of wavelet coefficient; the denoising module is used for denoising the multilayer wavelet coefficients based on a wavelet threshold denoising function of the noise threshold corresponding to each layer of wavelet coefficient to obtain denoised multilayer wavelet coefficients; and the image reconstruction module is used for reconstructing the monitoring image based on the denoised multilayer wavelet coefficient.
In one embodiment, the facial image obtaining module 520 is further specifically configured to perform feature detection on the monitored image to obtain a first region including a plurality of detection targets; performing foreground filtering on the first area to obtain a second area; performing skin color filtering on the second area to obtain a third area; and performing directional gradient histogram filtration on the third area to obtain a face image in the monitoring image.
In one embodiment, the facial image obtaining module 520 is further specifically configured to block the facial image to obtain facial image blocks; carrying out multi-scale image feature extraction on the face image blocks to obtain multi-scale face image features corresponding to the face image blocks; combining the multi-scale face image features to obtain combined multi-scale face image features; and carrying out normalization processing on the combined multi-scale face image characteristics to obtain face image characteristics corresponding to the face image.
In one embodiment, the facial image obtaining module 520 is further specifically configured to calculate feature similarities between the facial image features and each preset facial image feature in the preset facial database; and if the preset human face image features with the feature similarity meeting the preset conditions exist, taking the personnel corresponding to the preset human face image features as the personnel corresponding to the human face image features.
The modules in the trajectory monitoring device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing preset human face image characteristics and identity information of preset personnel and monitoring image data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a trajectory monitoring method.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring monitoring images acquired by image acquisition equipment arranged in a target place; determining a face image of a target person in the monitoring image as a target face image; filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person; and sequencing each action track node in the action track record table according to corresponding acquisition time to obtain the action track of the target person.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
carrying out face detection on the monitoring image, and determining a face image in the monitoring image; performing feature extraction on the face image in the monitoring image to obtain face image features corresponding to the face image; comparing the facial image features with preset facial image features in a preset facial database, and determining personnel corresponding to the facial image features; the preset face database stores the corresponding relation between the preset face image characteristics and preset personnel; taking the person corresponding to the facial image feature as the target person; and taking the face image with the target person as the target face image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing multilayer wavelet decomposition on the monitoring image to obtain multilayer wavelet coefficients corresponding to the monitoring image; determining a noise threshold corresponding to each layer of wavelet coefficient according to the total number of the multilayer wavelet coefficients and the layer ordinal number corresponding to each layer of wavelet coefficient; denoising the multilayer wavelet coefficients based on a wavelet threshold denoising function of a noise threshold corresponding to each layer of wavelet coefficients to obtain denoised multilayer wavelet coefficients; and reconstructing the monitoring image based on the denoised multilayer wavelet coefficient.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing feature detection on the monitoring image to obtain a first area comprising a plurality of detection targets; performing foreground filtering on the first area to obtain a second area; performing skin color filtering on the second area to obtain a third area; and performing directional gradient histogram filtration on the third area to obtain a face image in the monitoring image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
partitioning the face image to obtain face image partitions; carrying out multi-scale image feature extraction on the face image blocks to obtain multi-scale face image features corresponding to the face image blocks; combining the multi-scale face image features to obtain combined multi-scale face image features; and carrying out normalization processing on the combined multi-scale face image characteristics to obtain face image characteristics corresponding to the face image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
calculating the feature similarity of the facial image features and each preset facial image feature in the preset facial database; and if the preset human face image features with the feature similarity meeting the preset conditions exist, taking the personnel corresponding to the preset human face image features as the personnel corresponding to the human face image features.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring monitoring images acquired by image acquisition equipment arranged in a target place; determining a face image of a target person in the monitoring image as a target face image; filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person; and sequencing each action track node in the action track record table according to corresponding acquisition time to obtain the action track of the target person.
In one embodiment, the computer program when executed by the processor further performs the steps of:
carrying out face detection on the monitoring image, and determining a face image in the monitoring image; performing feature extraction on the face image in the monitoring image to obtain face image features corresponding to the face image; comparing the facial image features with preset facial image features in a preset facial database, and determining personnel corresponding to the facial image features; the preset face database stores the corresponding relation between the preset face image characteristics and preset personnel; taking the person corresponding to the facial image feature as the target person; and taking the face image with the target person as the target face image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing multilayer wavelet decomposition on the monitoring image to obtain multilayer wavelet coefficients corresponding to the monitoring image; determining a noise threshold corresponding to each layer of wavelet coefficient according to the total number of the multilayer wavelet coefficients and the layer ordinal number corresponding to each layer of wavelet coefficient; denoising the multilayer wavelet coefficients based on a wavelet threshold denoising function of a noise threshold corresponding to each layer of wavelet coefficients to obtain denoised multilayer wavelet coefficients; and reconstructing the monitoring image based on the denoised multilayer wavelet coefficient.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing feature detection on the monitoring image to obtain a first area comprising a plurality of detection targets; performing foreground filtering on the first area to obtain a second area; performing skin color filtering on the second area to obtain a third area; and performing directional gradient histogram filtration on the third area to obtain a face image in the monitoring image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
partitioning the face image to obtain face image partitions; carrying out multi-scale image feature extraction on the face image blocks to obtain multi-scale face image features corresponding to the face image blocks; combining the multi-scale face image features to obtain combined multi-scale face image features; and carrying out normalization processing on the combined multi-scale face image characteristics to obtain face image characteristics corresponding to the face image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
calculating the feature similarity of the facial image features and each preset facial image feature in the preset facial database; and if the preset human face image features with the feature similarity meeting the preset conditions exist, taking the personnel corresponding to the preset human face image features as the personnel corresponding to the human face image features.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring monitoring images acquired by image acquisition equipment arranged in a target place; determining a face image of a target person in the monitoring image as a target face image; filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person; and sequencing each action track node in the action track record table according to corresponding acquisition time to obtain the action track of the target person.
In one embodiment, the computer program when executed by the processor further performs the steps of:
carrying out face detection on the monitoring image, and determining a face image in the monitoring image; performing feature extraction on the face image in the monitoring image to obtain face image features corresponding to the face image; comparing the facial image features with preset facial image features in a preset facial database, and determining personnel corresponding to the facial image features; the preset face database stores the corresponding relation between the preset face image characteristics and preset personnel; taking the person corresponding to the facial image feature as the target person; and taking the face image with the target person as the target face image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing multilayer wavelet decomposition on the monitoring image to obtain multilayer wavelet coefficients corresponding to the monitoring image; determining a noise threshold corresponding to each layer of wavelet coefficient according to the total number of the multilayer wavelet coefficients and the layer ordinal number corresponding to each layer of wavelet coefficient; denoising the multilayer wavelet coefficients based on a wavelet threshold denoising function of a noise threshold corresponding to each layer of wavelet coefficients to obtain denoised multilayer wavelet coefficients; and reconstructing the monitoring image based on the denoised multilayer wavelet coefficient.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing feature detection on the monitoring image to obtain a first area comprising a plurality of detection targets; performing foreground filtering on the first area to obtain a second area; performing skin color filtering on the second area to obtain a third area; and performing directional gradient histogram filtration on the third area to obtain a face image in the monitoring image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
partitioning the face image to obtain face image partitions; carrying out multi-scale image feature extraction on the face image blocks to obtain multi-scale face image features corresponding to the face image blocks; combining the multi-scale face image features to obtain combined multi-scale face image features; and carrying out normalization processing on the combined multi-scale face image characteristics to obtain face image characteristics corresponding to the face image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
calculating the feature similarity of the facial image features and each preset facial image feature in the preset facial database; and if the preset human face image features with the feature similarity meeting the preset conditions exist, taking the personnel corresponding to the preset human face image features as the personnel corresponding to the human face image features.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A trajectory monitoring method, the method comprising:
acquiring monitoring images acquired by image acquisition equipment arranged in a target place;
determining a face image of a target person in the monitoring image as a target face image;
filling the acquisition time of the target face image and the area identifier of the target place where the image acquisition equipment acquiring the target face image is positioned into the action track node in the action track record table of the target person;
and sequencing each action track node in the action track record table according to corresponding acquisition time to obtain the action track of the target person.
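As a minimal sketch of the record-keeping in claim 1, the following Python fragment stores one action track node per detection and sorts the nodes by acquisition time to obtain the trajectory; the field names and data structures are illustrative assumptions, not defined by the application.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TrackNode:
        acquisition_time: datetime   # when the target face image was captured
        area_id: str                 # area identifier of the capturing device's location

    def add_track_node(record_table, acquisition_time, area_id):
        # fill one node of the target person's action track record table
        record_table.append(TrackNode(acquisition_time, area_id))

    def build_trajectory(record_table):
        # sort the nodes by acquisition time to obtain the action track
        return sorted(record_table, key=lambda node: node.acquisition_time)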
2. The method according to claim 1, wherein the determining a face image of a target person in the monitoring image as a target face image comprises:
carrying out face detection on the monitoring image, and determining a face image in the monitoring image;
performing feature extraction on the face image in the monitoring image to obtain face image features corresponding to the face image;
comparing the facial image features with preset facial image features in a preset face database, and determining the person corresponding to the facial image features; wherein the preset face database stores the corresponding relation between the preset facial image features and preset persons;
taking the person corresponding to the facial image features as the target person;
and taking the face image with the target person as the target face image.
3. The method according to claim 2, wherein before the performing face detection on the monitoring image and determining the face image in the monitoring image, the method further comprises:
performing multilayer wavelet decomposition on the monitoring image to obtain multilayer wavelet coefficients corresponding to the monitoring image;
determining a noise threshold corresponding to each layer of wavelet coefficients according to the total number of layers of the multilayer wavelet coefficients and the layer ordinal corresponding to each layer of wavelet coefficients;
denoising the multilayer wavelet coefficients based on a wavelet threshold denoising function of a noise threshold corresponding to each layer of wavelet coefficients to obtain denoised multilayer wavelet coefficients;
and reconstructing the monitoring image based on the denoised multilayer wavelet coefficient.
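A minimal sketch of this wavelet-threshold denoising, assuming the PyWavelets library, a db4 wavelet, and soft thresholding; the per-layer threshold below, scaled by the layer ordinal over the total number of layers, is only an illustrative stand-in for the threshold rule described in the claim.

    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet='db4', levels=3, base_threshold=10.0):
        # multilayer wavelet decomposition of the monitoring image
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
        denoised = [coeffs[0]]  # keep the approximation coefficients unchanged
        for k, detail in enumerate(coeffs[1:], start=1):
            # noise threshold derived from the layer ordinal and the total layer count (assumed form)
            threshold = base_threshold * k / levels
            denoised.append(tuple(pywt.threshold(band, threshold, mode='soft') for band in detail))
        # reconstruct the monitoring image from the denoised multilayer coefficients
        return pywt.waverec2(denoised, wavelet)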
4. The method according to claim 2, wherein the performing face detection on the monitoring image and determining the face image in the monitoring image comprises:
performing feature detection on the monitoring image to obtain a first area comprising a plurality of detection targets;
performing foreground filtering on the first area to obtain a second area;
performing skin color filtering on the second area to obtain a third area;
and performing histogram of oriented gradients (HOG) filtering on the third area to obtain a face image in the monitoring image.
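As an illustration of one stage in this filtering cascade, the OpenCV sketch below applies skin-colour filtering to a candidate region in YCrCb space. The colour bounds, and the handling of the other stages (feature detection, foreground filtering, HOG filtering), are assumptions not taken from the application.

    import cv2
    import numpy as np

    def skin_color_filter(region_bgr):
        # convert the candidate region to YCrCb, where skin tones cluster tightly
        ycrcb = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2YCrCb)
        # commonly used skin-tone bounds on the Cr and Cb channels (assumed values)
        lower = np.array([0, 133, 77], dtype=np.uint8)
        upper = np.array([255, 173, 127], dtype=np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
        # keep only skin-coloured pixels; the result feeds the next filtering stage
        return cv2.bitwise_and(region_bgr, region_bgr, mask=mask)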
5. The method according to claim 2, wherein the performing feature extraction on the face image in the monitoring image to obtain the face image features corresponding to the face image comprises:
partitioning the face image to obtain face image partitions;
carrying out multi-scale image feature extraction on the face image blocks to obtain multi-scale face image features corresponding to the face image blocks;
combining the multi-scale face image features to obtain combined multi-scale face image features;
and carrying out normalization processing on the combined multi-scale face image characteristics to obtain face image characteristics corresponding to the face image.
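The sketch below illustrates one possible reading of this block-wise, multi-scale extraction: the face image is partitioned into blocks, each block is described at several scales by an intensity histogram, and the combined vector is L2-normalised. The block size, scales, histogram descriptor, and normalisation are all illustrative assumptions; the claim does not fix a particular descriptor.

    import cv2
    import numpy as np

    def face_feature(gray_face, block=32, scales=(1.0, 0.5), bins=16):
        features = []
        h, w = gray_face.shape[:2]
        for y in range(0, h - block + 1, block):          # partition the face image into blocks
            for x in range(0, w - block + 1, block):
                patch = gray_face[y:y + block, x:x + block]
                for s in scales:                          # multi-scale extraction per block
                    scaled = cv2.resize(patch, None, fx=s, fy=s)
                    hist, _ = np.histogram(scaled, bins=bins, range=(0, 256))
                    features.append(hist.astype(float))   # combine the per-block, per-scale features
        combined = np.concatenate(features) if features else np.zeros(bins)
        # normalise the combined multi-scale feature vector
        return combined / (np.linalg.norm(combined) + 1e-12)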
6. The method according to claim 2, wherein the comparing the facial image features with the preset facial image features in the preset face database to determine the person corresponding to the facial image features comprises:
calculating the feature similarity between the facial image features and each preset facial image feature in the preset face database;
and if a preset facial image feature whose feature similarity satisfies the preset condition exists, taking the person corresponding to that preset facial image feature as the person corresponding to the facial image features.
7. A trajectory monitoring device, characterized in that the device comprises:
the monitoring image acquisition module is used for acquiring monitoring images acquired by each image acquisition device arranged in a target place;
the face image acquisition module is used for determining a face image of a target person in the monitoring image as a target face image;
the track node filling module is used for filling the acquisition time of the target face image, and the area identifier of the area of the target place in which the image acquisition equipment that acquired the target face image is located, into an action track node in an action track record table of the target person;
and the action track determining module is used for sequencing all the action track nodes in the action track record table according to corresponding acquisition time to obtain the action track of the target person.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202111384870.XA 2021-11-22 2021-11-22 Track monitoring method and device, computer equipment and storage medium Pending CN114049608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111384870.XA CN114049608A (en) 2021-11-22 2021-11-22 Track monitoring method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111384870.XA CN114049608A (en) 2021-11-22 2021-11-22 Track monitoring method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114049608A true CN114049608A (en) 2022-02-15

Family

ID=80210174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111384870.XA Pending CN114049608A (en) 2021-11-22 2021-11-22 Track monitoring method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114049608A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115099724A (en) * 2022-08-24 2022-09-23 中达安股份有限公司 Monitoring and early warning method, device and equipment for construction scene and storage medium


Similar Documents

Publication Publication Date Title
CN111402294B (en) Target tracking method, target tracking device, computer-readable storage medium and computer equipment
Sheena et al. Key-frame extraction by analysis of histograms of video frames using statistical methods
Sadeghi et al. State of the art in passive digital image forgery detection: copy-move image forgery
CN108229314B (en) Target person searching method and device and electronic equipment
CN105320923B (en) Model recognizing method and device
CN108563675B (en) Electronic file automatic generation method and device based on target body characteristics
CN108446681B (en) Pedestrian analysis method, device, terminal and storage medium
CN106372572A (en) Monitoring method and apparatus
US11062455B2 (en) Data filtering of image stacks and video streams
CN109800318A (en) A kind of archiving method and device
CN108875481A (en) Method, apparatus, system and storage medium for pedestrian detection
CN115797350B (en) Bridge disease detection method, device, computer equipment and storage medium
US11113838B2 (en) Deep learning based tattoo detection system with optimized data labeling for offline and real-time processing
Chandran et al. Missing child identification system using deep learning and multiclass SVM
CN112784742B (en) Nose pattern feature extraction method and device and nonvolatile storage medium
CN110705476A (en) Data analysis method and device, electronic equipment and computer storage medium
CN109784220B (en) Method and device for determining passerby track
CN110781733A (en) Image duplicate removal method, storage medium, network equipment and intelligent monitoring system
CN109800664B (en) Method and device for determining passersby track
CN111091025A (en) Image processing method, device and equipment
CN115239644A (en) Concrete defect identification method and device, computer equipment and storage medium
CN109308704A (en) Background elimination method, device, computer equipment and storage medium
CN114049608A (en) Track monitoring method and device, computer equipment and storage medium
CN114140663A (en) Multi-scale attention and learning network-based pest identification method and system
CN110457998B (en) Image data association method and apparatus, data processing apparatus, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination