CN113627403B - Method, system and related equipment for selecting and pushing picture - Google Patents
Method, system and related equipment for selecting and pushing picture
- Publication number: CN113627403B (application CN202111185302.7A)
- Authority: CN (China)
- Prior art keywords: frame, detection frame, face detection, human, face
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/08—Learning methods
Abstract
The invention is applicable to the field of intelligent security and provides a method, a system and related equipment for selecting and pushing pictures, wherein the method comprises the steps of: obtaining face detection data and human shape detection data to build a face detection frame queue and a human shape detection frame queue; detecting the current frame, obtaining the face and human shape detection results of the current frame, traversing each result and updating the queue data; deleting the detection results of the current frame and updating the queue data again; traversing the face and human shape detection frame queues respectively, and performing image push preparation for the nodes meeting the image push conditions; putting the nodes meeting the image push condition into an image push queue, and allocating channel numbers to the face detection frame nodes and the human shape detection frame nodes respectively; and pushing the data of the nodes in a preset pushing sequence according to the data quantity relationship among the qualifying nodes. The invention improves the overall efficiency and real-time performance of the security system.
Description
Technical Field
The invention belongs to the field of intelligent security and particularly relates to a method and a system for selecting and pushing a picture and related equipment.
Background
With the continuous development of artificial intelligence technology, advanced techniques have gradually been introduced into the security field, greatly improving the practical effectiveness of security products and satisfying more usage scenarios and requirements. However, current artificial intelligence techniques place high demands on hardware: the intelligent algorithms usually have to run on large servers at the back end of the security system, so edge devices cannot serve as terminals for security data processing, and current data transmission schemes cannot meet real-time requirements.
Disclosure of Invention
The embodiments of the invention provide a method, a system and related equipment for selecting and pushing pictures, aiming to solve the problem in the prior art that performing security data processing only at the back end makes the whole security system inefficient and poor in real-time performance.
In a first aspect, an embodiment of the present invention provides a method for selecting and pushing pictures, applied to edge-deployed monitoring equipment and based on heterogeneous parallel computing, comprising the following steps:
acquiring image data, detecting a first frame image of the image data, acquiring face detection data consisting of a plurality of face frames and human shape detection data consisting of a plurality of human shape frames, and respectively placing all the detected face frames and all the detected human shape frames into a face detection frame queue and a human shape detection frame queue, wherein each face frame corresponds to a face detection frame node in the face detection frame queue, and each human shape frame corresponds to a human shape detection frame node in the human shape detection frame queue;
detecting a current frame in the image data, acquiring a current frame face detection result and a current frame human shape detection result, traversing all the current face frames in the current frame face detection result and updating the data of the face detection frame queue according to the current face frames, and traversing all the current human shape frames in the current frame human shape detection result and updating the data of the human shape detection frame queue according to the current human shape frames;
deleting the detection result of the current frame in the image data, and updating the data of the face detection frame queue and the human shape detection frame queue;
traversing the face detection frame queue, performing image pushing preparation on the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human-shaped detection frame queue, and performing image pushing preparation on the human-shaped detection frame nodes meeting the preset image pushing conditions in the human-shaped detection frame queue;
sequentially putting the face detection frame nodes and the human shape detection frame nodes meeting the preset image pushing condition into an image pushing queue, and respectively allocating channel numbers to the face detection frame nodes and the human shape detection frame nodes;
and pushing the face detection frame nodes and the human-shaped detection frame nodes according to a preset pushing sequence according to the data quantity relationship between all the face detection frame nodes and all the human-shaped detection frame nodes meeting the preset image pushing condition.
Further, the step of acquiring image data, detecting a first frame image of the image data, acquiring face detection data composed of a plurality of face frames and human shape detection data composed of a plurality of human shape frames, and placing all the detected face frames and all the detected human shape frames in a face detection frame queue and a human shape detection frame queue respectively includes the following substeps:
initializing the human face detection frame queue and the human figure detection frame queue;
acquiring the face detection data containing all the face frames, the human figure detection data containing all the human figure frames and corresponding original image data in a first frame of image in the image data, wherein each face frame and each human figure frame correspond to a target ID;
judging whether the face detection data and the human shape detection data are valid data:
if the judgment result is invalid, deleting the face detection data, the human shape detection data and the original image data;
and if the judgment result is valid, respectively putting all the face frames and all the human-shaped frames into a face detection frame queue and a human-shaped detection frame queue.
Furthermore, the contents of the face detection frame node and the human shape detection frame node both include the target ID, the area image coordinates, the target state, the area image data, the original image data address, the first-appearing image frame number, the node image frame number, and the disappeared image frame number, and the target state includes empty, existing, lingering, and disappeared.
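The node contents listed above can be sketched as a data structure. This is a minimal illustration, not the patent's actual code: the field names, types, and enum are all assumptions.

```python
# Hypothetical sketch of a detection-frame node; fields follow the list in
# the text (target ID, area coordinates, state, image data, frame numbers).
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class TargetState(Enum):
    EMPTY = 0        # node slot is unused
    EXISTING = 1     # target seen in a recent frame
    LINGERING = 2    # target not updated recently but not yet expired
    DISAPPEARED = 3  # target has disappeared; node can be pushed or reused

@dataclass
class DetectionFrameNode:
    target_id: Optional[int] = None
    region_coords: Optional[Tuple[int, int, int, int]] = None  # x, y, w, h
    state: TargetState = TargetState.EMPTY
    region_image: Optional[bytes] = None        # cropped YUV / jpeg data
    original_image_addr: Optional[int] = None   # storage address of full frame
    first_seen_frame: int = -1                  # first-appearing image frame number
    node_frame: int = -1                        # frame number of last update
    disappeared_frame: int = -1                 # frame number when state became DISAPPEARED
```

The same layout serves both queues; only the kind of box stored in `region_coords` differs between face and human shape nodes.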
Further, the step of detecting the current frame in the image data, obtaining a current frame face detection result and a current frame human shape detection result, traversing all the current face frames in the current frame face detection result and updating the data of the face detection frame queue according to the current face frames, and traversing all the current human shape frames in the current frame human shape detection result and updating the data of the human shape detection frame queue according to the current human shape frames includes the following sub-steps:
detecting a current frame after a first frame in the image data to obtain a current frame face detection result and a current frame human shape detection result, wherein the current frame face detection result comprises the current face frame, and the current frame human shape detection result comprises the current human shape frame;
traversing the current frame face detection result, and searching whether the face detection frame node which is the same as the target ID of the current face frame exists in the face detection frame queue:
if the face detection frame node with the same target ID as that of the current face frame does not exist, selecting one face detection frame node with the empty or disappeared target state from the face detection frame queue, storing the related data of the current face frame into the selected face detection frame node, and updating the target state to exist;
if the face detection frame node with the same target ID as the target ID of the current face frame exists and the target state is the existence, comparing the image quality of the original image data corresponding to the current face frame with the image quality of the area image data in the face detection frame node, and storing the data with better image quality in the face detection frame node as the area image data;
if the face detection frame node with the same target ID as that of the current face frame exists and the target state is lingering, updating the node image frame number in the face detection frame node to be the frame number of the current frame;
and traversing the current frame human shape detection result while traversing the current frame face detection result, and updating the data of the human shape detection frame queue according to the current frame human shape detection result, following the processing method used for the current frame face detection result.
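The three traversal cases above can be sketched as a single update routine. This is an illustrative reading only: the node and box fields and the quality comparison are stand-ins for whatever metric the patent actually uses.

```python
# Sketch of the per-frame queue update: find a node with the same target ID,
# otherwise occupy an empty/disappeared slot; keep the better-quality crop.
EMPTY, EXISTING, LINGERING, DISAPPEARED = "empty", "existing", "lingering", "disappeared"

def update_queue(queue, box, frame_no):
    """queue: list of node dicts; box: one detection from the current frame."""
    node = next((n for n in queue
                 if n["target_id"] == box["target_id"]
                 and n["state"] in (EXISTING, LINGERING)), None)
    if node is None:
        # Case a: no node with this target ID -> occupy an empty/disappeared slot.
        slot = next(n for n in queue if n["state"] in (EMPTY, DISAPPEARED))
        slot.update(target_id=box["target_id"], region=box["region"],
                    quality=box["quality"], state=EXISTING, node_frame=frame_no)
    elif node["state"] == EXISTING:
        # Case b: target still present -> keep whichever crop has better quality.
        if box["quality"] > node["quality"]:
            node.update(region=box["region"], quality=box["quality"])
    else:
        # Case c: target was lingering -> refresh its node image frame number.
        node["node_frame"] = frame_no
```

The same routine applies to the human shape queue; only the kind of detection box fed into it changes.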
Furthermore, the step of traversing the face detection frame queue to prepare for image pushing of the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human-shaped detection frame queue to prepare for image pushing of the human-shaped detection frame nodes meeting the preset image pushing conditions in the human-shaped detection frame queue comprises the following substeps:
traversing the face detection frame queue, and selecting the face detection frame nodes meeting the preset image pushing condition, wherein the preset image pushing condition is as follows:
if the target state in the face detection frame node is existing and the interval from the node image frame number to the current frame is larger than a preset value, updating the target state in the face detection frame node to disappeared, converting and storing the corresponding area image data as a jpeg image, deleting the area image data, and preparing the face detection frame node for pushing;
if the target state in the face detection frame node is existing and the interval from the node image frame number to the current frame is within the preset value, updating the target state in the face detection frame node to lingering, converting and storing the corresponding area image data as a jpeg image, deleting the area image data, but not preparing the face detection frame node for pushing;
if the target state in the face detection frame node is lingering, judging whether the face detection frame node meets a condition of updating the target state to be disappeared, wherein:
if not, performing no processing;
if yes, updating the target state of the face detection frame node to be disappeared, converting and storing the corresponding area image data into a jpeg image, deleting the area image data, and preparing for pushing the face detection frame node;
and, while traversing the face detection frame queue to prepare the face detection frame nodes for pushing, traversing the human shape detection frame queue and selecting the human shape detection frame nodes meeting the preset image pushing condition according to the same processing method as for the face detection frame queue.
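The push-condition branches above can be sketched as a small state machine. The linger gap value and the disappearance condition for lingering nodes are assumptions, since the text only refers to a preset value.

```python
# Sketch of the preset image-pushing conditions applied to one queue node.
LINGER_GAP = 25  # assumed preset value: frames a target may go unseen

def prepare_push(node, current_frame, to_jpeg):
    """Apply the push conditions; return True if the node should be pushed."""
    gap = current_frame - node["node_frame"]
    if node["state"] == "existing":
        # Convert the area image to jpeg and free the raw crop in both branches.
        node["jpeg"] = to_jpeg(node.pop("region_image"))
        if gap > LINGER_GAP:
            node["state"] = "disappeared"
            return True          # ready for the push queue
        node["state"] = "lingering"
        return False             # converted, but held back
    if node["state"] == "lingering" and gap > LINGER_GAP:
        # Assumed disappearance condition for lingering nodes.
        node["state"] = "disappeared"
        return True
    return False                 # empty/disappeared nodes: no processing
```

In use, both queues would be traversed and every node returning True would be handed to the push-queue stage.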
Furthermore, the step of sequentially placing the face detection frame nodes and the human shape detection frame nodes meeting the preset image pushing condition into the image push queue and respectively allocating channel numbers to the face detection frame nodes and the human shape detection frame nodes comprises the following sub-steps:
sequentially putting all the face detection frame nodes and all the humanoid detection frame nodes which are prepared for pushing into a graph pushing queue;
and allocating the human shape detection frame nodes to channels 5 and 6 of the image push queue, and allocating the face detection frame nodes to channels 1, 2, 3 and 4 of the image push queue.
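A minimal sketch of the channel allocation above, assuming face nodes map to channels 1 to 4 and human shape nodes to channels 5 and 6; the round-robin policy itself is an assumption, as the text only lists the channel numbers per node type.

```python
# Round-robin channel assignment over the two fixed channel groups.
from itertools import cycle

FACE_CHANNELS = cycle([1, 2, 3, 4])   # assumed face channel group
SHAPE_CHANNELS = cycle([5, 6])        # assumed human shape channel group

def assign_channel(node):
    """Give a prepared node the next channel number for its detection type."""
    chans = FACE_CHANNELS if node["kind"] == "face" else SHAPE_CHANNELS
    node["channel"] = next(chans)
    return node["channel"]
```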
Further, the step of pushing the face detection frame nodes and the human shape detection frame nodes in a preset pushing sequence according to the data quantity relationship between all the face detection frame nodes and all the human shape detection frame nodes meeting the preset image pushing condition includes the following sub-steps:
pushing according to the data quantity relationship between all the face detection frame nodes and all the human shape detection frame nodes meeting the preset image pushing condition, wherein each time a face detection frame node or human shape detection frame node is pushed, the reference counts of its original image data address and area image data are decremented by 1, and original image data whose reference count reaches 0 is deleted, and wherein:
if the data volume reaches a preset pushing threshold, pushing in a mode in which 1 human shape detection frame node is matched with 2 face detection frame nodes;
and if the data volume does not reach the preset pushing threshold, pushing according to the generation sequence of the face detection frame nodes and the human shape detection frame nodes.
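The two-branch push order can be sketched as follows, under the assumption that the high-load interleave pairs 1 human shape node with 2 face nodes; the node fields and threshold value are illustrative, not from the patent.

```python
# Sketch of the preset pushing sequence: interleaved under high load,
# chronological (by node creation) otherwise.
def push_order(faces, shapes, threshold):
    """faces/shapes: lists of nodes in creation order; returns the push order."""
    if len(faces) + len(shapes) >= threshold:
        # High load: 2 face nodes, then 1 human shape node, repeated.
        out, f, s = [], 0, 0
        while f < len(faces) or s < len(shapes):
            out.extend(faces[f:f + 2]); f += 2
            out.extend(shapes[s:s + 1]); s += 1
        return out
    # Low load: plain chronological order across both node types.
    return sorted(faces + shapes, key=lambda n: n["created"])
```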
In a second aspect, an embodiment of the present invention further provides a system for selecting and pushing a diagram, where the system for selecting and pushing a diagram includes:
the initialization module is used for acquiring image data, detecting a first frame image of the image data, acquiring face detection data consisting of a plurality of face frames and human shape detection data consisting of a plurality of human shape frames, and respectively putting all the detected face frames and all the detected human shape frames into a face detection frame queue and a human shape detection frame queue, wherein each face frame corresponds to a face detection frame node in the face detection frame queue, and each human shape frame corresponds to a human shape detection frame node in the human shape detection frame queue;
the image selection module is used for detecting the current frame in the image data, acquiring a current frame face detection result and a current frame human shape detection result, traversing all the current face frames in the current frame face detection result and updating the data of the face detection frame queue according to the current face frames, and traversing all the current human shape frames in the current frame human shape detection result and updating the data of the human shape detection frame queue according to the current human shape frames;
the data release module is used for deleting the detection result of the current frame in the image data and updating the data of the human face detection frame queue and the human figure detection frame queue;
the image pushing preparation module is used for traversing the face detection frame queue, performing image pushing preparation on the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human shape detection frame queue, and performing image pushing preparation on the human shape detection frame nodes meeting the preset image pushing conditions in the human shape detection frame queue;
the image pushing distribution module is used for sequentially putting the face detection frame nodes and the human shape detection frame nodes which meet the preset image pushing conditions into an image pushing queue and respectively distributing channel numbers to the face detection frame nodes and the human shape detection frame nodes;
and the pushing module is used for pushing the face detection frame nodes and the human shape detection frame nodes in a preset pushing sequence according to the data quantity relationship between all the face detection frame nodes and all the human shape detection frame nodes meeting the preset image pushing condition.
In a third aspect, an embodiment of the present invention further provides a computer device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method for selecting and pushing pictures described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for selecting and pushing pictures described above.
The invention has the advantage that, by adopting an edge security data transmission method of screening first and then pushing, the utilization rate of the computing resources of edge security equipment can be improved, the computing pressure on the back end is reduced, and the overall efficiency and real-time performance of the security system are improved.
Drawings
FIG. 1 is a block flow diagram of a method for selecting and pushing a diagram according to an embodiment of the present invention;
FIG. 2 is a block diagram of a sub-flow of step S101 in the method for selecting and pushing a diagram according to an embodiment of the present invention;
FIG. 3 is a block diagram of a sub-flow of step S102 in the method for selecting and pushing a diagram according to an embodiment of the present invention;
FIG. 4 is a block diagram illustrating the determination procedure of step S1022 in the embodiment of the present invention;
FIG. 5 is a block diagram of the determination process of step S1023 according to the embodiment of the invention;
FIG. 6 is a block diagram of a sub-flow of step S104 in the method for selecting and pushing a diagram according to an embodiment of the present invention;
fig. 7 is a block diagram of the determination process in step S1041 in the embodiment of the present invention;
FIG. 8 is a block diagram illustrating a determining process in step S1042 according to the embodiment of the present invention;
FIG. 9 is a block diagram of a sub-flow of step S105 of the method for selecting and pushing a diagram according to an embodiment of the present invention;
FIG. 10 is a block diagram of a sub-flow of step S106 in the method for selecting and pushing a diagram according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a system for selecting and pushing a diagram according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a computer device provided by an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a flow chart diagram of a method for selecting and pushing a diagram according to an embodiment of the present invention, where the method includes the following steps:
s101, obtaining image data, detecting a first frame image of the image data, obtaining face detection data consisting of a plurality of face frames and human shape detection data consisting of a plurality of human shape frames, and respectively putting all the detected face frames and all the detected human shape frames into a face detection frame queue and a human shape detection frame queue, wherein each face frame corresponds to a face detection frame node in the face detection frame queue, and each human shape frame corresponds to a human shape detection frame node in the human shape detection frame queue.
Referring to fig. 2, fig. 2 is a sub-flow diagram of step S101 in the method for selecting and pushing a diagram according to the embodiment of the present invention, which specifically includes the following sub-steps:
s1011, initializing the human face detection frame queue and the human shape detection frame queue.
In the embodiment of the present invention, the face detection frame queue and the human shape detection frame queue are the data structures used to implement picture selection and pushing: the face detection frame queue is composed of a plurality of face detection frame nodes, and the human shape detection frame queue is composed of a plurality of human shape detection frame nodes. Before picture selection and pushing starts, both queues are initialized and their data is emptied so that new data can be written.
S1012, obtaining the face detection data including all the face frames, the human figure detection data including all the human figure frames, and corresponding original image data in a first frame of image in the image data, where each of the face frames and each of the human figure frames correspond to a target ID.
In the embodiment of the present invention, the face frames and human shape frames are detected by a pre-trained image recognition neural network model. The model recognizes the faces and human shapes in a given frame of the image data and marks each with a bounding box, yielding the face frames and human shape frames; a single frame image may contain several recognizable faces and human shapes. All recognized face frames together form the face detection data, and all recognized human shape frames form the human shape detection data; both preferably include the original image data marked by the corresponding boxes. The original image data in this embodiment is in YUV format. More specifically, when the image recognition neural network model recognizes a face or a human shape, the same target is assigned a single target ID even when it is recognized in different frame images.
S1013, judging whether the face detection data and the human shape detection data are valid data:
if the judgment result is invalid, deleting the face detection data, the human shape detection data and the original image data;
and if the judgment result is valid, respectively putting all the face frames and all the human-shaped frames into a face detection frame queue and a human-shaped detection frame queue.
Specifically, after being placed in the queues, the face frames and human shape frames serve as the data of the face detection frame nodes and human shape detection frame nodes respectively. The contents of both node types include the target ID, the area image coordinates, the target state, the area image data, the original image data address, the first-appearing image frame number, the node image frame number, and the disappeared image frame number, where the target state can be empty, existing, lingering, or disappeared. The area image coordinates are the box-selection coordinates of the face frame or human shape frame within the frame image; the area image data is the image obtained by converting the original image data using the image conversion handle; the original image data address is the specific storage address of the original image data; the first-appearing image frame number is the frame number at which the face or human shape is first detected in the image data; the node image frame number is the frame number of the current frame at the time of detection; and the disappeared image frame number is the frame number being processed when the target state becomes disappeared.
In the embodiment of the present invention, if the face detection data and the human shape detection data are invalid for picture selection and pushing, that is, they do not contain a sufficiently clear face or human shape image, or the frame image contains no face or human shape at all, the invalid face detection data and human shape detection data are deleted. Otherwise, if the data are valid, all the face frames in the face detection data are put into the face detection frame queue, and all the human shape frames in the human shape detection data are put into the human shape detection frame queue.
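A minimal sketch of the validity judgment in step S1013. The patent does not state the exact criterion, so the box-count and size-floor checks below are purely assumptions for illustration.

```python
# Assumed validity check: detection data counts as valid only if at least
# one box exists and clears a minimal usable size.
MIN_AREA = 32 * 32  # assumed minimum box area in pixels

def is_valid(boxes):
    """boxes: list of dicts with 'w'/'h' sizes for one frame's detections."""
    return any(b["w"] * b["h"] >= MIN_AREA for b in boxes)
```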
S102, detecting a current frame in the image data, obtaining a current frame face detection result and a current frame human shape detection result, traversing all the current face frames in the current frame face detection result and updating the data of the face detection frame queue according to the current face frames, and traversing all the current human shape frames in the current frame human shape detection result and updating the data of the human shape detection frame queue according to the current human shape frames.
Referring to fig. 3, fig. 3 is a sub-flowchart diagram of step S102 in the method for selecting and pushing a diagram according to the embodiment of the present invention, which specifically includes the following sub-steps:
s1021, detecting a current frame after a first frame in the image data, and acquiring a current frame face detection result and a current frame human shape detection result, wherein the current frame face detection result comprises the current face frame, and the current frame human shape detection result comprises the current human shape frame.
When the current frame of the image data is after the first frame, that is, when the face detection frame queue and the human shape detection frame queue are not empty, the image recognition neural network model performs face and human shape detection on the current frame. The recognized boxes are taken as the current face frames and current human shape frames; all the current face frames form the current frame face detection result, and all the current human shape frames form the current frame human shape detection result.
S1022, traversing the current frame face detection result, and searching whether the face detection frame node having the same target ID as the current face frame exists in the face detection frame queue.
The target IDs of the current face frames in the current frame face detection result are compared in a traversal manner, the comparison objects being the target IDs held by all the face detection frame nodes in the face detection frame queue. Referring to fig. 4, fig. 4 is a block diagram of the determination process in step S1022 in the embodiment of the present invention; during detection, according to the comparison result, step S1022 further covers the following cases:
1022a, if there is no face detection frame node with the same target ID as the target ID of the current face frame, selecting one face detection frame node with the empty or disappeared target state from the face detection frame queue, storing the relevant data of the current face frame into the selected face detection frame node, and updating the target state to be present.
1022b, if the face detection frame node having the same target ID as the target ID of the current face frame exists and the target state is present, comparing the image quality of the original image data corresponding to the current face frame with the image quality of the area image data in the face detection frame node, and storing data with better image quality in the face detection frame node as the area image data.
1022c, if the face detection frame node with the same target ID as the current face frame exists and the target state is lingering, updating the node image frame number in the face detection frame node to the frame number of the current frame.
And S1023, traversing the current frame human shape detection result while traversing the current frame face detection result, and updating the data of the human shape detection frame queue according to the current frame human shape detection result, following the processing method used for the current frame face detection result.
Referring specifically to fig. 5, fig. 5 is a block diagram of the determination process in step S1023 in the embodiment of the present invention. Similar to step S1022, the target ID of each current human shape frame in the current frame human shape detection result is compared, in a traversal manner, against the target IDs of all the human shape detection frame nodes in the human shape detection frame queue. According to the comparison result, as described in 1022a to 1022c above (the principle and process are the same; the face detection frame only needs to be replaced by the human shape detection frame), it is determined whether each target ID already exists in the human shape detection frame queue and what the corresponding target state is, and the human shape detection frame nodes in the human shape detection frame queue are updated accordingly.
S103, deleting the detection result of the current frame in the image data, and updating the data of the human face detection frame queue and the human shape detection frame queue.
In the embodiment of the present invention, after the traversal of the current frame face detection result and the current frame human shape detection result is completed, both results are deleted to release memory space. Meanwhile, the data of the whole face detection frame queue and the whole human shape detection frame queue are sorted and updated so that they are arranged according to the detection order of the faces and human shapes in the current frame.
S104, traversing the face detection frame queue, performing image pushing preparation on the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human-shaped detection frame queue, performing image pushing preparation on the human-shaped detection frame nodes meeting the preset image pushing conditions in the human-shaped detection frame queue.
Referring to fig. 6, fig. 6 is a sub-flow diagram of step S104 in the method for selecting and pushing a diagram according to the embodiment of the present invention, which specifically includes the following sub-steps:
s1041, traversing the face detection frame queue, and selecting the face detection frame nodes meeting preset image pushing conditions.
Referring to fig. 7, fig. 7 is a block diagram of the determination process in step S1041 in the embodiment of the present invention; depending on the target state in the face detection frame node, the following cases are distinguished:
1041a, if the target state in the face detection frame node is present and the interval from the node image frame number to the current frame is greater than a preset value, updating the target state in the face detection frame node to disappeared, converting the corresponding area image data into a jpeg image and storing it, deleting the original area image data, and preparing the face detection frame node for pushing.
In the embodiment of the present invention, the preset values are values that are set according to actual needs and used for determining whether corresponding human faces and human-shaped targets still exist in the video data, and the preset values may also be different according to specific durations and frame numbers of the video data.
1041b, if the target state in the face detection frame node is present and the interval from the node image frame number to the current frame is within the preset value, updating the target state in the face detection frame node to lingering, converting the corresponding area image data into a jpeg image and storing it, and deleting the original area image data, but not preparing the face detection frame node for pushing.
1041c, if the target state in the face detection frame node is lingering, determining whether the face detection frame node meets a condition of updating the target state to disappear.
Specifically, when the target state in the face detection frame node is lingering, it is determined whether the face detection frame node satisfies the condition for updating the target state to disappeared, namely whether the number of frames elapsed since the target state changed from present to lingering exceeds the preset value. According to whether this accumulated lingering frame count exceeds the preset value, the method further includes:
1041d, if not, no processing is performed.
1041e, if yes, updating the target state of the face detection frame node to be disappeared, converting and storing the corresponding area image data into a jpeg image, deleting the area image data, and making a pushing preparation for the face detection frame node.
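The per-node check in cases 1041a to 1041e can be sketched as a small state machine. This is an illustrative sketch only: the preset value of 30 frames, the function name, and the dictionary fields are assumptions, not values or structures from the patent.

```python
# Illustrative sketch of the push-condition check in step S1041.
# PRESET (30 frames) and all names are assumptions for this example.
PRESET = 30

def check_push(node, current_frame_no, linger_frames=0):
    """Return True if the node should be prepared for pushing."""
    if node["state"] == "present":
        if current_frame_no - node["frame_no"] > PRESET:
            node["state"] = "disappeared"   # 1041a: target gone, prepare push
            return True
        node["state"] = "lingering"         # 1041b: still within the interval
        return False
    if node["state"] == "lingering":
        if linger_frames > PRESET:          # 1041e: lingered past the preset
            node["state"] = "disappeared"
            return True
        return False                        # 1041d: no processing
    return False
```

Step S1042 would apply the same check to the human shape detection frame nodes.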
S1042, while traversing the face detection frame queue and preparing the face detection frame nodes for pushing, traversing the human shape detection frame queue, and selecting the human shape detection frame nodes meeting the preset image pushing condition in the same manner as the processing method for the face detection frame queue.
Referring specifically to fig. 8, fig. 8 is a block diagram of the determination process in step S1042 in the embodiment of the present invention. Similar to step S1041, the human shape detection frame nodes meeting the preset image pushing condition are selected from the human shape detection frame queue in a traversal manner. Depending on the target state in each human shape detection frame node, as described in 1041a to 1041e above (the principle and process are the same; the face detection frame only needs to be replaced by the human shape detection frame), the target state of the human shape detection frame node and its relationship with the preset value are determined, and the human shape detection frame nodes meeting the preset image pushing condition are selected.
And S105, sequentially putting the face detection frame nodes and the human-shaped detection frame nodes meeting the preset image pushing conditions into an image pushing queue, and respectively allocating channel numbers to the face detection frame nodes and the human-shaped detection frame nodes.
Referring to fig. 9, fig. 9 is a sub-flowchart diagram of step S105 in the method for selecting and pushing a diagram according to the embodiment of the present invention, which specifically includes the following sub-steps:
s1051, sequentially putting all the face detection frame nodes and all the human shape detection frame nodes which are prepared for pushing into a graph pushing queue.
In the embodiment of the present invention, all the face detection frame nodes and all the human shape detection frame nodes that satisfy the preset image pushing condition are respectively placed in the image pushing queue for pushing the face detection frame node data and the human shape detection frame data according to the data sequence in the face detection frame queue and the human shape detection frame queue.
S1052, allocating channels 5 and 6 of the graph push queue to the face detection frame nodes, and channels 1, 2, 3 and 4 of the graph push queue to the human shape detection frame nodes.
Specifically, the graph pushing queue includes 6 channels, wherein channels 1, 2, 3, and 4 are allocated to the human shape detection frame nodes, and channels 5 and 6 are allocated to the face detection frame nodes.
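The channel split in step S1052 can be sketched as follows. The round-robin assignment within each channel group is an assumption for illustration; the patent only fixes which channels belong to which node type.

```python
# Minimal sketch of the channel assignment in step S1052: face detection
# frame nodes get channels 5-6, human shape detection frame nodes get
# channels 1-4. Round-robin within each group is an assumption.
from itertools import cycle

def assign_channels(face_nodes, shape_nodes):
    face_ch, shape_ch = cycle([5, 6]), cycle([1, 2, 3, 4])
    for n in face_nodes:
        n["channel"] = next(face_ch)
    for n in shape_nodes:
        n["channel"] = next(shape_ch)
```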
S106, pushing the face detection frame nodes and the human shape detection frame nodes in a preset pushing sequence according to the data amount relationship between all the face detection frame nodes and all the human shape detection frame nodes meeting the preset image pushing condition.
Referring to fig. 10, fig. 10 is a sub-flowchart diagram of step S106 in the method for selecting and pushing a diagram according to the embodiment of the present invention, which specifically includes the following sub-steps:
s1061, performing pushing according to the data amount relationship between all the face detection frame nodes and all the human shape detection frame nodes meeting the pushing condition, and each time one face detection frame node or one human shape detection frame node is pushed, decrementing by 1 the association count between the original image data address in that node and its area image data, and deleting any original image data address whose association count reaches 0.
Specifically, the area image data is derived from the original image data: the area image data is a jpeg-format image converted from the original image data in YUV format. All face detection frame nodes and human shape detection frame nodes having the same target ID correspond to the same detected face and/or human shape, so their area image data correspond to the same original image data, which is stored at a unique original image data address. Each original image data address therefore corresponds to at least one piece of area image data. In the image pushing stage, each time the data of one face detection frame node or one human shape detection frame node is pushed, the number of area images associated with the corresponding original image data address is reduced by 1; when the association count reaches 0, the original image data no longer corresponds to any area image data, so it is deleted from the original image data address and the memory space occupied by the original image data is released.
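This reference-counting scheme can be sketched in a few lines. The function name, dictionary layout, and use of plain dictionaries for the address table are assumptions for illustration only.

```python
# Hedged sketch of the original-image reference counting described above:
# each original image data address keeps a count of the region images
# derived from it; pushing a node decrements the count and the YUV
# original image is freed once no region image refers to it.
def push_node(node, ref_counts, originals):
    addr = node["orig_addr"]
    ref_counts[addr] -= 1            # one fewer region image tied to addr
    if ref_counts[addr] == 0:
        del originals[addr]          # release the YUV original image
        del ref_counts[addr]
```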
In this embodiment of the present invention, in order to improve the graph pushing performance, a pushing threshold is preset, where the pushing threshold is a data amount of each time the face detection frame node and the human shape detection frame node are pushed in the graph pushing process, and according to whether the data amount relationship between all the face detection frame nodes and all the human shape detection frame nodes reaches the pushing threshold, the method further includes:
1062a, if the data amount reaches a preset pushing threshold, pushing in a manner that 1 face detection frame node is matched with 2 human shape detection frame nodes.
1062b, if the data volume does not reach a preset pushing threshold, pushing according to the generation sequence of the face detection frame node and the human shape detection frame node.
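The two orderings in cases 1062a and 1062b can be sketched as one function. The threshold value, the `gen` field used to record generation order, and the node layout are assumptions for this example, not values from the patent.

```python
# Sketch of the push ordering in cases 1062a/1062b: at or above the
# threshold, interleave 1 face node with 2 human shape nodes; below it,
# push in generation order. Threshold and node shape are assumptions.
def push_order(faces, shapes, threshold=6):
    if len(faces) + len(shapes) >= threshold:
        out, s = [], iter(shapes)
        for face in faces:
            out.append(face)
            out.extend(x for _, x in zip(range(2), s))  # up to 2 shapes
        out.extend(s)                # leftover human shape nodes, if any
        return out
    # Below the threshold: merge by generation (creation) order.
    return sorted(faces + shapes, key=lambda n: n["gen"])
```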
Because this edge security data transmission method screens first and pushes later, the utilization rate of the computing resources of the edge security device can be improved, the computing pressure on the back end can be reduced, and the overall efficiency and real-time performance of the security system can be improved.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a diagram selecting and pushing system according to an embodiment of the present invention, where the diagram selecting and pushing system 200 includes:
an initialization module 201, configured to obtain image data, detect a first frame image of the image data, obtain face detection data composed of a plurality of face frames and human shape detection data composed of a plurality of human shape frames, and place all the detected face frames and all the detected human shape frames in a face detection frame queue and a human shape detection frame queue, respectively, where each face frame corresponds to a face detection frame node in the face detection frame queue, and each human shape frame corresponds to a human shape detection frame node in the human shape detection frame queue;
a selecting module 202, configured to detect a current frame in the image data, obtain a current frame face detection result and a current frame human shape detection result, traverse all current face frames in the current frame face detection result, update the data of the face detection frame queue according to the current face frames, traverse all current human shape frames in the current frame human shape detection result, and update the data of the human shape detection frame queue according to the current human shape frames;
a data releasing module 203, configured to delete the detection result of the current frame in the image data, and update the data of the face detection frame queue and the human shape detection frame queue;
a map pushing preparation module 204, configured to traverse the face detection frame queue, perform map pushing preparation on the face detection frame nodes that meet preset map pushing conditions in the face detection frame queue, and at the same time traverse the human shape detection frame queue, and perform map pushing preparation on the human shape detection frame nodes that meet the preset map pushing conditions in the human shape detection frame queue;
a map pushing distribution module 205, configured to sequentially place the face detection frame nodes and the human shape detection frame nodes that meet the preset map pushing condition into a map pushing queue, and distribute channel numbers to the face detection frame nodes and the human shape detection frame nodes respectively;
and a pushing module 206, configured to push the face detection frame nodes and the human shape detection frame nodes according to a preset pushing sequence according to a data amount relationship between all the face detection frame nodes and all the human shape detection frame nodes that meet the preset graph pushing condition.
The diagram selecting and pushing system 200 provided in the embodiment of the present invention can implement the steps in the diagram selecting and pushing method mentioned in the above embodiment, and can implement the same technical effects, and for avoiding repetition, reference is made to the description in the above embodiment, and details are not repeated here.
Referring to fig. 12, fig. 12 is a schematic diagram of a computer device provided in an embodiment of the present invention, where the computer device 300 includes: a memory 302, a processor 301, and a computer program stored on the memory 302 and executable on the processor 301.
The processor 301 calls the computer program stored in the memory 302 to execute the steps in the method for selecting and pushing a diagram provided by the embodiment of the present invention, and with reference to fig. 1, the method specifically includes:
s101, obtaining image data, detecting a first frame image of the image data, obtaining face detection data consisting of a plurality of face frames and human shape detection data consisting of a plurality of human shape frames, and respectively putting all the detected face frames and all the detected human shape frames into a face detection frame queue and a human shape detection frame queue, wherein each face frame corresponds to a face detection frame node in the face detection frame queue, and each human shape frame corresponds to a human shape detection frame node in the human shape detection frame queue.
Further, the step of acquiring image data, detecting a first frame image of the image data, acquiring face detection data composed of a plurality of face frames and human shape detection data composed of a plurality of human shape frames, and placing all the detected face frames and all the detected human shape frames in a face detection frame queue and a human shape detection frame queue respectively includes the following substeps:
initializing the human face detection frame queue and the human figure detection frame queue;
acquiring the face detection data containing all the face frames, the human figure detection data containing all the human figure frames and corresponding original image data in a first frame of image in the image data, wherein each face frame and each human figure frame correspond to a target ID;
judging whether the face detection data and the human shape detection data are valid data:
if the judgment result is invalid, deleting the face detection data, the human shape detection data and the original image data;
and if the judgment result is valid, respectively putting all the face frames and all the human-shaped frames into a face detection frame queue and a human-shaped detection frame queue.
Furthermore, the contents of the face detection frame node and the human shape detection frame node both include the target ID, the area image coordinates, the target state, the area image data, the original image data address, the first appearing image frame number, the node image frame number, and the disappearing image frame number, and the target state includes empty, present, lingering, and disappeared.
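As an illustration only, the node contents listed above could be laid out as follows; the field names, types, and default values are assumptions for the example, not the patent's implementation.

```python
# A possible in-memory layout for the detection frame nodes described
# above; field names mirror the listed contents, Enum values are illustrative.
from dataclasses import dataclass
from enum import Enum

class TargetState(Enum):
    EMPTY = 0
    PRESENT = 1
    LINGERING = 2
    DISAPPEARED = 3

@dataclass
class DetectionFrameNode:
    target_id: int = -1
    region_coords: tuple = (0, 0, 0, 0)   # area image coordinates
    state: TargetState = TargetState.EMPTY
    region_image: bytes = b""             # area image data (YUV or jpeg)
    orig_image_addr: int = 0              # original image data address
    first_frame_no: int = 0               # first appearing image frame number
    node_frame_no: int = 0                # node image frame number
    gone_frame_no: int = 0                # disappearing image frame number
```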
S102, detecting a current frame in the image data, obtaining a current frame face detection result and a current frame human shape detection result, traversing all current face frames in the current frame face detection result, updating the data of the face detection frame queue according to the current face frames, traversing all current human shape frames in the current frame human shape detection result, and updating the data of the human shape detection frame queue according to the current human shape frames.
Further, the step of detecting the current frame in the image data, obtaining a current frame face detection result and a current frame human shape detection result, traversing all current face frames in the current frame face detection result, updating the data of the face detection frame queue according to the current face frames, traversing all current human shape frames in the current frame human shape detection result, and updating the data of the human shape detection frame queue according to the current human shape frames includes the following sub-steps:
detecting a current frame after a first frame in the image data to obtain a current frame face detection result and a current frame human shape detection result, wherein the current frame face detection result comprises the current face frame, and the current frame human shape detection result comprises the current human shape frame;
traversing the current frame face detection result, and searching whether the face detection frame node which is the same as the target ID of the current face frame exists in the face detection frame queue:
if the face detection frame node with the same target ID as that of the current face frame does not exist, selecting one face detection frame node with the empty or disappeared target state from the face detection frame queue, storing the related data of the current face frame into the selected face detection frame node, and updating the target state to exist;
if the face detection frame node with the same target ID as the target ID of the current face frame exists and the target state is the existence, comparing the image quality of the original image data corresponding to the current face frame with the image quality of the area image data in the face detection frame node, and storing the data with better image quality in the face detection frame node as the area image data;
if the face detection frame node with the same target ID as that of the current face frame exists and the target state is lingering, updating the node image frame number in the face detection frame node to be the frame number of the current frame;
and traversing the human shape detection result of the current frame while traversing the human shape detection result of the current frame, and updating data of the human shape detection frame queue according to the human shape detection result of the current frame according to the processing method of the human shape detection result of the current frame.
S103, deleting the detection result of the current frame in the image data, and updating the data of the human face detection frame queue and the human shape detection frame queue.
S104, traversing the face detection frame queue, performing image pushing preparation on the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human-shaped detection frame queue, performing image pushing preparation on the human-shaped detection frame nodes meeting the preset image pushing conditions in the human-shaped detection frame queue.
Furthermore, the step of traversing the face detection frame queue to prepare for image pushing of the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human-shaped detection frame queue to prepare for image pushing of the human-shaped detection frame nodes meeting the preset image pushing conditions in the human-shaped detection frame queue comprises the following substeps:
traversing the face detection frame queue, and selecting the face detection frame nodes meeting the preset image pushing condition, wherein the preset image pushing condition is as follows:
if the target state in the face detection frame node is present and the interval from the node image frame number to the current frame is greater than a preset value, updating the target state in the face detection frame node to disappeared, converting and storing the corresponding area image data as a jpeg image, deleting the area image data, and preparing the face detection frame node for pushing;
if the target state in the face detection frame node is present and the interval from the node image frame number to the current frame is within the preset value, updating the target state in the face detection frame node to lingering, converting and storing the corresponding area image data as a jpeg image, and deleting the area image data, but not preparing the face detection frame node for pushing;
if the target state in the face detection frame node is lingering, judging whether the face detection frame node meets a condition of updating the target state to be disappeared, wherein:
if not, no processing is performed;
if yes, updating the target state of the face detection frame node to be disappeared, converting and storing the corresponding area image data into a jpeg image, deleting the area image data, and preparing for pushing the face detection frame node;
and while traversing the face detection frame queue to prepare the face detection frame nodes for pushing, traversing the human shape detection frame queue, and selecting the human shape detection frame nodes meeting the preset image pushing condition in the same manner as the processing method for the face detection frame queue.
And S105, sequentially putting the face detection frame nodes and the human-shaped detection frame nodes meeting the preset image pushing conditions into an image pushing queue, and respectively allocating channel numbers to the face detection frame nodes and the human-shaped detection frame nodes.
Furthermore, the step of sequentially placing the face detection frame nodes and the human shape detection frame nodes meeting the preset map pushing condition into a map pushing queue and respectively allocating channel numbers to the face detection frame nodes and the human shape detection frame nodes comprises the following sub-steps:
sequentially putting all the face detection frame nodes and all the humanoid detection frame nodes which are prepared for pushing into a graph pushing queue;
and allocating channels 5 and 6 of the graph push queue to the face detection frame nodes, and channels 1, 2, 3 and 4 of the graph push queue to the human shape detection frame nodes.
S106, pushing the face detection frame nodes and the human shape detection frame nodes in a preset pushing sequence according to the data amount relationship between all the face detection frame nodes and all the human shape detection frame nodes meeting the preset image pushing condition.
Further, the step of pushing the face detection frame nodes and the humanoid detection frame nodes according to a preset pushing sequence according to the data quantity relationship between all the face detection frame nodes and all the humanoid detection frame nodes meeting the preset graph pushing condition includes the following substeps:
pushing according to the data amount relationship between all the face detection frame nodes and all the human shape detection frame nodes meeting the preset image pushing condition, and each time one face detection frame node or one human shape detection frame node is pushed, decrementing by 1 the association count between the original image data address in that node and its area image data, and deleting any original image data address whose association count reaches 0, wherein:
if the data volume reaches a preset pushing threshold, pushing in a manner that 1 face detection frame node is matched with 2 human shape detection frame nodes;
and if the data volume does not reach a preset pushing threshold value, pushing according to the generation sequence of the face detection frame nodes and the human shape detection frame nodes.
The computer device 300 provided in the embodiment of the present invention can implement the steps in the image selection and pushing method mentioned in the above embodiment, and can implement the same technical effects, and for avoiding repetition, reference is made to the description in the above embodiment, and details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process and step in the image selecting and pushing method provided in the embodiment of the present invention, and can implement the same technical effect, and in order to avoid repetition, details are not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, which are illustrative, but not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A method for selecting and pushing a picture, which is characterized by comprising the following steps:
acquiring image data, detecting a first frame image of the image data, acquiring face detection data consisting of a plurality of face frames and human shape detection data consisting of a plurality of human shape frames, and respectively placing all the detected face frames and all the detected human shape frames into a face detection frame queue and a human shape detection frame queue, wherein each face frame corresponds to a face detection frame node in the face detection frame queue, and each human shape frame corresponds to a human shape detection frame node in the human shape detection frame queue;
detecting a current frame in the image data, acquiring a current frame face detection result and a current frame human shape detection result, traversing all current face frames in the current frame face detection result, updating the data of the face detection frame queue according to the current face frames, traversing all current human shape frames in the current frame human shape detection result, and updating the data of the human shape detection frame queue according to the current human shape frames;
deleting the detection result of the current frame in the image data, and updating the data of the face detection frame queue and the human shape detection frame queue;
traversing the face detection frame queue, performing image pushing preparation on the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human-shaped detection frame queue, and performing image pushing preparation on the human-shaped detection frame nodes meeting the preset image pushing conditions in the human-shaped detection frame queue;
sequentially putting the face detection frame nodes and the human shape detection frame nodes meeting the preset image pushing condition into an image pushing queue, and respectively allocating channel numbers to the face detection frame nodes and the human shape detection frame nodes;
and pushing the face detection frame nodes and the human-shaped detection frame nodes according to a preset pushing sequence according to the data quantity relationship between all the face detection frame nodes and all the human-shaped detection frame nodes meeting the preset image pushing condition.
2. The method as claimed in claim 1, wherein the step of obtaining image data, detecting a first frame image of the image data, obtaining face detection data consisting of a plurality of face frames and human shape detection data consisting of a plurality of human shape frames, and placing all the face frames and all the human shape frames detected in a face detection frame queue and a human shape detection frame queue, respectively, comprises the sub-steps of:
initializing the human face detection frame queue and the human figure detection frame queue;
acquiring the face detection data containing all the face frames, the human figure detection data containing all the human figure frames and corresponding original image data in a first frame of image in the image data, wherein each face frame and each human figure frame correspond to a target ID;
judging whether the face detection data and the human shape detection data are valid data:
if the judgment result is invalid, deleting the face detection data, the human shape detection data and the original image data;
and if the judgment result is valid, respectively putting all the face frames and all the human-shaped frames into a face detection frame queue and a human-shaped detection frame queue.
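These initialization sub-steps might be sketched as follows, under stated assumptions: detections are `(target_id, box)` pairs, the queues are fixed-size slot lists, and validity is modeled simply as non-emptiness because the claim does not spell out the actual validity test.

```python
def init_first_frame(face_dets, human_dets, original, queue_size=16):
    """First-frame initialization sketch: build both queues, validate the
    detection data, and populate the slots from the detections."""
    face_queue = [None] * queue_size   # initialize both detection frame queues
    human_queue = [None] * queue_size
    if not (face_dets or human_dets):  # judged invalid: drop detections and original image
        return face_queue, human_queue, None
    for i, (tid, box) in enumerate(face_dets[:queue_size]):
        face_queue[i] = {"target_id": tid, "box": box, "state": "exist",
                         "first_frame": 0, "orig": original}
    for i, (tid, box) in enumerate(human_dets[:queue_size]):
        human_queue[i] = {"target_id": tid, "box": box, "state": "exist",
                          "first_frame": 0, "orig": original}
    return face_queue, human_queue, original
```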
3. The method as claimed in claim 2, wherein the contents of the face detection frame node and the human-shaped detection frame node each include the target ID, a region image coordinate point, a target state, region image data, an original image data address, a first appearing image frame number, a node image frame number, and a disappearing image frame number; the target states include empty, exist, linger, and vanish.
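The node layout of claim 3 might be modeled as below; field names and types are illustrative, chosen only to mirror the listed contents (target ID, region coordinates, target state, region image data, original image address, first-appearing/node/disappearing frame numbers) and the four states.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class TargetState(Enum):
    """The four target states named in claim 3."""
    EMPTY = "empty"
    EXIST = "exist"
    LINGER = "linger"
    VANISH = "vanish"

@dataclass
class DetectionFrameNode:
    """One node of a face or human-shape detection frame queue."""
    target_id: Optional[int] = None         # tracking ID of the detected target
    region_coords: Tuple[int, int, int, int] = (0, 0, 0, 0)  # region image coordinates
    state: TargetState = TargetState.EMPTY  # empty / exist / linger / vanish
    region_image: Optional[bytes] = None    # cropped region image data
    original_addr: Optional[int] = None     # address of the original image data
    first_frame: int = -1                   # frame number where the target first appeared
    node_frame: int = -1                    # frame number of the image held by the node
    vanish_frame: int = -1                  # frame number where the target disappeared
```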
4. The method as claimed in claim 3, wherein the step of detecting the current frame in the image data, obtaining the current frame face detection result and the current frame human shape detection result, traversing all current face frames in the current frame face detection result, updating the data in the face detection frame queue according to the current face frames, traversing all current human shape frames in the current frame human shape detection result, and updating the data in the human shape detection frame queue according to the current human shape frames comprises the following sub-steps:
detecting a current frame after a first frame in the image data to obtain a current frame face detection result and a current frame human shape detection result, wherein the current frame face detection result comprises the current face frame, and the current frame human shape detection result comprises the current human shape frame;
traversing the current frame face detection result, and searching whether a face detection frame node with the same target ID as the current face frame exists in the face detection frame queue:
if no face detection frame node with the same target ID as the current face frame exists, selecting one face detection frame node whose target state is empty or vanish from the face detection frame queue, storing the related data of the current face frame into the selected face detection frame node, and updating its target state to exist;
if a face detection frame node with the same target ID as the current face frame exists and its target state is exist, comparing the image quality of the original image data corresponding to the current face frame with that of the area image data in the face detection frame node, and storing whichever has the better image quality in the face detection frame node as the area image data;
if a face detection frame node with the same target ID as the current face frame exists and its target state is linger, updating the node image frame number in the face detection frame node to the frame number of the current frame;
and while traversing the current frame face detection result, traversing the current frame human shape detection result, and updating the data of the human shape detection frame queue according to the current human shape frames by the same processing method as for the current frame face detection result.
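The per-frame update of claim 4 can be sketched as one pass over the queue; `better(a, b)` is an assumed image-quality comparator (the actual quality metric is not given in the claim), and nodes are plain dicts for brevity.

```python
def update_queue(queue, detections, frame_no, better):
    """Claim-4-style update: match detections to queue nodes by target ID,
    reuse empty/vanished slots for new targets, keep the better-quality image
    for existing targets, and refresh the frame number for lingering ones."""
    for det in detections:
        node = next((n for n in queue
                     if n and n["target_id"] == det["target_id"]
                     and n["state"] in ("exist", "linger")), None)
        if node is None:
            # no node with this target ID: claim an empty or vanished slot
            idx = next(i for i, n in enumerate(queue)
                       if n is None or n["state"] in ("empty", "vanish"))
            queue[idx] = {"target_id": det["target_id"], "state": "exist",
                          "image": det["image"], "score": det["score"],
                          "node_frame": frame_no}
        elif node["state"] == "exist":
            # keep whichever region image has the better quality
            if better(det, node):
                node["image"], node["score"] = det["image"], det["score"]
        else:
            # lingering target seen again: only refresh its node frame number
            node["node_frame"] = frame_no
```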
5. The method for selecting and pushing a map according to claim 4, wherein the step of traversing the face detection frame queue to prepare for pushing the face detection frame nodes satisfying a preset pushing condition in the face detection frame queue, and simultaneously traversing the human-shaped detection frame queue to prepare for pushing the human-shaped detection frame nodes satisfying the preset pushing condition in the human-shaped detection frame queue comprises the following sub-steps:
traversing the face detection frame queue, and selecting the face detection frame nodes meeting preset image pushing conditions, wherein the preset image pushing conditions are as follows:
if the target state in the face detection frame node is exist and the interval from the node image frame number to the current frame is larger than a preset value, updating the target state in the face detection frame node to vanish, converting and storing the corresponding area image data as a jpeg image, deleting the area image data, and preparing the face detection frame node for image pushing;
if the target state in the face detection frame node is exist and the interval from the node image frame number to the current frame is within the preset value, updating the target state in the face detection frame node to linger, converting and storing the corresponding area image data as a jpeg image, deleting the area image data, but not preparing the face detection frame node for image pushing;
if the target state in the face detection frame node is linger, judging whether the face detection frame node meets the condition for updating the target state to vanish, wherein:
if not, no processing is performed;
if yes, updating the target state of the face detection frame node to vanish, converting and storing the corresponding area image data as a jpeg image, deleting the area image data, and preparing the face detection frame node for image pushing;
and while traversing the face detection frame queue to prepare the face detection frame nodes for image pushing, traversing the human-shaped detection frame queue, and selecting the human-shaped detection frame nodes meeting the preset image pushing condition by the same processing method as for the face detection frame queue.
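These state transitions can be sketched as a single pass over a queue. Two assumptions are made where the claim is silent: the vanish condition for lingering nodes is modeled with the same frame-gap test, and JPEG conversion is represented by a flag rather than an actual encoder.

```python
def prepare_push(queue, current_frame, gap_threshold):
    """Claim-5-style push preparation: mark long-unseen targets as vanished
    (ready to push), recently-seen ones as lingering (not yet pushed), and
    drop raw region data once it has been 'converted' to JPEG."""
    ready = []
    for node in queue:
        if node is None or node["state"] == "empty":
            continue
        gap = current_frame - node["node_frame"]
        if node["state"] == "exist":
            node["jpeg"] = True          # convert region image to JPEG...
            node.pop("image", None)      # ...then delete the raw region data
            if gap > gap_threshold:      # unseen long enough: vanish, prepare push
                node["state"] = "vanish"
                ready.append(node)
            else:                        # within threshold: linger, no push yet
                node["state"] = "linger"
        elif node["state"] == "linger" and gap > gap_threshold:
            node["state"] = "vanish"     # lingering node now meets the vanish condition
            node["jpeg"] = True
            ready.append(node)
    return ready
```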
6. The method as claimed in claim 5, wherein the step of sequentially placing the face detection frame nodes and the human shape detection frame nodes satisfying the preset map-pushing condition into a map-pushing queue and respectively assigning channel numbers to the face detection frame nodes and the human shape detection frame nodes comprises the following sub-steps:
sequentially putting all the face detection frame nodes and all the humanoid detection frame nodes which are prepared for pushing into a graph pushing queue;
and allocating the face detection frame nodes to channels 5 and 6 of the image pushing queue, and allocating the human-shaped detection frame nodes to channels 1, 2, 3 and 4 of the image pushing queue.
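A sketch of this channel assignment, assuming round-robin rotation within each channel group (the claim only names the channel groups, not the rotation policy):

```python
from itertools import cycle

def assign_channels(face_nodes, human_nodes):
    """Assign push-queue channel numbers: face nodes rotate over channels
    5-6, human-shape nodes over channels 1-4."""
    face_ch, human_ch = cycle([5, 6]), cycle([1, 2, 3, 4])
    for n in face_nodes:
        n["channel"] = next(face_ch)
    for n in human_nodes:
        n["channel"] = next(human_ch)
    return face_nodes + human_nodes
```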
7. The method as claimed in claim 6, wherein the step of pushing the face detection frame nodes and the human-shape detection frame nodes according to a preset pushing sequence according to the data amount relationship between all the face detection frame nodes and all the human-shape detection frame nodes meeting the preset pushing condition comprises the following sub-steps:
pushing according to the data volume relationship between all the face detection frame nodes and all the human-shaped detection frame nodes meeting the preset image pushing condition, and, each time a face detection frame node or human-shaped detection frame node is pushed, decrementing by 1 the association count of the original image data address and the area image data in that node, and deleting the original image data whose association count reaches 0, wherein:
if the data volume reaches a preset pushing threshold value, pushing in a mode of matching 1 face detection frame node with 2 human-shaped detection frame nodes;
and if the data volume does not reach a preset pushing threshold value, pushing according to the generation sequence of the face detection frame nodes and the human shape detection frame nodes.
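The ordering and reference-count release of claim 7 might look like the sketch below; `seq` stands in for creation order and `refcounts` models the association count of each original image data address, both illustrative names rather than patent terminology.

```python
def push_all(face_nodes, human_nodes, threshold, refcounts):
    """Claim-7-style pushing: above the data-volume threshold, interleave
    1 face node with 2 human-shape nodes; below it, push in creation order.
    Pushing a node decrements its original image's association count and
    frees the address when the count reaches zero."""
    total = len(face_nodes) + len(human_nodes)
    if total >= threshold:
        order, faces, humans = [], list(face_nodes), list(human_nodes)
        while faces or humans:
            if faces:
                order.append(faces.pop(0))   # 1 face node...
            order.extend(humans[:2])         # ...matched with 2 human-shape nodes
            del humans[:2]
    else:
        order = sorted(face_nodes + human_nodes, key=lambda n: n["seq"])
    for node in order:
        addr = node["orig_addr"]
        refcounts[addr] -= 1
        if refcounts[addr] == 0:             # last reference: release the original image
            del refcounts[addr]
    return order
```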
8. A system for selecting and pushing a drawing, the system comprising:
the initialization module is used for acquiring image data, detecting a first frame image of the image data, acquiring face detection data consisting of a plurality of face frames and human shape detection data consisting of a plurality of human shape frames, and respectively putting all the detected face frames and all the detected human shape frames into a face detection frame queue and a human shape detection frame queue, wherein each face frame corresponds to a face detection frame node in the face detection frame queue, and each human shape frame corresponds to a human shape detection frame node in the human shape detection frame queue;
the image selection module is used for detecting the current frame in the image data, acquiring a current frame face detection result and a current frame human shape detection result, traversing all current face frames in the current frame face detection result, updating the data of the face detection frame queue according to the current face frames, traversing all current human shape frames in the current frame human shape detection result, and updating the data of the human shape detection frame queue according to the current human shape frames;
the data release module is used for deleting the detection result of the current frame in the image data and updating the data of the human face detection frame queue and the human figure detection frame queue;
the image pushing preparation module is used for traversing the face detection frame queue, performing image pushing preparation on the face detection frame nodes meeting preset image pushing conditions in the face detection frame queue, and meanwhile traversing the human shape detection frame queue, and performing image pushing preparation on the human shape detection frame nodes meeting the preset image pushing conditions in the human shape detection frame queue;
the image pushing distribution module is used for sequentially putting the face detection frame nodes and the human shape detection frame nodes which meet the preset image pushing conditions into an image pushing queue and respectively distributing channel numbers to the face detection frame nodes and the human shape detection frame nodes;
and the pushing module is used for pushing the face detection frame nodes and the humanoid detection frame nodes according to a preset pushing sequence according to the data quantity relation between all the face detection frame nodes and all the humanoid detection frame nodes meeting the preset image pushing condition.
9. A computer device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method for selecting and pushing a picture as claimed in any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the method for selecting and pushing a picture as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111185302.7A CN113627403B (en) | 2021-10-12 | 2021-10-12 | Method, system and related equipment for selecting and pushing picture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111185302.7A CN113627403B (en) | 2021-10-12 | 2021-10-12 | Method, system and related equipment for selecting and pushing picture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113627403A CN113627403A (en) | 2021-11-09 |
CN113627403B true CN113627403B (en) | 2022-03-08 |
Family
ID=78391193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111185302.7A Active CN113627403B (en) | 2021-10-12 | 2021-10-12 | Method, system and related equipment for selecting and pushing picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113627403B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830251A (en) * | 2018-06-25 | 2018-11-16 | 北京旷视科技有限公司 | Information correlation method, device and system |
CN108898109A (en) * | 2018-06-29 | 2018-11-27 | 北京旷视科技有限公司 | The determination methods, devices and systems of article attention rate |
CN109344789A (en) * | 2018-10-16 | 2019-02-15 | 北京旷视科技有限公司 | Face tracking method and device |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8634601B2 (en) * | 2011-05-11 | 2014-01-21 | Honeywell International Inc. | Surveillance-based high-resolution facial recognition |
JP5925068B2 (en) * | 2012-06-22 | 2016-05-25 | キヤノン株式会社 | Video processing apparatus, video processing method, and program |
CN103731639A (en) * | 2013-09-10 | 2014-04-16 | 深圳辉锐天眼科技有限公司 | Method for providing picture access records through intelligent video security and protection system |
US20160042621A1 (en) * | 2014-06-13 | 2016-02-11 | William Daylesford Hogg | Video Motion Detection Method and Alert Management |
US10027883B1 (en) * | 2014-06-18 | 2018-07-17 | Amazon Technologies, Inc. | Primary user selection for head tracking |
US10546183B2 (en) * | 2015-08-10 | 2020-01-28 | Yoti Holding Limited | Liveness detection |
US20180173940A1 (en) * | 2016-12-19 | 2018-06-21 | Canon Kabushiki Kaisha | System and method for matching an object in captured images |
CN109033924A (en) * | 2017-06-08 | 2018-12-18 | 北京君正集成电路股份有限公司 | The method and device of humanoid detection in a kind of video |
CN108171256A (en) * | 2017-11-27 | 2018-06-15 | 深圳市深网视界科技有限公司 | Facial image matter comments model construction, screening, recognition methods and equipment and medium |
CN110969072B (en) * | 2018-09-30 | 2023-05-02 | 杭州海康威视***技术有限公司 | Model optimization method, device and image analysis system |
CN110287907B (en) * | 2019-06-28 | 2020-11-03 | 北京海益同展信息科技有限公司 | Object detection method and device |
CN110399808A (en) * | 2019-07-05 | 2019-11-01 | 桂林安维科技有限公司 | A kind of Human bodys' response method and system based on multiple target tracking |
CN112861575A (en) * | 2019-11-27 | 2021-05-28 | 中兴通讯股份有限公司 | Pedestrian structuring method, device, equipment and storage medium |
CN111476183A (en) * | 2020-04-13 | 2020-07-31 | 腾讯科技(深圳)有限公司 | Passenger flow information processing method and device |
CN111950507B (en) * | 2020-08-25 | 2024-06-11 | 北京猎户星空科技有限公司 | Data processing and model training method, device, equipment and medium |
CN112464843A (en) * | 2020-12-07 | 2021-03-09 | 上海悠络客电子科技股份有限公司 | Accurate passenger flow statistical system, method and device based on human face human shape |
CN113111733B (en) * | 2021-03-24 | 2022-09-30 | 广州华微明天软件技术有限公司 | Posture flow-based fighting behavior recognition method |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830251A (en) * | 2018-06-25 | 2018-11-16 | 北京旷视科技有限公司 | Information correlation method, device and system |
CN108898109A (en) * | 2018-06-29 | 2018-11-27 | 北京旷视科技有限公司 | The determination methods, devices and systems of article attention rate |
CN109344789A (en) * | 2018-10-16 | 2019-02-15 | 北京旷视科技有限公司 | Face tracking method and device |
Also Published As
Publication number | Publication date |
---|---|
CN113627403A (en) | 2021-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111695697B (en) | Multiparty joint decision tree construction method, equipment and readable storage medium | |
US11158090B2 (en) | Enhanced video shot matching using generative adversarial networks | |
CN107734336B (en) | Compression method and device for video storage space | |
CN105975939B (en) | Video detecting method and device | |
CN112132852B (en) | Automatic image matting method and device based on multi-background color statistics | |
CN111068305B (en) | Cloud game loading control method and device, electronic equipment and storage medium | |
CN112052759A (en) | Living body detection method and device | |
CN113627403B (en) | Method, system and related equipment for selecting and pushing picture | |
Agrawal et al. | Analysis and synthesis of an ant colony optimization technique for image edge detection | |
CN111581442A (en) | Method and device for realizing graph embedding, computer storage medium and terminal | |
CN112132892A (en) | Target position marking method, device and equipment | |
CN111736751B (en) | Stroke redrawing method, device and readable storage medium | |
CN108228752B (en) | Data total export method, data export task allocation device and data export node device | |
CN113688810B (en) | Target capturing method and system of edge device and related device | |
CN110019372B (en) | Data monitoring method, device, server and storage medium | |
CN107493315B (en) | Behavior data collection method, resource server and storage medium | |
US9183454B1 (en) | Automated technique for generating a path file of identified and extracted image features for image manipulation | |
CN115079150A (en) | Unmanned aerial vehicle detection method and system based on software radio and related equipment | |
CN115129611A (en) | Test method, system, equipment and computer readable storage medium | |
CN110858846A (en) | Resource allocation method, device and storage medium | |
CN113763240B (en) | Point cloud thumbnail generation method, device, equipment and storage medium | |
CN115080240A (en) | Deployment method of voice processing model, electronic equipment and storage medium | |
CN112465859A (en) | Method, device, equipment and storage medium for detecting fast moving object | |
CN109102517A (en) | A kind of method of Image Edge-Detection, system and associated component | |
CN117455660B (en) | Financial real-time safety detection system, method, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PP01 | Preservation of patent right | ||
Effective date of registration: 20240109 Granted publication date: 20220308 |