CN112132022B - Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof

Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof

Info

Publication number
CN112132022B
Authority
CN
China
Prior art keywords
snapshot
face
video stream
server
architecture
Prior art date
Legal status
Active
Application number
CN202011004382.7A
Other languages
Chinese (zh)
Other versions
CN112132022A (en)
Inventor
丁伟
李影
张国辉
宋晨
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011004382.7A priority Critical patent/CN112132022B/en
Priority to PCT/CN2020/135513 priority patent/WO2021159842A1/en
Publication of CN112132022A publication Critical patent/CN112132022A/en
Application granted granted Critical
Publication of CN112132022B publication Critical patent/CN112132022B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The present application, which belongs to the technical field of monitoring, provides a face snapshot architecture comprising a server, a video stream snapshot service instance communicatively connected to the server, and image capturing devices communicatively connected to the video stream snapshot service instance. The video stream snapshot service instances are dynamically deployed according to the number of display cards in the server, and each instance controls image capturing devices to perform real-time face snapshot operations upon receiving face snapshot task requests sent by the server. The architecture has a clear division of functions, strong maintainability and strong extensibility: one server supports the deployment of multiple video stream snapshot service instances and manages them flexibly, while one video stream snapshot service instance controls multiple image capturing devices and supports real-time snapshot of multiple concurrent video stream paths. The application further provides a face snapshot method, device, equipment and storage medium based on the face snapshot architecture.

Description

Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof
Technical Field
The present application relates to the field of monitoring technologies, and in particular, to a face snapshot architecture, and a face snapshot method, device, equipment, and storage medium thereof.
Background
In the field of monitoring technology, places with heavy foot traffic are generally managed, and decisions about them made, by means of foot-traffic statistics, face snapshot and the like. Existing background management systems for face snapshot are usually designed as a set of independent modules that implement the functions related to face snapshot, such as image capturing device management, real-time video stream decoding, face detection and snapshot data pushing. In practical applications, such an architecture limits the number of video stream paths a GPU server can capture in real time, so snapshots are sometimes not obtained in real time. In addition, such systems suffer from a disordered architecture, tight coupling between modules, weak extensibility and high maintenance costs.
Disclosure of Invention
In view of the above, embodiments of the present application provide a face snapshot architecture and a face snapshot method, device, equipment and storage medium thereof. The architecture has a clear division of functions, strong maintainability and strong extensibility; it supports distributed cluster deployment and dynamic adjustment of snapshot tasks; and it realizes real-time face snapshot from multiple camera paths and supports a high number of concurrent video streams.
A first aspect of an embodiment of the present application provides a face snapshot architecture, including: the system comprises a server, a video stream snapshot service instance in communication connection with the server and a camera shooting device in communication connection with the video stream snapshot service instance;
the video stream snapshot service instances are dynamically deployed according to the number of display cards in the server, and the video stream snapshot service instances control the image pickup device to conduct real-time face snapshot operation by receiving face snapshot task requests sent by the server.
With reference to the first aspect, in a first possible implementation manner of the first aspect, if a plurality of servers are configured in the face snapshot architecture, the plurality of servers are distributed in a cluster in the face snapshot architecture.
With reference to the first aspect, in a second possible implementation manner of the first aspect, a face snapshot algorithm library is further configured in the face snapshot architecture, where at least one of a face frame detection algorithm, a feature point detection algorithm, a face tracking algorithm, a face quality detection algorithm, and a face feature extraction algorithm is stored in the face snapshot algorithm library, and each algorithm is configured with a corresponding API interface for engineering call.
With reference to the first aspect, the first possible implementation manner of the first aspect, and the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, a snapshot policy configuration mechanism is further configured in the face snapshot architecture, and a processing manner of executing face snapshot is configured for the video stream snapshot service instance based on a plurality of preset snapshot policies in the snapshot policy configuration mechanism.
A second aspect of an embodiment of the present application provides a face snapshot method, where the face snapshot method includes:
a server in a face snapshot architecture receives a face snapshot task request sent by a control center, wherein the face snapshot task request contains image pickup equipment information corresponding to a current snapshot task;
the server obtains load information of a video stream snapshot service instance used for controlling the image capturing equipment to execute a face snapshot task in a face snapshot architecture, and sends the image capturing equipment information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
the video stream snapshot service instance connects to the image pickup equipment according to the image pickup equipment information, controls the image pickup equipment to carry out face snapshot, and feeds snapshot data back to the control center through the server.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the method further includes:
identifying a snapshot strategy requirement of the current snapshot task from the face snapshot task request;
and invoking a snapshot strategy matched with the snapshot strategy requirement from a preset snapshot strategy configuration mechanism, and configuring the snapshot strategy to the image pickup equipment information pointing to the image pickup equipment so that the image pickup equipment performs face snapshot according to the snapshot strategy.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the sending, according to the load information, the image capturing device information to the video stream snapshot service instance, so as to load balance the video stream snapshot service instance, includes:
the method comprises the steps of identifying the number of video stream snapshot service instances currently running in a face snapshot architecture and the number of camera equipment currently controlled by the video stream snapshot service instances;
calculating the number of the image pickup devices correspondingly connected with each video stream snapshot service instance by combining the number of the video stream snapshot service instances, the number of the image pickup devices currently controlled by the video stream snapshot service instances and the number of the image pickup devices corresponding to the current snapshot task;
and distributing the image pickup equipment to the video stream snapshot service examples and sending corresponding image pickup equipment information according to the calculated number of the image pickup equipment correspondingly connected to each video stream snapshot service example.
A third aspect of an embodiment of the present application provides a face snapshot device, including:
the receiving module is used for receiving a face snapshot task request sent by the control center through a server in the face snapshot framework, wherein the face snapshot task request contains shooting equipment information corresponding to the current snapshot task;
the processing module is used for acquiring load information of a video stream snapshot service instance for controlling the image capturing equipment to execute a face snapshot task in the face snapshot architecture through the server, and sending the image capturing equipment information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
and the execution module is used for connecting the image pickup equipment through the video stream snapshot service instance according to the image pickup equipment information, controlling the image pickup equipment to carry out face snapshot, and feeding back snapshot data to the control center through the server.
A fourth aspect of the embodiments of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the face snapshot method provided in the second aspect when executing the computer program.
A fifth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the face snapshot method provided in the second aspect.
The face snapshot framework, the face snapshot method and the face snapshot device provided by the embodiment of the application have the following beneficial effects:
the face snapshot architecture comprises a server, a video stream snapshot service instance and image pickup equipment, wherein the server is in communication connection with the video stream snapshot service instance, the video stream snapshot service instance is in communication connection with the image pickup equipment, and the architecture functions are clear. The video stream snapshot service instance receives the face snapshot task request of the server to control a plurality of camera devices to conduct real-time face snapshot operation, so that the server supports deployment of a plurality of video stream snapshot service instances, flexibly manages the plurality of video stream snapshot service instances, flexibly controls a plurality of camera devices, supports real-time snapshot of a plurality of video stream paths, and is high in maintainability and expandability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a face snapshot architecture according to a first embodiment of the present application;
fig. 2 is a flowchart of an implementation of a face snapshot method according to a second embodiment of the present application;
fig. 3 is a flowchart illustrating an implementation of a face snapshot method according to a third embodiment of the present application;
fig. 4 is a flowchart of a face snapshot method according to a fourth embodiment of the present application;
fig. 5 is a block diagram of a face snapshot device according to a fifth embodiment of the present application;
fig. 6 is a block diagram of a computer device according to a sixth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Referring to fig. 1, fig. 1 is a schematic diagram of a face snapshot architecture according to a first embodiment of the present application. The face snapshot framework is mainly applied to a background system. As shown in fig. 1, a face snapshot architecture provided in a first embodiment of the present application includes a server, a video stream snapshot service instance communicatively connected to the server, and an image capturing device communicatively connected to the video stream snapshot service instance; the video stream snapshot service instances are dynamically deployed according to the number of display cards in the server, and the video stream snapshot service instances control the image pickup device to conduct real-time face snapshot operation by receiving face snapshot task requests sent by the server.
In this embodiment, the number of graphics cards represents the amount of GPU resources in the server; a server may be configured with 1, 2 or 4 graphics cards, for example. In a specific implementation of this embodiment, the video stream snapshot service instances in each server are deployed dynamically: for a server with multiple graphics cards, fewer video stream snapshot service instances than graphics cards may be deployed, or exactly as many instances as graphics cards. In a specific implementation of this embodiment, the server manages all video stream snapshot service instances it has deployed, all image capturing devices, and the associations between image capturing devices and video stream snapshot service instances, and it receives snapshot requests from the control center. It can be appreciated that the association between an image capturing device and a video stream snapshot service instance is dynamic and configurable; for example, in different snapshot tasks the same image capturing device may be associated with different video stream snapshot service instances, and the same video stream snapshot service instance may be associated with one or several image capturing devices. The association can be configured according to the current load of the server or the task requirements of an operator, which provides great flexibility. A video stream snapshot service instance connects to image capturing devices through protocol communication and, for the devices associated with it, is responsible for GPU decoding, face detection, and pushing snapshot data to the server. One video stream snapshot service instance can therefore connect to and manage multiple image capturing device paths, realizing real-time face snapshot of multiple devices, supporting a high number of concurrent video streams, and offering strong maintainability and extensibility. The server and the video stream snapshot service instances communicate through shared memory: when any video stream snapshot service instance in a server pushes snapshot data to that server, the other video stream snapshot service instances in the same server can synchronously obtain the pushed snapshot data, which realizes pairwise sharing and data transfer between video stream snapshot service instances with high communication efficiency.
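As a non-limiting illustration of the dynamic deployment described above, the following Python sketch starts one video stream snapshot service instance per detected graphics card (or fewer, if a cap is given). It assumes the NVIDIA management library pynvml is installed; the instance executable name "snapshot_service" and its command-line flag are hypothetical placeholders, not part of this disclosure.

# Illustrative sketch only: deploy snapshot service instances according to
# the number of graphics cards in the server. "snapshot_service" is a
# hypothetical executable name.
import os
import subprocess
import pynvml  # NVIDIA management library, assumed to be installed

def deploy_instances(max_instances=None):
    pynvml.nvmlInit()
    gpu_count = pynvml.nvmlDeviceGetCount()  # number of graphics cards in the server
    n = gpu_count if max_instances is None else min(max_instances, gpu_count)
    procs = []
    for gpu_id in range(n):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
        # one video stream snapshot service instance bound to one graphics card
        procs.append(subprocess.Popen(["snapshot_service", "--gpu", str(gpu_id)], env=env))
    return procs

Deploying fewer instances than graphics cards simply means passing a smaller max_instances value; the remaining cards stay available to other workloads.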
The face snapshot architecture provided by the embodiment comprises a server, a video stream snapshot service instance and image pickup equipment, wherein the video stream snapshot service instance is in communication connection with the server, and the image pickup equipment is in communication connection with the video stream snapshot service instance; the video stream snapshot service instances are dynamically deployed according to the number of display cards in the server, and the video stream snapshot service instances control the image pickup device to conduct real-time face snapshot operation by receiving face snapshot task requests sent by the server. The architecture has clear functions, strong maintainability and strong function expansibility, can realize that one server supports the deployment of a plurality of video stream snapshot service examples, flexibly manages the plurality of video stream snapshot service examples, controls a plurality of camera devices by one video stream snapshot service example, and supports the real-time snapshot of a plurality of video stream paths.
In some embodiments of the present application, when the face snapshot architecture is provided with a plurality of servers, those servers are deployed as a distributed cluster so as to support dynamic adjustment of snapshot tasks between the servers in the face snapshot architecture. In this embodiment, the face snapshot architecture exchanges data with an external control center through a message queue exchange. The distributed cluster deployment of the plurality of servers in the face snapshot architecture is embodied as follows: when a face snapshot task run by one server is published to a designated exchange through a RabbitMQ publish message queue, the other servers synchronously learn the snapshot task load of every server in the face snapshot architecture by subscribing to the exchange data. If the snapshot task on one of the servers stops, the snapshot tasks are dynamically adjusted among the running servers of the cluster so that the load on the servers deployed in the face snapshot architecture remains balanced.
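A minimal sketch of this publish/subscribe pattern, assuming a RabbitMQ broker reachable at a host named "rabbitmq-host" and a fanout exchange named "snapshot_load" (both names are assumptions for illustration, not part of this disclosure), could look as follows.

# Illustrative sketch only: each server publishes its snapshot task load to a
# fanout exchange and subscribes to the loads published by the other servers.
import json
import pika  # RabbitMQ client library

params = pika.ConnectionParameters(host="rabbitmq-host")  # assumed broker address
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.exchange_declare(exchange="snapshot_load", exchange_type="fanout")

def publish_load(server_id, running_tasks):
    # publish this server's snapshot task load to the designated exchange
    body = json.dumps({"server": server_id, "tasks": running_tasks})
    channel.basic_publish(exchange="snapshot_load", routing_key="", body=body)

def subscribe_loads(on_load):
    # each server binds its own queue to the exchange and receives every update
    queue = channel.queue_declare(queue="", exclusive=True).method.queue
    channel.queue_bind(exchange="snapshot_load", queue=queue)
    channel.basic_consume(queue=queue,
                          on_message_callback=lambda ch, m, props, b: on_load(json.loads(b)),
                          auto_ack=True)
    channel.start_consuming()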
In some embodiments of the present application, a face snapshot algorithm library may also be configured in the face snapshot architecture. The face snapshot algorithm library stores, without being limited to, at least one of a face frame detection algorithm, a feature point detection algorithm, a face tracking algorithm, a face quality detection algorithm and a face feature extraction algorithm. The face frame detection algorithm adopts a YOLOv3 detection model, which improves detection speed while maintaining accuracy through a lightweight backbone network and a feature pyramid detection network. The feature point detection algorithm and the face feature extraction algorithm adopt a lightweight ShuffleNet model, which greatly reduces the computation of feature point detection and face feature extraction while maintaining accuracy through pointwise group convolution with 1×1 convolution kernels and channel shuffle of the feature maps obtained after the group convolution. The face tracking algorithm predicts the tracking state from the face frames and feature point offsets of consecutive frames, and solves the problem of targets sticking together and overlapping during multi-face tracking by introducing an adaptive target tracking window and tracking targets in order. Furthermore, a Kalman filter is introduced to predict the target, which improves the speed and accuracy of the face tracking algorithm. The face quality detection algorithm includes, without being limited to, brightness detection, angle detection, blur detection, interpupillary distance detection, occlusion detection and the like. The face snapshot algorithm library is designed on top of originally discrete low-level algorithm interfaces: each algorithm stored in the library is encapsulated and exposed through a corresponding API interface for engineering calls. When a video stream snapshot service instance performs face detection for the image capturing devices associated with it, the face snapshot algorithm library can take charge of GPU/CPU computing resource scheduling based on these API interfaces, and detection is accelerated through CUDA parallel computation, so that the video stream snapshot service instance has highly available, high-throughput data processing capability.
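The Kalman prediction step mentioned above can be realised in many ways; the following is a minimal constant-velocity sketch for predicting the centre of a tracked face frame, with matrix values chosen purely for illustration (they are assumptions, not values taken from this disclosure).

# Illustrative sketch only: constant-velocity Kalman filter predicting the
# centre of a tracked face frame between consecutive video frames.
import numpy as np

class FaceCenterKalman:
    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])          # state: [cx, cy, vx, vy]
        self.P = np.eye(4) * 10.0                       # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # constant-velocity transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only the centre is observed
        self.Q = np.eye(4) * 0.01                       # process noise (assumed)
        self.R = np.eye(2) * 1.0                        # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                               # predicted centre for the next frame

    def update(self, cx, cy):
        z = np.array([cx, cy])
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P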
In some embodiments of the present application, factors such as the mounting height, position and model of the image capturing devices in different snapshot tasks, as well as environmental factors such as brightness, angle, blur, interpupillary distance and occlusion, affect the snapshot results of a video stream snapshot service instance. In this embodiment, a snapshot policy configuration mechanism is configured in the face snapshot architecture, in which a plurality of snapshot policies are preset, for example, but not limited to: a brightness policy, an angle policy, a blur policy, an interpupillary distance policy, an occlusion policy and a snapshot de-duplication policy for face snapshot images. Based on the snapshot policy configuration mechanism, the currently applicable processing mode for executing face snapshot can be configured for a video stream snapshot service instance. For example, in a precise snapshot mode, the face image of optimal quality is selected from a certain number of consecutive frames of the video stream and pushed, where the number of consecutive frames may be a default or a user-defined configuration. Alternatively, a time window for selecting the optimal-quality face image is set: the time window corresponds to a time period, and the optimal-quality face image is selected and pushed from all image frames of the video stream that fall within that period; the length of the time window may likewise be a system default or an operator-defined configuration. The snapshot policy configuration mechanism therefore flexibly configures the snapshot policy of each video stream snapshot service instance, and face images can be processed according to different snapshot policies within the face snapshot architecture. In actual use, the snapshot policy of each video stream snapshot service instance can be dynamically adjusted according to the number of snapshot tasks, the utilization of the GPU hardware and/or the system load.
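One possible way to express such a snapshot policy as a configuration object, together with best-frame selection over a window of consecutive frames, is sketched below; the field names and default thresholds are illustrative assumptions only.

# Illustrative sketch only: a snapshot policy as a configuration object and
# selection of the best-quality face image within a window of frames.
from dataclasses import dataclass

@dataclass
class SnapshotPolicy:
    min_brightness: float = 0.4      # brightness policy
    max_yaw_degrees: float = 30.0    # angle policy
    max_blur: float = 0.5            # blur policy
    min_interpupillary_px: int = 40  # interpupillary distance policy
    max_occlusion: float = 0.2       # occlusion policy
    dedup_window_frames: int = 25    # consecutive frames from which the
                                     # best-quality face image is selected

def best_face_in_window(frames, quality_fn, policy: SnapshotPolicy):
    """Pick the best-quality face image from a window of consecutive frames."""
    window = frames[:policy.dedup_window_frames]
    return max(window, key=quality_fn) if window else None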
Referring to fig. 2, fig. 2 is a flowchart illustrating an implementation of a face snapshot method according to a second embodiment of the present application. The details are as follows:
step S11: a server in a face snapshot architecture receives a face snapshot task request sent by a control center, wherein the face snapshot task request contains image pickup equipment information corresponding to a current snapshot task;
step S12: the server obtains load information of a video stream snapshot service instance used for controlling the image capturing equipment to execute a face snapshot task in a face snapshot architecture, and sends the image capturing equipment information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
step S13: the video stream snapshot server instance is connected with the image pickup equipment according to the image pickup equipment information and controls the image pickup equipment to carry out face snapshot, and snapshot data are fed back to the control center through the server.
In this embodiment, which may be based on the face snapshot architecture provided in the first embodiment, a server in the face snapshot architecture receives the face snapshot task request sent by the control center and, by parsing the request, obtains the image capturing device information corresponding to the current snapshot task that the request contains. The image capturing device information includes, but is not limited to, information on the number of image capturing devices, ID information of the image capturing devices, and the like. After obtaining the number information and the ID information of the image capturing devices, the server monitors and obtains the load information of the video stream snapshot service instances configured in the face snapshot architecture for controlling image capturing devices to execute face snapshot tasks, and sends the image capturing device information to the video stream snapshot service instances based on the obtained load information, so that the load of the video stream snapshot service instances configured in the server is balanced. When a video stream snapshot service instance obtains the image capturing device information of the current snapshot task, it connects to the corresponding image capturing devices according to that information, controls the connected devices to perform face snapshot operations, and feeds the snapshot data obtained from the face snapshot back to the control center through the server, so that a complete face snapshot process is realized based on the face snapshot architecture. In this embodiment, after each video stream snapshot service instance receives the ID information of the image capturing devices corresponding to the current snapshot task, it connects to the corresponding image capturing devices through protocol communication according to the ID information, performs GPU decoding of the video streams from the connected devices, and performs face snapshot processing to obtain snapshot data. The obtained snapshot data can be pushed through shared memory to the server communicatively connected with the instance, and after the server obtains the snapshot data, it feeds the data back to the control center by way of a RabbitMQ publish message queue.
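A minimal sketch of the per-camera loop run by a video stream snapshot service instance is given below, assuming the camera is reached over RTSP via OpenCV; the URL format and the detect_faces/push_snapshot callables are assumptions supplied by the caller, not the patented implementation.

# Illustrative sketch only: a snapshot service instance pulling one camera's
# stream and running face detection frame by frame.
import cv2  # OpenCV, assumed available

def snapshot_loop(camera_ip, detect_faces, push_snapshot):
    cap = cv2.VideoCapture(f"rtsp://{camera_ip}/stream")  # protocol connection to the camera
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            for face in detect_faces(frame):   # face detection on the decoded frame
                push_snapshot(face)             # push snapshot data towards the server
    finally:
        cap.release()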
In some embodiments of the present application, please refer to fig. 3, fig. 3 is a flowchart illustrating an implementation of a face snapshot method according to a third embodiment of the present application. The details are as follows:
step S21: the method comprises the steps of identifying the number of video stream snapshot service instances currently running in a face snapshot architecture and the number of camera equipment currently controlled by the video stream snapshot service instances;
step S22: calculating the number of the image pickup devices correspondingly connected with each video stream snapshot service instance by combining the number of the video stream snapshot service instances, the number of the image pickup devices currently controlled by the video stream snapshot service instances and the number of the image pickup devices corresponding to the current snapshot task;
step S23: and distributing the image pickup equipment to the video stream snapshot service examples and sending corresponding image pickup equipment information according to the calculated number of the image pickup equipment correspondingly connected to each video stream snapshot service example.
In this embodiment, load balancing among the video stream snapshot service instances in the face snapshot architecture is implemented as follows. The number of video stream snapshot service instances currently running in the face snapshot architecture and the number of image capturing devices each instance currently controls are identified. Then, combining the number of video stream snapshot service instances, the number of image capturing devices they currently control and the number of image capturing devices corresponding to the current snapshot task, the number of image capturing devices to be connected to each video stream snapshot service instance is calculated. The calculation follows a load balancing principle: after the image capturing devices of the current snapshot task have been allocated to the video stream snapshot service instances, the loads of the instances remain in a balanced state. For example, suppose three video stream snapshot service instances A, B and C are running in the face snapshot architecture and each can control 10 image capturing devices when fully loaded. If one instance already controls 5 image capturing devices, another already controls 3, and the current face snapshot task requires 5 image capturing devices to be controlled, then in a load-balanced manner 1 of the task's image capturing devices is allocated to the former instance, 3 are allocated to the latter, and the remaining device is allocated to the third instance. The ID information of the image capturing devices corresponding to the current snapshot task is then transmitted to the three video stream snapshot service instances A, B and C according to the allocation result. In some specific embodiments, if every video stream snapshot service instance can control the same number of image capturing devices but the devices of the current snapshot task cannot be divided equally among the instances, the system may order the video stream snapshot service instances and allocate the image capturing devices by that order, so that the difference between the instance connected to the most image capturing devices and the instance connected to the fewest is at most 1. In some specific embodiments, if the instances differ in how many image capturing devices they can control at full load, the load-balanced allocation may be performed according to the ratio of the number of connected image capturing devices to the number controllable at full load.
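The allocation described above can be sketched as a greedy least-load assignment; the capacities and counts below merely mirror part of the example and are not prescriptive.

# Illustrative sketch only: greedy least-load allocation of the current task's
# image capturing devices to video stream snapshot service instances.
def assign_devices(new_devices, current, capacity):
    """current / capacity: {instance_id: cameras controlled / full-load limit}."""
    assignment = {instance: [] for instance in current}
    for device in new_devices:
        # choose the instance with the lowest load ratio that still has room
        candidates = [i for i in current if current[i] < capacity[i]]
        target = min(candidates, key=lambda i: current[i] / capacity[i])
        assignment[target].append(device)
        current[target] += 1
    return assignment

# Example: instances A and B already control 5 and 3 of 10 cameras each.
print(assign_devices(["cam1", "cam2", "cam3", "cam4", "cam5"],
                     {"A": 5, "B": 3}, {"A": 10, "B": 10}))

With only instances A and B in play, the five new cameras end up split three to B and two to A, leaving the two loads within one camera of each other, which is the balanced state the principle above calls for.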
Referring to fig. 4, fig. 4 is a flowchart illustrating an implementation of a face snapshot method according to a fourth embodiment of the present application. The details are as follows:
step S31: identifying a snapshot strategy requirement of the current snapshot task from the face snapshot task request;
step S32: and invoking a snapshot strategy matched with the snapshot strategy requirement from a preset snapshot strategy configuration mechanism, and configuring the snapshot strategy to the image pickup equipment information pointing to the image pickup equipment so that the image pickup equipment performs face snapshot according to the snapshot strategy.
In this embodiment, based on the face snapshot architecture provided in the first embodiment, an operator may set the policy requirements of a snapshot task when the control center sends a snapshot task request, for example the brightness, angle, sharpness, interpupillary distance, occlusion requirement and de-duplication processing mode of the captured image, so that a corresponding snapshot policy can be configured specifically for each snapshot task. Based on the operator's policy settings, after the face snapshot task request sent by the control center is received, the various snapshot policy requirements of the current snapshot task can be identified by parsing the request. The snapshot policy requirements of the current snapshot task are then passed to the snapshot policy configuration mechanism preset in the face snapshot architecture, which configures a snapshot policy, according to those requirements, for each video stream snapshot service instance holding image capturing device information corresponding to the current snapshot task. For example, if the snapshot policy requirement for image capturing device X1 in the current snapshot task is a snapshot image brightness requirement of 60%, and video stream snapshot service instance A controls image capturing device X1 to perform the face snapshot processing in this task, then instance A is configured with a snapshot policy of 60% brightness for image capturing device X1. In this way the snapshot policy configuration mechanism flexibly configures the snapshot policy of each video stream snapshot service instance, and the snapshot requirements of different snapshot tasks can be met.
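A small sketch of how the policy requirements carried in a task request could be matched against preset policies and attached per device follows; the requirement keys and the structure of the request are assumptions for illustration.

# Illustrative sketch only: match requested policy requirements against preset
# policies and attach the resulting snapshot policy to each device.
PRESET_POLICIES = {
    "brightness": lambda v: {"min_brightness": v},
    "angle":      lambda v: {"max_yaw_degrees": v},
    "blur":       lambda v: {"max_blur": v},
}

def configure_policies(task_request):
    """task_request: {"devices": {device_id: {requirement_name: value, ...}}}"""
    configured = {}
    for device_id, requirements in task_request["devices"].items():
        policy = {}
        for name, value in requirements.items():
            if name in PRESET_POLICIES:          # invoke the matching preset policy
                policy.update(PRESET_POLICIES[name](value))
        configured[device_id] = policy
    return configured

# Example: camera X1 requires a brightness threshold of 60%.
print(configure_policies({"devices": {"X1": {"brightness": 0.6}}}))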
Referring to fig. 5, fig. 5 is a block diagram of a face capturing device according to a fifth embodiment of the present application. The apparatus in this embodiment includes units for performing the steps in the method embodiments described above. Refer to the related description in the above method embodiment. For convenience of explanation, only the portions related to the present embodiment are shown. As shown in fig. 5, the face snapshot device includes: a receiving module 51, a processing module 52 and an executing module 53. Wherein:
the receiving module 51 is configured to receive the face snapshot task request sent by the control center, where the face snapshot task request contains the image capturing device information corresponding to the current snapshot task. The processing module 52 is configured to obtain the load information of each video stream snapshot service instance in the face snapshot architecture and, according to that load information, send the image capturing device information corresponding to the current snapshot task to the video stream snapshot service instances in a load-balanced manner, so that each video stream snapshot service instance connects to the image capturing devices indicated by the received information and performs face snapshot processing on them. The execution module 53 is configured to push the snapshot data obtained when each video stream snapshot service instance performs face snapshot processing on its image capturing devices to the server communicatively connected with the instances, and to feed the snapshot data back to the control center through the server.
The face snapshot device corresponds to the face snapshot method one by one, and is not described herein again.
Referring to fig. 6, fig. 6 is a block diagram of a computer device according to a sixth embodiment of the present application. As shown in fig. 6, the computer device 6 of this embodiment includes: a processor 61, a memory 62 and a computer program 63 stored in the memory 62 and executable on the processor 61, such as a program of a face snapshot method. The steps of the various embodiments of the face snapshot method described above are implemented when the processor 61 executes the computer program 63. Alternatively, the processor 61 may implement the functions of each module in the embodiment corresponding to the face snapshot device when executing the computer program 63. Please refer to the related description in the embodiments, which is not repeated here.
Illustratively, the computer program 63 may be partitioned into one or more modules (units) that are stored in the memory 62 and executed by the processor 61 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions for describing the execution of the computer program 63 in the computer device 6. For example, the computer program 63 may be divided into a receiving module, a processing module and an executing module, each module having a specific function as described above.
The computer device may include, but is not limited to, a processor 61 and a memory 62. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the computer device 6 and does not constitute a limitation of the computer device 6, which may include more or fewer components than shown, may combine certain components, or may have different components; for example, the computer device may also include an input/output device, a network access device, a bus, etc.
The processor 61 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 62 may be an internal storage unit of the computer device 6, such as a hard disk or a memory of the computer device 6. The memory 62 may also be an external storage device of the computer device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the computer device 6. Further, the memory 62 may also include both internal storage units and external storage devices of the computer device 6. The memory 62 is used for storing the computer program as well as other programs and data required by the computer device. The memory 62 may also be used to temporarily store data that has been output or is to be output.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. The face snapshot architecture is characterized by comprising a server, a video stream snapshot service instance and image pickup equipment, wherein the video stream snapshot service instance is in communication connection with the server;
the video stream snapshot service instances are dynamically deployed according to the number of the display cards in the server, and the video stream snapshot service instances control the camera equipment to perform real-time face snapshot operation by receiving a face snapshot task request sent by the server, wherein for a server with multiple display cards, video stream snapshot service instances with the number smaller than that of the display cards are deployed, or video stream snapshot service instances with the number equal to that of the display cards are deployed;
the server and the video stream snapshot service instances communicate in a memory sharing mode, wherein when any video stream snapshot service instance in the same server pushes snapshot data to the server, the other video stream snapshot service instances in the server synchronously share the information of the pushed snapshot data, so that pairwise sharing and data transmission between video stream snapshot service instances are realized; if a plurality of servers are configured in the face snapshot architecture, the servers are deployed as a distributed cluster in the face snapshot architecture;
the face snapshot architecture performs data interaction with an external control center in a message queue mode through an exchange; the distributed cluster deployment of the servers in the face snapshot architecture is embodied as follows:
in the face snapshot architecture, if a face snapshot task run by one server is published to a designated exchange through a RabbitMQ publish message queue, the other servers synchronously acquire the snapshot task load conditions of all servers in the face snapshot architecture by subscribing to the exchange data; and if the snapshot task of one of the servers stops, the snapshot tasks are dynamically adjusted among the running servers of the cluster, so that the plurality of servers deployed in the face snapshot architecture are balanced in load.
2. The face snapshot architecture according to claim 1, wherein a face snapshot algorithm library is further configured in the face snapshot architecture, wherein at least one of a face frame detection algorithm, a feature point detection algorithm, a face tracking algorithm, a face quality detection algorithm, and a face feature extraction algorithm is stored in the face snapshot algorithm library, and each algorithm is configured with a corresponding API interface for engineering call.
3. The face snapshot architecture according to claim 1 or 2, wherein a snapshot policy configuration mechanism is further configured in the face snapshot architecture, and a processing manner for executing face snapshot is configured for the video stream snapshot service instance based on a plurality of snapshot policies preset in the snapshot policy configuration mechanism.
4. A face snapshot method, based on the face snapshot architecture of claim 1, comprising:
a server in a face snapshot architecture receives a face snapshot task request sent by a control center, wherein the face snapshot task request contains image pickup equipment information corresponding to a current snapshot task;
the server obtains load information of a video stream snapshot service instance used for controlling the image capturing equipment to execute a face snapshot task in a face snapshot architecture, and sends the image capturing equipment information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
the video stream snapshot service instance connects to the image pickup equipment according to the image pickup equipment information, controls the image pickup equipment to carry out face snapshot, and feeds snapshot data back to the control center through the server.
5. The face snapshot method of claim 4, further comprising:
identifying a snapshot strategy requirement of the current snapshot task from the face snapshot task request;
and invoking a snapshot strategy matched with the snapshot strategy requirement from a preset snapshot strategy configuration mechanism, and configuring the snapshot strategy to the image pickup equipment information pointing to the image pickup equipment so that the image pickup equipment performs face snapshot according to the snapshot strategy.
6. The face snapshot method according to claim 4, wherein the sending the image capturing device information to the video stream snapshot service instance according to the load information to load balance the video stream snapshot service instance includes:
the method comprises the steps of identifying the number of video stream snapshot service instances currently running in a face snapshot architecture and the number of camera equipment currently controlled by the video stream snapshot service instances;
calculating the number of the image pickup devices correspondingly connected with each video stream snapshot service instance by combining the number of the video stream snapshot service instances, the number of the image pickup devices currently controlled by the video stream snapshot service instances and the number of the image pickup devices corresponding to the current snapshot task;
and distributing the image pickup equipment to the video stream snapshot service examples and sending corresponding image pickup equipment information according to the calculated number of the image pickup equipment correspondingly connected to each video stream snapshot service example.
7. A face snapshot device, wherein face snapshot is performed based on the face snapshot architecture of claim 1, comprising:
the receiving module is used for receiving a face snapshot task request sent by the control center through a server in the face snapshot framework, wherein the face snapshot task request contains shooting equipment information corresponding to the current snapshot task;
the processing module is used for acquiring load information of a video stream snapshot service instance for controlling the image capturing equipment to execute a face snapshot task in the face snapshot architecture through the server, and sending the image capturing equipment information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
and the execution module is used for connecting the image pickup equipment through the video stream snapshot service instance according to the image pickup equipment information, controlling the image pickup equipment to carry out face snapshot, and feeding back snapshot data to the control center through the server.
8. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 4 to 6 when the computer program is executed.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 4 to 6.
CN202011004382.7A 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof Active CN112132022B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011004382.7A CN112132022B (en) 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof
PCT/CN2020/135513 WO2021159842A1 (en) 2020-09-22 2020-12-11 Face capture architecture, face capture method and apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011004382.7A CN112132022B (en) 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof

Publications (2)

Publication Number Publication Date
CN112132022A CN112132022A (en) 2020-12-25
CN112132022B true CN112132022B (en) 2023-09-29

Family

ID=73842470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011004382.7A Active CN112132022B (en) 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof

Country Status (2)

Country Link
CN (1) CN112132022B (en)
WO (1) WO2021159842A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822177A (en) * 2021-09-06 2021-12-21 苏州中科先进技术研究院有限公司 Pet face key point detection method, device, storage medium and equipment
CN114666555B (en) * 2022-05-23 2023-03-24 创意信息技术股份有限公司 Edge gateway front-end system
CN116915786B (en) * 2023-09-13 2023-12-01 杭州立方控股股份有限公司 License plate recognition and vehicle management system with cooperation of multiple servers

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017031886A1 (en) * 2015-08-26 2017-03-02 北京奇虎科技有限公司 Method for obtaining picture by means of remote control, and server
CN106650589A (en) * 2016-09-30 2017-05-10 北京旷视科技有限公司 Real-time face recognition system and method
CN208271202U (en) * 2018-06-05 2018-12-21 珠海芯桥科技有限公司 A kind of screen monitor system based on recognition of face
CN109447048A (en) * 2018-12-25 2019-03-08 苏州闪驰数控***集成有限公司 A kind of artificial intelligence early warning system
CN109815839A (en) * 2018-12-29 2019-05-28 深圳云天励飞技术有限公司 Hover personal identification method and Related product under micro services framework
CN109919069A (en) * 2019-02-27 2019-06-21 浙江浩腾电子科技股份有限公司 Oversize vehicle analysis system based on deep learning
WO2019127273A1 (en) * 2017-12-28 2019-07-04 深圳市锐明技术股份有限公司 Multi-person face detection method, apparatus, server, system, and storage medium
CN110457135A (en) * 2019-08-09 2019-11-15 重庆紫光华山智安科技有限公司 A kind of method of resource regulating method, device and shared GPU video memory
CN110798702A (en) * 2019-10-15 2020-02-14 平安科技(深圳)有限公司 Video decoding method, device, equipment and computer readable storage medium
WO2020094091A1 (en) * 2018-11-07 2020-05-14 杭州海康威视数字技术股份有限公司 Image capturing method, monitoring camera, and monitoring system
CN111385540A (en) * 2020-04-17 2020-07-07 深圳市市政设计研究院有限公司 Wisdom municipal infrastructure management system based on video stream analysis technique

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8199196B2 (en) * 2007-09-27 2012-06-12 Alcatel Lucent Method and apparatus for controlling video streams
CN105827976A (en) * 2016-04-26 2016-08-03 北京博瑞空间科技发展有限公司 GPU (graphics processing unit)-based video acquisition and processing device and system
WO2019229213A1 (en) * 2018-06-01 2019-12-05 Canon Kabushiki Kaisha A load balancing method for video decoding in a system providing hardware and software decoding resources
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控***集成有限公司 Artificial intelligence convolutional neural networks face identification system

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017031886A1 (en) * 2015-08-26 2017-03-02 北京奇虎科技有限公司 Method for obtaining picture by means of remote control, and server
CN106650589A (en) * 2016-09-30 2017-05-10 北京旷视科技有限公司 Real-time face recognition system and method
WO2019127273A1 (en) * 2017-12-28 2019-07-04 深圳市锐明技术股份有限公司 Multi-person face detection method, apparatus, server, system, and storage medium
CN208271202U (en) * 2018-06-05 2018-12-21 珠海芯桥科技有限公司 A kind of screen monitor system based on recognition of face
WO2020094091A1 (en) * 2018-11-07 2020-05-14 杭州海康威视数字技术股份有限公司 Image capturing method, monitoring camera, and monitoring system
CN109447048A (en) * 2018-12-25 2019-03-08 苏州闪驰数控***集成有限公司 A kind of artificial intelligence early warning system
CN109815839A (en) * 2018-12-29 2019-05-28 深圳云天励飞技术有限公司 Hover personal identification method and Related product under micro services framework
CN109919069A (en) * 2019-02-27 2019-06-21 浙江浩腾电子科技股份有限公司 Oversize vehicle analysis system based on deep learning
CN110457135A (en) * 2019-08-09 2019-11-15 重庆紫光华山智安科技有限公司 A kind of method of resource regulating method, device and shared GPU video memory
CN110798702A (en) * 2019-10-15 2020-02-14 平安科技(深圳)有限公司 Video decoding method, device, equipment and computer readable storage medium
CN111385540A (en) * 2020-04-17 2020-07-07 深圳市市政设计研究院有限公司 Wisdom municipal infrastructure management system based on video stream analysis technique

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A scalable load balancing strategy technique based on distributed server clusters; Sun Qiao et al.; Operation Technology Wide Angle; 2017-12-31 (No. 9); pp. 2017264-1 to 2017264-7 *
Yue Qingsheng. Advanced Delphi Programming Techniques. Beijing: Tsinghua University Press, 2000, pp. 138-145. *
Research and application of a surge model for public security data processing; Gao Di et al.; Computer Science; 2017-06-30; Vol. 44 (No. 6A); pp. 342-347 *
Runtime deployment optimization for microservice systems; Xu Chenjie et al.; Computer Applications and Software; Vol. 35 (No. 10); pp. 85-93 *

Also Published As

Publication number Publication date
CN112132022A (en) 2020-12-25
WO2021159842A1 (en) 2021-08-19

Similar Documents

Publication Publication Date Title
CN112132022B (en) Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof
US10985989B2 (en) Cross layer signaling for network resource scaling
US20200137151A1 (en) Load balancing engine, client, distributed computing system, and load balancing method
US9002969B2 (en) Distributed multimedia server system, multimedia information distribution method, and computer product
EP3244621B1 (en) Video encoding method, system and server
US10979492B2 (en) Methods and systems for load balancing
CN102301664B (en) Method and device for dispatching streams of multicore processor
US10614542B2 (en) High granularity level GPU resource allocation method and system
CN108668138B (en) Video downloading method and user terminal
CN115150473A (en) Resource scheduling method, device and storage medium
CN112989894B (en) Target detection method, task processing method, device, equipment and storage medium
CN109614228B (en) Comprehensive monitoring front-end system based on dynamic load balancing mode and working method
CN111147603A (en) Method and device for networking reasoning service
WO2017215415A1 (en) Resource control method and apparatus, and iptv server
CN116382892B (en) Load balancing method and device based on multi-cloud fusion and cloud service
CN115941907A (en) RTP data packet sending method, system, electronic equipment and storage medium
CN104754401A (en) Stream sharing method, stream sharing device and stream sharing system
CN105549911B (en) The data transmission method and device of NVRAM
US20140327781A1 (en) Method for video surveillance, a related system, a related surveillance server, and a related surveillance camera
CN115174535A (en) POD scheduling method, system and device for realizing file transcoding based on Kubernetes and storage medium
US10877800B2 (en) Method, apparatus and computer-readable medium for application scheduling
CN112817732A (en) Stream data processing method and system suitable for cloud-side collaborative multi-data-center scene
CN113656150A (en) Deep learning computing power virtualization system
CN111858019B (en) Task scheduling method and device and computer readable storage medium
CN111461958A (en) System and method for controlling real-time detection and optimization processing of rapid multi-path data streams

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant